Unlocking Revenue: Mastering Data Practices for Business Growth


The Cost of Bad Data Practices

A recent report by the commercial data and analytics firm Dun & Bradstreet reveals that businesses are missing out on revenue opportunities and losing customers due to ineffective data practices. The research, based on a survey of 510 business decision-makers in the US and UK, highlights the significant impact of poor data management on various aspects of business operations.

Customer Retention and Acquisition

Nearly 20% of companies have lost a customer due to using incomplete or inaccurate information about them, while an additional 15% failed to sign a new contract with a customer for the same reason.

These findings underscore the importance of maintaining accurate and comprehensive customer data to ensure customer satisfaction and retention.

Financial Forecasting and Credit Management

The report also found that nearly one-quarter of respondents had inaccurate financial forecasts, while 17% offered too much credit to a customer due to a lack of information, resulting in financial losses.

Accurate data is crucial for making informed decisions and mitigating financial risks.

Compliance Challenges

The survey revealed stark discrepancies between the US and UK, with compliance being nearly twice as big a concern in the UK, likely due to the challenges of meeting the requirements of the General Data Protection Regulation (GDPR).

More than 10% of organizations reported being fined for data issues related to compliance.

Barriers to Effective Data Utilization

The way data is structured appears to be a significant barrier at many organizations. Nearly half of the respondents (46%) said data is too siloed to make any sense of it.

The biggest challenges to making use of data are protecting data privacy (34%), having accurate data (26%), and analyzing/processing data (24%).

The Need for Data Governance and Stewardship

The lack of structure in data management might reflect the fact that 41% of business leaders said no one in their organization is responsible for data management.

The absence of ownership could also explain why more than half of the surveyed organizations have not had the budget needed to implement effective data management practices.

Monica Richter, chief data officer at Dun & Bradstreet, emphasizes the importance of making data governance and stewardship a priority, stating that clean, defined data is key to the success of any program and essential for mitigating risk and growing the business.


The Future of Data Management

The survey indicates a growing recognition that responsibility for data should be a priority for C-level executives.

However, business leaders are divided as to who on the leadership team actually owns responsibility for data and how that might change in the future.

All business leaders agreed that the CEO has ultimate responsibility for data, more so than even technology leaders such as the CTO or CIO.

A majority of organizations acknowledged that data will be vital to their future success.

However, fewer than one quarter of them said they have employees dedicated to data management or the right talent to implement effective data management practices.

In conclusion, the report underscores the profound effect that poor data practices can have on business performance, emphasizing the urgent necessity for companies to prioritize data governance and stewardship. Neglecting these areas can lead to missed opportunities, inefficient processes, and heightened risks.

Data silos, where information is fragmented across different departments, often hinder collaboration and lead to inconsistent insights, making it difficult for businesses to make informed, data-driven decisions. Inaccuracies within data can distort key metrics, resulting in misguided strategies that affect revenue, customer satisfaction, and operational efficiency. Additionally, the lack of clear data ownership leaves companies vulnerable to regulatory non-compliance and cybersecurity risks.

By focusing on breaking down data silos, enhancing data accuracy, and establishing clear accountability, businesses can unlock new revenue streams through deeper insights into customer behavior, better anticipate market trends, and optimize operational processes. Furthermore, improving data governance ensures that the organization remains compliant with evolving data regulations, avoiding costly penalties and reputational damage.

Ultimately, companies that adopt robust data governance and stewardship practices are not only better positioned to enhance customer retention and satisfaction but are also more likely to drive innovation and maintain a competitive edge in an increasingly data-centric business environment.

Cyber Whale is a Moldovan agency specializing in building custom Business Intelligence (BI) systems that empower businesses with data-driven insights and strategic growth.

Let us help you with our BI systems. Get in touch at [email protected]

How DoorDash Became the Dominant Food Delivery Service


DoorDash’s journey from a small startup to the dominant player in the food delivery market is a remarkable story of strategic execution and data-driven innovation. This article delves into the three critical elements that fueled DoorDash’s rise: a clear strategy and operating model, relentless focus on execution, and a data platform that drives intelligence and automation.

Strategy and Operating Model

DoorDash’s success can be attributed to its ability to find an underserved market segment and serve it better than the competition. By focusing on suburban markets and smaller metropolitan areas, DoorDash was able to capitalize on the lack of alternatives and the convenience it provided to residents.

This strategy resulted in higher order values, lower customer acquisition costs, and better customer retention.

Execution: The Key to Success

DoorDash’s relentless focus on execution has been a critical factor in their success. They have developed an “operational playbook” to launch, run, and scale local markets, with a dedicated team responsible for each aspect of the business.

DoorDash has also been able to increase order volume per market and customer through performance-based marketing and subscription programs like DashPass.

Data: Competitive Advantage

DoorDash’s data platform is a key driver of their success, allowing them to run granular optimization experiments and make incremental improvements across the food delivery lifecycle. Their proprietary technology carefully optimizes the interactions between merchants, consumers, and Dashers, making the end-to-end experience seamless and delightful. DoorDash’s data platform has also enabled them to develop real-time prediction services like “Sybil,” which powers machine learning models for search, Dasher assignment, and fraud prevention.

Data-Driven Intelligence and Automation: The Power Behind Their Analytics Platform

DoorDash has crafted an impressive data platform that fuels intelligence and automation, enabling granular optimization across its entire food delivery process. With a laser focus on the “Get 1% better every day” mantra, DoorDash leverages data at every step to refine its operations.

Their proprietary local logistics platform optimizes the interactions between merchants, consumers, and Dashers. This constant flow of data is fed into machine learning algorithms, which drive improvements. Whether it’s personalized content for consumers based on preferences or helping Dashers optimize earnings, DoorDash’s data-driven approach ensures that every aspect of the platform becomes more efficient with each order.

Data collection is key in any industry, and DoorDash takes this to the next level. Just as airlines track ticket sales or brokerages monitor stock trades, DoorDash meticulously collects and analyzes food delivery transactions. These analytics aren’t limited to simple queries like “How many orders did we process yesterday?”—they delve deeper into customer behavior, marketing channels, and transaction methods. For example, they might track which ad prompted a customer to sign up or analyze the device or payment method used for an order.


This granular data gives them the ability to conduct A/B testing, experimenting with elements as specific as the order of menu items. Through continuous experimentation, the company fine-tunes everything from ad imagery to Dasher pick-up times, ensuring a highly optimized experience.
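
To make this concrete, here is a minimal sketch of how the outcome of one such experiment could be evaluated with a standard two-proportion z-test. The conversion numbers are invented for illustration, and this is a generic statistical technique, not DoorDash’s actual tooling.

from statistics import NormalDist

def ab_test_z(conversions_a, users_a, conversions_b, users_b):
    """Two-proportion z-test comparing variant B against control A."""
    p_a = conversions_a / users_a
    p_b = conversions_b / users_b
    # pooled conversion rate under the null hypothesis of no difference
    p_pool = (conversions_a + conversions_b) / (users_a + users_b)
    se = (p_pool * (1 - p_pool) * (1 / users_a + 1 / users_b)) ** 0.5
    z = (p_b - p_a) / se
    # two-sided p-value from the standard normal distribution
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return p_a, p_b, z, p_value

# Hypothetical experiment: menu layout A (control) vs. layout B (variant)
p_a, p_b, z, p_value = ab_test_z(480, 10_000, 540, 10_000)
print(f"A: {p_a:.2%}  B: {p_b:.2%}  z = {z:.2f}  p = {p_value:.3f}")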

Their data-driven experimentation platform is a competitive advantage, turning their logistics engine into an intelligent, self-improving system. Whether optimizing Dasher dispatch or enhancing menu layouts, DoorDash’s commitment to data and automation is a perfect example of how technology can fuel business growth.

Data is truly the heart of DoorDash’s innovation.

Conclusion

DoorDash’s success is a testament to the power of a clear strategy, relentless execution, and data-driven innovation. By finding an underserved market segment, developing a repeatable operating model, and building an economic moat with data, DoorDash has emerged as the dominant player in the food delivery market.

Their story serves as an inspiration for startups looking to disrupt established industries and build lasting businesses.

Cyber Whale is a Moldovan agency specializing in building custom Business Intelligence (BI) systems that empower businesses with data-driven insights and strategic growth.

Let us help you with our BI systems. Get in touch at [email protected]

Riding the Data Wave: How Uber Transforms Transportation with Data Science


Uber leverages data science and big data to revolutionize transportation and logistics on a global scale. With over 8 million users, 1 billion trips, and 160,000 drivers across 449 cities in 66 countries, Uber has become a leading force in the ride-sharing industry. The company addresses various challenges such as inadequate transportation infrastructure, inconsistent customer experiences, and driver-related issues through innovative data-driven solutions.

Big Data Infrastructure

At the core of Uber’s operations is its extensive data collection system, which is essential for making informed decisions. Uber utilizes a Hadoop data lake for storage and employs Apache Spark for processing vast amounts of data. This infrastructure allows Uber to handle diverse data types from various sources, including:

  • SOA database tables
  • Schema-less data stores
  • Event messaging systems like Apache Kafka

Uber’s ability to collect detailed GPS data from every trip enables it to analyze historical patterns and optimize its services continuously.
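
As a rough illustration of the kind of processing such an infrastructure supports (not Uber’s actual code), the PySpark sketch below aggregates a hypothetical Parquet dataset of trips by pickup zone and hour to surface demand patterns; the path and column names are made up.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("trip-demand").getOrCreate()

# Hypothetical Parquet dataset of completed trips sitting in the data lake
trips = spark.read.parquet("s3a://example-datalake/trips/")

# Count pickups per zone and hour of day to expose demand patterns
demand = (
    trips
    .withColumn("hour", F.hour("pickup_time"))
    .groupBy("pickup_zone", "hour")
    .count()
    .orderBy(F.desc("count"))
)
demand.show(10)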

Data Collection and Analysis

Uber’s data scientists utilize the collected information to address several key functions:

  • Demand Prediction: By analyzing trip data, Uber can forecast demand for rides in different areas, allowing for better resource allocation.
  • Surge Pricing: The company implements dynamic pricing models based on real-time demand and supply conditions. This algorithm adjusts fares during peak times to ensure availability while maximizing profits (a simplified sketch follows this list).
  • Matching Algorithms: Uber employs sophisticated algorithms to match riders with the nearest available drivers efficiently. This involves calculating estimated arrival times based on various factors such as location and traffic conditions.
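
As flagged in the list above, here is a purely illustrative sketch of the surge idea: the multiplier rises as ride requests outpace available drivers. The function, its parameters, and the cap are all invented for illustration and bear no relation to Uber’s production pricing model.

def surge_multiplier(ride_requests, available_drivers,
                     base=1.0, sensitivity=0.5, cap=3.0):
    """Toy surge pricing: raise the multiplier as demand outpaces supply."""
    if available_drivers == 0:
        return cap
    ratio = ride_requests / available_drivers
    # only surge once requests exceed the number of available drivers
    multiplier = base + sensitivity * max(0.0, ratio - 1.0)
    return min(round(multiplier, 2), cap)

# 120 requests against 40 free drivers gives a 2.0x multiplier in this toy model
print(surge_multiplier(ride_requests=120, available_drivers=40))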

Data Science Applications

Data science plays a crucial role in enhancing user experiences at Uber. The company uses predictive models for:

  • Fare Estimation: Fares are calculated using a combination of internal algorithms and external data sources, including street traffic patterns and public transport routes (see the illustrative sketch after this list).
  • Driver Behavior Analysis: Data collected from drivers even when they are not carrying passengers helps Uber analyze traffic patterns and driver performance metrics.
  • Fraud Detection: Machine learning techniques are employed to identify fraudulent activities such as fake rides or payment methods.
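
To illustrate the fare estimation item from the list above, here is a deliberately simplified sketch that combines a base fare with distance and time components and applies a surge multiplier. The rates, fees, and minimum fare are hypothetical; a real model also accounts for tolls, traffic forecasts, and market-specific pricing.

def estimate_fare(distance_km, duration_min,
                  base=2.00, per_km=1.10, per_min=0.30,
                  booking_fee=1.50, surge=1.0, minimum=6.00):
    """Toy upfront-fare estimate: (base + distance + time) * surge + fee."""
    fare = (base + per_km * distance_km + per_min * duration_min) * surge
    return round(max(fare + booking_fee, minimum), 2)

# Hypothetical 8.4 km, 22 minute trip during a mild 1.3x surge
print(estimate_fare(distance_km=8.4, duration_min=22, surge=1.3))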

Tools and Technologies

Uber’s team primarily utilizes Python, supported by libraries like NumPy, SciPy, Matplotlib, and Pandas. For visualization needs, they prefer using D3.js, while PostgreSQL serves as their main SQL database. Occasionally, R or MATLAB is used for specific projects or prototypes.

Future Prospects

Looking ahead, Uber aims to expand its services beyond ride-sharing into areas like grocery delivery (UberFresh), package courier services (UberRush), and even helicopter rides (UberChopper). By integrating personal customer data with their existing datasets, Uber plans to enhance service personalization further. In summary, the success of Uber hinges on its ability to harness big data and apply sophisticated data science techniques to create a seamless user experience in transportation and logistics.

Cyber Whale is a Moldovan agency specializing in building custom Business Intelligence (BI) systems that empower businesses with data-driven insights and strategic growth.

Let us help you with our BI systems. Get in touch at [email protected]

How Netflix Leveraged Big Data to Boost Revenue by Billions


Netflix’s remarkable success in the entertainment industry can be largely attributed to its strategic use of big data and analytics. With a market valuation exceeding $164 billion, Netflix has outpaced competitors such as Disney, thanks in part to a customer retention rate of 93%, significantly higher than Hulu’s 64% and Amazon Prime’s 75%. This success stems not only from Netflix’s ability to keep subscribers but also from its track record of producing popular original content, such as “House of Cards,” “Orange Is The New Black,” and “Bird Box,” which have attracted substantial viewership and subscription growth.

Data-Driven Decision Making

Subscriber Data Collection

Netflix employs advanced data analytics to gather insights from its 151 million subscribers. By analyzing customer behavior and purchasing patterns, Netflix creates personalized recommendations that drive viewer engagement. Approximately 75% of viewer activity on the platform stems from these tailored suggestions. The data collection process is extensive, encompassing:

  • Viewing habits: Time and date of viewing, device used, and whether shows are paused or resumed.
  • Engagement metrics: Completion rates for shows, time taken to finish a series, and repeated scene views.
  • User interaction: Ratings provided by users, search queries, and the frequency of specific searches.

Recommendation Algorithms

To leverage this wealth of data, Netflix utilizes sophisticated recommendation algorithms that analyze user preferences. These algorithms are crucial for maintaining high engagement levels, with estimates suggesting that the recommendation system contributes to over 80% of the content streamed on the platform. This capability not only enhances user experience but also generates significant revenue through customer retention.
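
Netflix’s production recommender is proprietary, but a minimal user-based collaborative filtering sketch conveys the core idea: score titles a user has not watched by how similar users engaged with them. The tiny engagement matrix below is invented purely for illustration.

import numpy as np

# Hypothetical user-by-title engagement matrix (rows: users, columns: titles);
# 1.0 means the user finished the title, 0.0 means no meaningful engagement.
ratings = np.array([
    [1.0, 1.0, 0.0, 0.0],
    [1.0, 0.0, 1.0, 0.0],
    [0.0, 1.0, 1.0, 1.0],
    [1.0, 1.0, 0.0, 1.0],
])

def cosine_sim(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

def recommend(user_idx, top_n=2):
    """Score unseen titles by the engagement of users similar to user_idx."""
    scores = np.zeros(ratings.shape[1])
    for other in range(ratings.shape[0]):
        if other == user_idx:
            continue
        scores += cosine_sim(ratings[user_idx], ratings[other]) * ratings[other]
    scores[ratings[user_idx] > 0] = -np.inf  # never re-recommend watched titles
    return np.argsort(scores)[::-1][:top_n]

print(recommend(user_idx=0))  # indices of the titles suggested for user 0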

Content Development Strategy

Netflix’s approach to greenlighting original content is heavily influenced by data analytics. The company does not randomly invest in new projects; instead, it relies on insights derived from user engagement with existing content. For instance, the decision to produce “Orange Is The New Black” was informed by the success of Jenji Kohan’s previous series “Weeds,” which had performed well on the platform.

Targeted Marketing

In addition to content creation, Netflix employs big data for targeted marketing strategies. For example, when promoting “House of Cards,” Netflix crafted over ten different trailers tailored to specific audience segments based on their viewing history. This personalized marketing approach minimizes costs while maximizing viewer interest.

A/B Testing

Netflix also employs A/B testing extensively in its marketing campaigns. By presenting different promotional materials or thumbnails to various audience segments, they can measure engagement levels and determine which creative approaches yield the best results. This iterative process ensures that marketing efforts are continually optimized for maximum impact.

Feedback Mechanisms

Netflix actively encourages user feedback through systems like the thumbs up/thumbs down rating system. This method has significantly improved audience engagement and allows Netflix to further customize user homepages. According to Joris Evers, Director of Global Communications at Netflix, there are approximately 33 million unique versions of Netflix’s homepage tailored to individual user preferences.

Conclusion

The strategic application of big data and analytics is central to Netflix’s business model, positioning it as an analytics-driven company rather than just a media provider. By effectively processing vast amounts of data and deriving actionable insights, Netflix not only enhances user satisfaction but also ensures a high return on investment for its content decisions. This case exemplifies how powerful analytics can transform user engagement into substantial financial success.

Cyber Whale is a Moldovan agency specializing in building custom Business Intelligence (BI) systems that empower businesses with data-driven insights and strategic growth.

Let us help you with our BI systems. Get in touch at [email protected]

The Future of the Modern Data Stack: Insights and Innovations


In the rapidly evolving landscape of data management, understanding the modern data stack is crucial for organizations aiming to leverage their data effectively. This blog explores the past, present, and future of the modern data stack, focusing on key innovations and trends that are shaping the industry.

The Evolution of the Modern Data Stack

Cambrian Explosion I: 2012 – 2016

The modern data stack began to take shape with the launch of Amazon Redshift in 2012, which revolutionized data warehousing by providing a cloud-native solution that was both powerful and affordable. This period saw a surge in innovation, with tools like Fivetran for ingestion, Looker for business intelligence, and dbt for transformation emerging to meet the growing demands for efficient data processing.

  • Key Developments:
    • Introduction of cloud-native MPP databases.
    • Significant performance improvements in data processing.
    • Emergence of new vendors focused on solving BI challenges.

Deployment Phase: 2016 – 2020

Following this initial explosion of innovation, the industry entered a deployment phase where organizations began adopting these new tools. This period was marked by a maturation of existing technologies, leading to improved reliability and user experiences across the stack.

  • Highlights:
    • Enhanced reliability and connector coverage in tools like Fivetran and Stitch.
    • dbt underwent significant rearchitecture to improve modularity and performance.
    • The stack became more accessible to a broader audience as technologies matured.

Cambrian Explosion II: 2021 – 2025

As we look to the future, we anticipate another wave of innovation driven by advancements in governance, real-time analytics, and democratized data exploration. The modern data stack is poised for transformative changes that will enhance its capabilities and usability.

  • Emerging Trends:
    • Governance Solutions: Increased focus on data governance tools to provide context and trust within organizations.
    • Real-Time Analytics: A shift towards real-time data processing enabling more responsive decision-making.
    • Democratized Data Access: Development of user-friendly interfaces that empower non-technical users to engage with data effectively.

Key Innovations Shaping the Future

  1. Governance: As organizations ingest more data, effective governance becomes essential. Tools that provide lineage tracking and metadata management will be critical for maintaining trust in data-driven decisions.
  2. Real-Time Capabilities: The integration of real-time data processing will unlock new use cases, allowing businesses to respond swiftly to changing conditions and customer needs.
  3. User Empowerment: The future will see an emphasis on creating intuitive interfaces that allow all employees, regardless of technical expertise, to explore and analyze data seamlessly.
  4. Vertical Analytical Experiences: There is a growing need for specialized analytical tools tailored to specific business functions, which will enhance the depth of insights derived from data.

Conclusion

The modern data stack is at a pivotal point in its evolution. With foundational technologies now firmly established, we are entering a phase ripe for innovation. By focusing on governance, real-time analytics, and user empowerment, organizations can harness the full potential of their data. As we move forward, staying abreast of these developments will be essential for any business looking to thrive in a data-driven world. Embrace these changes and prepare your organization for the future of data management!

Cyber Whale is a Moldovan agency specializing in building custom Business Intelligence (BI) systems that empower businesses with data-driven insights and strategic growth.

Let us help you with our BI systems. Get in touch at [email protected]

Transforming Data Integration: The Shift from ETL to ELT in the Cloud Era


What You’ll Learn in This Blog

  1. The difference between ETL and ELT
  2. The benefits of using an ELT tool over ETL or “hand-cranked” code
  3. How the Cloud, with the next generation of tools, can simplify the data integration landscape
  4. Key data integration terms

ETL vs ELT

Let’s start by understanding the difference between ETL (Extract, Transform, Load) and ELT (Extract, Load, Transform).

ETL

ETL emerged in the 90s with the rise of data warehousing. The process involved:

  1. Extracting data from source systems
  2. Transforming the data into the required structure
  3. Loading the transformed data into a database for analysis and reporting

Before ETL tools existed, this was done using hand-coded scripts, which was time-consuming and lacked lineage and maintainability. ETL tools like OWB, DataStage, and Informatica simplified the process by performing transformations on application servers rather than source systems or target databases.

The benefits of ETL tools include:

  • Lineage tracking
  • Logging and metadata
  • Simplified slowly changing dimensions (SCD)
  • Graphical user interface (GUI)
  • Improved collaboration between business and IT

ELT

ELT tools leverage the power of the underlying data warehouse by performing transformations within the database itself. This minimizes the need for excessive data movement and reduces the latency that typically accompanies traditional ETL processes.
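
A minimal sketch of the ELT pattern looks like this. SQLite stands in for a cloud warehouse purely so the example is self-contained; with Snowflake, BigQuery, or Redshift the shape is identical: land the raw records first, then let the database engine run the SQL transformation.

import sqlite3

# SQLite acts as a stand-in warehouse so this example runs anywhere.
conn = sqlite3.connect(":memory:")

# 1. Extract + Load: land the raw records as-is in a staging table
conn.execute("CREATE TABLE raw_orders (order_id, customer, amount, currency)")
conn.executemany(
    "INSERT INTO raw_orders VALUES (?, ?, ?, ?)",
    [(1, "acme", "100.50", "USD"), (2, "globex", "80", "usd"), (3, "acme", None, "USD")],
)

# 2. Transform: the database itself cleans and reshapes the data
conn.execute("""
    CREATE TABLE orders_clean AS
    SELECT order_id,
           customer,
           CAST(amount AS REAL) AS amount,
           UPPER(currency)      AS currency
    FROM raw_orders
    WHERE amount IS NOT NULL
""")

for row in conn.execute("SELECT customer, SUM(amount) FROM orders_clean GROUP BY customer"):
    print(row)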

With the rise of Hadoop during the “Big Data” era, computation was pushed closer to the data, leading to a more siloed approach between traditional data warehouses and big data systems. This shift increased the need for specialized programming skills, complicating data accuracy, lineage tracking, and overall management in complex environments.

The Next Generation of ELT Tools

Cloud-based data warehouses like Snowflake, Google BigQuery, and AWS Redshift have enabled the resurgence of ELT. Next-generation ELT tools like Matillion fully utilize the underlying cloud databases for computations, eliminating the need for data to leave the database.

Modern analytical platforms like Snowflake can satisfy both data lake and enterprise data warehouse requirements, allowing the use of a single ELT tool for transformations. This reduces the total cost of ownership (TCO) and development time while improving maintainability and impact assessment.

Streaming and Governance

Streaming enables real-time analytics by combining data sources to help businesses make quick decisions. Tools like HVR can replicate data cost-effectively, blending replication with ELT (RLT).

Governance is crucial for ensuring data lineage, metadata, audit, and log information, especially for compliance with regulations like GDPR. ELT tools like Matillion provide this information easily through their GUI, generated documentation, or APIs to connect with data governance tools.

DataOps and Migration

The rise of DataOps emphasizes the need for easy deployment of changes using tools like Git. Modern ELT tools support agile working by building deployment pipelines and regression testing capabilities, allowing regular changes to accommodate source system updates or new data sources while ensuring data integrity.

Migrating from a legacy analytics platform to a modern analytical platform is achievable, and Leading Edge IT can assist with the process.


Conclusion

Cloud-based platforms such as Snowflake offer immense scalability for compute tasks, making them ideal for modern data platforms. Incorporating ELT tools like Matillion further optimizes these setups by streamlining workflows and reducing the total cost of ownership (TCO). By integrating replication solutions such as HVR, you can automate data synchronization across environments. When paired with ELT and cloud-based data warehouses, these tools enable efficient, reusable templates with shared components, eliminating manual coding and fostering agility in data management. This combined approach drives efficiency, scalability, and flexibility in your data architecture.

Cyber Whale is a Moldovan agency specializing in building custom Business Intelligence (BI) systems that empower businesses with data-driven insights and strategic growth.

Let us help you with our BI systems. Get in touch at [email protected]

The Data Revolution: Transitioning from Warehouses to Lakehouses for Enhanced Analytics


The evolution of data analytics platforms has seen a significant shift from traditional data warehouses to modern data lakehouses, driven by the need for more flexible and scalable data management solutions.

The Shift in Data Management

Historically, organizations relied heavily on data warehouses for structured data analysis. These systems excelled at executing specific queries, particularly in business intelligence (BI) and reporting environments. However, as data volumes grew and diversified—encompassing structured, semi-structured, and unstructured data—the limitations of traditional data warehouses became apparent. In the mid-2000s, businesses began to recognize the potential of harnessing vast amounts of data from various sources for analytics and monetization. This led to the emergence of the “data lake,” designed to store raw data without enforcing strict quality controls. While data lakes provided a solution for storing diverse data types, they fell short in terms of data governance and transactional capabilities.

The Role of Object Storage

The introduction of object storage, particularly with the standardization of the S3 API, has transformed the landscape of data analytics. Object storage allows organizations to store a wide array of data types efficiently, making it an ideal foundation for modern analytics platforms. Today, many analytics solutions, such as Greenplum, Vertica, and SQL Server 2022, have integrated support for object storage through the S3 API. This integration enables organizations to utilize object storage not just for backups but as a primary data repository, facilitating a more comprehensive approach to data analytics.

The Benefits of Data Lakehouses

The modern data lakehouse architecture combines the best features of data lakes and data warehouses. It allows for the decoupling of storage and compute resources, supporting a variety of analytical workloads. This flexibility means that organizations can access and analyze their entire data set efficiently using standard S3 API calls.
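
As a small illustration of that decoupling, the sketch below reads a hypothetical Parquet dataset straight out of object storage with pandas and aggregates it locally. It assumes the s3fs and pyarrow packages are installed, credentials are configured, and that the dataset has event_time and revenue columns; the bucket name and columns are made up.

import pandas as pd

# The same S3 API call works against AWS S3 or any S3-compatible object
# store; only the endpoint and credentials differ.
events = pd.read_parquet("s3://example-lakehouse/events/year=2024/")

# Storage and compute are decoupled: the query engine (here, just pandas)
# pulls only the objects it needs and analyzes them locally.
daily_revenue = events.groupby(events["event_time"].dt.date)["revenue"].sum()
print(daily_revenue.head())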

Key Advantages:

  • Scalability: Object storage can grow with the organization’s data needs without the constraints of traditional storage solutions.
  • Versatility: Supports diverse data types and analytics use cases, making it suitable for various business applications.
  • Cost-Effectiveness: Provides a more affordable storage solution, particularly for large volumes of data.

Conclusion

The evolution from data warehouses to data lakehouses represents a significant advancement in data analytics capabilities. By leveraging object storage and the S3 API, organizations can now manage their data more effectively, enabling deeper insights and better decision-making. For more detailed insights and use cases, explore Cloudian’s resources on hybrid cloud storage for data analytics.

Cyber Whale is a Moldovan agency specializing in building custom Business Intelligence (BI) systems that empower businesses with data-driven insights and strategic growth.

Let us help you with our BI systems. Get in touch at [email protected]

Mastering Java: Essential Code Techniques for Modern Development


Java Roadmap

Mastering Java requires a step-by-step approach, moving from the basics to advanced topics. Here’s a streamlined roadmap to guide your journey:

1. Setup and Tools

  • Linux: Learn basic commands.
  • Git: Master version control for collaboration.
  • IDEs: Familiarize yourself with:
    • IntelliJ IDEA, Eclipse, or VSCode.

2. Core Java Concepts

  • OOP: Understand classes, objects, inheritance, and polymorphism.
  • Arrays & Strings: Work with data structures and string manipulation.
  • Loops: Control flow with for, while, and do-while.
  • Interfaces & Packages: Organize and structure code.

3. File I/O and Collections

  • File Handling: Learn file operations using I/O Streams.
  • Collections Framework: Work with Lists, Maps, Stacks, and Queues.
  • Optionals: Avoid null pointer exceptions with Optional.

4. Advanced Java Concepts

  • Dependency Injection: Understand DI patterns.
  • Design Patterns: Learn common patterns like Singleton and Factory.
  • JVM Internals: Learn memory management and garbage collection.
  • Multi-Threading: Handle concurrency and threads.
  • Generics & Exception Handling: Write type-safe code and handle errors gracefully.
  • Streams: Work with functional programming using Streams.

5. Testing and Debugging

  • Unit & Integration Testing: Use JUnit/TestNG for testing.
  • Debugging: Learn debugging techniques.
  • Mocking: Use libraries like Mockito for test isolation.

6. Databases

  • Database Design: Learn to design schemas and write efficient queries.
  • SQL & NoSQL: Work with relational (JDBC) and non-relational databases.
  • Schema Migration Tools: Use Flyway or Liquibase for migrations.

7. Clean Code Practices

  • SOLID Principles: Write maintainable and scalable code.
  • Immutability: Ensure thread-safe and predictable objects.
  • Logging: Implement effective logging for debugging.

8. Build Tools

  • Learn to use Maven, Gradle, or Bazel for project builds.

9. HTTP and APIs

  • HTTP Protocol & REST API: Design scalable APIs.
  • GraphQL: Explore efficient querying with GraphQL.

10. Frameworks

  • Spring Boot: Build production-ready applications.
  • Play & Quarkus: Learn lightweight, cloud-native frameworks.

Let us develop your Java application!

Get in touch at [email protected]

Head of data – Job


Job Description

Because SaaS products do not satisfy most specific needs, we are bringing a new kind of CDP to market to empower data management.

Requirements:

  • Experience with ETL and data pipelines.
  • Knowledge of SQL.
  • Knowledge of GenAI and LLMs, plus basic MLOps skills to deploy LLMs.
  • At least basic Python and JavaScript.
  • English level – B1+.
  • Experience with Docker, GitHub Actions, Gitflow, Terraform, and Terraform Cloud.
  • Ability to grasp new concepts fast.

We can consider someone junior, but you really should have at least academic experience with the technologies mentioned above.

What you’ll get:

  • Pleasant atmosphere for personal and professional growth
  • Good salary and flexible hours
  • Employee Stock Option Program
  • Fun at work combined with a responsible attitude

Visit us to learn more!

How to parse dynamic HTML content using Python

In the previous tutorial we learned how to parse HTML in Python. In this tutorial we are going to learn how to parse dynamic HTML content generated by JavaScript, jQuery, Ajax, Angular, or other dynamic page technologies.

What’s the problem with parsing dynamic HTML content in Python and in general?

The problem is that when you request the contents of an HTML page, you are given the HTML, CSS, and scripts returned by the server. If the page is dynamic, what you get is only a handful of scripts meant to be interpreted by your browser, which will in turn eventually render the HTML content for the user.

That leads us to the idea that we should first render the page and then grab its HTML. We also need to allow some time for the page to render, since the content is sometimes quite “heavy” and takes a while to load.

So, along with pure Python we need some kind of UI component, in particular a web view or some other kind of web frame.

One option is to use Qt for Python and handle the page rendering events; another (which I honestly prefer) is to use Selenium for Python.

So, let’s get down to writing some code, but before that let’s outline the approach.

  1. Open a web view with the URL.
  2. Wait until the page is loaded. Often the criterion here is that a div with a certain id or class has been loaded.
  3. Grab the rendered HTML.
  4. Process it further using Beautiful Soup.

You will need ChromeDriver to run the web view.

You will also have to install Selenium, as well as the libraries from the previous tutorial:

pip install selenium

So here is the Python code to parse dynamic content:

#import selenium components and Beautiful Soup
from bs4 import BeautifulSoup
from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.common.exceptions import TimeoutException
from selenium.webdriver.common.by import By


#url - the URL to fetch dynamic content from
#delay - seconds for the web view to wait
#block_name - id of the tag whose presence is the criterion for the "page loaded" state
def fetchHtmlForThePage(url, delay, block_name):
    #supply the local path of the web driver;
    #in this example we use the Chrome driver
    browser = webdriver.Chrome('/Applications/chromedriver')
    #open the browser with the URL
    #(a browser window will appear for a little while)
    browser.get(url)
    try:
        #wait for the presence of the element you're looking for
        element_present = EC.presence_of_element_located((By.ID, block_name))
        WebDriverWait(browser, delay).until(element_present)
    #if it never appears, catch the timeout
    except TimeoutException:
        print("Loading took too much time!")

    #grab the rendered HTML
    html = browser.page_source
    #close the browser
    browser.quit()
    #return the HTML
    return html


#call the fetching function we created;
#url is the address of the page you want to scrape (see the previous tutorial)
html = fetchHtmlForThePage(url, 5, 're-Searchresult')
#parse the rendered HTML document
soup = BeautifulSoup(html, 'html.parser')
#process it further as you wish, e.g. with the helper from the previous tutorial
processFetchedUrls(soup, path)

So that is how to parse dynamic HTML content generated with JavaScript with the help of Python.

Visit us to get help with your Python challenges, or let us know if we can help you with your digital needs.