Full Throttle Data: How Data Observability Can Help Your Business Achieve Maximum Velocity!

George Karapalidis
4 min read · Apr 16, 2023


A complex data lineage — thousands of connected data points

On average, data engineers spend 55% of their time on data maintenance tasks. Some may think this is quite high, but if you want your data warehouse to be top-notch, it’s going to take effort from you and your team.

These days, more and more businesses are identifying as data-driven and investing in expanding their data teams to collect, store, manage, analyse, and use data for day-to-day business decisions. To achieve ambitious objectives, millions of data points need to be accessible and usable by almost every professional in the organisation. Because everyone has a problem to solve and data can help them solve it.

Data is a product.

Data is not just a set of numbers on a spreadsheet or a fancy visual; it’s a product that needs to be designed according to user needs, serve a purpose, and help solve problems. Products need to be reliable, as high reliability leads to increased usability and trust. The more we use a trusted product, the more we depend on it and gradually start to trust it blindly. In a fast-paced environment where there is little time to think, decisions need to be made quickly, and small details may be overlooked. This is when things may start going wrong.

I like to think of a data product as an analogue watch. It consists of many mechanical parts, all working together in perfect synchronicity to deliver a very important data point on the user's request: the time. If the watch starts skipping a second or two, you might not notice at first, but sooner or later you will, because the time on the watch won't match your reality. You will then realise that you are late for a very important event. And at that particular point in time, you will lose trust in that watch.

You may experience the same with data. This is where data observability comes in. It’s a relatively new concept that is rapidly gaining popularity among data professionals and businesses with data engineering teams. It involves monitoring data processes end-to-end and at scale. When something goes wrong, you have the systems in place to inform you of the issue.
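
To make that concrete, here is a minimal sketch of one such check in Python: a freshness test with an assumed one-hour SLA on a hypothetical `analytics.orders` table, with `send_alert` standing in for whatever alerting channel your team uses.

```python
from datetime import datetime, timedelta, timezone

# Assumed SLA: the table should never be more than one hour stale.
FRESHNESS_SLA = timedelta(hours=1)

def send_alert(message: str) -> None:
    # Stand-in for a real alerting channel (Slack, PagerDuty, email).
    print(f"[ALERT] {message}")

def check_freshness(table: str, last_loaded_at: datetime) -> None:
    """Alert if the table's latest load is older than the SLA allows."""
    age = datetime.now(timezone.utc) - last_loaded_at
    if age > FRESHNESS_SLA:
        send_alert(f"{table} is {age} stale, exceeding the {FRESHNESS_SLA} SLA")

# In practice, last_loaded_at would come from warehouse metadata.
check_freshness(
    "analytics.orders",
    last_loaded_at=datetime(2023, 4, 16, 5, 0, tzinfo=timezone.utc),
)
```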

Here is a simple analogy: a data pipeline is similar to a supply chain. There is a supplier that provides the goods, a logistics company responsible for transporting them on schedule between locations, and a warehouse that receives, sorts, and stores them. The store executes the orders, the courier delivers, and finally the consumer uses the goods. If something goes wrong at any step of the supply chain, the result is a poor consumer experience. And many things can go wrong!

A data pipeline has just as many steps, and an equal number of vulnerabilities.

Think innovation, not operation.

Back to our original stat: data engineers spend 55% of their time on maintenance tasks. A high-performing engineering team needs to spend more time on innovation and less on operational tasks. A data engineering team that focuses on maintenance and is not forward-looking can quickly become expensive. Data engineers are the ones who can enable the business to go faster and faster, by providing richer, more relevant, accurate, and reliable data products.

As data warehouses grow in size, they become labyrinths that few can find their way out of fast. Data lineage becomes a complicated problem, especially when there are more downstream than upstream dependencies. Finding your way around the labyrinth is not a skill anyone needs to master; it's unnecessary. The skill that is required is developing foresight.
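
To see why downstream dependencies make the labyrinth grow, here is a toy sketch (all table names are made up) that walks a lineage graph to find everything affected by a single upstream table:

```python
from collections import deque

# Toy lineage: each table maps to the tables that read from it.
LINEAGE = {
    "raw.orders": ["staging.orders"],
    "staging.orders": ["marts.revenue", "marts.customers"],
    "marts.revenue": ["dashboards.finance"],
    "marts.customers": [],
    "dashboards.finance": [],
}

def downstream(table: str, lineage: dict) -> set:
    """Breadth-first walk to find everything affected by `table`."""
    seen, queue = set(), deque([table])
    while queue:
        for child in lineage.get(queue.popleft(), []):
            if child not in seen:
                seen.add(child)
                queue.append(child)
    return seen

# A single delayed upstream table touches every mart and dashboard below it.
print(downstream("raw.orders", LINEAGE))
```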

You go faster when you know the shortcuts.

Every step of a data pipeline gets progressively more challenging to monitor. The path looks linear for the first few steps, but it can very quickly fan out into hundreds of different paths that you need to follow, just like the supply chain: one supplier delivers goods to a single warehouse, but the store may deliver to thousands of consumers.

Knowing what to expect.

Foresight is key. There is an expectation upon completion of each step of the process, and that expectation can be monitored. Validation tasks can report whether each expectation was met or failed. If one failed, you know which one and why. There is no second-guessing or backwards investigation; you know exactly where to look.
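
As a rough illustration, assuming a step hands you its output as rows, per-step expectations can be as simple as named predicates. Dedicated tools such as Great Expectations or dbt tests do this at scale, but the idea is the same:

```python
# Output of a hypothetical pipeline step, as a list of row dicts.
rows = [
    {"order_id": 1, "amount": 120.0},
    {"order_id": 2, "amount": 89.5},
]

# Each expectation is a named predicate over the step's output.
expectations = {
    "row count is non-zero": lambda rs: len(rs) > 0,
    "no null order_id": lambda rs: all(r["order_id"] is not None for r in rs),
    "amounts are positive": lambda rs: all(r["amount"] > 0 for r in rs),
}

for name, check in expectations.items():
    print(f"{'PASSED' if check(rows) else 'FAILED'}: {name}")
```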

Find the shortcuts.

Some steps may require more validations, or their expectations may be harder to define; in some cases, domain expertise is needed. Avoid overthinking the problem. You don't need to know every street of the city you live in. You want to know the popular ones and the best alternatives in case of congestion.
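
One way to put "know the popular streets" into practice is to tier your checks by criticality, so failures on the busiest tables page someone while the rest become tickets. The tables and tiers below are assumptions for illustration:

```python
# Assumed tiers: which tables are the "popular streets" worth paging for.
CHECK_TIERS = {
    "marts.revenue": "critical",       # feeds the finance dashboards
    "staging.orders": "critical",      # high fan-out upstream table
    "marts.experiments": "best_effort",
}

def on_validation_failure(table: str) -> None:
    """Route a failed validation by how critical the table is."""
    if CHECK_TIERS.get(table, "best_effort") == "critical":
        print(f"[PAGE] {table}: validation failed, act now")
    else:
        print(f"[TICKET] {table}: validation failed, triage later")

on_validation_failure("marts.revenue")      # -> page
on_validation_failure("marts.experiments")  # -> ticket
```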

Beat the traffic.

As a data engineer, one of the worst things that can happen is to start your day with a delayed critical pipeline that hundreds of subsequent tasks rely upon. It's inevitable; it will happen more often than we would like. But if you plan correctly, you can still reach your destination on time.
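
Planning correctly can start with something as small as a landing-deadline check. Here is a minimal sketch, assuming your orchestrator exposes the last successful run time of the pipeline:

```python
from datetime import datetime, timezone

def missed_deadline(last_success: datetime, now: datetime,
                    deadline_hour: int = 6) -> bool:
    """True if the deadline has passed and today's run hasn't landed."""
    deadline = now.replace(hour=deadline_hour, minute=0,
                           second=0, microsecond=0)
    return now >= deadline and last_success < deadline

# In practice, last_success would come from the orchestrator's metadata.
now = datetime(2023, 4, 16, 7, 30, tzinfo=timezone.utc)
last_success = datetime(2023, 4, 15, 5, 55, tzinfo=timezone.utc)  # yesterday
if missed_deadline(last_success, now):
    print("critical pipeline missed its 06:00 UTC landing; holding downstream tasks")
```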

Data is King, but Observability is Queen.

Data Observability requires time to implement and adopt across every single pipeline and task. It’s a valuable ally to every Data Engineer who strives to increase efficiency and reduce time to issue resolution.

Software Engineers use Application Performance Management (APM) systems to detect and diagnose complex application performance problems and maintain an expected level of service.

Data Observability is the APM for Data Engineers, and it's an area that is growing fast.

Be innovators, not operators.

Next article: Enhance Data Observability with Automated Workflows

In the next article, we will discuss how you can integrate the 4Rs framework into your processes to stay on top of failed validations.
