5 Data Tasks That Keep Data Engineers Awake at Night
When it comes to integrating and managing data, there are quite a few tasks that are downright tedious. Data engineering is a tough job, but somebody's gotta do it!
As a data engineer, I enjoy my work. But when it comes to integrating and managing data, there are quite a few tasks that are downright tedious. From data remodeling to juggling a hodgepodge of software to address disaster recovery, sometimes we are simply drowning. As they say, it's a tough job, but somebody's gotta do it!
1. Data Remodeling
When you start building a solution from scratch, you are dealing with a clean slate. The infrastructure is built, all the data has been modeled perfectly in an almost artistic manner, and it is now flowing in an elegant and clean fashion.
However, the problems start soon, basically the minute those damn analysts and data scientists get their hands on your magnificent creation. They keep asking for data augmentation, integration of new data sources, and different data models for asking different kinds of questions. It all falls on your shoulders, and you feel as if you are somehow personally at fault for blocking your company's data-driven value creation, while those analysts hold no appreciation for the remarkable work of art you've created.
Data modeling progresses from a conceptual model to a logical model to a physical schema, establishing a robust world of data objects and their relationships to one another. Mainly, it gives analysts the ability to ask the questions they need in a fast, cost-effective, and simple manner. It is precisely this robust nature, combined with the frequency of changes needed, that creates the never-ending cycle of integration, data preparation, and modeling.
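To make this concrete, here is a minimal, hypothetical sketch of how a single analyst request ripples through a physical schema. Suppose the analysts now want orders broken down by marketing channel; the table and column names below are invented purely for illustration, written in Redshift-flavored SQL:

    -- The fact table that was modeled "perfectly" on day one
    CREATE TABLE fact_orders (
        order_id    BIGINT        NOT NULL,
        customer_id BIGINT        NOT NULL,
        order_ts    TIMESTAMP     NOT NULL,
        amount      DECIMAL(12,2)
    );

    -- The new request means a new dimension, a new column, and a backfill
    CREATE TABLE dim_channel (
        channel_id   INT         NOT NULL,
        channel_name VARCHAR(64) NOT NULL
    );

    ALTER TABLE fact_orders ADD COLUMN channel_id INT;

    -- Backfill from a hypothetical staging table fed by the new source
    UPDATE fact_orders
    SET    channel_id = s.channel_id
    FROM   stg_ad_clicks s
    WHERE  fact_orders.order_id = s.order_id;

And that is the easy case; every request like this also means re-testing the downstream queries that touch fact_orders.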
2. Scaling Clusters
Practical clustering algorithms require multiple data scans to achieve convergence. For large databases, these scans become prohibitively expensive, cumbersome, and time-consuming. They also stretch physical infrastructure capacity, creating risk for under-provisioned resources.
Scaling is even more problematic with an RDBMS, given that its architecture runs on a single server. In practical terms, this means that when an RDBMS needs to scale, you must buy bigger, more complex, and more expensive proprietary hardware with more processing power, memory, and storage capacity. All of this involves lengthy downtime and reconfiguration to make the change.
Scaling clusters in cloud infrastructure generally solves the scalability issue. BigQuery's most significant advantage, for instance, is the seamless and fast resizing of a cluster, up to petabyte scale. Unlike Redshift, there is no need to constantly track and analyze the cluster's size and growth in an effort to keep it matched to the current dataset requirements.
Redshift's scalability lets users enjoy a performance increase as resources, including memory and I/O capacity, increase. To get the most for their money, Panoply seamlessly scales Redshift users' cloud footprints according to the amount of data and the number and complexity of queries. It scales the cluster on demand, ensuring that data warehouse performance is well balanced with costs.
3. Backup and Disaster Recovery
Traditionally, businesses have cobbled together multiple software solutions to address disaster recovery (DR), backup, and archival as part of their larger data protection practices. This approach is mind-bogglingly inefficient for the data engineer tasked with managing each of those solutions, and it is also relatively expensive. A different way of dealing with DR is to leverage cloud architecture for secondary workflows such as backup, archival, and disaster recovery.
BigQuery automatically replicates data to ensure its availability and durability. However, complete loss of data due to a disaster is less common than the need for fast, targeted restoration of, say, a specific table or even a specific record. For both purposes, Redshift automatically stores backups to S3 and enables you to revisit data at any point in time over the last ninety days. In all cases, retrieval involves a series of actions that can turn an "instant" recovery into a cumbersome, lengthy operation.
Since Panoply is powered by Redshift, backup to S3 is a given, but Panoply takes it one step further. Leveraging Panoply's revision history tables, users keep track of every change to any of their database rows right within their data warehouse, making that history immediately available to analysts through simple SQL queries. When the need arises to travel back to any point in time and quickly see how the data has changed, this makes uploading files to S3 and extracting them back into the database redundant.
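To show why query-level history is handy here, the sketch below assumes a hypothetical revision history table called orders_history, with valid_from and valid_to columns bounding each version of a row; the real table and column names in any given warehouse will differ:

    -- What did this customer's orders look like at the end of last quarter?
    -- (orders_history, valid_from, and valid_to are hypothetical names.)
    SELECT order_id, amount, status
    FROM   orders_history
    WHERE  customer_id = 1042
      AND  valid_from <= '2019-03-31 23:59:59'
      AND  (valid_to IS NULL OR valid_to > '2019-03-31 23:59:59');

Compare that with pulling a snapshot out of S3, restoring it into a side cluster, and diffing tables by hand.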
4. ETL and Data Prep
Loading your data into the analytics architecture in a clean and logical way is not only the first step in data management but also one of its most important stages. The loading methodology affects both the scalability of the process and the richness of the data loaded, and in most cases, parts of the raw data are simply lost.
When loading data, we usually plan to load only the "important" data, meaning the data we actually want to analyze. The problem is that in most cases (most cases being all of them), you never know on day one which data is going to be important for analysis in the future, so long-deliberated experiments and tests of how and which data to load become worthless the minute analysis begins.
Essentially, the ETL and data warehousing layers must act as a screening protocol that ultimately produces clean data sets that are easy for analytics programs to scrutinize.
Very frequently, you end up running supporting software on large numbers of servers so that they can warehouse data from multiple sources, including different OLTP systems.
Due to the wide range of possible data inconsistencies and the sheer data volume, data cleaning is considered one of the biggest challenges in data warehousing. A number of tools of varying functionality are available to support ETL tasks. Even with these tools, a significant portion of the cleaning and transformation work has to be done manually or by low-level programs.
Snowflake supports simultaneous user queries through different virtual warehouses. You can run your overnight ETL processes on slower, less expensive warehouse resources, and then enable real-time ad-hoc queries through a more powerful warehouse during business hours. However, next to Redshift's scale and operational efficiency, ETL itself can start to look like a rigid and outdated paradigm.
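To make the Snowflake pattern concrete, a rough sketch of the cheap-warehouse-for-ETL, big-warehouse-for-analysts split might look like the following; the warehouse names, sizes, and stage are all hypothetical examples:

    -- A small, auto-suspending warehouse for overnight batch loads
    CREATE WAREHOUSE IF NOT EXISTS etl_wh
      WITH WAREHOUSE_SIZE = 'XSMALL' AUTO_SUSPEND = 300 AUTO_RESUME = TRUE;

    -- A larger warehouse reserved for ad-hoc analyst queries during business hours
    CREATE WAREHOUSE IF NOT EXISTS analytics_wh
      WITH WAREHOUSE_SIZE = 'LARGE' AUTO_SUSPEND = 600 AUTO_RESUME = TRUE;

    -- The overnight job runs in its own context and never competes with analysts
    USE WAREHOUSE etl_wh;
    COPY INTO raw_events FROM @events_stage FILE_FORMAT = (TYPE = 'JSON');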
Leveraging Redshift's scalability with Panoply, you can forget about overnight ETL processes. Panoply follows the ELT process, whereby all of the raw data is instantly available in real time and all transformations happen asynchronously at query time. This makes Panoply both a data lake and a data warehouse, giving users constant, real-time access to their raw data. It means they can iterate on their transformations in real time, with updates instantly applied to newly inserted data as well. Finally, customized, advanced transformations are also possible via the Panoply UI console and take just minutes to set up and run.
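As a rough sketch of the ELT idea in general (not Panoply's actual implementation), the raw data lands untouched and the transformation lives as a view evaluated at query time, so it automatically covers newly inserted rows too. The names below are hypothetical, in Redshift-flavored SQL:

    -- Raw events are loaded as-is; nothing is filtered out at load time
    CREATE TABLE raw_events (
        event_id   BIGINT,
        user_id    BIGINT,
        event_type VARCHAR(64),
        payload    VARCHAR(65535),
        created_at TIMESTAMP
    );

    -- The "transformation" is just a view, evaluated at query time,
    -- so redefining it never requires reloading the raw data
    CREATE VIEW purchases AS
    SELECT event_id,
           user_id,
           created_at,
           CAST(json_extract_path_text(payload, 'amount') AS DECIMAL(12,2)) AS amount
    FROM   raw_events
    WHERE  event_type = 'purchase';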
5. Performance Troubleshooting
A bottomless pit, an abyss, the void... should I keep going? If the system is slow, just about anything could be the problem. It could be poor data modeling, improper indexing, network issues, storage hardware, basically anything.
Then, once you begin (especially in the case of an issue with a private cloud or an unmanaged solution), you clear one bottleneck only to bump into another one. All of this makes troubleshooting the data warehouse a bit intimidating. Sometimes, even figuring out where to start looking can be difficult.
More often than not, and more frustrating than anything (at least in my personal experience), the trouble lies in the type of queries being run by irresponsible analysts, like a SELECT * over an enormous table. I try to raise knowledge and awareness of these issues as part of my job. Other frequent issues I see concern the way data is structured in the warehouse; for instance, when data is modeled with many subtables, analysis of even a small data set can be lengthy.
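For illustration, here is the difference between the query that ruins everyone's afternoon and a better-behaved version, run against a hypothetical events table in a columnar warehouse such as Redshift:

    -- The query that pages the on-call engineer: every column, every row
    SELECT * FROM events;

    -- A friendlier version: column pruning plus a time filter lets a
    -- columnar store read only what it actually needs
    SELECT user_id, event_type, created_at
    FROM   events
    WHERE  created_at >= DATEADD(day, -7, GETDATE());

    -- When in doubt, EXPLAIN shows how much work the planner expects to do
    EXPLAIN
    SELECT user_id, event_type, created_at
    FROM   events
    WHERE  created_at >= DATEADD(day, -7, GETDATE());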