Markets, increasingly volatile due to globalisation, diversification and expansion, make ETL processes more necessary than ever. Why? The answer is simple: ETL processes are essential for businesses to make data-driven decisions instead of basing their strategies on assumptions or arbitrary choices. Applying an ETL process, in short, means gaining control over data.
ETL processes have grown in importance in the business world alongside Big Data. Companies today accumulate large amounts of data that are necessary to carry out business operations but which, if not managed well, can end up producing an overabundance of useless information or overloading the people in charge of analysing it. In addition, unintegrated and poorly managed data is a cost overhead for the business, while well-managed data becomes a business asset.
If you work in a modern company, you have probably heard of ETL, although perhaps you are still not clear about what exactly it is. ETL (Extract, Transform, Load) is, in short, a process of storing, processing and managing data. The process allows the transfer of data from multiple sources to a single place, whether that is a data warehouse, a data lake, or a relational or non-relational database.
The acronym stands for Extract, Transform and Load, which are basically the phases of the process.
It is common for people to confuse ETL and SSIS, thinking that they fulfil different functions or that they are similar but distinct processes. In reality, SSIS is a Microsoft SQL Server ETL tool that, in addition to the ETL process itself, supports functions such as merging, cleansing and aggregating data, and includes design tools, workflow functions and graphical tools. SSIS also simplifies moving data from one database to another and can extract data from a large number of sources: SQL Server databases, Excel files, Oracle and DB2 databases, and more. As a Microsoft Power BI partner, Bismart works with this tool.
While ETL refers to the process of extracting, transforming and loading data (Extract, Transform and Load), SSIS is a tool that does just that: extracting, transforming and loading data from multiple sources into a single database such as a data warehouse.
More precisely, what ETL actually does is read data from one or more source systems and convert it through lookup tables, by applying rules, or by combining existing data with other data. The process thus ensures that the data meets the requirements set by the company or by the client of the process before finally storing it in a data warehouse. In this way, the process guarantees the relevance, usefulness, accuracy, quality and accessibility of the data for its consumers.
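As an illustration of that lookup-and-rules step, here is a minimal Python sketch; the lookup table, field names and conversion rule are invented for the example:

```python
# Minimal sketch of a lookup-table transformation (hypothetical data and rules).
country_lookup = {"ES": "Spain", "FR": "France", "DE": "Germany"}

raw_rows = [
    {"customer": "Acme", "country_code": "ES", "revenue": "1200.50"},
    {"customer": "Globex", "country_code": "FR", "revenue": "980.00"},
]

def transform(row):
    """Apply a lookup and a simple business rule to one extracted record."""
    return {
        "customer": row["customer"],
        # Lookup table: replace the raw code with the value analysts expect.
        "country": country_lookup.get(row["country_code"], "Unknown"),
        # Rule: revenue must be numeric before it reaches the warehouse.
        "revenue": float(row["revenue"]),
    }

clean_rows = [transform(r) for r in raw_rows]
print(clean_rows[0])  # {'customer': 'Acme', 'country': 'Spain', 'revenue': 1200.5}
```

The same idea scales to any number of lookups or rules applied record by record.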
As we have already seen, SSIS refers to SQL Server Integration Services and its use extends to a large number of procedures related to the movement and migration of data assets, beyond the ETL process. SSIS is basically a data integration tool designed to solve business needs and workflow applications. Among its multiple systems, SSIS has a specific tool to perform ETL processes.
To achieve data integration and carry out the ETL process, SSIS follows these steps:
Beyond data migration, SSIS has graphical tools and can perform workflows such as FTP operations, send e-mails, etc.
It is clear, then, that talking about the differences between ETL and SSIS is not entirely accurate, since ETL is a process and SSIS is a Microsoft tool used to perform that process.
1. Extraction: Extraction is the first phase of an ETL process and consists of extracting the data from the systems and applications where it is located. In this phase, the data are converted into a single format and cleansed. In other words, all data are made compatible so that they can be interpreted as a whole, unnecessary data are eliminated and errors are corrected.
2. Transformation: In this stage the data is transformed to match the structure we have determined in our data warehouse. That is to say, in a data warehouse we can group the data by freely chosen concepts (according to our business areas, the departments of the company or the use we are going to give them, for example). In addition, in this phase the data is technically validated and normalised, and filters, crosses and aggregations are applied.
3. Load: As its name suggests, in this phase the transformed data are loaded into the data warehouse, where they are recorded permanently: the data warehouse is a non-volatile system that prevents the loss or alteration of the information. The data warehouse is updated as new data is added and is the best way to obtain a historical record of all the company's information.
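The three phases above can be sketched end to end in a few lines of Python, with SQLite standing in for the warehouse; the source records and schema are invented for the example:

```python
import sqlite3

# --- Extract: pull records from two hypothetical source systems ---
crm_rows = [("Acme", "es", 1200.5), ("Globex", "FR", 980.0)]
erp_rows = [("Initech", "DE", 455.25), ("Acme", "ES", 1200.5)]  # overlaps with CRM

# --- Transform: unify the format and eliminate duplicates ---
seen, transformed = set(), []
for name, country, revenue in crm_rows + erp_rows:
    key = (name, country.upper())    # single format: uppercase country codes
    if key in seen:                  # drop records already seen in another source
        continue
    seen.add(key)
    transformed.append((name, country.upper(), float(revenue)))

# --- Load: write the consolidated data into the warehouse (SQLite stands in) ---
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (customer TEXT, country TEXT, revenue REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?, ?)", transformed)
total = conn.execute("SELECT COUNT(*) FROM sales").fetchone()[0]
print(total)  # 3: the record duplicated across sources was loaded only once
```

Real ETL tools do the same conceptual work, only against many more sources and with far richer transformation rules.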
Carrying out an ETL process is not an easy task and, in fact, it can be very complex. Therefore, it is necessary to have the right knowledge to choose the right tool and type of processing depending on the amount of data available, the size and sector of the organisation, the operations to be performed, etc.
There are three possible ETL processing methods that are not mutually exclusive, i.e. they can be combined in a single ETL process.
An ETL process should meet a number of requirements in order to work properly:
Today, virtually every company requires an optimal data integration process such as the ETL process. This process establishes a business decision support system and makes data accessible from anywhere and at any time.
Furthermore, the great advantage of the ETL process is that it ensures that data is always clean, consolidated and ready for use. The repository where the data is stored after the process, usually a data warehouse, is automatically updated. The process also provides a historical record: it stores all historical data without the possibility of loss, allowing analysts to make full temporal comparisons, analyse different periods, discover trends over time and even predict future ones.
ETL maximises the value of our data assets, promotes data quality and assists decision makers in getting accurate answers to key business questions. In addition, by integrating a wide variety of data from multiple sources, ETL minimises production system processing, reducing the turnaround time required for analysis and reporting.
The primary purpose of an ETL process is to make data ready to guide the decision-making process. ETL builds and loads a data warehouse with consolidated, complete, reliable and useful data. Unnecessary, erroneous or ineffective data is not stored in the data warehouse, which is also a significant cost saving for the organisation.
In addition to cost savings, we can even use ETL to generate revenue from our data assets. How? Let's look at an example. Let's imagine that a hotel manager needs to collect data on average occupancy and room rates for each room in his hotel. Using the ETL process and other business intelligence tools, we can discover the aggregated revenue of each room and, for example, find statistics on the overall market share. By gathering this data, the hotel manager can gauge his position with respect to various markets in the sector, analyse the evolution of the trend over time and, consequently, decide to offer strategic discounts on his room rates.
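A minimal Python sketch of the hotel scenario, with invented nightly records, shows how occupancy and revenue per room can be aggregated:

```python
# Hypothetical nightly records for the hotel example: (room, occupied, rate paid).
bookings = [
    ("101", True, 80.0), ("101", True, 85.0), ("101", False, 0.0),
    ("102", True, 120.0), ("102", False, 0.0), ("102", False, 0.0),
]

# Aggregate per room: total nights, occupied nights and revenue.
stats = {}
for room, occupied, rate in bookings:
    nights, occ, revenue = stats.get(room, (0, 0, 0.0))
    stats[room] = (nights + 1, occ + occupied, revenue + rate)

for room, (nights, occ, revenue) in sorted(stats.items()):
    print(room, f"occupancy={occ / nights:.0%}", f"revenue={revenue:.2f}")
```

With these aggregates in hand, a business intelligence layer can compare each room against market benchmarks and suggest pricing adjustments.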
Today, there are a large number of tools specifically dedicated to implementing ETL processes, but obviously not all of them have the same capabilities or are equally efficient for all companies. To ensure that the ETL process operates effectively, it is essential to know how to choose the right ETL tool for your company.
First of all, when choosing an ETL tool, we must ensure that the tool meets all the requirements (explained above) that an ETL tool should have according to Gartner.
Once we are clear about the capabilities the tool should have, we can move on to considering the different types of ETL tools that exist. It is essential to bear in mind that the effectiveness of the tool will depend on the characteristics of the company, the amount of data it works with, and the uses it wants to make of that data. In other words, all types of ETL tools have their strengths; the key is to choose the type that best fits our organisation and the purpose of our data.
ETL tools can be classified into the following categories:
Once we are familiar with the different types of ETL tools, we can begin to assess the characteristics of each one and consider what to base our choice of the optimal tool on.
When choosing an ETL tool, it is essential to look at least at the following aspects:
The technology consultancy Gartner publishes a yearly report, the Gartner Magic Quadrant, in which the best technological tools are ranked according to different functionalities and areas of action. The quadrant classifies tools, platforms and APIs into four categories: Challengers, Leaders, Niche players and Visionaries.
Looking at the latest report on data integration tools published by Gartner in 2020, the leading ETL vendors according to the multinational include Informatica, IBM, SAP, Oracle, SAS, Microsoft Azure, Qlik, Talend and TIBCO. See the full Gartner Magic Quadrant for Data Integration Tools below:
Having an ETL process in place is a competitive advantage for any company, as it allows a company's employees and managers to access data quickly and easily, and makes the data manageable even for people who are not experts or who lack technical skills.
Furthermore, ETL is a basic element to ensure data integration, data management and data governance which, in turn, contribute to better strategic decision making.
Likewise, the ETL process is closely related to data quality, that is, to the quality analysis of our data, since when data goes through the process it is validated and cleansed, and errors are corrected. In other words, implementing an ETL process ensures that our data has the desired quality, which allows us to make better business decisions, avoid operational errors, reduce the cost of data repair and free the data management team from unnecessary tasks.
The following is an overview of the main advantages of implementing an ETL process in a company:
One of the newest trends related to ETL is its progressive development in the cloud: ETL Cloud Service. More and more companies are opting to carry out their ETL processes in cloud environments rather than on local servers. This is confirmed by an IDG study published in 2020, which shows that 81% of companies already have one or more applications and part of their infrastructure in the cloud. The report also highlights that of those companies, 92% have part of their IT environments in the cloud.
ETL's move to the cloud is no coincidence and is related to the evolution of the world towards the digital environment. As in other spheres of life, businesses today store the vast majority of their data assets, tools, services and software in the cloud.
As we explained in a previous post on the benefits of cloud integration, acquiring or developing a cloud-based ETL process offers advantages such as much faster installation and implementation. In addition, the cloud environment allows real-time streaming, favours integration and scalability, saves money and helps ensure the security of the company's data, since providers of cloud environments for ETL must constantly review, reinforce and renew their security systems.
The ETL process differs depending on the environment in which it takes place. In an ETL process carried out on a local server, data is extracted from a local data source and, after transformation, loaded into another local server or local data warehouse. This type of warehouse or server is usually physically located within the company's own office.
ETL Cloud Service or the ETL process carried out in the cloud performs exactly the same function as local ETL and involves the same steps. The only difference, then, is that both the source data warehouse and the data warehouse where the consolidated data is finally loaded are digitised and stored in the cloud.
This difference, however, conditions the process and implies that an on-premises ETL process and a cloud ETL process are carried out in somewhat different ways.
In the cloud environment, the process runs on shared computing clusters that operate as independent entities and are located in different parts of the world. Computing processes thus run in workspaces in cloud environments through systems such as, for example, Data Factory. These processes achieve higher levels of connectivity between data sources than local ETL and enable graphical management of the data flow through interfaces that connect the source systems with the destination data stores.
In addition to greater capabilities, speed and connectivity, the tools and systems involved in the cloud ETL process effectively address most of the limitations of the more traditional ETL process. The main problems the cloud solves are the high cost of data warehouses and physical servers, and the risk of losing all the information collected and consolidated whenever a technical failure occurs. The cloud environment also dispenses with tasks that a local ETL process requires: updates, bug fixes, maintenance, etc.
All of this makes speed the biggest benefit of running an ETL process in the cloud. Organizations that still run their ETL processes on local servers are at a disadvantage compared to companies that have moved to the cloud, as they will hardly be able to compete with the agility and dynamism of the cloud service.
Another important advantage of the cloud is that it allows for greater scalability. The process, as it is carried out without requiring any type of installation or hardware, allows companies to expand their resources when they need them, without the need for large investments or paying for material that they do not use. This type of automatic expansion also saves time and money, since in the cloud companies only pay for the processing capacity and space they use. In contrast, on a local server, it is extremely difficult to adapt the size and capacity of the server to the specific needs of the business at any given time. Business needs are constantly evolving and scalability is a requirement for any company that intends to focus on expansion and growth.
Ultimately, the most noticeable and appreciated benefit of cloud ETL is its superior speed. The cloud speeds up computing tasks and favors the optimal, agile development of business intelligence activities, whereas local environments are more likely to bog down as the volume of data organizations work with grows rapidly. Furthermore, the cloud environment connects to both local environments and other cloud-hosted services.
As we have already seen, as technology, digitization and the evolution of Internet capabilities have progressed, companies have been moving their ETL processes to the cloud environment. Along these lines, IDG announced in 2018 that 38% of organizations acknowledged that their IT departments were asking to move their entire IT infrastructure to the cloud.
As we have already seen, the reasons are manifold. The most prominent advantage of ETL Cloud Service is undoubtedly the increased speed. The computational tasks required by any ETL process are performed much faster in the cloud. In fact, on-premises servers, business intelligence processing and activities can be interrupted as a company's data assets expand and grow. Most on-premises servers have limited capacities and, when the time comes, they may no longer perform optimally. In the cloud, on the other hand, space can easily scale and organizations can increase capacity and processing when required. In addition, the cloud environment is far more flexible than the on-premises environment in that companies can pay only for what they use.
Today, the data warehouse lifecycle can be automated using state-of-the-art technology based on sophisticated design patterns and processes that automate data warehouse planning, modeling and data integration. Automation serves to avoid time-consuming tasks such as ETL code generation in a database.
The cloud-based ETL process can be applied to automate data warehousing. A data warehouse is a type of database specifically designed to facilitate the storage, filtering and analysis of large amounts of data and to allow simultaneous and cross-referenced querying and analysis of data, without the need to combine and consolidate information from multiple data sources.
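The kind of simultaneous, cross-referenced querying a data warehouse enables can be illustrated with a simple fact/dimension join; SQLite stands in for the warehouse and the tables are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# A toy star schema: one dimension table and one fact table.
conn.executescript("""
    CREATE TABLE dim_customer (id INTEGER, name TEXT, country TEXT);
    CREATE TABLE fact_sales (customer_id INTEGER, amount REAL);
    INSERT INTO dim_customer VALUES (1, 'Acme', 'ES'), (2, 'Globex', 'FR');
    INSERT INTO fact_sales VALUES (1, 100.0), (1, 250.0), (2, 75.5);
""")
# Cross-referenced query: facts and dimensions analysed together in one pass.
rows = conn.execute("""
    SELECT c.country, SUM(s.amount)
    FROM fact_sales s JOIN dim_customer c ON c.id = s.customer_id
    GROUP BY c.country ORDER BY c.country
""").fetchall()
print(rows)  # [('ES', 350.0), ('FR', 75.5)]
```

A production warehouse runs the same kind of query over many fact and dimension tables at once, which is precisely what it is optimised for.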
In a traditional data warehouse, data goes through three phases:
If we use data warehouse automation software to perform this process, we can aggregate and move the disparate data directly from the source systems into a single data warehouse. In addition, automation does not require code. The software automates the ETL code deployment and batch execution of the warehousing process and offers a seamless approach based on agile methodologies.
The automation software performs a wide variety of functionalities, including:
Performing an ETL process enables you to maximize the value of your data warehouse. A data warehouse simply acts as the place where data is stored. Business intelligence tools, on the other hand, serve to perform analysis on the data once it has been transformed. ETL is the intermediate process that prepares all data, wherever it comes from and in whatever format it is in, so that it can be analyzed and used.
In this sense, it is essential to understand ETL as a process linked to the acquisition of a data warehouse. Without the ETL process, the data warehouse does not allow the value of the data to be exploited.
Along the same lines, if our data warehouse is in a cloud environment, it is advisable to opt for a cloud ETL process.
The emergence of Big Data has brought about a significant transformation in the way data is managed and stored, leading to new demands on traditional data warehousing processing. As new requirements have emerged in terms of volume and velocity, ETL processes have evolved into a different perspective known as ELT. This new perspective has emerged in response to today's demands for volume, velocity and veracity in data integration and storage, and has changed the usual order of the ETL process.
The term "Big Data" first emerged in the late 1990s to describe the growing problem faced by organizations in relation to the amount of data generated. In 1997, a group of NASA researchers published a paper highlighting that the increase in data was becoming a challenge for existing IT systems. This situation spurred technological advancement towards platforms capable of handling massive data sets. In 2001, the US firm Gartner published research entitled "3D Data Management: Controlling Data Volume, Velocity and Variety", which first mentioned the "3Vs" that Big Data technologies needed to address: volume, velocity and variety.
Big Data posed new challenges for the ETL process. The increase in volume, velocity and variety demanded by Big Data challenged the capacity of ETL tools, which often could not handle the pace required to process massive data sets, resulting in a lack of capacity and velocity, as well as cost overruns.
The emergence of new data formats and sources, along with data consolidation requirements, revealed the rigidity of the ETL process and changed the traditional way of consuming data. The demand for greater velocity and variety led data consumers to need immediate access to raw data, rather than waiting for IT to transform it and make it accessible.
In addition, Big Data drove the creation of data lakes: repositories that, unlike traditional data warehouses, do not require a predefined schema and therefore allow more flexible storage.
ETL tools, which were built primarily with IT management in mind, are often complicated to install, configure and manage. Moreover, these tools conceive of data transformation as a task for IT professionals only, making it difficult for data consumers to access. According to this logic, consumers can only access the final product stored in a standardized data warehouse.
In this context, innovation emerged, reshaping the process and making it more suitable for working with Big Data and cloud services. ELT provides greater flexibility, scalability, performance and speed, while reducing costs.
However, ELT also presents its own challenges. Unlike ETL, ELT tools are designed to facilitate data access to end consumers, which democratizes data access and allows users to obtain data from any data source via a URL. However, this can pose risks to data governance.
Although ELT has improved both extraction (E) and loading (L) of data, challenges remain in terms of transformation (T). Today, data analytics plays a critical role in business. Despite ELT's efforts, data transformation-based analytics has not been simplified and remains the purview of the IT department, especially engineers and data scientists. Transforming raw data into consumer-ready assets still requires multiple tools and complex processes that data consumers do not have the capacity to address.
In addition, the various processes and tools required for data transformation still present the same problems as ETL, such as the speed of the process, the amount of resources required, cost and lack of scalability.
Has the problem been solved? For ELT to definitively replace ETL, ELT tools would have to evolve. In terms of this evolution, it is expected that in the near future these tools will include data governance capabilities and will gradually address the remaining difficulties.
Both ELT (Extract, Load, Transform) and ETL (Extract, Transform, Load) are processes used to move raw data from a source system to a target database, such as a data lake or data warehouse. These data sources may reside in multiple repositories or legacy systems, and are then transferred to the target data warehouse via ELT or ETL.
In a data processing approach known as ELT (Extract, Load, Transform), unstructured data is extracted from a source system and loaded directly into a data lake for further transformation. Unlike the traditional ETL (Extract, Transform, Load) approach, the data is immediately available to business intelligence systems without requiring prior preparation. This allows analysts and data scientists to perform ad-hoc transformations as needed.
The ELT approach is especially useful for performing basic transformations on data, such as validation or de-duplication. These processes are updated in real time and are applied to large volumes of data in their original state.
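In an ELT setting, those basic transformations run inside the target store itself. A minimal sketch with SQLite standing in for the warehouse (table and column names are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Load: raw data lands in the warehouse exactly as extracted.
conn.execute("CREATE TABLE raw_events (user_id TEXT, email TEXT)")
conn.executemany(
    "INSERT INTO raw_events VALUES (?, ?)",
    [("1", "a@x.com"), ("1", "a@x.com"), ("2", None), ("3", "c@x.com")],
)

# Transform: validation and de-duplication happen in-warehouse, via SQL.
conn.execute("""
    CREATE TABLE events AS
    SELECT DISTINCT user_id, email
    FROM raw_events
    WHERE email IS NOT NULL   -- validation: drop incomplete records
""")
print(conn.execute("SELECT COUNT(*) FROM events").fetchone()[0])  # 2
```

In a real warehouse the pattern is the same, only expressed in the platform's own SQL dialect and run at much larger scale.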
However, the ELT approach is relatively new and its technical development has not yet reached the same level of advancement as the ETL approach. Initially, the ELT process relied on hard-coded SQL scripts, which increased the risk of coding errors compared to the more advanced methods used in ETL.
In an ETL process, data is extracted from its source and undergoes transformations to prepare it before it is loaded into the target systems.
In a traditional ETL scenario, unstructured data is extracted and loaded into a staging area, where it undergoes a transformation process. During this stage, the data is organized, cleansed and transformed into structured data. This transformation process ensures that the now structured data is compatible with the target data warehousing system, usually a data warehouse.
Although ETL tools and the ETL process continue to dominate data transfer and integration in the business world, the process has recently been approached from another perspective: ELT. While some companies are already opting for ELT instead of ETL, ELT does not have to be a substitute for ETL; the two processes can complement each other. In fact, some experts already talk about an ETLT process that combines both perspectives, creating the following pipeline: Extract, Transform, Load and Transform.
The main difference between the two is the order of the steps. In ETL the order is Extract, Transform and Load; in ELT it is Extract, Load and Transform. In other words, in an ELT process data are extracted from the source, loaded into a single place (usually a data lake) and, once the data are integrated in the same repository, the transformations are carried out: normalization, filtering, merging, validation, aggregations, etc. In an ETL process, on the other hand, data is extracted and the transformations are performed before the data is loaded into the data warehouse.
In practice, however, it is common for the various steps of these processes to occur in parallel. That is, in ETL processes, companies usually perform data extraction, transformation and loading at the same time to save time. The main difference between ELT and ETL, then, is not so much the order of the process as the location where the transformations are performed. In the case of ETL, the transformations take place in a temporary staging store, where the data is held after being extracted and before being loaded into the final data warehouse; this staging store is powered by a specialized processing engine that performs the transformations. In the case of ELT, the transformations happen in the back-end data warehouse itself, which is capable of performing them without a specialized engine. In this sense, ELT has a simpler architecture than ETL: when data is extracted from the source, it goes directly into the staging area of the target data warehouse, where transformations are performed on the raw data. Once transformed, the data is copied to another area of the data warehouse.
For the ELT process to be productive, it is essential to have the necessary computing and processing capabilities to be able to perform data transformations. In practice, this translates into the acquisition and use of tools such as Azure Databricks, Azure Data Lake or Azure Synapse Analytics. On the other hand, this order of processing requires an environment that can be scalable and that allows space and capacity to be increased when necessary. Environments such as Azure are ideal for this type of process, as Azure tools are pay-as-you-go. That is, companies pay for the space they use and can increase the amount of space whenever they want.
As we have seen, while ETL Cloud offers greater speed than on-premises ETL, ELT increases that speed even more, this being precisely its biggest plus point. ELT speeds up data ingestion by avoiding the heaviest pipeline operations and data copying, an essential step in an ETL process.
Below, we explore the main differences between ETL and ELT:
The main advantages of ELT over ETL, and the reason why many companies are betting on this new trend, are undoubtedly its greater speed and flexibility. ELT has a much higher data ingestion speed than ETL because it omits the data copying that occurs in ETL and its pipeline avoids other arduous operations.
The other great strength of ELT is the flexibility it gives data analysts, who can load the data without having to define beforehand what they are going to do with it and can perform the transformations they want at the moment they need them. For the same reason, ELT allows data analysts or data scientists to load data without first determining its structure. ETL, on the other hand, is a more rigid process that requires analysts to define the structure and usage of the data before it is loaded, and also makes it difficult to retrieve the original data.
Although ELT brings benefits that ETL does not, the reverse is also true. ETL, for example, is a more suitable process for working with structured data and can enhance data security, quality and governance.
In any case, both perspectives have their pros and cons and will be more or less optimal depending on the characteristics of each company and its data assets, as well as the intended use of the data. Even the parallel use of both could mean an increase in value for the company.
The ETL process involves a number of additional steps compared to ELT, which makes it a bit slower. These steps include loading the data into a staging area to perform the necessary transformations. However, in exchange for this increased complexity, ETL offers a more secure process that results in cleaner data and less likelihood of coding errors.
One of the key advantages of ETL is its ability to perform transformations on the data before loading it into the target warehouse. This provides an additional layer of security, as the transformed data is loaded more reliably and ensures data integrity. On the other hand, ELT tools are designed to allow more direct access to data by end users, which implies a higher security risk and makes it more challenging to ensure data governance.
In addition, the ETL approach offers specific benefits in terms of compliance with data privacy regulations, such as the GDPR (General Data Protection Regulation). In an ELT process, sensitive data may be more exposed to risks of theft or hacking.
In terms of advanced capabilities, ETL tools have evolved significantly due to their longer time on the market. These include comprehensive data flow automation functions, rule recommendations for the extraction, transformation and loading process, a visual interface for specifying rules and data flows, support for complex data management, as well as additional security and privacy compliance measures. These features make ETL tools a more robust and established choice.
Both ETL and ELT are valid options to carry out the data extraction, transformation and loading process. Their validity and prevalence over the other option will always be linked to the specific characteristics and needs of each company, as well as the nature and amount of data assets.
If your company already has an ETL processing system that works effectively, there is no reason to give it up and switch to an ELT process. However, if your company plans to acquire more data warehouses (especially cloud warehouses) in the future, it may be worthwhile to go with ELT.
Always keep in mind that ELT is not a system and does not depend on a specific tool. It is an architecture and, therefore, if you have an ETL tool such as SSIS, you can easily integrate it into the ELT process.
The ETLT process, which is nothing more than the marriage of ETL and ELT, should also not be discounted. ETLT can maximize the value of both processes by leveraging the best of each. In an ETLT process, data is extracted from its source, lightly transformed and stored in a data lake; the data is then extracted from the data lake, transformed more thoroughly and stored in a data warehouse.
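A rough sketch of that ETLT pipeline in Python, with invented records and rules, might look like this:

```python
# ETLT sketch: Extract -> light Transform -> Load (data lake) -> Transform (warehouse).
# All data, names and rules here are invented for illustration.
source = [
    {"id": "1", "amount": " 100.0 "},
    {"id": "2", "amount": "abc"},     # invalid record
    {"id": "1", "amount": " 100.0 "}, # duplicate record
]

# T1: light, source-side transformation (cleanup only) before loading the lake.
lake = [{"id": r["id"], "amount": r["amount"].strip()} for r in source]

# T2: heavier, warehouse-side transformation: validate, de-duplicate, type-cast.
seen, warehouse = set(), []
for r in lake:
    try:
        amount = float(r["amount"])
    except ValueError:
        continue                  # invalid records never reach the warehouse
    if r["id"] in seen:
        continue                  # de-duplicate on the record identifier
    seen.add(r["id"])
    warehouse.append({"id": int(r["id"]), "amount": amount})

print(warehouse)  # [{'id': 1, 'amount': 100.0}]
```

The split lets the cheap cleanup happen early while the expensive, analytical transformations run where the processing power is.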
There is no doubt that the new ETL perspective, ELT, is here to stay. However, one should not think that ELT will replace ETL.
When is it better to perform the transformations, before or after loading?
The choice between ETL and ELT depends on several factors and no definitive answer can be given as to which is better in all cases. The choice of the most appropriate approach should be based on the specific needs and requirements of each project.
In general, ETL has traditionally been used when a thorough transformation of the data is required before loading it into the target warehouse. This approach is preferable when data quality and consistency needs to be ensured prior to analysis. By performing transformations before loading, validation rules can be applied, erroneous or duplicate data can be cleaned, and data can be structured according to a predefined schema. In addition, ETL is often more secure in terms of data privacy and data governance compliance, since transformations are performed before the data reaches the warehouse.
On the other hand, ELT has become more popular in the context of Big Data and the need to process large volumes of data efficiently. ELT enables fast loading of raw data into a data lake or unstructured data warehouse, which speeds up the ingestion process. Transformations are performed later, when the data is available in the warehouse, allowing ad-hoc analysis and greater flexibility to explore the data without prior constraints. ELT is especially useful when high processing speed is required and agility in data exploration and discovery is prioritized.
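The ELT pattern of landing raw data fast and transforming it at query time can be sketched with SQLite standing in for a cloud warehouse. The table name, the event payloads and the `ms > 500` filter are invented for illustration; the sketch assumes a SQLite build with the JSON1 functions, which recent Python versions include by default.

```python
import json
import sqlite3

# ELT sketch: land raw JSON quickly, transform later inside the "warehouse".
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE raw_events (payload TEXT)")  # schema-on-read landing zone

# L: ingest raw records with no upfront transformation.
events = [{"user": "a", "ms": 1200}, {"user": "b", "ms": 300}]
con.executemany("INSERT INTO raw_events VALUES (?)",
                [(json.dumps(e),) for e in events])

# T: ad-hoc transformation applied only when the data is queried.
rows = con.execute(
    "SELECT json_extract(payload, '$.user') AS user "
    "FROM raw_events WHERE json_extract(payload, '$.ms') > 500"
).fetchall()
```

Because the raw payloads stay in the landing table, a new analysis question only requires a new query, not a re-ingestion.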
It is important to carefully assess the needs of the project, considering factors such as the volume and variety of the data, the complexity of the required transformations, security and compliance, as well as the resources and skills available in the team. In many cases, it may be beneficial to combine elements of both approaches, using ETL for initial data cleansing and structuring, and then leveraging ELT for subsequent analysis and exploration. The choice will depend on the particular circumstances and the balance between the quality, speed and flexibility requirements of the project.
The ETL process is efficient in scenarios involving small to medium-sized data sets that require complex transformations. However, its efficiency decreases as data sets grow, since transformation and aggregation operations become increasingly costly.
What types of companies can benefit from using ETL?
Organizations that need to integrate and synchronize data from multiple sources find ETL a suitable solution. It is especially useful for companies that have multiple data repositories in different formats, as ETL allows them to unify the format of the data before loading it into the target location.
Also, organizations that need to migrate and update data from legacy systems benefit from the ETL process. Legacy systems often require transformations to adapt the data to the new structure of the target database, and ETL provides the necessary tools to perform this transformation efficiently.
ELT is a solution specially designed for the efficient management and integration of large amounts of data, whether structured or unstructured. It is also particularly suitable for environments that require fast, real-time access to data.
Certain types of companies stand to benefit particularly from choosing ELT.
In summary, while ETL is better suited to small to medium-sized data sets that require complex transformations, ELT is the more appropriate choice when it comes to dealing with large volumes of data, both structured and unstructured. In addition, ELT is especially useful in environments that demand the use of real-time data and offers faster process execution.
Is your ETL compliant with GDPR?
The General Data Protection Regulation (GDPR) is the European Union's regulatory framework on data protection, which came into force in May 2018 after a two-year transition period. The legislation was created to unify the data protection laws of all European Union countries and to strengthen citizens' control over their personal data and information.
The new law required all companies and organizations handling EU citizens' data to review and adapt their data protection policies. Even so, according to a Gartner study, the vast majority of companies had still not implemented the necessary measures to comply with the law just months before the final implementation deadline.
In this respect, most companies working with ETL processes faced a problem, since most ETL systems were not GDPR compliant. This forced companies to turn to new systems such as Master Data Management / Enterprise Information Integration (MDM/EII). MDM/EII is a data integration and interoperability technology that enables querying data in multiple formats and from different data sources, and it is specifically designed to comply with GDPR. By streamlining the transfer of information between systems, it ensures that, at every step of the process, data collection and integration are carried out in compliance with the requirements of European legislation, and it guarantees that the consolidated data is accurate, consistent and congruent with the purpose for which it is transferred and used.
Information-oriented integration does not require modifications to the systems involved, only the implementation of an information exchange mechanism between the data repositories of the respective applications. It is the simplest and least disruptive form of integration compared with the other types: process-oriented or service-oriented integration.
Likewise, two years after the GDPR came into force, companies' compliance with personal data protection rules was still in question. In fact, Forbes had already reported in 2017 that more than 50% of the companies surveyed were not prepared for the new regulatory measures. The forecast was confirmed by the law firm DLA, which reported in January 2021 that GDPR fines in 2020 had increased by 40% compared with the previous 20 months. This may mean either that companies are increasingly failing to comply with the regulation, which would be surprising given the consequences, or that the European Union is progressively strengthening sanctions to force compliance with the new privacy and data protection policies.
This scenario leads us to reflect on the role of data quality and data governance in the application of regulatory measures. Compliance with the GDPR by organizations involves ensuring data quality and creating data governance measures to control, validate and consolidate data and, in particular, to be clear about what data the company stores, where all the data assets are, what processes they go through, whether they are secure, whether they are reliable and what their purpose is.
In this sense, then, the problem of non-compliance on the part of companies does not lie with ETL systems. While it is true that many were not prepared at the time, the operation of ETL tools is safe if their use is complemented by data quality and data governance policies, which are essential for compliance with the law.
The end of cookies in Google Chrome: One more step in data protection
In 2019, Google made public its intention to phase out third-party cookies in Chrome and block other covert tracking techniques, such as fingerprinting. Justin Schuh, Chrome's director of engineering, explained that this was Google's strategy to redesign web standards and ensure default privacy. In early March 2021, Google confirmed its commitment to completely eliminate third-party cookies in Chrome by 2022. The company plans to implement privacy protection measures through the Privacy Sandbox plan. Some of these new rules are already being tested in Chrome 89, with the expectation of offering them to Google Ads customers starting in April.
While the initiative has been well received by the general public, it has raised concerns among businesses and advertisers about the future of digital advertising and marketing in a cookie-less world.
👉 To help businesses adapt, Bismart has developed a practical guide that provides measures and precautions to be taken in this new scenario. You can download the "Guide to Surviving in a Cookie-Free World" below.