by Sarah Mason
Analytics is the systematic computational analysis of data or statistics, whether raw data or values derived from it. Data ingestion and data processing are distinct terms, and both are essential to supporting ETL implementations. Extract, Transform, Load (ETL) is a structured way to handle data and build tailored solutions for analysis: it refers to selecting data or statistics, performing computation on them, and transferring the data and results to an end storage location for use.
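The three ETL steps can be sketched in a few lines of Python. This is a minimal illustration with made-up inventory data, not a production pipeline: records are extracted from a CSV source, transformed by computing a derived revenue field, and loaded into a list standing in for the destination table or repository.

```python
import csv
import io

# Hypothetical raw source data for the sketch.
RAW_CSV = """sku,units_sold,unit_price
A100,3,2.50
B200,5,4.00
"""

def extract(source: str) -> list[dict]:
    """Extract: read raw records from the source."""
    return list(csv.DictReader(io.StringIO(source)))

def transform(rows: list[dict]) -> list[dict]:
    """Transform: compute a derived value (revenue) for each record."""
    return [
        {"sku": r["sku"], "revenue": int(r["units_sold"]) * float(r["unit_price"])}
        for r in rows
    ]

def load(rows: list[dict], destination: list) -> None:
    """Load: append the processed records to the destination store."""
    destination.extend(rows)

warehouse: list[dict] = []
load(transform(extract(RAW_CSV)), warehouse)
print(warehouse)
```

Once loaded, the results live in the destination, which becomes the place analysis reads from rather than the original source.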
Data ingestion is the process of transporting data from various sources to storage, and it can be done in one of two ways: batch or streaming. The process that loads data records from one or more sources into a table or repository is extract, transform, and load. This flow can handle volume either continuously or in discrete amounts, and it can involve multiple steps within the data architecture to support the computation that transforms data. Batch ingestion handles discrete data; streaming ingestion handles continuous data. For example, an inventory count is discrete data, while the amount of time it takes to sell that inventory is continuous data.
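The batch-versus-streaming distinction can be sketched as follows, under the assumption that a Python list stands in for the storage layer and a generator simulates records arriving over time; the function and field names are illustrative.

```python
from typing import Iterable, Iterator

def ingest_batch(records: list[dict], store: list) -> int:
    """Batch ingestion: load a discrete set of records in one pass."""
    store.extend(records)
    return len(records)

def ingest_stream(source: Iterable[dict], store: list) -> Iterator[int]:
    """Streaming ingestion: load records continuously as each arrives."""
    for record in source:
        store.append(record)
        yield len(store)  # running count after each arrival

inventory = [{"item": "widget", "qty": 10}, {"item": "gadget", "qty": 4}]
store: list[dict] = []

loaded = ingest_batch(inventory, store)              # one discrete load
for count in ingest_stream(iter(inventory), store):  # continuous loads
    pass
print(loaded, len(store))
```

Either mode ends with the same records in storage; the difference is whether they arrive as one discrete chunk or as a continuous flow.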
ETL is the general procedure of copying data from one or more sources into a destination system, which then becomes the new focus for analysis or for access to the retained information. The data can be of different types: discrete or continuous, quantitative or qualitative. Processing data creates standardization, so a file system can be created and maintained to store and locate information. The result is an organized file system with an index for looking up facts easily.
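The standardization and indexing idea can be sketched like this: raw records arriving in inconsistent shapes are normalized into one schema, and an index then maps a key to each stored record so facts can be looked up directly. The field names and schema here are assumptions for illustration, not a specific system.

```python
# Raw records with inconsistent field names, casing, and types.
raw_records = [
    {"SKU": "A100", "Qty": "3"},     # quantity stored as text
    {"sku": "b200", "quantity": 5},  # different field names and casing
]

def standardize(record: dict) -> dict:
    """Normalize field names, casing, and types into one schema."""
    sku = (record.get("SKU") or record.get("sku")).upper()
    qty = int(record.get("Qty") or record.get("quantity"))
    return {"sku": sku, "qty": qty}

# The organized "file system": uniform records plus an index for lookup.
filesystem = [standardize(r) for r in raw_records]
index = {rec["sku"]: i for i, rec in enumerate(filesystem)}

print(filesystem[index["B200"]])  # direct lookup by key via the index
```

Because every record now follows the same schema, a single index over the key field is enough to locate any fact without scanning the whole store.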
Maintaining an organized file system makes it far easier to access information quickly while avoiding problems with lost or incomplete records. Processing data puts records into a format suitable for storage in a repository or table using ETL. This differs from data ingestion: processing copies records from one source to one destination, while ingestion creates a transport from one or more sources to one destination. Put another way, data ingestion is like an apartment building with many housing units, each home to one family, while data processing is one house with one resident family: One-to-Many versus One-to-One.
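The contrast drawn above can be sketched as code: a one-to-one copy moves records from a single source to the destination, and one-to-many ingestion fans in from several sources to the same destination by reusing that copy step. The source names (CRM, web logs, billing) are hypothetical examples.

```python
def copy_one_to_one(source: list[dict], destination: list) -> None:
    """Data processing in the sense above: one source, one destination."""
    destination.extend(source)

def ingest_many_sources(sources: list[list[dict]], destination: list) -> None:
    """Data ingestion: one or more sources transported to one destination,
    built from repeated one-to-one copies sharing a common endpoint."""
    for source in sources:
        copy_one_to_one(source, destination)

crm = [{"source": "crm", "id": 1}]
web_logs = [{"source": "web", "id": 2}]
billing = [{"source": "billing", "id": 3}]

warehouse: list[dict] = []
ingest_many_sources([crm, web_logs, billing], warehouse)
print(len(warehouse))  # all three sources land in one destination
```

In the apartment-building analogy, each one-to-one copy is a single housing unit, and the ingestion function is the building that holds them all.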
Data ingestion includes data processing, but it operates at a larger scale and with more potential for automation. A One-to-Many process can be built from several One-to-One processes that share a common endpoint. When looking for answers in data by generating information and wisdom, building reliable data storage and keeping data properly maintained will yield usable information through analysis of the facts stored electronically.