



Big Data systems involve more than one workload type, and the components of a big data architecture are broadly classified as follows:
The data sources are the golden sources from which the data extraction pipeline is built, so they can be considered the starting point of the big data pipeline.
The examples include:
(i) Application data stores, such as relational databases.
(ii) Static files produced by applications, such as log files generated by web servers.
(iii) IoT devices and other real-time data sources.
Data storage: this layer holds the data managed for batch operations in distributed file stores, which can accommodate large volumes of large files in a variety of formats.
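As an illustration, here is a minimal sketch of landing a raw file in such a distributed store, using HDFS accessed through pyarrow; the namenode address and paths are assumptions, and the snippet presumes the Hadoop client libraries are available on the machine running it.

```python
import pyarrow.fs as pafs

# Connect to the cluster's namenode (hypothetical address).
hdfs = pafs.HadoopFileSystem(host="namenode.example.com", port=8020)

# Copy a locally generated web-server log into the raw zone of the store.
with open("access.log", "rb") as local_file, \
        hdfs.open_output_stream("/data/raw/logs/access.log") as remote_file:
    remote_file.write(local_file.read())
```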
Batch processing: all the data is segregated into categories or chunks by long-running jobs, which filter, aggregate, and otherwise prepare the data into a processed state for analysis. These jobs usually read source files, process them, and write the output to new files. Batch processing can be carried out in various ways: with Hive jobs or U-SQL jobs, with Sqoop or Pig, or with custom map-reduce jobs written in Java, Scala, or another language such as Python.
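A minimal PySpark sketch of this filter-aggregate-write pattern follows; the input and output paths and the column names are assumptions, not prescribed by the text.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily-log-aggregation").getOrCreate()

# Read the raw source files, drop malformed rows, aggregate,
# and write the processed result out as new files.
logs = spark.read.json("hdfs:///data/raw/logs/")
daily_counts = (
    logs.filter(F.col("status").isNotNull())
        .groupBy("date", "status")
        .count()
)
daily_counts.write.mode("overwrite").parquet("hdfs:///data/processed/daily_counts/")
```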
Real-time message ingestion: in contrast with batch processing, this covers the real-time streaming systems that handle data generated sequentially and in a fixed pattern. It can be as simple as a data mart or store where incoming messages are dropped into a folder for processing. Most solutions, however, require a message-based ingestion store that acts as a message buffer, supports scale-out processing, and provides comparatively reliable delivery along with other message-queuing semantics. Options include Apache Kafka, Apache Flume, Azure Event Hubs, etc.
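As a sketch of publishing into such a buffer, here is a minimal producer using Apache Kafka via the kafka-python client; the broker address, topic name, and message fields are assumptions.

```python
import json
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="broker.example.com:9092",  # hypothetical broker
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

# Each reading is buffered by Kafka until a downstream consumer picks it up.
producer.send("sensor-readings", {"device_id": "d-42", "temperature": 21.7})
producer.flush()
```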
There is a slight difference between real-time message ingestion and stream processing. The former first collects the ingested data and then acts as a publish-subscribe tool. Stream processing, on the other hand, handles the streaming data in windows or streams and then writes the results to an output sink. Tools include Apache Spark, Apache Flink, Apache Storm, etc.
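A minimal Spark Structured Streaming sketch of windowed stream processing: read from a Kafka topic, count events per one-minute window, and write each result to a sink (the console stands in for a real store). The broker, topic, and checkpoint path are assumptions.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("stream-window-counts").getOrCreate()

# Read the message stream from the ingestion buffer.
events = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker.example.com:9092")
         .option("subscribe", "sensor-readings")
         .load()
)

# Count messages in one-minute windows using Kafka's message timestamp.
windowed = events.groupBy(F.window(F.col("timestamp"), "1 minute")).count()

query = (
    windowed.writeStream.outputMode("complete")
            .format("console")  # stand-in for a real output sink
            .option("checkpointLocation", "hdfs:///checkpoints/window-counts/")
            .start()
)
query.awaitTermination()
```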
Analytical data store: this is the store used for analytical purposes; the processed data is queried and analyzed here by analytics tools that feed BI solutions. The data can also be presented through a NoSQL data warehouse technology like HBase, or through an interactive Hive database that provides a metadata abstraction over the data store. Tools include Hive, Spark SQL, HBase, etc.
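A minimal Spark SQL sketch of querying the processed data as a table; the path, view name, and columns are assumptions carried over from the batch example above.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("adhoc-analysis").getOrCreate()

# Expose the processed files as a queryable view, then analyze with SQL.
spark.read.parquet("hdfs:///data/processed/daily_counts/") \
     .createOrReplaceTempView("daily_counts")

top_statuses = spark.sql("""
    SELECT status, SUM(`count`) AS total
    FROM daily_counts
    GROUP BY status
    ORDER BY total DESC
    LIMIT 10
""")
top_statuses.show()
```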
Reporting and analysis: insights must be generated from the processed data, and that is done by reporting and analysis tools that use embedded technology and solutions to produce useful graphs, analyses, and insights for the business. Tools include Cognos, Hyperion, etc.
Orchestration: big data solutions consist of repetitive data-related operations, encapsulated in workflows, that transform source data, move data between sources and sinks, load it into stores, and push it into analytical units. Examples include Sqoop, Oozie, Data Factory, etc.
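Oozie and Data Factory describe such workflows declaratively; as a language-neutral illustration of the same pattern, here is a minimal Python sketch that runs the pipeline's stages in order and halts on failure. The commands, connection string, and file names are assumptions.

```python
import subprocess

# Hypothetical stages: ingest from a relational source, transform, load.
steps = [
    ["sqoop", "import",
     "--connect", "jdbc:mysql://db.example.com/app", "--table", "orders"],
    ["spark-submit", "batch_job.py"],       # transform raw data
    ["spark-submit", "load_to_store.py"],   # load the analytical store
]

for step in steps:
    subprocess.run(step, check=True)  # raise and halt if a stage fails
```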