Streaming Analytics in 2023 – What, Why, and How


Streaming analytics continues to grow in importance because it lets businesses surface new insights and make decisions in near real time.

This is especially relevant in fields like finance, health care, and manufacturing, where the time available to make decisions is critical.

By the way, when measuring time in streaming analytics, you’ll often hear the term “latency”. Latency is the amount of time it takes to process data in order to produce a result.

When researching streaming analytics, batch-to-real-time streaming, streaming ETL, etc., you’ll often hear this term. To simplify, just think of it as the time between data arriving and a result being produced.
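
To make that concrete, here is a toy Python sketch, not tied to any particular framework, that measures latency as the gap between an event's creation time and the moment a result is available:

```python
import time

def handle_event(event: dict) -> None:
    # "Processing" stand-in: in a real pipeline this would be a
    # transformation, aggregation, model score, etc.
    result = event["value"] * 2
    # Latency: elapsed time from event creation to result availability.
    latency_ms = (time.time() - event["created_at"]) * 1000
    print(f"result={result}, latency={latency_ms:.2f} ms")

# Simulate an event created "now" and processed immediately.
handle_event({"value": 21, "created_at": time.time()})
```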


Why Streaming Analytics?

Here are some reasons why streaming analytics matters:

  • Real-time insights: Streaming analytics lets businesses gain insights from data as it is created rather than waiting for a scheduled batch to run. This makes near-real-time decision-making possible. As noted above, low latency isn’t required for every use case, but for some it is essential.
  • Early detection of irregularities: Streaming analytics can be used to find irregularities and possible problems in real time, so they can be addressed before they become much larger problems. Examples include fraud in the financial world or device monitoring of metrics such as temperature in manufacturing.
  • Predictive analytics: Using streaming analytics, you can look at data in real time and make predictions about what will happen in the future. This can help predict how customers will act, such as what they may be inclined to click or buy. Think about things like product suggestions, personalized ads, etc.
  • Competitive advantage: Streaming analytics can give businesses a competitive edge by letting them make decisions faster and with more information, which can lead to better business outcomes.

Unlike traditional batch processing, which processes large amounts of data all at once at scheduled intervals, streaming analytics processes data as it is created.

This lets businesses gain insights and make decisions in near real-time.


Streaming analytics analyzes data from different sources, such as clickstream log files, social media, IoT devices, sensors, and financial transactions, among others.

It involves ingesting data from these sources, processing and analyzing the data in real time, and delivering real-time insights that can drive immediate action or better decisions.


Most of the time, the data sources are ingested into stream storage such as Apache Kafka’s event log and then processed by a stream processor.
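
To show the ingestion side, here is a minimal producer sketch using the kafka-python client. The broker address and the "clickstream" topic are placeholders for illustration, not anything prescribed:

```python
import json
from kafka import KafkaProducer  # pip install kafka-python

# Placeholder broker address and topic name, for illustration only.
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

# Each event is appended to the Kafka event log as it happens,
# where a downstream stream processor can consume it.
producer.send("clickstream", {"user_id": 42, "action": "page_view"})
producer.flush()  # make sure the record is actually sent
```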

We’ll explore how streaming analytics applications are built later in this post, but before we do, let’s first answer a common question.

What’s the difference between traditional ETL and streaming analytics?

Since the beginning of time, data pipelines have been built on Extract, Transform, and Load (ETL) constructs by which data is moved (extracted), changed (transformed), and saved (loaded) into another system.

Traditionally, this process of collecting, processing, and storing is performed in “batches” and not in real time. Here, “batches” refers to new or changed data building up over a period of time before the ETL process is started. ETL processes are scheduled at a specific interval, such as once daily or every 15 minutes.
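
For contrast, here is a toy batch ETL job in Python. The table names and schema are invented for the sketch; the point is that it runs on a schedule and only sees data that accumulated since the last run:

```python
import sqlite3
from datetime import datetime, timedelta, timezone

def run_nightly_batch(conn: sqlite3.Connection) -> None:
    since = (datetime.now(timezone.utc) - timedelta(days=1)).isoformat()
    # Extract: pull only the rows that accumulated since the last run.
    rows = conn.execute(
        "SELECT user_id, amount FROM raw_orders WHERE created_at >= ?",
        (since,),
    ).fetchall()
    # Transform: aggregate per user.
    totals: dict = {}
    for user_id, amount in rows:
        totals[user_id] = totals.get(user_id, 0) + amount
    # Load: write the results to a reporting table.
    conn.executemany(
        "INSERT INTO daily_totals (user_id, total) VALUES (?, ?)",
        list(totals.items()),
    )
    conn.commit()
```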

Traditional ETL data pipelines have been, and continue to be, used for many things, like data warehousing, business intelligence, and machine learning.

In summary, streaming analytics and traditional ETL data pipelines both involve processing data, but the main difference is when the processing happens.

Streaming analytics works on data as it is being created, while batch data pipelines work on data in batches after it has been collected.

What are some ways that streaming analytics can be used?

We’ve talked a little bit about this already, but streaming analytics can be used in a wide range of industries and use cases where data needs to be analyzed and processed in as close to real time as possible.

Here are more specific use cases for streaming analytics:

  • Fraud detection: Streaming analytics can be used to monitor financial transactions in real time to identify patterns of behavior that don’t fit the norm.
  • Predictive maintenance: Streaming analytics can be used to analyze sensor data from industrial equipment in real time and detect anomalies that may indicate an imminent failure (a minimal sketch of this kind of check appears after this list).
  • User sentiment analysis: Streaming analytics can be used to monitor social media feeds to spot trending topics and gauge user engagement. This lets companies respond quickly to customer feedback and dynamically update marketing plans.
  • Real-time personalization: Streaming analytics can be used to personalize content and suggestions based on how users behave and what they like. The aim is a better user experience on one side and better engagement metrics on the other.
  • Network monitoring: Streaming analytics can be used to watch network traffic, letting network administrators and automated systems detect potential security threats and respond to them.
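
As promised above, here is a minimal sketch of the rolling-window style of check behind the fraud detection and predictive maintenance use cases. It is a toy z-score detector, not any particular product's algorithm:

```python
from collections import deque

class RollingAnomalyDetector:
    """Flags values that deviate sharply from a recent rolling window."""

    def __init__(self, window: int = 50, threshold: float = 3.0):
        self.values = deque(maxlen=window)
        self.threshold = threshold

    def check(self, value: float) -> bool:
        anomalous = False
        if len(self.values) >= 10:  # wait for a minimal baseline
            mean = sum(self.values) / len(self.values)
            var = sum((v - mean) ** 2 for v in self.values) / len(self.values)
            std = var ** 0.5
            anomalous = std > 0 and abs(value - mean) / std > self.threshold
        self.values.append(value)
        return anomalous

detector = RollingAnomalyDetector()
for reading in [20.1, 20.3, 19.9] * 5 + [95.0]:  # last reading spikes
    if detector.check(reading):
        print(f"anomaly detected: {reading}")
```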

How are streaming analytics applications built?

Not all of the following steps are unique to streaming analytics, but building a streaming analytics application usually involves the following:

  • Define the use case and requirements: The only real difference here from building any other application is clearly explaining why low latency is important. Or, put another way, why can’t a traditional ETL pipeline be used?
  • Stream storage: Next, you need to get data from different places and put it into a streaming event log like Apache Kafka or Amazon Kinesis. You can collect and add data to your streaming event log using a number of tools and methods, such as REST APIs, data connectors, and data pipelines.
  • Stream processor: Once data is ingested into the streaming event log, you can use stream processing frameworks like Apache Flink, Apache Spark, or Apache Beam to process and analyze the data in real time. You can get insights from streaming data by transforming, filtering, and combining the data in different ways (a minimal example follows this list).
  • Optionally store and display insights: After processing and analyzing the data, you can store the insights in a database or data warehouse like Apache Cassandra, MongoDB, or Amazon Redshift. You can also use tools like Kibana, Grafana, or Tableau to visualize the insights.
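
Putting the middle two steps together, here is a minimal Spark Structured Streaming sketch that reads from a Kafka topic and counts events per one-minute window. The broker and topic names are placeholders, and running it requires the Spark-Kafka connector package on the classpath:

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, window

spark = SparkSession.builder.appName("streaming-analytics-sketch").getOrCreate()

# Read the event stream from Kafka (placeholder broker and topic).
events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "localhost:9092")
          .option("subscribe", "clickstream")
          .load())

# Count events per 1-minute window using Kafka's record timestamp.
counts = events.groupBy(window(col("timestamp"), "1 minute")).count()

# Write running results to the console; a real job might target a
# database, dashboard, or another topic instead.
query = (counts.writeStream
         .outputMode("complete")
         .format("console")
         .start())
query.awaitTermination()
```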

In short, building a streaming analytics application requires 1) collecting and ingesting data into a durable event log, 2) processing the data in real time, and 3) possibly storing and visualizing the insights gained.


This is a changing field, and products like Apache Pinot and ClickHouse are providing alternatives to stream processing built in code (Spark, Flink) by offering ways to process streams and compute results through configuration.

I’ll be looking into these options more soon, so let me know if there’s something specific you want.

As with most software choices, the tools and methods you use to build solutions are dependent on your use case and needs.

What are some open source tools or libraries used in building streaming analytics?

There are many open source tools and libraries that can be used to build streaming analytics applications. Here are some examples:

  • Apache Kafka is a distributed streaming platform that lets you publish streams of records and subscribe to them in real time.
  • Apache Flink is an open-source framework for stream processing that lets you build pipelines for processing data in real time.
  • Apache Spark Structured Streaming is an extension of the Spark processing engine that makes it possible to process data streams in real time.
  • Apache NiFi and StreamSets are data flow management systems that let you get data from different sources, process it, and send it to different places in real time.
  • Custom applications built in Java, Python, etc., which consume from event logs and process data directly.
  • Apache Beam is a unified programming model for batch and streaming data processing that lets you build pipelines that can run on different stream processing engines (a tiny runnable example follows this list).
  • Kafka Streams is a client library that lets you build applications that process streams on top of Apache Kafka.
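
To give a feel for Beam's unified model, here is a tiny pipeline in Python using the local DirectRunner; the same code can be pointed at Flink, Spark, or Dataflow by swapping the runner. The hard-coded input stands in for a real stream:

```python
import apache_beam as beam

# Count actions in a (hard-coded) stream of events and print the totals.
with beam.Pipeline() as pipeline:  # DirectRunner by default
    (pipeline
     | "Create events" >> beam.Create(["view", "click", "view", "buy"])
     | "Pair with 1"   >> beam.Map(lambda action: (action, 1))
     | "Count per key" >> beam.CombinePerKey(sum)
     | "Print"         >> beam.Map(print))
```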

These are just a few of the many open source tools and libraries that can be used to build streaming analytics applications. The tool or library you choose will depend on your needs, your team’s existing or desired skillsets, and how you plan to use it.

What are some examples of proprietary tools and libraries used in building streaming analytics?

For building streaming analytics applications, there are also proprietary tools and libraries.

Here are some examples in no particular order:

  • Amazon Kinesis is a fully managed streaming storage event log for collecting, processing, and analyzing data in real time (a minimal example follows this list).
  • Google Cloud Dataflow is a managed service to build and run data processing pipelines. It works with both stream and batch processing.
  • Microsoft Azure Stream Analytics is a managed event-processing service that lets you process data from different sources in real time.
  • IBM Streams is a platform for building real-time streaming analytics apps that can take data from different sources, process it, and analyze it.
  • SAP HANA Streaming Analytics is a real-time analytics solution.
  • Splunk Stream can be considered a real-time stream processing solution that lets you capture data from different sources and analyze it in real time.
  • Informatica Intelligent Streaming is a real-time streaming data processing platform that lets you process and analyze data from different sources in real time.
  • TIBCO StreamBase is a platform for real-time streaming analytics that lets you build and run applications that process data in real time.
  • StreamSets is a platform for building real-time pipelines for integrating and processing data from different sources.
  • Confluent is a streaming platform built on top of Apache Kafka that gives developers more tools and features for building applications that use streaming analytics.
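
As one small example on the proprietary side, here is what writing a single record to Amazon Kinesis looks like with boto3. The stream name and region are placeholders, and AWS credentials are assumed to be configured in your environment:

```python
import json
import boto3

# Placeholder region and stream name, for illustration only.
kinesis = boto3.client("kinesis", region_name="us-east-1")

kinesis.put_record(
    StreamName="example-events",
    Data=json.dumps({"user_id": 42, "action": "page_view"}).encode("utf-8"),
    PartitionKey="42",  # determines which shard receives the record
)
```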

These are just a few of the many proprietary tools and libraries that can be used to build streaming analytics applications.

Let me know if you have thoughts or good/bad experiences with any of these.

Note: the list above does not differentiate between stream storage (event logs) and stream processors.

About Todd M

Todd has held multiple software roles over his 20-year career. For the last 5 years, he has focused on helping organizations move from batch to data streaming. In addition to the free tutorials, he provides consulting and coaching for Data Engineers, Data Scientists, and Data Architects. Feel free to reach out directly or connect on LinkedIn.
