
Thursday, January 01, 2015

Probabilistic techniques, data streams and online learning - Looking forward to a bigger 2015

I look forward to 2015 as the year when randomized algorithms, probabilistic techniques and data structures become more pervasive and mainstream. The primary driving factor for this will be the growing prevalence of big data and the need to process it in near real time using minimal (or constant) memory. You are given data streams where you may see each data point only once in your lifetime, and you need to churn out analytics from them in real time. You cannot afford to store everything in a database on disk, since serving queries from disk incurs an unrealistic performance penalty for real-time workloads. And you cannot afford to keep all of it in memory either, no matter how much RAM you add. You need to find clever ways to optimize your storage and employ algorithms and data structures that use sublinear space yet deliver information in real time.

Many such data structures are already being used quite heavily for specialized processing of data streams ..


These data structures are becoming more and more useful as we prepare to embrace and process larger data sets with fairly strict online requirements. And it has started making a difference. Take, for example, Impala, the open source analytic database from Cloudera that works on top of Hadoop. Impala's NDV aggregate function (number of distinct values) uses the HyperLogLog algorithm to estimate this number, in parallel, in a fixed amount of space. This blog post has the details of the performance improvement it offers in comparison to the standard distinct count. The immensely popular NoSQL store Redis also offers a HyperLogLog implementation that you can use to get an approximation of the cardinality of a set using randomization. Salvatore has the details here on the implementation of the HyperLogLog algorithm in Redis.
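To get a feel for the Redis side of this, here is a minimal sketch (assuming a locally running Redis 2.8.9 or later, where PFADD/PFCOUNT were introduced, and the redis-py client; the key name is just an illustration) that counts approximate unique visitors without ever storing the visitors themselves:

    import redis

    r = redis.StrictRedis(host="localhost", port=6379)

    # PFADD feeds elements into a HyperLogLog stored at the given key;
    # the structure never stores the elements, only ~12 KB of registers.
    for user_id in ["u1", "u2", "u3", "u2", "u1"]:
        r.pfadd("visitors:2015-01-01", user_id)

    # PFCOUNT returns an approximation of the number of distinct elements added.
    print(r.pfcount("visitors:2015-01-01"))   # ~3, within HyperLogLog's error bound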

The most important reason these algorithms and data structures are becoming popular is the increased focus on our "online" requirements. We are not only processing bigger and bigger data sets, we need results faster too. We just cannot afford to push all analytics into batch mode and expect results to come out after an overnight batch run. Various architectural paradigms like the lambda architecture also aim to address this niche. But before investing in such complex architectures, some neat data structures that use probabilistic techniques and randomization may well offer the much lighter-weight solution you are looking for.

Consider processing the Twitter stream and generating analytics (of whatever form) online. This means that immediately after seeing one tweet you must be able to predict something and update your model at the same time. In other words, you need to absorb the data you see in the feed and use it to update your model, yet you cannot store the entire firehose that you have seen so far. This is online learning, and it is the essence of techniques like stochastic gradient descent - the model is capable of making up-to-date predictions after every data point it sees. John Myles White has an excellent presentation on this topic.
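As a toy illustration of the idea (my own minimal sketch, not from the presentation; the simulated stream and learning rate are arbitrary), here is an online logistic-regression learner that updates its weights with plain stochastic gradient descent as each example streams past, keeping nothing in memory beyond the current weight vector:

    import math
    import random

    class OnlineLogisticRegression:
        """Learns from one (features, label) pair at a time; stores only the weights."""
        def __init__(self, n_features, learning_rate=0.1):
            self.w = [0.0] * n_features
            self.lr = learning_rate

        def predict(self, x):
            z = sum(wi * xi for wi, xi in zip(self.w, x))
            return 1.0 / (1.0 + math.exp(-z))          # probability of label 1

        def update(self, x, y):
            # one SGD step on the log-loss gradient for this single example
            error = self.predict(x) - y
            self.w = [wi - self.lr * error * xi for wi, xi in zip(self.w, x)]

    # Simulated stream: label is 1 when the first feature dominates the second.
    model = OnlineLogisticRegression(n_features=2)
    for _ in range(10000):
        x = [random.random(), random.random()]
        y = 1 if x[0] > x[1] else 0
        model.update(x, y)                              # predict-then-update, single pass

    print(model.predict([0.9, 0.1]))   # close to 1
    print(model.predict([0.1, 0.9]))   # close to 0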

Consider this other problem of detecting similarities between documents. When you are doing this at Web scale you will have to deal with millions of documents to find the similar sets. Techniques like minhashing enable you to compress documents into signature matrices. But even then the scale becomes too big to be processed and reported to the user in a meaningful amount of time. As an example (from Mining Massive Datasets), if you process 1 million documents using signatures of length 250, you still need about 1000 bytes per document - the total comes to 1 gigabyte, which fits comfortably into the memory of a standard laptop. But when you check for similar pairs, you need to process (1,000,000 choose 2), or half a trillion, pairs of documents, which would take almost 6 days to compute on a laptop. Enter probabilistic techniques: the locality sensitive hashing (LSH) algorithm fits this problem like a charm. Detecting similarity is a problem that also arises in recommender systems with collaborative filtering, and LSH can be used there as well. The basic idea of LSH as applied to similarity detection is to hash the signatures multiple times and identify candidate pairs that qualify for a full similarity check. The idea is to reduce the search space using probabilistic techniques, so that we can eliminate the vast majority of pairs that have very little chance of being similar.
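To make the minhash part concrete, here is a minimal sketch (my own toy code, not from the book; the hash family, seed and parameters are illustrative) that turns shingled documents into short signatures whose agreement approximates their Jaccard similarity:

    import random

    def shingles(text, k=3):
        """Set of k-character shingles representing a document."""
        return {text[i:i + k] for i in range(len(text) - k + 1)}

    def minhash_signature(shingle_set, num_hashes=100, prime=2_147_483_647):
        """One minimum per random hash function; P(signatures agree) ~ Jaccard similarity."""
        random.seed(42)  # reuse the same hash family for every document
        params = [(random.randrange(1, prime), random.randrange(prime))
                  for _ in range(num_hashes)]
        return [min((a * hash(s) + b) % prime for s in shingle_set)
                for a, b in params]

    def estimated_jaccard(sig1, sig2):
        return sum(x == y for x, y in zip(sig1, sig2)) / len(sig1)

    doc_a = shingles("probabilistic techniques for data streams")
    doc_b = shingles("probabilistic methods for data streams")
    sig_a, sig_b = minhash_signature(doc_a), minhash_signature(doc_b)
    print(estimated_jaccard(sig_a, sig_b))          # estimate from the signatures alone
    print(len(doc_a & doc_b) / len(doc_a | doc_b))  # exact Jaccard, for comparison

The LSH step would then split each 100-value signature into, say, 20 bands of 5 values, hash each band into buckets, and treat documents that collide in at least one band as candidate pairs - which is what cuts the half-trillion comparisons down to only the pairs that are likely to be similar.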

Here I have only scratched the surface of the areas where we apply randomization and probabilistic techniques to solve problems that are very real today. There are plenty of other areas in data mining, graph clustering, machine learning and big data processing where similar techniques are employed to tame the curse of dimensionality and provide practical solutions at scale. 2014 already saw a big surge in popularizing these techniques. I expect 2015 to be bigger and more mainstream in terms of their usage.

Personally I have been exploring data stream algorithms a lot and have prepared a collection of some useful references. Feel free to share in case you find it useful. I hope to do something more meaningful with stream processing data structures and online learning in 2015. Have a very happy and joyous new year ..

Sunday, January 19, 2014

Count-Min Sketch - A Data Structure for Stream Mining Applications

In today's age of Big Data, streaming is one of the techniques for low latency computing. Besides the batch processing infrastructure of the map/reduce paradigm, we are seeing a plethora of ways in which streaming data is processed in near real time to cater to specific kinds of applications. Frameworks like Storm, Samza and Spark belong to this genre and are starting to gain their share of the user base in the industry today.

This post is not about Spark, Storm or Samza. It's about a data structure which is one of the relatively new entrants in the domain of stream processing, which is simple to implement, but has already proved to be of immense use in serving a certain class of queries over huge streams of data. I have been doing some reading on the applications of such structures and thought of sharing them with the readers of my blog.

Using Sublinear Space


Besides data processing, these tools also support data mining over streams, which includes serving specialized queries over the data using limited space and time. Of course, if we store all the data as it arrives we can always serve queries using O(n) space. But since we are talking about huge data streams, it may not even be possible to run algorithms on the full set of data - it will simply be too expensive. Even if we have the entire data set in a data warehouse, processing all of it may take time and consume resources that we cannot afford, considering the fees charged under the evolving platform-as-a-service models of cloud based infrastructure. Also, since these algorithms work on data streams, there's a high likelihood that they will get to see the data only in a single pass. The bottom line is that we need algorithms that work in sublinear space.

Working in sublinear space implies that we don't get to store or even see all the data - an obvious consequence is that we also cannot deliver exact answers to some queries. Instead we rely on approximation techniques and deliver answers whose accuracy holds with a reasonably high probability bound. We don't store all the data; we store a lossy, compressed representation of it and answer user queries from this summary instead of the entire set.

One widely used technique for storing a subset of the data is Random Sampling, where the items kept are selected through some stochastic mechanism. There are various ways to decide which items to keep and how to build the estimator used to answer queries. There are pros and cons to this approach, but it's one of the simplest ways to answer approximate queries on streaming data.
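One classic way to do this over an unbounded stream is reservoir sampling, which maintains a uniform random sample of fixed size k no matter how long the stream grows. The sketch below is a minimal illustration (my own example; the stream and the choice of k are arbitrary):

    import random

    def reservoir_sample(stream, k):
        """Keep a uniform random sample of k items from a stream of unknown length."""
        reservoir = []
        for n, item in enumerate(stream):
            if n < k:
                reservoir.append(item)           # fill the reservoir first
            else:
                j = random.randrange(n + 1)      # item survives with probability k/(n+1)
                if j < k:
                    reservoir[j] = item
        return reservoir

    # Any estimator (mean, quantiles, heavy hitters, ...) is then computed on the sample.
    print(reservoir_sample(range(1_000_000), k=10))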

There are a few other options, like Histograms and Wavelet based synopses. But one of the most interesting data structures developed in recent times is the Sketch, which uses summary based techniques to answer approximate queries, gets around the typical problems that sampling techniques have, and is highly parallelizable in practice.

An important class of sketches is the linear sketch, where the sketch vector (the summary information) is a linear transform of the input vector. So if we model the input as a vector, we can multiply it by a sketch matrix to obtain the sketch vector containing the synopsis data that we use for serving approximate queries. Here's a diagrammatic representation of the sketch as a linear transform of the input data.
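In code, the same picture looks like this - a minimal sketch assuming numpy, where the random +/-1 sketch matrix is just one illustrative choice (in the spirit of the AMS sketch) and the dimensions are made up:

    import numpy as np

    n, k = 100_000, 50                         # input dimension vs. sketch dimension
    rng = np.random.default_rng(0)

    S = rng.choice([-1, 1], size=(k, n))       # random +/-1 sketch matrix (k x n)
    a = np.zeros(n)
    a[[10, 999, 40_000]] = [5, 2, 7]           # a sparse frequency vector as the input

    sketch = S @ a                             # the whole summary: only k numbers

    # Linearity is what makes sketches composable: sketching a stream of updates
    # one by one gives exactly the same result as sketching the final vector.
    update = np.zeros(n); update[10] = 3
    assert np.allclose(S @ (a + update), sketch + S @ update)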



Count-Min Sketch


One of the most popular forms of the sketch data structure is the Count-Min Sketch, introduced by Cormode and Muthukrishnan in 2003. The idea is quite simple, and the data structure uses probabilistic techniques to serve various types of queries on streaming data. It is parameterized by two factors - ε and δ: the error in answering a query stays within an additive factor of ε (relative to the total size of the stream) with probability at least 1 - δ. So you can tune these parameters based on the space you can afford and trade off accordingly the accuracy of the results that the data structure can serve you.

Consider a situation where you have a stream of data (typically modeled as a vector), like updates to stock quotes in a financial processing system, arriving continuously, over which you need to process and report statistical queries in real time.

  • We model the data stream as a vector a[1 .. n], and the update received at time t is of the form (i_t, c_t), meaning that the stock quote a[i_t] has been incremented by c_t. There are various models in which these updates can appear, as discussed in Data Streams: Algorithms and Applications by Muthukrishnan, including models with negative updates, and the data structure can be tuned to handle each of these variants.


  • The core of the data structure is a 2 dimensional array count[w, d] that stores the synopsis of the original vector and is used to answer queries approximately. Hence the total space requirement of the data structure is (w * d). We can bound w and d in terms of our parameters ε and δ - the standard construction takes w = ⌈e/ε⌉ and d = ⌈ln(1/δ)⌉ - and thereby control the level of accuracy that we want our data structure to serve.


  • The data structure uses hashing to process these updates and answer queries in sublinear space. So assume we have d pairwise-independent hash functions {h_1 .. h_d}, each hashing our indices into the range (1 .. w). For the more curious mind, pairwise independence is the property used to construct a universal hash family, a technique that keeps the expected number of collisions low in the hash implementation.


  • When an update (i_t, c_t) arrives on the stream, we hash the index i_t through each of the hash functions h_1 .. h_d and increment by c_t, in each of the d rows, the single cell that the corresponding hash function maps i_t to - as in the loop below.



  • for i = 1 to d
      v = h_i(i_t)          // hash the index i_t; v is between 1 and w
      count[i, v] += c_t    // add the update amount c_t to that cell
    end

    At any point in time, if we want to know the approximate value of an element a[i] of the vector a, we compute the minimum over the d cells of count that i hashes to, i.e. the estimate is the minimum of count[j, h_j(i)] over j = 1 .. d. This can be proved formally. But the general intuition is that, since we are using hash functions, there is always a possibility of several indices colliding into the same cell and contributing additively to its value; with non-negative updates each cell can therefore only overestimate the true count, never underestimate it. Hence the minimum among the d hashed cells is the closest candidate to the correct result for the query. (A minimal end-to-end implementation appears at the end of this post.)


    The figure above shows the processing of the updates in a Count-Min sketch. The query just described is typically called a Point Query and returns an approximation of a[i]. Similarly, we can use a Count-Min sketch to answer approximate range queries, which are essentially summations over multiple point queries. Another interesting application is serving inner product queries, where the data structure is used to estimate the inner product of two vectors, a typical use being the estimation of join sizes in relational query processing. The paper Statistical Analysis of Sketch Estimators gives the details of how to use sketching as a technique for this.

    Count-Min sketches have some great properties which make them a very useful data structure when processing distributed streams. They are associative and can be modelled as monoids - two sketches built with the same parameters and hash functions can simply be added cell by cell - and hence are extremely performant in a distributed environment where you can parallelize sketch operations. In a future post I will discuss some implementation techniques and how we can use Count-Min sketches to serve some useful applications over data streams. Meanwhile, Twitter's algebird and ClearSpring's stream-lib offer implementations of the Count-Min sketch and various other data structures applicable to stream mining applications.
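    To close, here is the minimal end-to-end implementation promised above - a toy sketch in plain Python rather than the algebird or stream-lib versions, using the common (a*x + b) mod p construction for the pairwise-independent hash functions and the e/ε, ln(1/δ) sizing mentioned earlier:

    import math
    import random

    class CountMinSketch:
        """Approximate frequency counts over a stream in O(w * d) space."""
        PRIME = 2_147_483_647                        # a large prime for the hash family

        def __init__(self, epsilon=0.001, delta=0.01):
            self.w = math.ceil(math.e / epsilon)     # columns: control the additive error
            self.d = math.ceil(math.log(1 / delta))  # rows: control the failure probability
            self.count = [[0] * self.w for _ in range(self.d)]
            # one (a, b) pair per row defines h_j(x) = ((a*x + b) mod p) mod w
            self.hashes = [(random.randrange(1, self.PRIME), random.randrange(self.PRIME))
                           for _ in range(self.d)]

        def _columns(self, item):
            x = hash(item)
            for a, b in self.hashes:
                yield ((a * x + b) % self.PRIME) % self.w

        def update(self, item, c=1):
            """Process an update (i_t, c_t): add c to one cell in each row."""
            for row, col in enumerate(self._columns(item)):
                self.count[row][col] += c

        def point_query(self, item):
            """Estimate a[item]; never an underestimate for non-negative updates."""
            return min(self.count[row][col]
                       for row, col in enumerate(self._columns(item)))

    # Feed a simulated stream of stock ticker updates and query a frequency.
    cms = CountMinSketch(epsilon=0.001, delta=0.01)
    for ticker in ["AAPL", "GOOG", "AAPL", "MSFT", "AAPL"]:
        cms.update(ticker)
    print(cms.point_query("AAPL"))   # 3, up to the epsilon-bounded overestimate

    Merging two such sketches built with the same w, d and hash parameters is just an element-wise addition of their count arrays - exactly the monoid structure that libraries like algebird exploit when parallelizing sketch operations.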