There is much hype around "big data" these days and how it is going to change the world. Data scientists are getting excited about big data analytics, and technologists are scrambling to understand how to employ scalable, distributed databases and compute clusters to store and process all this data.

Interestingly, Gartner dropped "big data" from its infamous hype cycle charts in 2015...

Gartner’s 2012 Hype Cycle for Emerging Technologies

Gartner’s 2013 Hype Cycle for Emerging Technologies

Gartner’s 2014 Hype Cycle for Emerging Technologies

so how useful is big data going to be? Not to mention the obvious question of what constitutes "big data".

Clearly, acquiring and storing data alone produces no meaningful benefit to the organisation collecting and housing it. However, "monetising the data" is where the quants and data scientists come in. The value-add from quants is typically thought to come from sophisticated models crafted and tuned by extremely clever, PhD-laden mathematicians with deep knowledge of a particular field. That might or might not be true, but it certainly raises the question...

Is the quality of the algorithm used in the model the most important ingredient?

If this were true, then:

  • organisations should put the bulk of their effort into developing the best models possible;
  • organisations employing the best data scientists should therefore be better equipped to monetise big data;
  • organisations need not pursue enormous data capture efforts.

BUT with all this hype around big data, it is prudent to consider the relative importance of the size of the training data available to quants.

In this paper by Banko and Brill (2001), two researchers from Microsoft investigated how important training set size was in building a model for a problem from the NLP domain: confusion set disambiguation. Quoting the authors, "Confusion set disambiguation is the problem of choosing the correct use of a word, given a set of words with which it is commonly confused. Example confusion sets include: {principle, principal}, {then, than}, {to,two,too}, and {weather,whether}." The authors reviewed a variety of approaches considered state-of-the-art at the time of publication (2001) and examined model performance, as measured by accuracy, across a range of training set sizes. The chart below, extracted from their research paper, shows some interesting observations...
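
To make the task concrete, here is a minimal sketch of confusion set disambiguation, treated as plain classification over the words surrounding the blank. This is not Banko and Brill's setup: the learner (a bag-of-words logistic regression from scikit-learn) and the handful of training sentences are illustrative stand-ins.

    # Minimal confusion set disambiguation sketch for {then, than}.
    # Not Banko & Brill's learners; a toy classifier over context words.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Each example is the sentence with the target word blanked out;
    # the label is the correct member of the confusion set.
    contexts = [
        "better late ___ never",
        "we ate and ___ we left",
        "taller ___ his brother",
        "first one, ___ the other",
    ]
    labels = ["than", "then", "than", "then"]

    model = make_pipeline(CountVectorizer(ngram_range=(1, 2)),
                          LogisticRegression())
    model.fit(contexts, labels)

    print(model.predict(["older ___ his sister"]))  # ['than'], via the shared "___ his" context

Their finding, in essence, was that enlarging the list of training sentences by orders of magnitude helps far more than swapping in a cleverer learner.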

As the training set size, measured on the X-axis, increases by an order of magnitude, even the worst-performing model often produces greater accuracy than the best-performing model trained on less data.

Whilst this paper addresses a specific problem in a specific domain, it is widely recognised across a variety of machine learning fields that a "dumber" model with more data will often outperform a smarter model with less data.
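
A quick way to see this effect on a problem of your own is to pit a simple model trained on all the available data against a more sophisticated one trained on a small slice. In the sketch below the dataset, the two models and the 2% slice are arbitrary stand-ins, and the outcome will of course vary by problem:

    # "Dumb" model with lots of data vs "clever" model with little data.
    # Illustrative only; not a rigorous benchmark.
    from sklearn.datasets import load_digits
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split
    from sklearn.naive_bayes import GaussianNB
    from sklearn.svm import SVC

    X, y = load_digits(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Simple learner, full training set.
    nb = GaussianNB().fit(X_train, y_train)

    # More sophisticated learner, roughly 2% of the training set.
    n_small = len(X_train) // 50
    svm = SVC().fit(X_train[:n_small], y_train[:n_small])

    print("naive Bayes, all the data:", accuracy_score(y_test, nb.predict(X_test)))
    print("SVM, ~2% of the data     :", accuracy_score(y_test, svm.predict(X_test)))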

On this very topic, Pedro Domingos, a well-respected and leading researcher in machine learning, published a paper, A Few Useful Things to Know about Machine Learning, where in Section 9, titled "More Data Beats a Cleverer Algorithm", he notes...

"... pragmatically the quickest path to success is often to just get more data. As a rule of thumb, a dumb algorithm with lots and lots of data beats a clever one with modest amounts of it. (After all, machine learning is all about letting data do the heavy lifting.) This does bring up another problem, however: scalability. In most of computer science, the two main limited resources are time and memory. In machine learning, there is a third one: training data. Which one is the bottleneck has changed from decade to decade. In the 1980’s it tended to be data. Today it is often time. Enormous mountains of data are available, but there is not enough time to process it, so it goes unused. This leads to a paradox: even though in principle more data means that more complex classifiers can be learned, in practice simpler classifiers wind up being used, because complex ones take too long to learn. Part of the answer is to come up with fast ways to learn complex classifiers, and indeed there has been remarkable progress in this direction."

The comments from Professor Domingos give us an insight into the evolution of learning systems based on the availability of (big) data and compute clusters:

  1. Researchers add more data to improve the performance of a learning algorithm;
  2. As training set size increases for a given model, training TIME also increases (a toy timing sketch follows this list);
  3. Researchers turn to more efficient learning algorithms to reduce the training time;
  4. The availability of large, cost-effective compute grids in the cloud, and HPC technologies such as GPUs, allows researchers to deploy even "bigger" models (more models, more features).
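
As a toy illustration of step 2, the snippet below times one learner as the training set grows tenfold at each step. The synthetic data and the choice of SGDClassifier (itself an example of the efficient, scalable learners that step 3 alludes to) are arbitrary:

    # Watch training time grow with training set size (step 2 above).
    # Synthetic data; absolute timings will differ on your machine.
    import time

    import numpy as np
    from sklearn.linear_model import SGDClassifier

    rng = np.random.default_rng(0)
    for n in (1_000, 10_000, 100_000):
        X = rng.normal(size=(n, 100))
        y = (X[:, 0] > 0).astype(int)  # a trivially learnable label
        start = time.perf_counter()
        SGDClassifier(max_iter=20, tol=None).fit(X, y)  # tol=None: run all 20 passes
        print(f"n = {n:>7,}: {time.perf_counter() - start:.2f}s")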

The Rise of Deep Learning

Indeed, the above cycle has led to the rise of deep learning. The scale of available data and processing capacity is enabling large models, often neural networks, to train on large amounts of data, with sophisticated tools that still allow researchers to experiment with reasonably short feedback loops. With near-unlimited data and compute power it becomes more important to pick models that scale well with the available training data, and the current sentiment in academia is that deep learning is the approach that scales best (see the image below, from a recent presentation by Andrew Ng).
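
As a toy echo of the "scales well with data" claim, the sketch below trains a small feed-forward network on growing slices of a dataset; scikit-learn's MLPClassifier stands in for a real deep learning stack, and the layer size and data fractions are arbitrary:

    # Test accuracy of a small neural network as the training slice grows.
    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    X, y = load_digits(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    for frac in (0.1, 0.5, 1.0):
        n = int(len(X_train) * frac)
        net = MLPClassifier(hidden_layer_sizes=(128,), max_iter=1000,
                            random_state=0).fit(X_train[:n], y_train[:n])
        print(f"{frac:4.0%} of the data -> test accuracy {net.score(X_test, y_test):.3f}")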

Taken from this video, here is Andrew Ng's nice picture explaining the rise of deep learning:
