Big Data, What Is It?

Written by V-Soft Consulting Group, Inc. | Apr 13, 2012 2:28:52 PM

This article attempts to define “Big Data” and provide an understanding of what it is, why we have it, and where it comes from. My next article will describe in more detail why “Big Data” does not lend itself to analysis in a relational database.

If you have not been living under the proverbial IT rock for the last few months, you have heard someone mention “Big Data”. So what the heck is “Big Data”?

The truth is I don’t have a definitive definition. If you search Bing for “Big Data” you will receive 3,680,000 results. Among those 3.7MM results you will probably find about 2.5MM different definitions.

From Wikipedia:

“In information technology, big data consists of data sets that grow so large that they become awkward to work with using on-hand database management tools.”

The Wikipedia definition does not tell me what it is, what makes it big, or who has this “Big Data” and where it comes from.

I found the quote below as the first search result in a recent search for “what is big data?” It comes from Ed Dumbill at O’Reilly, and it got me going down the correct path in my search for a good definition:

“I think of Big Data as the data that comes in so fast, the data that humans don’t directly make – or don’t necessarily think about making. Financial transactions isn’t really what comes into my head as Big Data. I think of it as data that doesn’t necessarily need to have a schema defined first. Data that doesn’t necessarily need to conform to a certain order right away.”

The usual approach to describing big data constrains the discussion to scale. This misses the key difference between regular data and big data.

Big data can really be very small, and not all large datasets are big! It’s time for a new definition of big data.

Machines such as Formula 1 race cars have an ever-increasing number of sensors constantly collecting data. It is common to have hundreds or even thousands of sensors collecting information about the performance and activities of an F1 car.

The data streaming from a thousand sensors on an F1 car is big data. However, the size of the dataset is not as large as you might expect. Even a thousand sensors, each producing an eight-byte reading every second, would produce less than 30MB of data in an hour of racing or testing. That amount of data fits comfortably into memory on a laptop!
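As a quick sanity check, here is the arithmetic behind that figure as a short Python sketch; the sensor count, reading size, and sampling rate are simply the assumed numbers from the paragraph above:

```python
# Back-of-the-envelope check of the F1 sensor example:
# 1,000 sensors, one 8-byte reading per sensor per second, one hour of running.
sensors = 1_000
bytes_per_reading = 8
readings_per_second = 1
seconds = 60 * 60  # one hour of racing/testing

total_bytes = sensors * bytes_per_reading * readings_per_second * seconds
print(f"{total_bytes / 1_000_000:.1f} MB")  # ~28.8 MB -- comfortably under 30MB
```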

We are increasingly seeing systems that generate very large quantities of very simple data. Telecommunications companies have to track vast volumes of calls and internet connections, and petabytes of data are produced. However, the content produced is extremely structured. As relational databases have shown, datasets can be parsed extremely quickly if the content is well structured. Even though this data is large, it isn’t “big” in the same way as the data coming from the F1 sensors in the earlier example.
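To make that concrete, here is a minimal sketch of why well-structured data is easy to handle: every record has the same known fields in the same order, so parsing is a one-liner. The call-record fields below are hypothetical, purely for illustration:

```python
# Hypothetical fixed-schema call detail record: same fields, same order, every row.
record = "2012-04-13T14:28:52,+15025551234,+15025556789,187"

timestamp, caller, callee, duration_seconds = record.split(",")
print(f"{caller} called {callee} for {duration_seconds} seconds")
```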

If size isn’t what matters, then what makes big data big? The answer lies in the number of independent data sources, each with the potential to interact. Big data doesn’t lend itself to being tamed by standard data management techniques simply because of its inconsistent and unpredictable combinations.
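One way to see why combinations, rather than volume, cause the trouble: the number of possible pairwise interactions between independent sources grows roughly with the square of the number of sources. A rough illustration (the source counts below are arbitrary):

```python
# Pairwise interactions between n independent sources: n * (n - 1) / 2.
for sources in (10, 100, 1_000):
    pairs = sources * (sources - 1) // 2
    print(f"{sources} sources -> {pairs} possible pairwise interactions")
```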

Another attribute of big data is that it tends to be hard to delete, making privacy a common concern. Imagine trying to purge all of the data associated with an individual driver from toll road data. The sensors counting the number of cars would no longer balance with the individual billing records, which, in turn, wouldn’t match payments received by the company.

Perhaps a good definition of big data describes “big” in terms of the number of useful permutations of sources, which makes useful querying difficult (like the sensors in an F1 car), and the complexity of interrelationships, which makes purging difficult (as in the toll road example).

Big, then, refers to big complexity rather than big volume. Of course, valuable and complex datasets of this sort naturally tend to grow rapidly, and so big data quickly becomes truly massive.