When does data get big? People, devices and things are constantly generating massive volumes of data – big data. People create data at work, children at home and students at school; data is generated by people and things on the move as well as by objects that are stationary. Devices and sensors attached to millions of things take measurements from their surroundings, providing up-to-date readings across the entire globe – data to be stored for later use by countless different applications.
Today, there are slightly more than 6.2 billion mobile subscriptions worldwide – a figure that is set to rise to 9 billion in 2017. With so many people using mobile networks, and owing to growing smartphone penetration and constant connectivity, the volume of data traffic in these networks is expected to rise 15-fold over the same period. The nature of this data – big data – is also set to change: the size of individual data sets, the variety of data types and the demand for real-time access to data are all on the rise.
These factors lead to varying types of data being collected by the network and transmitted through it. Data sets are irregular and often unstructured; they can contain ambiguities, are time- and location-dependent, and are constantly updated by capture equipment such as mobile devices, sensors and RFID readers.
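To make the point concrete, the sketch below shows what consolidating such irregular readings might look like in practice. It is a minimal, hypothetical example – the record shapes, field names and the `normalize` function are all assumptions for illustration, not part of any real capture system – but it illustrates the kinds of ambiguity described above: timestamps in different formats, locations encoded differently, and fields missing entirely.

```python
from datetime import datetime, timezone

# Hypothetical sketch: raw readings from sensors, mobile devices and RFID
# readers arrive in irregular shapes -- fields may be missing, named
# differently, or encoded in different formats.

def normalize(record):
    """Map one raw reading (a dict of varying shape) to a uniform record."""
    # Timestamps may arrive as epoch seconds or as ISO-8601 strings.
    ts = record.get("ts") or record.get("timestamp")
    if isinstance(ts, (int, float)):
        ts = datetime.fromtimestamp(ts, tz=timezone.utc).isoformat()

    # Location may be a (lat, lon) pair, separate fields -- or absent.
    loc = record.get("loc") or (record.get("lat"), record.get("lon"))
    if loc == (None, None):
        loc = None

    return {
        "time": ts,                                 # ISO-8601 string or None
        "location": loc,                            # (lat, lon) tuple or None
        "source": record.get("device", "unknown"),  # capturing device, if known
        "value": record.get("value"),               # measurement payload
    }

# Three readings, each with a different shape (illustrative data).
readings = [
    {"ts": 1500000000, "lat": 59.33, "lon": 18.07,
     "device": "sensor-17", "value": 21.5},
    {"timestamp": "2017-07-14T02:40:00+00:00", "loc": (48.85, 2.35),
     "value": 19.0},
    {"device": "rfid-gate-3", "value": "tag-0042"},  # no time or location
]

uniform = [normalize(r) for r in readings]
```

Every record ends up with the same four fields, with `None` marking the gaps – a prerequisite for the kind of analysis and visualization discussed next.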
In its simplest form, big-data technology encompasses the tools, processes and procedures needed to consolidate, verify, analyze, manage and visualize very large data sets, stored mainly in non-relational, but also relational, repositories.