From 24 hours to 24 seconds – How In-Memory Computing is Accelerating Business Performance

ActiveViam | February 18, 2014

Countless articles are written about Big Data every day. Beyond the hype, the Big Data phenomenon is a real change agent, delivering capabilities that were simply out of reach before. Financial institutions and banks, for example, can now calculate and assess their risk in near-real time, throughout the day.

To a large degree, the phenomenal performance and interactive analysis capabilities of Big Data projects are enabled by in-memory computing. In-memory databases have become the foundation of a new generation of business applications that put the power of analytics into the hands of decision-makers.

Why should you consider in-memory databases, and what sort of analytics-enabled applications can they help you build? This post summarizes some key facts explaining the disruptive power of in-memory databases. Our next post will explore use cases from the worlds of e-commerce, logistics and finance, illustrating how best-performing organizations gain a competitive advantage from in-memory databases.

The rise of in-memory computing

Processing data in memory is not new. In the early 1980s, memory was sold by the kilobyte: ten dollars would get you approximately one kilobyte, so storing an MP3 album (~100 megabytes) in memory would have carried a price tag of about a million dollars.

Around the same period, the first generation of databases appeared. Given the high price of memory, it is not surprising that database developers did not consider designs that stored data in the computer's random-access memory. Everything – from algorithms to data structures – was designed for hard disks, with one objective: reduce the number of disk accesses.

Times have changed. Today, the same ten dollars buys you roughly one gigabyte of memory, and it is estimated that 95% of the databases in the world would fit within one terabyte of memory! Driven by this drop in memory hardware costs, in-memory processing has become mainstream.
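A quick back-of-the-envelope calculation, using the ballpark prices quoted above (illustrative figures only), shows the scale of the drop:

```python
# Cost of holding a ~100 MB MP3 album in RAM, then and now (rough, rounded prices).
PRICE_1980S_PER_KB = 10.0      # ~$10 per kilobyte in the early 1980s
PRICE_TODAY_PER_GB = 10.0      # ~$10 per gigabyte today

album_kb = 100 * 1024          # 100 MB expressed in kilobytes

cost_then = album_kb * PRICE_1980S_PER_KB
cost_now = album_kb / (1024 * 1024) * PRICE_TODAY_PER_GB

print(f"Album in RAM, early 1980s: ${cost_then:,.0f}")   # ~$1,000,000
print(f"Album in RAM, today:       ${cost_now:,.2f}")    # ~$1
```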

[Figure: Memory price history]

Bandwidth, Latency and Performance Acceleration

Consider bandwidth first – the speed at which a block of data is read sequentially from the first to the last byte. Memory provides roughly 200 times more bandwidth than a hard drive: scanning one terabyte of data takes about one minute in memory, but around three hours from a hard drive!
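Here is that scan-time arithmetic as a minimal sketch, assuming illustrative throughput figures of roughly 17 GB/s for RAM and 100 MB/s for a spinning disk (actual numbers vary by hardware):

```python
# Full sequential read of 1 TB at the assumed throughputs above.
ONE_TB_MB = 1_000_000          # 1 TB expressed in megabytes (decimal units)

RAM_BANDWIDTH_MB_S = 17_000    # illustrative sequential throughput for memory
HDD_BANDWIDTH_MB_S = 100       # illustrative sequential throughput for a hard drive

ram_seconds = ONE_TB_MB / RAM_BANDWIDTH_MB_S
hdd_seconds = ONE_TB_MB / HDD_BANDWIDTH_MB_S

print(f"1 TB scanned in memory: ~{ram_seconds / 60:.0f} minute(s)")   # ~1 minute
print(f"1 TB scanned from disk: ~{hdd_seconds / 3600:.1f} hour(s)")   # ~2.8 hours
```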

Another key factor is latency, also known as “response time”: the time it takes to prepare the component before data can be read. Here the gap is even wider – memory is roughly 100,000 times faster than a hard disk. Hard drives are therefore a poor fit for highly interactive applications that require rapid response times.
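The latency gap can be illustrated the same way, using typical orders of magnitude (about 100 nanoseconds for a RAM access versus about 10 milliseconds for a disk seek; indicative figures, not measurements):

```python
# Order-of-magnitude latency comparison between RAM and a hard drive.
RAM_LATENCY_NS = 100              # ~100 ns for a random access in RAM
HDD_LATENCY_NS = 10 * 1_000_000   # ~10 ms for a hard-drive seek, in nanoseconds

speedup = HDD_LATENCY_NS // RAM_LATENCY_NS
print(f"Memory responds ~{speedup:,}x faster than a hard disk")  # ~100,000x
```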

Of course, memory is not the only element that determines speed; the CPU and the network also play a significant role. Nevertheless, there is a phenomenal reserve of power available to applications that are specifically designed for in-memory computing.

Turning Computing Performance Increase into Business Performance Gains

Because in-memory applications execute transactions and analytical tasks much faster than traditional applications, they have a direct impact on business performance. The financial industry was probably one of the first to achieve significant gains by embracing in-memory computing.

Consider traders in a trading room, who buy and sell financial products carrying various levels of risk. To manage that risk, traders need interactive applications that calculate their portfolio position and the associated risk level in real time. Any time a stock price moves, or as soon as a trader executes a new deal, everything must be recalculated. Because of the volume and volatility of the data, these calculations are compute-intensive and have traditionally been confined to each trader's workstation.
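As a purely illustrative sketch (not ActiveViam's actual product or API), the event-driven recalculation behind such an application might look like the following: every price tick or new deal triggers a re-aggregation of a portfolio held entirely in memory. All names (Trade, on_new_deal, "ACME") are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Trade:
    instrument: str
    quantity: float            # positive = long, negative = short

# In-memory state: the trader's deals and the latest market prices.
trades: list[Trade] = []
prices: dict[str, float] = {}

def position_value() -> float:
    """Re-aggregate the whole portfolio; cheap when everything sits in RAM."""
    return sum(t.quantity * prices.get(t.instrument, 0.0) for t in trades)

def on_new_deal(trade: Trade) -> float:
    trades.append(trade)
    return position_value()

def on_price_tick(instrument: str, price: float) -> float:
    prices[instrument] = price
    return position_value()

# A price tick and a new deal each trigger an immediate recalculation.
prices["ACME"] = 101.5
print(on_new_deal(Trade("ACME", 1_000)))   # 101500.0
print(on_price_tick("ACME", 102.0))        # 102000.0
```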

Yet banks must gauge their risk at a higher level than each individual trader. Since the 2008 financial crisis, regulators have been pressing banks to produce much more sophisticated risk metrics, and banks are expected to demonstrate a consolidated view of their risk at any time.

Value at Risk (VaR) is one such metric: it quantifies the loss the bank could suffer, at a given confidence level, should the market move in the wrong direction. It is a statistical measure that does not aggregate linearly – the bank-wide figure is not the sum of the figures of its desks, so it must be recomputed over the whole portfolio. Prior to in-memory computing, a trader could not immediately gauge the impact of his own transactions on the bank's overall Value at Risk and had to wait until the next day for that information.
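To see why this aggregation is expensive, consider a toy historical-simulation VaR (entirely made-up numbers, not the bank's or ActiveViam's methodology). The bank-level figure has to be computed over the combined P&L scenarios, not summed across desks:

```python
import numpy as np

def historical_var(pnl_scenarios: np.ndarray, confidence: float = 0.99) -> float:
    """VaR as the loss not exceeded at the given confidence, over historical P&L scenarios."""
    return -np.percentile(pnl_scenarios, (1 - confidence) * 100)

rng = np.random.default_rng(42)
desk_a = rng.normal(0, 1_000_000, size=500)                # daily P&L scenarios, desk A
desk_b = -0.8 * desk_a + rng.normal(0, 300_000, size=500)  # desk B partly hedges desk A

var_a, var_b = historical_var(desk_a), historical_var(desk_b)
var_bank = historical_var(desk_a + desk_b)   # aggregate the scenarios, not the VaRs

print(f"Sum of desk VaRs: {var_a + var_b:,.0f}")
print(f"Bank-level VaR:   {var_bank:,.0f}")  # much smaller: the desks offset each other
```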

Simulations

With in-memory technology, metrics that used to take a full night to calculate become interactive, and Value at Risk can now be computed in real time. A trader can simulate a transaction and immediately see its impact on his portfolio, his desk, or the entire bank. Moreover, he can simulate the same transaction with several counterparties and select the one with the least impact on the overall risk. And because the trader takes on less risk, he has an additional buffer to make more deals than his competitors.
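A hedged sketch of that what-if workflow, reusing the same toy historical-simulation VaR as above: price the same trade against each candidate counterparty and keep the one with the smallest incremental impact on the bank-wide figure. The counterparty names and P&L profiles below are invented for illustration.

```python
import numpy as np

def historical_var(pnl: np.ndarray, confidence: float = 0.99) -> float:
    return -np.percentile(pnl, (1 - confidence) * 100)

rng = np.random.default_rng(7)
bank_pnl = rng.normal(0, 1_500_000, size=500)   # current bank-wide daily P&L scenarios

# The same trade faced with different counterparties yields different P&L profiles,
# e.g. because of netting agreements or existing exposure to each counterparty.
candidates = {
    "Counterparty X": rng.normal(0, 200_000, size=500),
    "Counterparty Y": -0.3 * bank_pnl + rng.normal(0, 50_000, size=500),  # offsets existing risk
}

def incremental_var(trade_pnl: np.ndarray) -> float:
    """How much the bank-wide VaR would change if this trade were booked."""
    return historical_var(bank_pnl + trade_pnl) - historical_var(bank_pnl)

for name, pnl in candidates.items():
    print(f"{name}: incremental VaR = {incremental_var(pnl):+,.0f}")

best = min(candidates, key=lambda name: incremental_var(candidates[name]))
print(f"Least impact on overall risk: {best}")
```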

This is just one example of how the technical performance gains brought by in-memory computing turn into a competitive business advantage. There are many other ways businesses can benefit from it. In the next post, we will delve into further examples where in-memory computing has helped companies break away from their competition by doing things radically differently. Stay tuned.
