Use In-Memory Database to Remove the IO Bottleneck

November 10th, 2015
Ron Gordon
Director of Power Systems


Many customers today have experienced significant growth in customer interaction via social, mobile and web channels.

With the increase in transactions, all requiring database access for both retrieval and updates, comes decreased response time from the storage device and the corresponding risk of customer and user dissatisfaction. Performance used to be a function of many variables: processor speed, memory speed, IO bus speed, disk speed and network bandwidth. Today’s Power Systems technology has addressed almost all of these elements with faster, highly threaded cores, faster memory and L4 memory caches, high-speed memory controllers, high-speed busses, faster 15K disks and flash storage.

However, even with all of these advancements, the growth in users, transactions and applications can still create a “performance issue.” Compound this with the demand for new, never-thought-of applications (real-time recognition, faster fraud detection, analytics, real-time social interactions, mobile applications, etc.), and regardless of the technology advancements we still hit performance limits, which in turn constrain application capabilities, customer satisfaction and business differentiation. Fortunately, it seems that as soon as technology presents us with a problem, it very quickly addresses it.

Open source collaboration has greatly contributed to one of these breakthroughs, now available on POWER8 systems: programmable co-processors that offload work from the cores and increase performance. This applies to a great many performance-bound applications, and in particular to “in-memory” databases. An in-memory database is just what the words imply: the full database of information that you wish to access and update resides in the memory of the system, with no disk IO, no flash access and no IO bus latency. There are several providers of in-memory databases, such as SAP HANA, Memcached and Cassandra, and one I would like to focus on: Redis and Redis Labs. First, let’s look at a minor problem with in-memory databases: they have to fit in memory. So, if I have a 40 TB database, I need 40 TB of RAM to contain it. If each server can only support 500 GB of memory, the database must be spread across roughly 80 servers, and now our fast in-memory database is slowed by interconnect latency and the processing needed to determine which server holds the data you want. This is still faster than a purely disk-based database, but the cost of 80 servers makes one seriously ask whether the benefit is worth it. So maybe in-memory databases are only good for small databases, which again limits the usefulness of the application. This is where Power Systems technology has enabled a breakthrough answer.
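The sizing arithmetic above can be sketched in a few lines. The figures are illustrative, and the hash-based key routing shown here is a generic sharding scheme for spreading a key-value store across servers, not the actual Redis Labs implementation:

```python
import hashlib

# Illustrative figures from the example above
DB_SIZE_GB = 40_000        # a 40 TB database
RAM_PER_SERVER_GB = 500    # memory ceiling per server

# How many servers are needed just to hold the data in RAM
servers_needed = DB_SIZE_GB // RAM_PER_SERVER_GB
print(servers_needed)  # 80

def shard_for(key: str, num_shards: int) -> int:
    """Hash a key to pick which server holds it (generic scheme).

    Every lookup must first run this routing step, and most results
    then cross the interconnect, which is the latency the article
    describes.
    """
    digest = hashlib.md5(key.encode()).hexdigest()
    return int(digest, 16) % num_shards

print(shard_for("user:1001", servers_needed))
```

The routing itself is cheap; the cost is that a cluster-wide lookup almost always lands on a remote server, so every access pays a network round trip on top of the memory access.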

Power Systems has implemented in POWER8-based systems a technology called CAPI. For now, let me just say that CAPI (Coherent Accelerator Processor Interface) enables FPGAs (Field Programmable Gate Arrays) to offload work from the cores, and it provides “direct access” into the cache of the processor over a PCIe link. This eliminates the “slow” functions of normal IO operations: the processing time spent performing the required IO operations (overhead), and the device-to-memory-to-cache-to-processor data flow (also overhead). IBM and Redis Labs have teamed to create a solution in which a flash system (e.g., the IBM FlashSystem 840 or 900) acts as extended memory, and access to this “extended memory flash” is lightning fast. Returning to the example of the 40 TB database needing 80 servers: we can now put the 40 TB database on the flash system, connect 2 or 3 servers to it via a CAPI-attached FPGA, and get almost the same performance as the 80 servers, maybe even better. This saves cost and administration effort and improves reliability. Redis Labs uses the open source Redis database and has extended it with CAPI enablement, adding enterprise capabilities such as redundancy, high availability and performance improvements over standard Redis.
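The tiering idea behind “extended memory flash” can be sketched with plain Python dicts standing in for server RAM and the CAPI-attached flash. This is only a toy model of the concept; the real solution does the tiering in the Redis Labs software stack and the FPGA, and the class and method names here are hypothetical:

```python
class TieredStore:
    """Toy model of RAM-plus-extended-memory tiering.

    A small 'hot' dict models server RAM; a larger 'cold' dict models
    the CAPI-attached flash system acting as extended memory. Reads are
    transparent to the caller regardless of which tier holds the key.
    """

    def __init__(self, ram_capacity: int):
        self.ram_capacity = ram_capacity
        self.ram = {}      # fast tier (server memory)
        self.flash = {}    # extended-memory tier (stand-in for the flash system)

    def put(self, key, value):
        # Keep keys in RAM until it fills, then overflow to "flash"
        if key in self.ram or len(self.ram) < self.ram_capacity:
            self.ram[key] = value
        else:
            self.flash[key] = value

    def get(self, key):
        if key in self.ram:
            return self.ram[key]   # served from RAM
        return self.flash.get(key) # served via the extended-memory tier

store = TieredStore(ram_capacity=2)
store.put("k1", "a")
store.put("k2", "b")
store.put("k3", "c")           # RAM is full; lands in the flash tier
print(store.get("k3"))         # "c", transparently served from flash
```

The point of the CAPI design is that the second branch of `get` (the flash path), which is the slow path in this toy, becomes nearly as fast as the RAM path because the FPGA handles the retrieval and delivers data straight into the processor cache.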

I’d like to give a little deeper view of CAPI so you can understand where the performance increases come from. Built into the POWER8 processor chip is a PCIe interface into the cache hierarchy (L3 and L2); this interface is CAPI. The PCIe link connects to an FPGA adapter in a PCIe slot on the POWER8 server. The FPGA has processing logic and memory of its own and can perform whatever functions are programmed onto it. (The FPGA enabled for the CAPI interface comes with a programming tool kit from Altera.)

In the case of the Redis Labs in-memory database accelerator, the FPGA is pre-programmed with the device driver for the flash memory and the logic to retrieve flash data based on a key value. When the application program needs data from the in-memory database and that data resides on the “extended memory” (the flash system), the request is sent to the FPGA to retrieve it. The IO device driver and read function run in the FPGA, so the POWER8 processor in the server is not impacted and can continue doing application or analytic work while the FPGA performs the database record access in parallel. When the data comes back from flash storage, the CAPI interaction between the FPGA and the processor places it directly into the processor’s cache hierarchy. This is much faster than normal IO, which has to go through RAM via an IO device driver running on the server processor. This implementation offloads approximately 20,000 instructions per IO to the FPGA and saves the corresponding RAM-to-cache overhead. Those 20,000 saved instructions let the application spend more time doing application work rather than waiting on data, which increases overall application performance and lowers cost by reducing the number of costly servers.

While I find this very exciting, other factors make it even more revolutionary. First, the Redis Labs database is NoSQL: it is a key-value store and can hold any object, such as text, data, audio or video, in a single “keyed” record. As fields grow, the schema need not change. Access is very rapid because it is key-driven rather than row-and-column based, and it is in-memory. Second, the Redis Labs database can act as a cache in front of a relational database such as DB2, Oracle Database or EnterpriseDB. This lets the “back-end” database remain in place for the “standard” applications while also being enabled for very high-speed access via an “in-memory” NoSQL database. Some programming is required, but the value received is tremendous, and it opens a whole new world of application possibilities.
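The cache-in-front-of-a-relational-database pattern described above is the classic cache-aside flow. Here is a minimal sketch, with a plain dict standing in for the Redis key-value store and a stub function standing in for the SQL query against the back end (with a real client such as redis-py, the dict accesses would become `GET`/`SET` calls); all names are illustrative:

```python
import json

cache = {}  # stand-in for the Redis key-value store

def query_backend(user_id):
    """Stand-in for a SQL query against the relational back end."""
    return {"id": user_id, "name": "Ada", "plan": "enterprise"}

def get_user(user_id):
    key = f"user:{user_id}"
    if key in cache:                  # cache hit: key-driven, no SQL at all
        return json.loads(cache[key])
    record = query_backend(user_id)   # cache miss: go to the back-end database
    cache[key] = json.dumps(record)   # store the whole object as one keyed record
    return record

first = get_user(42)   # miss: populates the cache from the relational store
second = get_user(42)  # hit: served entirely from the key-value store
```

Note how the whole record is serialized under a single key, which is why schema changes in the cached object never require restructuring the store, only writing a new value.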

This NoSQL Redis Labs implementation and exploitation of CAPI technology is only one example of how CAPI can improve performance and enable applications that were previously only thoughts. IBM is working on generalizing this solution to multiple databases, and is teaming with several application providers on exciting new solutions that “break the performance barrier.” These will soon be available to the entire IT community. And for specific, very high performance needs, CAPI and a Nallatech development kit are available today.

No longer are we restricted by these limits… we can expand beyond them.

For additional detail, please download the IBM white papers on CAPI and the Redis Labs Power Systems solution.
