Ron Gordon
Director – Power Systems
Files, databases, data warehouses, data lakes, data oceans, big data… all that data, and more, is coming in every day. The burning question is, "What can I do with all that data?" The use of analytics in decision making has progressed from row/column relational queries to a much higher realm of advanced analytics called artificial intelligence (AI). In AI, we create models from large amounts of data, far more than is normally found in traditional relational database tables, to produce views, insights, and real-time predictions.

Today, data comes from more sources and in more formats than we were previously able to store in relational databases and analyze effectively. In AI, these new analytic processes are called Machine Learning and Deep Learning. The data they consume can come from many sources, such as social media (Facebook, Twitter, Instagram, email, etc.), real-time feeds from instrumentation (cash registers, pressure gauges, thermometers, blood pressure monitors, etc.), and public domains (weather forecasts, stock market data, currency exchanges, the Associated Press, etc.). With today's internet speeds, access speeds, and data storage capability, we can amass tremendous amounts of data that apply to any business or industry. Use cases validate both cost savings and increased profit: predictive maintenance, lower inventories, just-in-time and more accurate deliveries and routings, faster acquisitions, and more.
There is also competitive advantage (the right price and the right products at the right time, to meet customer demand) to be derived from all this new data. But the question is, "How do companies and data scientists do it?" The answer lies in the emerging field of Artificial Intelligence, through Machine Learning or Deep Learning modeling. Technology has moved from "AI is coming" to "AI is here and now." This capability is causing IT organizations, CIOs, CTOs, and CEOs to rethink their business and ask, "Is there a way to leverage the new data analytics using Artificial Intelligence?"
What is Artificial Intelligence?
First of all, AI is a high-level umbrella term for a process in which data scientists create predictive learning models built from multiple data sources. Once a model is tested and proven accurate against existing data, it can be applied to real-time data for analysis and prediction. One commonly referenced example from the medical world is personalized health monitoring. The AI system, using Machine Learning and Deep Learning, ingests millions of data points from smart watches and other wearable devices, routine physicals, age, geographic location, and more. This allows doctors and relatives to monitor a person's current health and predict changes, based on the patterns learned from large amounts of data. In retail, one example is price tracking. As the prices of retail items fluctuate over time, AI can help ecommerce stores track patterns in those fluctuations and set prices accordingly. Obviously, such a predictive model draws on a wide spectrum of new and real-time data sources; it cannot be matched by the historical warehouse data used in standard relational database analytics with tools such as Cognos, SAS, Fusion, Tableau, WebQuery, etc.
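To make the train-then-predict workflow concrete, here is a minimal sketch in Python using TensorFlow (one of the frameworks discussed below). The retail pricing scenario, the feature columns, and the random stand-in data are hypothetical, purely for illustration:

```python
import numpy as np
import tensorflow as tf

# Hypothetical historical data: one row per observation, with columns
# such as day-of-week, competitor price, and inventory level.
X_train = np.random.rand(1000, 3).astype("float32")
y_train = np.random.rand(1000, 1).astype("float32")   # observed prices

# A small regression model: learn the pricing pattern from historical data.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(3,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X_train, y_train, epochs=5, verbose=0)

# Once tested and proven accurate, apply the model to new, real-time data.
X_new = np.random.rand(1, 3).astype("float32")
print(model.predict(X_new))   # predicted price for the new observation
```

In a real deployment, the random arrays would be replaced by curated historical data, and the trained model would be validated on held-out data before being applied to live feeds.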
Building a System for Artificial Intelligence – What is needed?
So, how do we do this? What software is needed? Do I need new storage technology? Do I need increased system infrastructure performance? Do I need new data scientist skills? Let's look at this at a high level, using tools made available by the Cognitive Computing departments within IBM. Each of these questions can be addressed, and the answers will most likely require new components and skills, but the value of the results in a time-dependent scenario can be both very informative and critical for business decisions.
» Software for an Artificial Intelligence System
Most AI software is open source, available at no charge, and can be downloaded from GitHub. These frameworks go by names such as Caffe, Torch, and TensorFlow. Models can be created in many programming languages, and the frameworks also interface to GPUs for higher performance. These frameworks are used to create the AI models that perform the pattern analysis. IBM Cognitive Systems offers a product called IBM PowerAI Enterprise, which packages the frameworks in an easy-to-install, supported bundle. IBM also offers IBM PowerAI Vision, which includes a visual model-development tool to ease the AI learning curve for data scientists. But the models need data… lots and lots of data, to achieve high probability in pattern recognition and prediction.
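As a quick illustration of the GPU interface these frameworks provide, the following sketch uses TensorFlow's standard device APIs to confirm which accelerators are visible and to place work on one. Nothing here is specific to IBM's packaging; the same check works on any GPU-equipped Linux system:

```python
import tensorflow as tf

# Ask the framework which GPU accelerators it can see; on an AC922
# these would be the NVLink-attached NVIDIA V100 devices.
gpus = tf.config.list_physical_devices("GPU")
print(f"{len(gpus)} GPU(s) visible")

# Optionally pin a computation to the first GPU, if one is present.
if gpus:
    with tf.device("/GPU:0"):
        a = tf.random.uniform((2048, 2048))
        b = tf.random.uniform((2048, 2048))
        c = tf.matmul(a, b)   # executed on the accelerator
    print("result computed on:", c.device)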
» Storage for an Artificial Intelligence System
Then there is the storage requirement driven by these larger amounts of data. Since the processing infrastructure will be a clustered HPC architecture, data connectivity, resiliency, capacity, and performance are critical to obtaining results in a reasonable time. IBM offers ESS storage, powered by IBM Spectrum Scale, a cluster file system (previously called GPFS) that provides a shared data structure across many cluster compute nodes. For higher levels of management and control, IBM offers a portfolio of IBM Spectrum products: IBM Spectrum Conductor for creating the scalable data and application fabric used to access and analyze the data, and IBM Spectrum Symphony for managing compute, data, and applications in a shared grid, with integration into Apache Spark.
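Because Spectrum Scale presents cluster storage as an ordinary POSIX file system, training jobs on every compute node can stream from the same paths. Below is a minimal sketch under that assumption; the /gpfs/data mount point and the CSV layout are hypothetical placeholders:

```python
import tensorflow as tf

# Hypothetical Spectrum Scale (GPFS) mount shared by all compute nodes.
DATA_GLOB = "/gpfs/data/training/*.csv"

# Stream records straight from shared storage instead of copying to
# local disk; every node in the cluster reads the same paths.
files = tf.data.Dataset.list_files(DATA_GLOB)
dataset = (
    files.interleave(
        lambda f: tf.data.TextLineDataset(f).skip(1),  # skip CSV header
        num_parallel_calls=tf.data.AUTOTUNE,
    )
    .batch(256)
    .prefetch(tf.data.AUTOTUNE)
)

for batch in dataset.take(1):
    print(batch.shape)   # first batch of raw CSV lines
```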
» System Architecture for an Artificial Intelligence System
Now the question is, "Can I just use any system technology and architecture for the compute element?" The answer could be yes, as most of the software is open source and Linux-based, and all modern architectures support Linux. BUT, and this is a big BUT, would you like your answers in a timely manner, so you can react and plan before the event, rather than after it? IBM Power Systems have proven to be the best architecture for AI in terms of performance, capacity, and memory and I/O bandwidth. One example: a model developed and run on an x86-based infrastructure, using PCIe-connected GPUs with TensorFlow, took 7 days to complete. The same model, using the same tools, was then run on a Power Systems AC922 with NVLink-integrated GPUs, and it took 7 hours.
The Power System best suited to these AI requirements is the AC922. Since AI models are highly computational, GPUs are used for acceleration. The IBM Power AC922 supports up to six NVIDIA V100 32 GB GPUs integrated on the system planar, with direct NVLink (300 GB/s) connections to the POWER9 processor and memory coherency. Additionally, server memory can act as GPU memory, with up to 2 TB per socket on the AC922. These Power Systems are low cost, support PCIe Gen4, and run Linux distributions from Red Hat, SUSE, Ubuntu, CentOS, and Debian, as well as the IBM PowerAI frameworks and Spectrum Symphony tools.
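The size of the speedups cited above is workload-dependent, but the underlying effect is easy to observe on any machine. Here is a hedged micro-benchmark sketch, again using TensorFlow's stock APIs, that times the same matrix multiply on the CPU and (when present) a GPU; it illustrates why GPU-accelerated nodes such as the AC922 matter for model training:

```python
import time
import tensorflow as tf

def timed_matmul(device: str, n: int = 4096, reps: int = 10) -> float:
    """Return wall-clock seconds for repeated matrix multiplies on a device."""
    with tf.device(device):
        a = tf.random.uniform((n, n))
        b = tf.random.uniform((n, n))
        _ = tf.matmul(a, b).numpy()        # warm-up; forces lazy initialization
        start = time.perf_counter()
        for _ in range(reps):
            c = tf.matmul(a, b)
        _ = c.numpy()                      # block until the device finishes
        return time.perf_counter() - start

print("CPU:", timed_matmul("/CPU:0"), "seconds")
if tf.config.list_physical_devices("GPU"):
    print("GPU:", timed_matmul("/GPU:0"), "seconds")
```

The relative gap this reports will vary with matrix size and hardware; it is a rough illustration, not a substitute for benchmarking a real model.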
In summary, AI is here, and it can take companies to a new level of analytics. Mainline has the expertise to assist with the selection, architecture, and implementation of your AI infrastructure. For more information, please contact your Mainline Account Executive directly, or click here to contact us with any questions.
Related articles
High Performance Computing powered by Nvidia for Machine Learning and Deep Learning workloads
Watch this Video
IBM Power Systems upgrade High Performance Systems for Cognitive and AI workloads
Read this blog