There is nothing new under the Sun, the saying goes, and that also holds true for Field Programmable Gate Arrays (FPGAs). FPGAs, though, are back with a vengeance: major cloud vendors now offer FPGA-as-a-service, and chip manufacturers are renewing their interest in the space. Among other signs, Intel acquired Altera for $16B and is now shipping new Intel Xeon CPUs with an integrated FPGA.
At the same time, applications like data analytics, machine learning, and genomics are extremely computationally intensive and require a large amount of processing power, not to mention the manpower needed to manage the clusters. To face this challenge, warehouse-scale data centers and cloud providers have recently started to adopt hardware accelerators.
Hardware accelerators such as FPGAs can speed up application processing while reducing energy consumption in data centers. Tech titans like Amazon Web Services (AWS), IBM, Intel, Baidu, and Alibaba have recently announced that they will start hosting hardware accelerators in their public clouds in the form of IP (Intellectual Property) cores.
FPGAs are not a novel technology; however, they have historically been costly to procure and program for. An FPGA is typically programmed to implement a single algorithm in pursuit of improved performance, whereas typical computers like a laptop or a server use general-purpose CPUs. You can think of a CPU as a calculator and an FPGA as a special-purpose calculator that can be programmed to solve only one equation. By sacrificing flexibility for performance, the FPGA can solve that one equation extremely fast.
As the industry is challenged by the slowing of Moore's law, and as big data analytics become mission critical but also costly, companies are increasingly looking for alternative computation platforms to accelerate their workloads. We have seen the rise of GPUs and Nvidia, as these chips are ideal for deep learning applications (massively parallel matrix algebra). FPGAs have also been in the mix at large companies, resulting in what we call vertical computing or vertical integration: problem-specific hardware.
InAccel is riding the wave of the proliferation of FPGAs, driven by their availability in public clouds and the increased adoption of standard algorithms provided by open source computation frameworks like Apache Spark. InAccel provides hardware accelerators that are a drop-in replacement for Apache Spark MLlib, resulting in 3–10x speedups. All it takes is running an Apache Spark node on an AWS F1 instance, deploying the InAccel IP core available on the AWS Marketplace, and changing one line of your Spark code! Your CIO will thank you.
We are excited to partner with Christophoros Kachris, a researcher who has spent his entire career in the field of FPGAs, along with his co-founders Elias Koromilas and Ioannis Stamelos. Vertically integrated computation is the future for compute-intensive and mission-critical workloads, and we look forward to InAccel becoming the hub of hardware accelerators.
👉 If you are interested in the future of vertical computation, InAccel is hiring.