
Sep 20, 2020

Most AI breakthroughs are driven by deep learning. However, current models and deployment methods suffer from significant limitations, such as high energy and memory consumption, high costs, and dependence on highly specialized hardware. Hardware advancements have gotten deep learning deployments this far, but for AI to reach its full potential, a software accelerator approach is required.

Dr. Eli David, a pioneering researcher in deep learning and neural networks, has focused his research on developing deep learning technologies that improve the real-world deployment of AI systems, and he believes the key lies in software. Bringing his research to fruition, Eli has developed DeepCube, a software-based inference accelerator that can be deployed on top of existing hardware (CPU, GPU, ASIC) in both datacenters and edge devices to drastically improve deep learning speed, efficiency, and memory usage.

For example, some of his results include:

• Increasing the inference speed on a regular CPU to match and surpass that of a GPU, which costs several times more
• Increasing the inference speed on a single GPU to match the performance of 10 GPUs
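
To make the idea of software-level inference acceleration concrete, here is a minimal sketch using post-training dynamic quantization in PyTorch, one generic technique for speeding up inference on an existing CPU without changing the hardware. This is purely illustrative: the toy model, batch size, and quantization approach are assumptions, and DeepCube's actual (proprietary) methods are not described in this post.

```python
# Illustrative only: a generic software-level acceleration technique
# (PyTorch dynamic quantization), NOT DeepCube's actual method.
import time
import torch
import torch.nn as nn

# Toy fully connected model standing in for a real network (assumption).
model = nn.Sequential(
    nn.Linear(1024, 1024), nn.ReLU(),
    nn.Linear(1024, 1024), nn.ReLU(),
    nn.Linear(1024, 10),
).eval()

# Software-only optimization: quantize Linear weights to int8 for CPU inference.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

def benchmark(m, runs=100):
    """Average per-batch CPU inference latency in seconds."""
    x = torch.randn(64, 1024)
    with torch.no_grad():
        m(x)  # warm-up
        start = time.perf_counter()
        for _ in range(runs):
            m(x)
    return (time.perf_counter() - start) / runs

print(f"fp32 CPU latency: {benchmark(model) * 1e3:.2f} ms/batch")
print(f"int8 CPU latency: {benchmark(quantized) * 1e3:.2f} ms/batch")
```

Running the script prints the average per-batch latency of the original and quantized models on the same CPU; the gap between the two numbers is the kind of software-only speedup the accelerator approach aims for, though the magnitudes claimed above come from DeepCube's own results, not from this sketch.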