High-end RISC-V cores and processors for deep learning on Esperanto's agenda, as announced at Hot Chips
Esperanto Technologies Inc. hired two senior engineering managers from Tesla’s Autopilot group. David Glasco and Dan Bailey will head up engineering for the startup working on high-end RISC-V cores and processors, targeting deep-learning and general-purpose jobs.
The news comes at the opening here of Hot Chips, one of the top gatherings of microprocessor designers. As many as half the talks at this year’s event focus on machine learning, reflecting the race to design silicon accelerators for the emerging style of computing.
At the event, startup Tachyum will detail ambitious plans for processors that most closely rival Esperanto's. It will describe a family of 16- to 64-core SoCs that it claims outperform Intel's Xeon, and a water-cooled, 64-core version with 32 GBytes of HBM3 for AI, all taping out next year.
Xilinx will describe a 75W FPGA delivering 20 tera-operations/second (TOPS) on 8-bit integer operations for inference jobs, using an 18×27 MAC array, 382 Mbits SRAM and 64 GBytes DRAM on board. Separately, it will detail its first Everest accelerator, a 7nm chip taping out this year using vector cores organized into different dataflows for AI, 5G and other processor-intensive jobs.
In addition, DeePhi, a China startup Xilinx acquired last month, will detail the latest version of its AI core and software optimizations for it. The talks show Xilinx now has at least three separate AI architectures as it competes with Intel’s FPGA group that has design wins in Microsoft’s data centers.
In mobile, Arm will give a deep dive on its new core for machine learning. A GHz-class ML core promises 4 TOPS on convolutional networks and 3 TOPS/W for a 2.5mm² die at 7nm. Google and Samsung will give talks about applications processors that include hardware to bolster AI.
In addition, startup Mythic will detail its processor-in-memory, an emerging architecture it claims will deliver high-end GPU performance next year in embedded inference tasks at a fraction of the power. For its part, Nvidia will give separate talks on its open-source, deep-learning accelerator for clients as well as its GPU server for training models in the cloud.
The activity underscores what has become a race among processor architects to build chips tailored for the new AI workloads. Salaries have been soaring, sometimes pitting startups, big chip vendors, and their customers against one another in the competition for engineers with expertise in deep learning.
In the overheated space, Esperanto’s two hires are something of a coup. David Glasco, former architecture and design lead for Tesla’s Autopilot SoC, was named the startup’s vice president of engineering. Dan Bailey, former circuit design lead for Autopilot hardware, has become Esperanto’s senior director of engineering.
Tesla has been leaking top talent this year amid struggles to produce new models and a March crash, still under investigation, that may have involved use of Autopilot. In April, Intel hired veteran chip designer Jim Keller from Tesla, where he was vice president of Autopilot and low-voltage hardware.
Both new Esperanto engineers have long careers as chip designers. Before Tesla, Glasco was a senior director of server SoCs at AMD and spent more than 12 years at Nvidia. Bailey was an AMD senior fellow before joining Tesla and worked on a variety of processors dating back to Digital Equipment’s Alpha.
Esperanto employs more than 100 people, a high headcount at a time when the industry is increasingly focused on lean silicon startups. "We are delighted to be able to attract seasoned technology managers such as David and Dan who have deep background and experience," said Dave Ditzel, chief executive of the startup.
Esperanto launched itself at a RISC-V workshop in November. At that time, Ditzel said the startup was developing two RISC-V cores and two SoCs, at least one made in a 7nm process. A 16-core chip using a so-called Maxion core targets high single-thread performance, while a 4,096-core chip using Minion cores puts a vector unit in each core and aims for the best performance per watt.