Cadence Design Systems, a developer of semiconductor IP blocks, has introduced the Neo NPU, a neural processing unit core designed to run AI workloads with high energy efficiency. The IP targets SoCs for intelligent sensors, IoT devices, wearables, advanced driver-assistance systems (ADAS) and more.
Single-core performance of the Neo NPU is expected to scale from 8 GOPS to 80 TOPS, and multicore configurations extend this to hundreds of TOPS. The core handles both classic and generative AI tasks, with support for INT4/8/16 and FP16 data types across convolutional neural networks (CNNs), recurrent neural networks (RNNs) and transformers.
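To illustrate what INT8 support means in practice, here is a minimal sketch (not Cadence code) of symmetric per-tensor INT8 quantization, the kind of reduced-precision arithmetic an NPU exploits to cut power and memory bandwidth. The function names and the example weights are hypothetical.

```python
# Illustrative sketch: symmetric INT8 quantization of a float tensor.
# Real NPU toolchains (e.g. an SDK like NeuroWeave) perform this kind of
# mapping when lowering FP32 models to INT8 hardware.

def quantize_int8(values):
    """Map floats onto INT8 range [-127, 127] with one shared scale."""
    scale = max(abs(v) for v in values) / 127.0 or 1.0  # avoid scale == 0
    q = [max(-127, min(127, round(v / scale))) for v in values]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float values from the INT8 representation."""
    return [x * scale for x in q]

weights = [0.42, -1.27, 0.05, 0.89]   # hypothetical layer weights
q, s = quantize_int8(weights)          # q = [42, -127, 5, 89]
approx = dequantize(q, s)              # close to the original floats
```

Lower-precision variants such as INT4 work the same way with a narrower integer range, trading a little accuracy for further energy savings.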
Image Source: Cadence
The quoted figures assume a 7nm manufacturing process and a clock speed of 1.25 GHz. Compared with the first-generation Cadence AI core, the Neo NPU delivers up to 20x higher performance. The inference rate is also said to improve by 58%.
Developers get the NeuroWeave SDK, which supports TensorFlow, ONNX, PyTorch, Caffe2, TensorFlow Lite, MXNet, JAX, the Android Neural Network Compiler, TFLite Delegates and TensorFlow Lite Micro. The Neo NPU is expected to become available in December 2023.