sensAI - AI close to the sensors
Easics uses its expertise in system-on-chip design to develop small, low-power and affordable AI engines that run locally, close to your sensors.
sensAI architecture and data flow
The input sensor data and the quantized weights are loaded through a DMA controller into the buffers. Both data and weights are shifted through the convolution engine; the result goes to the accumulator and is finalized in the post-processor. The sequencer manages the execution of the successive layers of the network, generating a continuous flow of output tensors that become the input tensors of the subsequent layers. The final output tensor is returned to the application microcontroller.
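The data flow described above can be sketched as a small behavioural model. This is illustrative Python only: the function names, buffer handling and layer shapes are assumptions for the sketch, not the actual hardware interface.

```python
# Simplified behavioural model of the sensAI data flow (illustrative only).

def dma_load(source):
    """Model the DMA controller copying sensor data or weights into a buffer."""
    return list(source)

def convolve(data, weights):
    """Stand-in for the convolution engine: one multiply-accumulate pass."""
    acc = 0
    for d, w in zip(data, weights):
        acc += d * w  # MAC operations; partial sums gather in the accumulator
    return acc

def post_process(acc, bias=0):
    """Post-processor: add bias, then apply ReLU."""
    return max(0, acc + bias)

def run_network(sensor_data, layers):
    """Sequencer: each layer's output tensor becomes the next layer's input."""
    tensor = dma_load(sensor_data)
    for weights, bias in layers:
        w = dma_load(weights)
        tensor = [post_process(convolve(tensor, w), bias)]
    return tensor  # final output tensor, returned to the application MCU
```

For example, `run_network([1, 2, 3], [([1, 1, 1], 0), ([2], 1)])` feeds the first layer's output (6) into the second layer, yielding `[13]`.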
features and applications
The core is tailored to application-specific needs by parameterising our generic design. sensAI supports the following operations:
- convolution engine:
  - 2D convolution
  - matrix multiplications for LSTM
  - depthwise convolutions
  - fully connected layers
- configurable post-processor:
  - bias, max pooling, ReLU, ReLU6, leaky ReLU, ...
Example networks: CNNs (ResNet, YOLO, MobileNet, ...) and RNNs (DeepSpeech, ...)
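The post-processor activations listed above can be written as simple scalar functions. This is a minimal sketch; in hardware they are applied element-wise on quantized tensors.

```python
def relu(x):
    """Standard rectifier: clamp negative values to zero."""
    return max(0.0, x)

def relu6(x):
    """ReLU clipped at 6, common in MobileNet-style quantized networks."""
    return min(max(0.0, x), 6.0)

def leaky_relu(x, alpha=0.01):
    """Leaky variant: pass a small fraction of negative inputs through."""
    return x if x >= 0 else alpha * x

def max_pool(window):
    """Max pooling over one window of values."""
    return max(window)
```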
Why choose sensAI:
- Low hardware cost: MAC efficiency above 95%
- Superb flexibility: supports CNNs and RNNs on the same core instance
- Fast time to market to embed AI close to the sensor
- Customization of the core: in terms of performance, power consumption, area (number of multiplier units) and memory requirements.
- Configure your sensAI core via the Easics estimator tool
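The MAC-efficiency claim above can be read as useful multiply-accumulate operations divided by the theoretical maximum (number of MAC units × clock cycles). A back-of-envelope sketch with purely illustrative numbers, not measured sensAI figures:

```python
def mac_efficiency(useful_macs, num_mac_units, cycles):
    """Fraction of available MAC-unit cycles spent on useful work."""
    return useful_macs / (num_mac_units * cycles)

# Hypothetical example: 64 MAC units over 156,250 cycles could perform
# 10,000,000 MACs; if 9,600,000 of those are useful, efficiency is 96%.
eff = mac_efficiency(useful_macs=9_600_000, num_mac_units=64, cycles=156_250)
```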
Sensors that benefit from sensAI:
- Image sensor + AI
  - other imaging (hyperspectral, X-ray, ...)
- Audio sensor + AI
- Other sensors + AI
  - any application you would like to discuss with us to add AI on chip