nearbAI - AI close to the sensors


easics uses its expertise in system-on-chip design to develop small, low-power and affordable AI engines that run locally, close to your sensors. 

This results in low, predictable latency and ultra-low power consumption. easics' embedded AI solutions integrate tightly with novel and existing sensors, such as image sensors capturing light inside and outside the visible spectrum (hyperspectral and thermal infrared), 3D scanning lasers (LiDAR), Time-of-Flight (ToF) sensors, radar, microscopy, ultrasound sensors, and microphones.
To create such innovative hardware, easics is developing the next generation of its embedded AI framework. This framework automatically generates hardware implementations of the deep neural networks that make your specific application smart. easics maps these AI engines onto custom ASIC technology and FPGAs. The framework and technology offer enough flexibility, in terms of both deep learning models and hardware, to provide a scalable AI engine that is ready for the future.

nearbAI architecture and data flow

The input sensor data and the quantized weights are loaded through a DMA controller into the buffers. Both data and weights are shifted through the convolution engine; the result goes to the accumulator and is finalized in the post-processor. The sequencer manages the execution of the subsequent layers of the network, generating a continuous flow of output tensors that become the input tensors of the following layers. The final output tensor is returned to the application microcontroller.
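The dataflow described above can be sketched in software. The following Python/NumPy model is purely illustrative: the function names, the single-channel stride-1 convolution, and the bias-plus-ReLU post-processing step are our own simplifications, not easics' actual RTL or API.

```python
import numpy as np

def conv2d(x, w):
    """Valid 2D convolution of a single-channel input with one kernel."""
    kh, kw = w.shape
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # Multiply-accumulate (MAC): the work done by the
            # convolution engine and the accumulator.
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * w)
    return out

def post_process(acc, bias=0.0):
    """Post-processor step: bias addition followed by ReLU."""
    return np.maximum(acc + bias, 0.0)

def run_network(sensor_data, layer_weights):
    """Sequencer: each layer's output tensor feeds the next layer."""
    tensor = sensor_data  # loaded into the input buffer via DMA
    for w in layer_weights:
        tensor = post_process(conv2d(tensor, w))
    return tensor  # final output tensor, returned to the application MCU
```

In the real engine these stages run as a hardware pipeline over on-chip buffers rather than as sequential function calls, but the tensor flow between layers is the same.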



features and applications

nearbAI features: 

The core is optimised by parameterising our generic core for application-specific needs. nearbAI supports the following operations:

  • convolution engine:
    • 2D convolution
    • matrix multiplications for LSTM
    • depthwise convolutions
    • fully connected layers
  • configurable post-processor:
    • bias, max pooling, ReLU, ReLU6, Leaky ReLU, ...

e.g., CNNs (ResNet, YOLO, MobileNet, ...) and RNNs (DeepSpeech, ...)
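For reference, the post-processor activations listed above have simple definitions. The NumPy sketch below shows their standard mathematical form (the default leaky slope of 0.1 is an assumption for illustration, not a documented nearbAI parameter):

```python
import numpy as np

def relu(x):
    """ReLU: clamp negative values to zero."""
    return np.maximum(x, 0.0)

def relu6(x):
    """ReLU6 (used in MobileNet): clamp to the range [0, 6]."""
    return np.clip(x, 0.0, 6.0)

def leaky_relu(x, alpha=0.1):
    """Leaky ReLU: small non-zero slope for negative inputs."""
    return np.where(x >= 0, x, alpha * x)

def max_pool2x2(x):
    """2x2 max pooling with stride 2 on an even-sized 2D tensor."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))
```

In hardware, these element-wise operations are cheap fixed-function steps applied after the accumulator, which is why they live in the configurable post-processor rather than in the convolution engine.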

Why choose nearbAI: 

  • Low hardware cost: MAC efficiency above 95%
  • Superb flexibility: supports CNNs and RNNs on the same core instance
  • Fast time to market for embedding AI close to the sensor
  • Customization of the core in terms of performance, power consumption, area (number of multiplier units) and memory requirements
  • Configure your nearbAI core via the easics estimator tool

Sensors that benefit from nearbAI:

  • Image sensor + AI
    • RGB
    • Time-of-Flight
    • other (hyperspectral, X-ray, ...)
  • Audio sensor + AI
  • Other sensors + AI:
    • LiDAR
    • ultrasound
    • ...
  • Any application you would like to discuss with us to add AI on chip



FPGA evaluation kit

easics recommends evaluating our deep learning engine on an FPGA platform. Have a look at https://www.easics.be/products/deep-learning-fpga
