each nearbAI core is an ultra-low power neural processing unit (NPU) and comes with an optimizer / neural network compiler. it provides immediate visual and spatial feedback based on sensory inputs, which is a necessity for live augmentation of the human senses.
optimized neural network inferencing for visual, spatial and other applications
unparalleled flexibility: customized & optimized for the customer’s use case
produces the optimal NPU IP core for the customer’s use case: trading off power, area, latency and on-chip memory
minimized development & integration time
answers to your needs
function examples
why nearbAI?
highly computationally efficient and flexible NPUs
enable lightweight devices with long battery life ... with ultra-low power, run heavily optimized AI-based functions locally
enable truly immersive experiences ... achieve sensors-to-displays latency within the response time of the human senses
enable smart and flexible capabilities ... fill the gap between “Swiss-army-knife” XR / AI mobile processor chips and limited-capability edge IoT / AI chips
optimizer platform for ASIC
the optimizer is a powerful ASIC optimization platform specialized in low power, low area, and low latency use cases. it delivers solutions tailored to each customer’s specific needs, resulting in superior performance, reduced power consumption, and minimized silicon area.
let’s do a custom benchmark together
provide us with your use case:
quantized or unquantized NN model(s):
ONNX, TensorFlow (Lite), PyTorch, or Keras
constraints:
average power & energy per inference, silicon area, latency, memories, frame rate, image resolution, foundry + technology node
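the optimizer accepts both quantized and unquantized models; as a rough illustration of what quantization means for an ultra-low power NPU, the sketch below shows generic affine 8-bit quantization of a weight tensor. the function names and the scale / zero-point scheme are illustrative textbook choices, not the nearbAI optimizer’s actual method.

```python
import numpy as np

def quantize(weights, num_bits=8):
    """affine (asymmetric) quantization of a float tensor to signed num_bits integers."""
    qmin, qmax = -(2 ** (num_bits - 1)), 2 ** (num_bits - 1) - 1
    w_min, w_max = float(weights.min()), float(weights.max())
    # map the float range [w_min, w_max] onto the integer range [qmin, qmax]
    scale = (w_max - w_min) / (qmax - qmin) or 1.0  # guard against constant tensors
    zero_point = int(round(qmin - w_min / scale))
    q = np.clip(np.round(weights / scale) + zero_point, qmin, qmax).astype(np.int8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """recover an approximation of the original float tensor."""
    return scale * (q.astype(np.float32) - zero_point)

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.1, size=(4, 4)).astype(np.float32)
q, s, z = quantize(w)
w_hat = dequantize(q, s, z)
# reconstruction error is bounded by roughly one quantization step (the scale)
print(np.abs(w - w_hat).max())
```

storing weights as 8-bit integers instead of 32-bit floats cuts memory traffic by 4x, which is one reason quantized inference matters for the power, area, and latency trade-off above.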
first-time right
we also offer: