HeliaRT

    Ultra-Efficient Runtime

    HeliaRT is an ultra-efficient AI runtime built on TensorFlow Lite for Microcontrollers, optimized for Ambiq’s Apollo family of ultra-low-power SoCs. It simplifies development and accelerates deployment of high-performance, energy-efficient AI applications at the edge.
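    Because HeliaRT is built on, and stays compatible with, TensorFlow Lite for Microcontrollers (TFLM), a model that already runs under TFLM uses the familiar interpreter API. The sketch below shows a standard TFLM inference routine rather than anything HeliaRT-specific: the model array, arena size, and registered ops are placeholders, and the exact interpreter constructor varies slightly across TFLM releases.

```cpp
// Minimal TensorFlow Lite for Microcontrollers inference routine.
// The model data, tensor arena size, and op list are illustrative placeholders.
#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/micro/micro_mutable_op_resolver.h"
#include "tensorflow/lite/schema/schema_generated.h"

extern const unsigned char g_model_data[];   // flatbuffer from the TFLite converter

namespace {
constexpr int kArenaSize = 64 * 1024;        // tune per model
alignas(16) uint8_t tensor_arena[kArenaSize];
}  // namespace

int RunInference(const int8_t* input, int input_len, int8_t* output, int output_len) {
  const tflite::Model* model = tflite::GetModel(g_model_data);

  // Register only the ops the model actually uses.
  static tflite::MicroMutableOpResolver<4> resolver;
  resolver.AddConv2D();
  resolver.AddDepthwiseConv2D();
  resolver.AddFullyConnected();
  resolver.AddSoftmax();

  static tflite::MicroInterpreter interpreter(model, resolver, tensor_arena, kArenaSize);
  if (interpreter.AllocateTensors() != kTfLiteOk) return -1;

  TfLiteTensor* in = interpreter.input(0);
  for (int i = 0; i < input_len; ++i) in->data.int8[i] = input[i];

  if (interpreter.Invoke() != kTfLiteOk) return -1;

  TfLiteTensor* out = interpreter.output(0);
  for (int i = 0; i < output_len; ++i) output[i] = out->data.int8[i];
  return 0;
}
```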

    HeliaRT Highlights

    01

    Optimized for Performance

    Utilizes the Apollo SoC’s M-Profile Vector Extension (MVE) and DSP capabilities to accelerate AI computations and maximize efficiency.

    02

    Custom AI Kernels

    Includes purpose-built kernels optimized for Apollo510’s vector acceleration, unlocking faster, more efficient inference for edge applications (an illustrative kernel sketch follows this list).

    03

    High Performance, Ultra-Low Power

    Enables developers to create responsive AI applications with minimal energy usage—ideal for battery-powered and always-on devices.

    04

    Seamless Compatibility

    Fully backwards-compatible with TensorFlow Lite for Microcontrollers and supports the entire Ambiq SoC lineup, ensuring flexibility across diverse hardware platforms.
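    The custom kernels above rely on Arm Helium, the M-Profile Vector Extension (MVE) available on the Cortex-M55 core in Apollo510. As a rough illustration of the kind of inner loop such a kernel vectorizes, here is an int8 dot product written with MVE intrinsics and a plain scalar fallback. This is not HeliaRT source code; the function name, loop structure, and choice of intrinsics are illustrative assumptions.

```cpp
// Illustrative int8 dot product: Helium (MVE) intrinsics when the target
// supports them, portable scalar code otherwise. A sketch only, not HeliaRT's
// actual kernel implementation.
#include <stdint.h>

#if defined(__ARM_FEATURE_MVE) && (__ARM_FEATURE_MVE & 1)
#include <arm_mve.h>

int32_t dot_s8(const int8_t* a, const int8_t* b, int n) {
  int32_t acc = 0;
  int i = 0;
  for (; i + 16 <= n; i += 16) {
    int8x16_t va = vld1q_s8(a + i);   // load 16 int8 lanes from each input
    int8x16_t vb = vld1q_s8(b + i);
    acc = vmladavaq_s8(acc, va, vb);  // multiply lanes, sum them, add to acc
  }
  for (; i < n; ++i) {                // scalar tail for the remaining elements
    acc += (int32_t)a[i] * b[i];
  }
  return acc;
}

#else  // no MVE: plain C fallback

int32_t dot_s8(const int8_t* a, const int8_t* b, int n) {
  int32_t acc = 0;
  for (int i = 0; i < n; ++i) acc += (int32_t)a[i] * b[i];
  return acc;
}

#endif
```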

    Turbocharged Inference

    HeliaRT delivers up to 7× faster inference than TensorFlow Lite for Microcontrollers, with extensive kernel support across AI layer types: no fallbacks and no performance loss.
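    Speed-ups like this depend on the model, the compiler, and the clock configuration, so they are normally verified on the target. One common way to measure per-inference latency on a Cortex-M core is the DWT cycle counter, sketched below. It assumes the CMSIS core definitions are available (normally pulled in through the Ambiq device header) and reuses the placeholder RunInference() from the earlier sketch.

```cpp
// Per-inference latency via the Cortex-M DWT cycle counter. A sketch only:
// CMSIS symbols (DWT, CoreDebug, SystemCoreClock) are standard, but the header
// and clock setup come from the device support files in a real project.
#include <stdint.h>
#include <stdio.h>
#include "core_cm55.h"    // assumed: normally included via the Ambiq device header

extern int RunInference(const int8_t* input, int input_len,
                        int8_t* output, int output_len);  // placeholder from the earlier sketch
extern uint32_t SystemCoreClock;                          // CPU clock in Hz, set by startup code

static void cycle_counter_init(void) {
  CoreDebug->DEMCR |= CoreDebug_DEMCR_TRCENA_Msk;  // enable the trace/debug block
  DWT->CYCCNT = 0;
  DWT->CTRL  |= DWT_CTRL_CYCCNTENA_Msk;            // start the cycle counter
}

void profile_once(const int8_t* in, int in_len, int8_t* out, int out_len) {
  cycle_counter_init();
  uint32_t start = DWT->CYCCNT;
  RunInference(in, in_len, out, out_len);
  uint32_t cycles = DWT->CYCCNT - start;
  uint32_t mhz = SystemCoreClock / 1000000u;
  printf("inference: %lu cycles (~%lu us at %lu MHz)\n",
         (unsigned long)cycles, (unsigned long)(cycles / mhz), (unsigned long)mhz);
}
```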
