HeliaAOT

Blazing Fast Neural Inferencing

HeliaAOT is an ahead-of-time compiler that converts LiteRT models into highly optimized, standalone C inference modules tailored for Ambiq’s ultra-low-power SoCs. It produces compact, efficient, and human-readable C code with no runtime interpreter, enabling lightning-fast, power-efficient AI at the edge.
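
To make the ahead-of-time idea concrete, the sketch below shows the rough shape such a generated C module could take: a statically sized tensor arena, tensors at fixed offsets, and a single invoke entry point. All names and sizes here (model_arena, model_invoke, MODEL_ARENA_BYTES) are hypothetical illustrations, not HeliaAOT’s actual generated output.

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical shape of an AOT-generated C module: a statically sized
 * tensor arena plus one entry point per model. All names are illustrative,
 * not HeliaAOT's real generated API. */

#define MODEL_ARENA_BYTES 1024
static uint8_t model_arena[MODEL_ARENA_BYTES];  /* size fixed at compile time */

/* Input and output tensors live at fixed, precomputed arena offsets. */
int8_t *model_input(void)  { return (int8_t *)&model_arena[0];   }
int8_t *model_output(void) { return (int8_t *)&model_arena[512]; }

/* One call runs the whole compiled graph in place; a real generated body
 * would be straight-line C emitted per layer, with no interpreter loop. */
int model_invoke(void)
{
    /* ... per-layer kernels emitted by the compiler would go here ... */
    return 0;
}

/* Caller side: copy quantized input, invoke, read back the logits. */
int classify(const int8_t *pixels, size_t n)
{
    memcpy(model_input(), pixels, n);
    if (model_invoke() != 0)
        return -1;
    const int8_t *logits = model_output();
    int best = 0;
    for (int c = 1; c < 10; c++)        /* 10 hypothetical classes */
        if (logits[c] > logits[best])
            best = c;
    return best;
}
```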

HeliaAOT Highlights

01

Zero Guesswork in Memory Allocation

Automatic tensor memory planning eliminates over-allocation and strips away unused code for lean, efficient deployment.
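
To see what automatic planning buys, here is a minimal greedy arena planner, a sketch of the general technique rather than HeliaAOT’s actual algorithm: each tensor is live over an interval of graph steps, tensors with disjoint lifetimes can share the same bytes, and the arena ends up sized to the peak of concurrently live tensors instead of the sum of all of them.

```c
#include <stdio.h>
#include <stddef.h>

/* Greedy arena planner sketch (illustrative only). Each tensor records the
 * first and last graph step at which it is live; tensors whose lifetimes
 * do not overlap may be placed at the same arena offsets. */
typedef struct {
    size_t size;        /* bytes                         */
    int first, last;    /* live over steps [first..last] */
    size_t offset;      /* assigned arena offset         */
} tensor_t;

static int lifetimes_overlap(const tensor_t *a, const tensor_t *b)
{
    return a->first <= b->last && b->first <= a->last;
}

/* Place each tensor at the lowest offset that avoids every already-placed,
 * lifetime-overlapping tensor. Returns the total arena size required. */
static size_t plan(tensor_t *t, int n)
{
    size_t arena = 0;
    for (int i = 0; i < n; i++) {
        size_t off = 0;
        for (int moved = 1; moved; ) {
            moved = 0;
            for (int j = 0; j < i; j++) {
                if (lifetimes_overlap(&t[i], &t[j]) &&
                    off < t[j].offset + t[j].size &&
                    t[j].offset < off + t[i].size) {
                    off = t[j].offset + t[j].size;  /* bump past conflict */
                    moved = 1;
                }
            }
        }
        t[i].offset = off;
        if (off + t[i].size > arena)
            arena = off + t[i].size;
    }
    return arena;
}

int main(void)
{
    /* Three tensors: A dies before C is born, so C reuses A's bytes. */
    tensor_t t[] = {
        { 4096, 0, 1, 0 },  /* A */
        { 2048, 1, 2, 0 },  /* B */
        { 4096, 2, 3, 0 },  /* C */
    };
    printf("arena = %zu bytes\n", plan(t, 3));  /* 6144, not 10240 */
    return 0;
}
```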

02

Up to 10× Smaller Code Size

Dramatically reduces flash footprint on Apollo SoCs compared to standard TensorFlow Lite for Microcontrollers.

03

Deep Optimization & Customization

Fine-tune inference pipelines at the operator, subgraph, or full-graph level with advanced techniques like layer fusion, tensor reordering, and intelligent memory placement.
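
Layer fusion, for instance, folds an activation into the loop that produces its input, so the intermediate tensor is never materialized in memory. The sketch below fuses a ReLU into a dense layer’s accumulation loop; it is a generic illustration of the technique, not HeliaAOT’s emitted code.

```c
#include <stdint.h>

#define IN  64
#define OUT 32

/* Unfused, the dense layer would write an intermediate buffer that a
 * separate ReLU pass then rewrites. Fusing applies the activation as each
 * output element is produced, saving a full pass over memory. */
void dense_relu_fused(const int32_t w[OUT][IN], const int32_t *x,
                      const int32_t *bias, int32_t *y)
{
    for (int o = 0; o < OUT; o++) {
        int32_t acc = bias[o];
        for (int i = 0; i < IN; i++)
            acc += w[o][i] * x[i];
        y[o] = acc > 0 ? acc : 0;   /* ReLU fused into the store */
    }
}
```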

04

Seamless Integration

Integrates as a Zephyr RTOS module or as a plug-in to Ambiq’s neuralSPOT AI Development Kit (ADK) for a streamlined workflow.
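
From the application side, driving a compiled model from a Zephyr thread could look roughly like the following. The model_* symbols are the same hypothetical generated interface sketched above, not the module’s real API, and the build is assumed to pull the generated sources in as a Zephyr module.

```c
#include <stdint.h>
#include <zephyr/kernel.h>
#include <zephyr/sys/printk.h>

/* Hypothetical interface exposed by the AOT-generated module. */
extern int8_t *model_input(void);
extern int8_t *model_output(void);
extern int     model_invoke(void);

int main(void)
{
    while (1) {
        /* ... fill model_input() from a sensor driver ... */
        if (model_invoke() == 0)
            printk("logit[0] = %d\n", model_output()[0]);
        k_sleep(K_MSEC(1000));  /* pace inferences */
    }
    return 0;
}
```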

Uncompromising Performance

Get HeliaRT-level inference speed in a tiny package: HeliaAOT slashes memory footprint by up to 2.6× on MLPerf Tiny models.

Design Resources
