HeliosAOT

    Blazing Fast Neural Inferencing

    HeliosAOT is an ahead-of-time compiler that converts LiteRT models into highly optimized, standalone C inference modules tailored for Ambiq’s ultra-low-power SoCs. It produces compact, efficient, and human-readable C code with no runtime interpreter overhead, enabling lightning-fast, power-efficient AI at the edge.
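
    Ambiq’s generated sources aren’t reproduced here, but the general shape of an AOT-compiled module is easy to sketch: weights baked into const arrays, statically sized activation buffers, and a tiny allocation-free API. The toy example below is illustrative only; the model, sizes, and model_* names are hypothetical, not HeliosAOT’s actual output.

        /* Illustrative sketch only: the general shape of a standalone,
         * AOT-generated C inference module. The toy model, sizes, and
         * model_* names are hypothetical, not HeliosAOT's output. */
        #include <stdint.h>
        #include <stdio.h>

        /* Weights live in const arrays (flash-resident on an MCU). */
        static const int8_t  kWeights[4][3] = {
            { 3, -1, 2 }, { 0, 5, -2 }, { 1, 1, 1 }, { -4, 2, 0 },
        };
        static const int32_t kBias[4] = { 10, -20, 0, 5 };

        /* Activations sit in buffers sized exactly at compile time:
         * no malloc, no interpreter, no runtime graph structures. */
        static int8_t  g_input[3];
        static int32_t g_output[4];

        int8_t *model_input(void)         { return g_input;  }
        const int32_t *model_output(void) { return g_output; }

        /* The "generated" graph: one fully-connected layer, emitted as
         * plain C loops because every shape is known ahead of time. */
        void model_invoke(void)
        {
            for (int o = 0; o < 4; ++o) {
                int32_t acc = kBias[o];
                for (int i = 0; i < 3; ++i)
                    acc += (int32_t)kWeights[o][i] * (int32_t)g_input[i];
                g_output[o] = acc;
            }
        }

        int main(void)
        {
            model_input()[0] = 1;
            model_input()[1] = 2;
            model_input()[2] = 3;
            model_invoke();
            for (int o = 0; o < 4; ++o)
                printf("logit[%d] = %ld\n", o, (long)model_output()[o]);
            return 0;
        }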

    HeliosAOT highlights

    01

    Zero Guesswork in Memory Allocation

    Automatic tensor memory planning eliminates over-allocation, while dead-code elimination strips away unused operators, for a lean, efficient deployment.
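
    To make the planning idea concrete: because every tensor’s size and lifetime is known at compile time, buffers whose lifetimes never overlap can share the same arena bytes, so the arena is sized to the true peak rather than the sum of all tensors. The sketch below is a generic first-fit planner over invented sizes and lifetimes, shown only to illustrate the technique; it is not HeliosAOT’s algorithm.

        /* Generic sketch of ahead-of-time tensor memory planning.
         * Tensors whose lifetimes do not overlap may share arena bytes;
         * sizes and lifetimes here are invented for illustration. */
        #include <stdio.h>
        #include <stddef.h>

        typedef struct {
            const char *name;
            size_t size;     /* bytes */
            int first_op;    /* op index that writes the tensor */
            int last_op;     /* last op index that reads it */
            size_t offset;   /* chosen by the planner below */
        } Tensor;

        static int lifetimes_overlap(const Tensor *a, const Tensor *b)
        {
            return !(a->last_op < b->first_op || b->last_op < a->first_op);
        }

        int main(void)
        {
            Tensor t[] = {
                { "input",  1024, 0, 0, 0 },  /* consumed by op 0 */
                { "hidden", 1024, 0, 1, 0 },  /* op 0 out, op 1 in */
                { "output",  512, 1, 2, 0 },  /* can reuse input's bytes */
            };
            size_t n = sizeof t / sizeof t[0], peak = 0, total = 0;

            /* First-fit: start at offset 0 and bump past any placed
             * tensor that clashes in both lifetime and address range. */
            for (size_t i = 0; i < n; ++i) {
                size_t off = 0;
                int moved = 1;
                while (moved) {
                    moved = 0;
                    for (size_t j = 0; j < i; ++j) {
                        int addr_clash = off < t[j].offset + t[j].size &&
                                         t[j].offset < off + t[i].size;
                        if (lifetimes_overlap(&t[i], &t[j]) && addr_clash) {
                            off = t[j].offset + t[j].size;
                            moved = 1;
                        }
                    }
                }
                t[i].offset = off;
                if (off + t[i].size > peak)
                    peak = off + t[i].size;
                total += t[i].size;
            }

            for (size_t i = 0; i < n; ++i)
                printf("%-6s size=%4zu offset=%4zu\n",
                       t[i].name, t[i].size, t[i].offset);
            printf("arena: %zu bytes (back-to-back layout needs %zu)\n",
                   peak, total);
            return 0;
        }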

    02

    Up to 10x Smaller Code Size

    Dramatically reduces flash footprint on Apollo SoCs compared to standard TensorFlow Lite for Microcontrollers.

    03

    Deep Optimization & Customization

    Fine-tune inference pipelines at the operator, subgraph, or full-graph level with advanced techniques like layer fusion, tensor reordering, and intelligent memory placement.
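
    As a concrete illustration of fusion: rather than writing a full pre-activation tensor to memory between a dense layer and its ReLU, a compiler can emit one loop that applies the activation as each value is produced. The sketch below shows the general technique with invented shapes; it is not HeliosAOT’s generated code.

        /* Generic layer-fusion illustration: dense + ReLU emitted as one
         * loop, so the pre-activation tensor never touches memory. */
        #include <stdint.h>
        #include <stdio.h>

        #define IN  4
        #define OUT 2

        static void dense_relu_fused(const int32_t w[OUT][IN],
                                     const int32_t bias[OUT],
                                     const int32_t x[IN], int32_t y[OUT])
        {
            for (int o = 0; o < OUT; ++o) {
                int32_t acc = bias[o];
                for (int i = 0; i < IN; ++i)
                    acc += w[o][i] * x[i];
                y[o] = acc > 0 ? acc : 0;  /* ReLU applied in-register */
            }
        }

        int main(void)
        {
            const int32_t w[OUT][IN] = { { 1, -2, 3, -4 },
                                         { -1, 2, -3, 4 } };
            const int32_t bias[OUT] = { 5, -5 };
            const int32_t x[IN] = { 1, 1, 1, 1 };
            int32_t y[OUT];

            dense_relu_fused(w, bias, x, y);
            printf("y = { %d, %d }\n", (int)y[0], (int)y[1]);
            return 0;
        }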

    04

    Seamless Integration

    Integrates easily as a Zephyr RTOS module or as a plug-in to Ambiq’s neuralSPOT AI Development Kit (ADK) for a streamlined workflow.
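
    The module’s actual Zephyr integration points aren’t detailed here; the snippet below only sketches the call pattern from a Zephyr application loop, with hypothetical model_* symbols standing in for the generated module’s interface.

        /* Hypothetical Zephyr application loop driving an AOT-generated
         * module. The model_* symbols are illustrative stand-ins, not
         * HeliosAOT's actual API. */
        #include <zephyr/kernel.h>
        #include <zephyr/sys/printk.h>
        #include <stdint.h>

        extern int8_t *model_input(void);
        extern const int32_t *model_output(void);
        extern void model_invoke(void);

        int main(void)
        {
            while (1) {
                /* Fill model_input() from a sensor driver here. */
                model_invoke();
                printk("top logit: %d\n", (int)model_output()[0]);
                k_msleep(1000);  /* duty-cycle inference to save power */
            }
            return 0;
        }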

    Uncompromising Performance

    Get HeliosRT-level inference speed in a tiny package—HeliosAOT slashes memory footprint by up to 2.6× on MLPerf Tiny models.

    Design Resources
