So, in my previous article, I discussed how we have three powerful tailwinds running into three daunting headwinds – what does this mean for Edge AI? With all those countervailing winds, a sane sailor would tell you to stay in port, but what fun would that be? It does make predicting trends difficult, though not so difficult that I’m unwilling to risk making a fool of myself by taking a stab at it.
Invisible AI-improved Features
Prediction 1: Novel ‘invisible’ models that tweak the inner functioning of edge devices will proliferate.
Given the constraints discussed above, there are only a handful of things AI can do at the edge that are popular enough to encourage investment, such as simple (yet clever) speech recognition, audio and image classification, activity and health monitoring, and industrial sensor monitoring.
There are many other things AI can do that are only partially visible to the end user. For example, AI can detect the type of room you’re in and use that information to tune the noise cancellation during a phone call more effectively. The caller simply notices a call with good sound quality without knowing AI was involved. Many of these ‘near physics’ models don’t need vast datasets; instead, techniques such as simulation, reinforcement learning, and zero- or few-shot learning can be used to train them.
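To make this concrete, here is a minimal C++ sketch of how an ‘invisible’ acoustic-scene model might steer noise-cancellation settings. All names, thresholds, and parameters are invented for illustration, not any real product’s code, and the classifier is stubbed with a trivial heuristic where a real device would run a small neural network over spectral features.

```cpp
#include <array>
#include <cstddef>
#include <cstdio>

enum class Scene { SmallRoom, LargeHall, Street };

struct NcProfile {
    float high_pass_hz;    // cut rumble below this frequency
    float suppression_db;  // how aggressively to attenuate noise
};

// Stand-in for the learned model: a real device would classify spectral
// features with a tiny neural net; a heuristic keeps this sketch runnable.
Scene ClassifyScene(const std::array<float, 64>& bands) {
    float low = 0.0f, high = 0.0f;
    for (std::size_t i = 0; i < bands.size(); ++i)
        (i < 8 ? low : high) += bands[i];
    if (low > 4.0f * high) return Scene::LargeHall;  // reverberant, bass-heavy
    if (high > 2.0f * low) return Scene::Street;     // broadband traffic hiss
    return Scene::SmallRoom;
}

// Map each detected scene to tuned noise-cancellation parameters.
NcProfile ProfileFor(Scene scene) {
    switch (scene) {
        case Scene::LargeHall: return {120.0f, 18.0f};
        case Scene::Street:    return {200.0f, 24.0f};
        default:               return {80.0f, 12.0f};
    }
}

int main() {
    std::array<float, 64> bands{};  // pretend these came from an FFT
    bands[2] = 9.0f;                // synthetic bass-heavy spectrum
    NcProfile p = ProfileFor(ClassifyScene(bands));
    std::printf("high-pass %.0f Hz, suppression %.0f dB\n",
                p.high_pass_hz, p.suppression_db);
}
```

The point of the sketch is the shape of the system: a small model whose output never surfaces in the UI, only in better-sounding calls.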
AI Models Continue to Improve
Prediction 2: Existing models will continue to improve and add features.
As I mentioned above, most current models leave a lot on the table regarding the information that can be deduced from sensors. For example, you can tell a lot about a person’s health from their walking gait – how balanced it is, what style of walking they prefer, and so forth. Most models don’t attempt this because the datasets don’t yet exist, but the raw data does. Every person wearing a smartwatch is a potential data source, and watch manufacturers need only collect a fraction of that data to build an amazing dataset. It is a matter of finding people willing to volunteer the data, which some major manufacturers already do.
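As a hedged illustration of the kind of signal such a model might extract, here is a C++ sketch comparing alternating step intervals from detected steps. The names, the test data, and the simplification that alternate steps belong to alternate feet are all assumptions for illustration, not a published algorithm.

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

// Given timestamps (seconds) of detected steps, assume alternate steps
// belong to alternate feet and compare the two mean step intervals.
// Returns 0 for perfectly symmetric gait; larger values suggest a limp.
double GaitAsymmetry(const std::vector<double>& step_times) {
    double sum[2] = {0.0, 0.0};
    int count[2] = {0, 0};
    for (std::size_t i = 1; i < step_times.size(); ++i) {
        double interval = step_times[i] - step_times[i - 1];
        sum[i % 2] += interval;
        ++count[i % 2];
    }
    if (count[0] == 0 || count[1] == 0) return 0.0;
    double a = sum[0] / count[0], b = sum[1] / count[1];
    return std::fabs(a - b) / ((a + b) / 2.0);
}

int main() {
    // Simulated step timestamps with a slight limp on every other step.
    std::vector<double> steps = {0.0, 0.55, 1.20, 1.75, 2.40, 2.95, 3.60};
    std::printf("asymmetry index: %.2f\n", GaitAsymmetry(steps));
}
```

A production model would of course be far more sophisticated, but even this toy metric hints at how much health signal sits unused in a wrist accelerometer.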
What is Edge AI?
Typically, when users interact with AI in consumer, industrial, and security devices, only a small part of the AI processing happens locally, with the ‘heavy lifting’ being delegated to the cloud. This isn’t ideal because sending all that data back and forth expends energy, increases latency, and exposes potentially private data.
Edge AI is meant to solve those problems. It refers to AI that runs entirely locally on embedded systems without the constant need to communicate with the cloud, saving power and time while increasing privacy and security. Ambiq’s low power utilization means more AI can be performed locally without compromising battery life.
I’ve often written about the tailwinds propelling Edge AI to new levels of usefulness and value, and I touch on them throughout this article. Still, it is also important to understand the headwinds Edge AI is experiencing. The combination of these threats and opportunities keeps Edge AI in a confused, non-linear state, with a lot of progress in some areas and less in others.
More Efficient AI Deployment
Prediction 3: 2023 will see the maturing and adoption of highly efficient AI runtimes.
AI is executed on devices via a ‘runtime’ – a piece of code that understands the model definition and runs it on the device. The most popular runtime is TensorFlow Lite for Microcontrollers (TFLM). TFLM is a great way to get a model up and running on a device, as it has a large ecosystem of developers to draw on for questions and support, and lots of tools built around it. Unfortunately, it isn’t very efficient compared to what a good coder could produce by hand (as Ambiq® did for our speech recognition model collection, NNSP). TFLM’s generality costs memory and compute, resulting in higher power use, latency, and memory footprint. Coding AI models by hand is *hard*, though, so developers tend to stick with TFLM.
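For context, this is roughly what TFLM inference looks like: a minimal sketch assuming a flatbuffer model compiled into the binary under the hypothetical symbol `g_model_data`, a model using only three operators, and a 2-D output tensor. Exact APIs vary slightly between TFLM releases.

```cpp
#include <cstddef>
#include <cstdint>

#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/micro/micro_mutable_op_resolver.h"
#include "tensorflow/lite/schema/schema_generated.h"

// Flatbuffer produced by the TFLite converter and linked into the firmware
// (e.g. via xxd); the symbol name is an assumption for this sketch.
extern const unsigned char g_model_data[];

// The arena holds all tensors; its size is model-dependent and usually
// found by trial and error.
constexpr std::size_t kArenaSize = 24 * 1024;
alignas(16) static uint8_t tensor_arena[kArenaSize];

int RunOnce(const float* features, int n) {
    const tflite::Model* model = tflite::GetModel(g_model_data);

    // Register only the ops the model uses, to keep code size down.
    static tflite::MicroMutableOpResolver<3> resolver;
    resolver.AddFullyConnected();
    resolver.AddRelu();
    resolver.AddSoftmax();

    static tflite::MicroInterpreter interpreter(model, resolver,
                                                tensor_arena, kArenaSize);
    if (interpreter.AllocateTensors() != kTfLiteOk) return -1;

    // Copy features into the input tensor, run the graph, take the argmax.
    TfLiteTensor* input = interpreter.input(0);
    for (int i = 0; i < n; ++i) input->data.f[i] = features[i];
    if (interpreter.Invoke() != kTfLiteOk) return -1;

    TfLiteTensor* output = interpreter.output(0);  // assumed shape [1, classes]
    int best = 0;
    for (int i = 1; i < output->dims->data[1]; ++i)
        if (output->data.f[i] > output->data.f[best]) best = i;
    return best;
}
```

Every call goes through the interpreter and the flatbuffer-described graph, which is exactly the generality that costs cycles and memory relative to hand-written code.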
There is a middle path, however: tools that convert AI models into ‘compilable’ languages such as C. These work in combination with efficient runtimes to execute models many times faster than TFLM does. There are a handful of such solutions, though none is as widely adopted or as broadly supported as TFLM.
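The appeal is easiest to see in what such tools emit. Here is an illustrative sketch (shapes and weights invented, not any particular tool’s output): the graph is unrolled into plain functions with weights baked in as constant arrays, so there is no interpreter loop, op registry, or flatbuffer parsing at run time, and the C compiler can specialize everything.

```cpp
#include <cstddef>

// Weights generated offline by a hypothetical model-to-C converter.
static const float kW[4][3] = {{0.1f, -0.2f, 0.3f},
                               {0.5f,  0.0f, -0.1f},
                               {-0.3f, 0.4f, 0.2f},
                               {0.2f,  0.1f, -0.4f}};
static const float kB[3] = {0.01f, -0.02f, 0.0f};

// One dense layer with ReLU, fully specialized to its shape: the compiler
// can unroll the loops, vectorize, and keep operands in registers.
void DenseRelu(const float in[4], float out[3]) {
    for (std::size_t j = 0; j < 3; ++j) {
        float acc = kB[j];
        for (std::size_t i = 0; i < 4; ++i) acc += in[i] * kW[i][j];
        out[j] = acc > 0.0f ? acc : 0.0f;
    }
}
```

Because everything is resolved at compile time, this style of code is also far easier for a linker to dead-strip and for a profiler to reason about than a general-purpose interpreter.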
Conclusion
Adding AI features to your edge products is an exercise in tradeoffs – AI is notoriously resource-intensive, consuming prodigious amounts of CPU cycles, memory, and power. Here at Ambiq, we’re obsessed with energy efficiency, and that obsession has produced CPU-based inference benchmark results that lead the industry in power efficiency by a large margin. We purposefully build our hardware and software to work together, balancing accuracy, performance, and power consumption to match any edge device’s AI requirements.
Ambiq has grown into one of the most coveted Artificial Intelligence (AI) technology companies globally by driving ultra-low power innovations. In July 2022, Ambiq launched neuralSPOT® to enable artificial intelligence on electronic devices within the power constraints of batteries. Capabilities that were once impossible due to power consumption requirements are now within reach: IoT edge devices can deliver high-performance AI, including speech recognition, activity detection, and real-time analytics.
neuralSPOT includes everything needed to get an AI model onto Ambiq’s platform, such as the latest Apollo4 Plus and Apollo4 Blue Plus SoCs. The SDK consists of libraries for talking to sensors, managing SoC peripherals, and controlling power and memory configurations; tools for easily debugging AI models from any laptop or PC; and examples that tie it all together.
Because models are the heart of AI, Ambiq also created ModelZoo to make practical Edge AI possible: a collection of open-source edge AI models packaged with all the tools needed to develop each model from scratch. Alongside it, neuralSPOT contains Ambiq-specific embedded libraries for audio, i2c, and USB peripherals, power management, and numerous helper functions such as interprocess communication and memory management. For the first time, a simple yet elegant toolset can empower AI developers in different roles, including data specialists, AI specialists, and application specialists.
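To give a flavor of the develop-on-device loop such an SDK enables, here is a deliberately hypothetical sketch. Every function below is invented for illustration and is not the actual neuralSPOT API; the stubs stand in for the sensor, power, and debug facilities the SDK provides.

```cpp
#include <cstdio>

// All of the following are invented stand-ins, not real neuralSPOT calls.
bool SensorInit() { return true; }               // bring up the IMU
int  SensorRead(float* buf, int n) {             // fill a sample window
    for (int i = 0; i < n; ++i) buf[i] = 0.0f;
    return n;
}
void SleepUntilData() {}                         // deep-sleep between windows
int  RunModel(const float*, int) { return 1; }   // deployed classifier
void DebugLog(const char* k, int v) { std::printf("%s=%d\n", k, v); }

int main() {
    static float window[128];
    if (!SensorInit()) return 1;
    for (;;) {
        SleepUntilData();                    // SoC stays in low-power mode
        int n = SensorRead(window, 128);
        int activity = RunModel(window, n);  // e.g. idle/walk/run class
        DebugLog("activity", activity);      // inspect results from a laptop
    }
}
```

The value of packaging this loop is that data specialists, AI specialists, and application specialists can each work on their slice without re-plumbing sensors, power modes, or debug channels.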
For more information, visit Ambiq AI – Supercharging Edge AI