“Only charlatans and fools predict the future, and only a fool is certain.” – some smart person probably
Predicting the future is often a fool’s errand, especially in highly dynamic fields such as technology and AI. Bearing that in mind, please be gentle with me as I try not to make a fool of myself while making a few predictions about Edge AI in 2023. Before we dive into the predictions themselves, which I’ll discuss in the next article, it’s useful to get some context by looking at where the industry is today.
The State of Edge AI
I’ve often written about the tailwinds propelling Edge AI to new levels of usefulness and value, and I’ll touch on those again below. Still, it is equally important to understand the headwinds it is experiencing. Together, these opposing forces keep Edge AI in a confused, non-linear state, with a lot of progress in some areas and less in others.
Tailwinds
Let’s start with three major trends that fuel Edge AI innovation: new focuses for AI research, community engagement, and technological breakthroughs. The bulk of AI’s roughly 300,000 research papers has traditionally focused on mainstream AI innovations centered on training and optimizing very large models. In recent years, a handful of researchers realized that ‘small is beautiful,’ leading to breakthroughs in model efficiency that make it feasible to run artificial intelligence on IoT devices at the edge. At the same time, technological breakthroughs, such as Ambiq’s SPOT® platform, enabled startups and developers to discover practical new Edge AI use cases. These developments have helped produce some truly amazing products whose intelligence consumers quickly came to take for granted. The speed with which “my watch knows I’m on an elliptical” went from “wow!” to “of course it does” is impressive.
Headwinds
Not everything is rainbows and unicorns, though: there are also headwinds Edge AI must contend with, including memory constraints, a scarcity of sensor types, and limited dataset availability. Let’s examine each of these, starting with memory.
Memory
Edge AI runs on embedded devices, and while power and compute innovations are progressing rapidly, memory is improving only linearly (a general trend, not one specific to embedded devices). AI computation is memory intensive, involving the multiplication of massive arrays of numbers by the almost equally enormous streams of data captured by sensors. The more complex the model, the more memory it needs – for example, famous AI applications such as ChatGPT and Midjourney need gigabytes of memory, thousands of times more than is available on edge devices. In the Edge AI space, there are several ways to mitigate this: we may sacrifice a bit of accuracy or reduce the model’s sophistication (for example, the model may identify fewer types of fitness activities). AI designers can also take advantage of some of the research mentioned above, much of which focuses on producing denser, more compact models while giving up as little accuracy and capability as possible.
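To make one of those mitigation techniques concrete, here is a minimal sketch of 8-bit affine quantization, the core idea behind many of the compact-model results mentioned above: storing weights as int8 instead of float32 cuts their memory footprint roughly four-fold, at a small cost in precision. The structure and function names here are illustrative, not drawn from any particular library.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

// Affine (asymmetric) quantization: x ≈ scale * (q - zero_point),
// where q is stored as an int8 instead of a 32-bit float.
struct QuantParams {
  float scale;
  int32_t zero_point;
};

// Derive scale and zero point from the observed weight range.
QuantParams ComputeParams(const std::vector<float>& weights) {
  float lo = *std::min_element(weights.begin(), weights.end());
  float hi = *std::max_element(weights.begin(), weights.end());
  lo = std::min(lo, 0.0f);  // the representable range must include zero
  hi = std::max(hi, 0.0f);
  float scale = (hi - lo) / 255.0f;  // 256 int8 levels span the range
  if (scale == 0.0f) scale = 1.0f;   // guard against an all-zero tensor
  int32_t zero_point = -128 - static_cast<int32_t>(std::round(lo / scale));
  return {scale, zero_point};
}

int8_t Quantize(float x, QuantParams p) {
  int32_t q = static_cast<int32_t>(std::round(x / p.scale)) + p.zero_point;
  return static_cast<int8_t>(std::clamp(q, -128, 127));
}

float Dequantize(int8_t q, QuantParams p) {
  return p.scale * (static_cast<int32_t>(q) - p.zero_point);
}
```

Real toolchains apply this same idea per layer or per channel (TensorFlow Lite’s post-training quantization, for instance) and often pair it with pruning to shrink models further.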
Dataset
Dataset availability is another constraint. AI depends on (lots and lots of) data to train models. A small vs. large dataset makes the difference between an activity tracker that can “sort of” detect when you are doing a sit-up and one that can detect any style of sit-up at every fitness level and for any body type. Producing datasets is laborious and can be expensive, and public datasets are rare, especially in the embedded space. For example, a sit-up dataset would require hundreds of people doing thousands of sit-ups while wearing data-capture devices. Over the last decade, the industry has pieced together a handful of public datasets (for example, by using public domain books as training material for machine translation AI) in limited but popular domains such as speech, audio, and images. This has yet to happen widely for data specific to Edge AI. The fact that much of Edge AI’s promise lies in privacy-sensitive medical and fitness tracking applications only makes collecting data more challenging.
Finally, Edge AI differs significantly from ‘mainstream’ AI because it interacts closely with the physical world. Whereas typical AI mostly works with text, images, and audio, Edge AI is embedded in the real world, sensing it via gyroscopes, accelerometers, cameras, microphones, and biosensors. This exacerbates the dataset problem mentioned above (an image is an image, but there are many accelerometers, all slightly different even though they measure the same thing). At the same time, Edge AI’s reliance on edge sensors limits what it can do at the present moment – there is only so much an AI can deduce from an accelerometer, although we’re nowhere near exhausting the possibilities. Novel sensors are being introduced, but these will run headfirst into the same dataset problem, slowing their adoption by AI.
An Exercise in Tradeoffs
Adding AI features to your edge products is an exercise in tradeoffs – AI is notoriously resource-intensive, consuming prodigious amounts of CPU cycles, memory, and power. Here at Ambiq, we’re obsessed with energy efficiency, and this obsession has produced industry-leading power-efficient CPU-based inference benchmark results, by a large margin. We purposefully build our hardware and software to work together to deliver optimal results, balancing accuracy, performance, and power consumption against any edge device’s AI requirements.
Ambiq has grown into one of the most coveted Artificial Intelligence (AI) technology companies globally by driving ultra-low power innovations. In July 2022, Ambiq launched neuralSPOT® to enable artificial intelligence on electronic devices within the power constraints of batteries. What was once impossible due to power consumption requirements is now within reach: IoT edge devices are capable of high-performance AI, including speech recognition, activity detection, and real-time analytics.
neuralSPOT includes everything needed to get an AI model onto Ambiq’s platform, such as the latest Apollo4 Plus and Apollo4 Blue Plus SoCs. The SDK consists of libraries for talking to sensors, managing SoC peripherals, and controlling power and memory configurations; tools for easily debugging AI models from any laptop or PC; and examples that tie it all together.
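To give a flavor of what running a model on such a platform involves, below is a minimal sketch of an embedded inference pass using TensorFlow Lite for Microcontrollers, which neuralSPOT builds on. The model data, op list, arena size, and classifier shape are illustrative placeholders, not neuralSPOT’s actual API.

```cpp
#include <cstdint>
#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/micro/micro_mutable_op_resolver.h"
#include "tensorflow/lite/schema/schema_generated.h"

// Placeholder: a model compiled into firmware as a byte array (e.g. via xxd).
extern const unsigned char g_model_data[];

// The arena holds all of the model's tensors; its size is model-dependent
// and is exactly where the memory constraints discussed earlier bite.
constexpr int kTensorArenaSize = 20 * 1024;
static uint8_t tensor_arena[kTensorArenaSize];

// Classify one window of (already quantized) sensor samples.
int ClassifyWindow(const int8_t* samples, int num_samples) {
  const tflite::Model* model = tflite::GetModel(g_model_data);

  // Register only the ops this model uses to keep the flash footprint small.
  static tflite::MicroMutableOpResolver<3> resolver;
  resolver.AddFullyConnected();
  resolver.AddRelu();
  resolver.AddSoftmax();

  static tflite::MicroInterpreter interpreter(model, resolver, tensor_arena,
                                              kTensorArenaSize);
  if (interpreter.AllocateTensors() != kTfLiteOk) return -1;

  // Copy the sensor window into the model's quantized input tensor.
  TfLiteTensor* input = interpreter.input(0);
  for (int i = 0; i < num_samples; ++i) input->data.int8[i] = samples[i];

  if (interpreter.Invoke() != kTfLiteOk) return -1;

  // Return the index of the most likely class (e.g. the detected activity).
  TfLiteTensor* output = interpreter.output(0);
  int best = 0;
  for (int i = 1; i < output->dims->data[output->dims->size - 1]; ++i) {
    if (output->data.int8[i] > output->data.int8[best]) best = i;
  }
  return best;
}
```

On real hardware, the SDK’s libraries cover the parts this sketch glosses over: feeding the input tensor from actual sensors and keeping the whole loop inside the device’s power budget.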
Because models are the heart of AI, to make practical Edge AI possible, Ambiq also created ModelZoo, a collection of open-source edge AI models packaged with all the tools needed to develop each model from scratch. neuralSPOT contains Ambiq-specific embedded libraries for audio, I2C, and USB peripherals, power management, and numerous helper functions such as interprocess communication and memory management. For the first time, a simple yet elegant toolset can empower AI developers in different roles, including data specialists, AI specialists, and application specialists. For more information, visit Ambiq AI – Ambiq Supercharging Edge AI.