
    It doesn’t take long for this question to pop up at parties once people learn you’re an “AI person” (I imagine this is how podiatrists feel, minus the mid-party bunion exams and MD-level income). The question is a stumper, though.

    It’s a deceptively difficult question to answer, especially if you’re trying to yell the answer over cocktail party music. “Deceptive” because it should be simple: after all, it stands for “Artificial Intelligence” – the answer is right in the name! In fact, it is genuinely difficult to define “AI” succinctly because AI isn’t succinct – there are many kinds of AI, capable of everything from playing games far more complex than chess to generating convincing art and even videos. Given the hundreds of thousands of novel AI papers published every year, the field is only getting broader.

    If you search online for “what is the simple definition of AI”, the top hit will be singularly unsatisfying to the average cocktail party interrogator:  

    “Artificial intelligence is the simulation of human intelligence processes by machines, especially computer systems.” 

    This answer is likely to be met with blank stares and a change of subject – the truly curious might persist with a “yeah, cool, but what even is AI?”. 

    I’ve decided that what people are actually asking is “What does AI do?”, or more specifically “What does the AI at Ambiq do?”. That is easier to answer and an excellent question to ask, so let me take a stab at it: 

    “Artificial Intelligence is a class of algorithms that are very good at drawing conclusions from fuzzy, real-world inputs.” 

    That sounds great! It’s worth unpacking our definition a bit, though. 

    When we say “fuzzy, real-world inputs,” we’re referring to data from the real world – things like images, sounds, vibrations, and even electrocardiograms. All of these are somewhat random and chaotic. For example, a picture of a cat can be almost infinitely varied (as any cat owner will proclaim!). Even if we’re talking about one particular cat, we’ll get different lighting, angles, distances, low-light noise, and other objects in the image. It is easy for a person to figure out that they’re looking at a cat (we do it almost unconsciously – we don’t have to analyze the picture, we just know), but without AI, it is extremely challenging to program a computer to do the same. The really exciting thing is that we can apply AI to things that aren’t as ‘automatic’ for us. For example, looking at a squiggle and figuring out that it corresponds to the three-dimensional acceleration of a person jogging is hard for humans, but for an AI it is easier than dealing with cat pictures. Likewise, teaching a human to read an ECG takes years of very expensive education, but an AI can do it on a smartwatch.

    When we say “conclusions,” we mean something like “this is a picture of a cat, probably.” The “probably” is important here because, much like humans, AI deals in probabilities. An AI algorithm will rarely be 100% certain of its conclusions. This comes in handy when you’re dealing with real-world ambiguity, such as deciding if a dog is a Malamute, Husky, Akita, or Samoyed (who knew there were so many look-alike dogs?). 
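
    To make that concrete, here is what such a probabilistic conclusion might look like to a programmer. This is a minimal sketch: the breed probabilities are invented for illustration, not produced by any real model.

    ```python
    # A hypothetical classifier output: probabilities over look-alike dog breeds.
    # The values are made up purely for illustration.
    probabilities = {
        "Malamute": 0.46,
        "Husky":    0.31,
        "Akita":    0.14,
        "Samoyed":  0.09,
    }

    # The model's "conclusion" is the most likely label, plus how sure it is of it.
    best_guess = max(probabilities, key=probabilities.get)
    print(f"Probably a {best_guess} ({probabilities[best_guess]:.0%} sure)")
    # -> Probably a Malamute (46% sure)
    ```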

    Finally, when we say “very good,” we mean “the best algorithms humans have managed to create for this kind of stuff.” For example, teaching computers to understand human speech has been an ongoing project since the middle of the last century, but it wasn’t until the advent of AI-driven speech recognition that it became genuinely useful (as anyone who tried to dictate emails in the early 2000s knows). Computer vision, language translation, and even simple step counting are other features that have existed for decades or even centuries, but all became much better once AI was applied.

    At that point in the cocktail party conversation, I tend to get overly excited talking about the AI stuff we’re doing at Ambiq (such as our highly efficient speech recognition, ECG monitors, and other features), people start making excuses and heading back to the bar, and I go looking for more victims to talk about AI with, which is a lot more fun than talking about bunions.


    Teaching Versus Dictating 

    What makes AI so powerful is how we tell the computer to do things – in AI, we “teach” it instead of dictating rules to it.

    Traditional programming is “rule-based” – that is, you tell the computer, in painstaking detail, what the rules are. This conventional approach works very well for ATM transactions or online stores but is terrible for the kinds of fuzzy tasks AI is best at.

    Take our favorite example: cat pictures. If you had to explain to an extraterrestrial how to figure out that a cat is a cat, what rules would you use? Eye shape? Distance between eyes? Nose shape? Distance from eyes to nose? Trust me – the list of rules quickly gets into the thousands or even millions. For decades, this was how computer scientists approached this type of problem (not because they didn’t know about AI, which was first envisioned in the 1940s, but because the compute power needed for practical AI wasn’t there).
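
    If you are curious what that doomed rule-writing exercise looks like in code, here is a purely illustrative sketch. Every feature and threshold below is made up for the example, and the exceptions start piling up immediately:

    ```python
    # A (doomed) rule-based attempt at cat detection. Purely illustrative:
    # every feature and threshold here is invented for the example.
    def looks_like_a_cat(eye_spacing_cm, pointy_ears, has_whiskers, nose_width_cm):
        if not has_whiskers:
            return False                      # ...unless the whiskers are hidden
        if not (2.0 < eye_spacing_cm < 5.0):
            return False                      # ...unless it's a kitten, or a Maine Coon
        if pointy_ears < 2:
            return False                      # ...unless one ear is folded or out of frame
        if nose_width_cm > 3.0:
            return False                      # ...unless the photo was taken up close
        return True                           # thousands of rules still missing

    print(looks_like_a_cat(eye_spacing_cm=3.5, pointy_ears=2,
                           has_whiskers=True, nose_width_cm=1.8))  # True, for this one cat
    ```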

    Once the computational capacity and data needed by AI became practical, AI developers were able to establish a simple set of statistical rules (a neural network architecture and the rules to train it) that loosely mimicked how brains work, and then shove millions and millions of cat pictures at it. The clever bit was that for each picture, they let the AI draw a conclusion and then subtly altered the neural network so that the conclusion was more accurate the next time.

    Eventually, the AI’s neural network will learn that a cat is a cat, no matter the circumstances of the picture. And those millions of rules that we didn’t explicitly define? We still need them; it’s just that instead of writing each one down, they’re captured in the thousands of neurons that make up the neural network.
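
    For the programmers still at the party, here is a minimal sketch of that “let it draw a conclusion, then nudge the network” loop. It uses a single artificial neuron and made-up two-number “pictures” instead of a real network and real images, so treat it as an illustration of the idea rather than a recipe for a cat classifier:

    ```python
    import numpy as np

    # Made-up training data: 200 "pictures" with two numeric features each,
    # and a label that plays the role of "cat" (1.0) or "not cat" (0.0).
    rng = np.random.default_rng(0)
    features = rng.normal(size=(200, 2))
    labels = (features[:, 0] + features[:, 1] > 0) * 1.0

    weights, bias = np.zeros(2), 0.0
    learning_rate = 0.1

    for step in range(1000):
        # 1. Let the model draw a conclusion: a probability that each input is a "cat".
        guesses = 1.0 / (1.0 + np.exp(-(features @ weights + bias)))
        # 2. Measure how wrong it was, and nudge the weights to be slightly less wrong.
        error = guesses - labels
        weights -= learning_rate * (features.T @ error) / len(labels)
        bias -= learning_rate * error.mean()

    # Check how often the trained neuron's conclusion matches the true label.
    guesses = 1.0 / (1.0 + np.exp(-(features @ weights + bias)))
    accuracy = ((guesses > 0.5) == labels).mean()
    print(f"After training: {accuracy:.0%} of conclusions are correct")
    ```

    The important part is the loop: the model guesses, we measure how wrong the guess was, and we nudge the weights so the next guess is a little less wrong. Scaled up to millions of images and millions of weights, that is what “training” a neural network means.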


    Dec 14, 2022
    Written by Carlos Morales
