
Putting AI in the Driver’s Seat for Traffic Management 


How many hours do you spend in traffic a year? Probably more than you realize. While a few minutes every day might not seem like much in the moment, it adds up over time. According to transportation analytics firm INRIX, the average U.S. motorist spent 51 hours in traffic in 2022 [1]. That’s more than an entire workweek. The same study found that traffic jams cost U.S. drivers more than $81 billion during 2022. Long story short, traffic is a major drain on consumers’ time and resources.

The good news? Innovations in the Internet of Things (IoT) and artificial intelligence (AI) have paved the way for solutions that introduce smarter traffic mitigation and make roads safer. From accident detection to traffic light management, AI offers the transportation industry a range of opportunities to beat the traffic. In this article, we’ll outline how the intersection of AI and IoT can optimize road operations and ultimately give people their time and money back.

Tech Innovations for Traffic Optimization 

While the use of AI in traffic management is still in its early stages, a range of applications already allows for real-time monitoring and predictive analytics. Below, we outline the key innovations making smarter traffic management possible.

Flow Sensors 

Traffic management has traditionally operated on fixed schedules. Now, AI-powered sensors deployed in streets can monitor demand and adapt signal timing as conditions shift, optimizing flow. By prioritizing high-traffic roads, these sensors reduce congestion during peak times.
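To make the idea concrete, here is a minimal sketch of demand-proportional green-time allocation. The vehicle counts, timing bounds, and function below are illustrative assumptions, not any vendor’s algorithm.

```python
# Minimal sketch: split an intersection's green-time budget in proportion
# to sensed demand. All numbers are hypothetical.

MIN_GREEN, MAX_GREEN = 10, 60  # per-phase safety bounds in seconds (assumed)
GREEN_BUDGET = 90              # total green time available per cycle (assumed)

def allocate_green(counts: dict[str, int]) -> dict[str, int]:
    """Give each approach green time proportional to its vehicle count,
    clamped to safe minimum and maximum phase lengths."""
    total = sum(counts.values()) or 1  # avoid division by zero on empty roads
    return {
        approach: max(MIN_GREEN, min(MAX_GREEN, round(GREEN_BUDGET * n / total)))
        for approach, n in counts.items()
    }

# The congested northbound approach receives the longest phase.
print(allocate_green({"north": 42, "south": 18, "east": 7, "west": 5}))
```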

Predictive Monitors 

Trained on mounds of historical and real-time data, traffic monitors can learn traffic patterns and predict flow at a given time. These forecasts help traffic personnel allocate resources, optimize routes, and adjust traffic signals ahead of congestion.
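As a toy illustration of that idea, the sketch below fits invented hourly counts with a simple least-squares model; production systems rely on far richer features and models.

```python
# Toy sketch: forecast traffic volume from historical hourly counts using a
# quadratic least-squares fit. The data points are invented for illustration.
import numpy as np

hours = np.array([6, 7, 8, 9, 10, 11])             # hour of day
volume = np.array([220, 510, 640, 480, 300, 260])  # observed vehicles per hour

coeffs = np.polyfit(hours, volume, deg=2)  # fit the morning-peak curve
model = np.poly1d(coeffs)

print(f"Predicted volume at 8:30 am: {model(8.5):.0f} vehicles/hour")
```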

Incident Monitors 

AI-powered monitors can be deployed to watch for and identify traffic incidents, including accidents, speeding, or blockages. When an incident is detected, the data can be sent to the appropriate parties to dispatch support and resolve holdups more quickly.

Players Leading AI-Powered Traffic Management 

So, what companies are fueling this innovation…and how? Below, we examine several leaders in AI-powered traffic management and their real-world applications. 

INRIX 

INRIX provides real-time traffic conditions, pinpointing traffic speeds in different lanes and delivering accurate ETAs for any road in the world, including interstates, country roads, and intersections [2]. The company also leverages vast datasets to surface problem areas and trends in traffic conditions, helping users reclaim their time and helping businesses run their transportation operations more profitably.

Canon 

Canon, one of the top manufacturers of consumer cameras, supports computer vision for traffic mitigation with its complementary metal-oxide-semiconductor (CMOS) sensors [3]. Canon’s CMOS sensors are highly sensitive and power accurate imaging for traffic management automation. They provide clarity across varying light conditions and object speeds, which can help governments calibrate traffic signal coordination for ramp meters and red-light cameras.


Miovision 

Miovision collects multimodal traffic data and delivers actionable, real-time insights for urban grids and corridors [4]. This second-by-second response to dynamic conditions reduces driver stops, cuts wait times, and improves travel speeds. The result is less congested roads and even reduced CO2 emissions, since vehicles spend less time idling in traffic.

Challenges in AI-Powered Traffic Management 

With any new technology, there are challenges to widespread adoption that can only be addressed with time. Let’s examine challenges in AI-powered traffic management. 

Privacy 

Privacy must remain at the forefront of all traffic management technology to ensure industry and consumer buy-in. Data and imaging collected on the roads must be anonymized and protected, and collection must comply with global privacy regulations.

Energy Efficiency 

Like many battery-powered IoT devices, traffic monitors and sensors face energy-efficiency limitations. Processing large amounts of data demands significant compute and, thus, battery power. Implementing these technologies shouldn’t require several battery changes a week.

Accuracy 

As with any device placed outdoors, external factors such as earthquakes, storms, and extreme weather can degrade device accuracy and, ultimately, the intelligence built on top of it. Additionally, these devices track moving objects, which presents an accuracy challenge in itself. Continued development and greater processing power could help offset some of these constraints.

The Future of AI-Powered Traffic Management 

The intersection of AI and IoT is already facilitating easier and faster coordination across the range of systems contributing to traffic operations. As adoption increases, these enhanced sensors and monitors will ultimately be fully interoperable, allowing for greater optimization down the line. 

How Ambiq is Contributing 

Sophisticated sensors and monitors that reduce the dread of driving in traffic are a win for motorists, provided these devices can overcome their energy challenges. Fortunately, Ambiq specializes in ultra-low power system-on-chips (SoCs) that deliver long-lasting battery life and process complex data with greater energy efficiency. Continuous monitoring becomes possible, and these innovations will help cities manage traffic conditions more effectively for drivers.

Sources: 

[1] Cities where motorists lose the most time and money | August 2023
[2] Real Time Traffic Data [Powered by Artificial Intelligence] | INRIX | 2023
[3] Why Choose Canon CMOS Sensors | Canon USA | 2023
[4] Miovision TrafficLink – Smart cities start with smart signals | 2023

Scott Hanson, Founder and CTO of Ambiq: Interview with SafetyDetectives

In a recent interview with SafetyDetectives, Scott Hanson, the Founder and CTO of Ambiq, delved into the creation and evolution of the SPOT platform, the foundation of Ambiq’s push to build the world’s most energy-efficient chips. Originating from Hanson’s time as a PhD student at the University of Michigan, where he developed tiny systems for medical implants, the platform takes its name from Subthreshold Power Optimized Technology and achieves unprecedented energy efficiency.

Hanson discussed the platform’s departure from conventional digital chip designs, emphasizing the significant energy savings achieved by operating at low voltages. The interview covered Ambiq’s role in addressing technical challenges in the IoT industry, emphasizing the crucial aspects of power efficiency and security. Hanson also expressed his optimism about the future of ultra-low power technology, foreseeing continuous improvements and a surge in compute power for IoT devices in the next 5-10 years.

Can you describe the journey that led to the creation of the SPOT platform?

Hi, my name is Scott Hanson, and I’m the founder and CTO of Ambiq.

Ambiq is a company that builds the world’s most energy-efficient chips. We’re putting intelligence everywhere; that is really the tagline. We want to make chips with such low power that we can embed little microprocessors in everything: your clothing, the paint on your walls, the bridges we drive over, pet collars, etc.

The company is built around a technology that we call SPOT, or Subthreshold Power Optimized Technology. It’s a low-power circuit design technology platform from my time at the University of Michigan. I was a PhD student there, and we were building tiny systems for medical implants.

When I say tiny, I mean really small. We’re talking about one cubic millimeter containing a microprocessor, a radio, an antenna, sensors, and a power source. When you’re building a system like that, the first thing you figure out is that the battery’s tiny, so the power budget that corresponds to that battery must also be tiny.

We got to thinking in terms of picowatts and nanowatts, right? We usually think about watts and kilowatts in the normal world, but we had to think about picowatts and nanowatts. When we did that, we could build these cubic millimeter systems; they could be implanted in the eyes of glaucoma patients to measure pressure fluctuations. It was exciting that this SPOT platform made all of that possible.

During that project, I started to see a lot of interest from companies in the technology, and it got me thinking that this technology had commercial potential. I remember riding in an elevator at the University of Michigan in the Computer Science building and realizing that the technology would be commercialized one day, and I needed to be the person to do that.

Shortly afterward, I started Ambiq with my two thesis advisors at Michigan, and we managed to raise some money. Then we launched our first few products, and here we are 13 years later, having shipped well over 200,000,000 units of our SPOT-enabled silicon. It’s been a really fun journey.

How does the SPOT platform fundamentally differ from traditional microcontroller technology?

I’m going to avoid giving you the full Circuits 101 lecture here. Conventional digital chips, not just microcontrollers but any chip, signal digital ones and digital zeros using voltage.

A digital zero might be 0 volts, and a digital one will be a much higher voltage at 1 or 1.2 volts. That higher voltage is chosen because it’s easy to distinguish between zero and one. It’s also chosen because 1 or 1.2 volts is much higher than the turn-on voltage of the transistor.

Every modern chip is basically made of transistors, the fundamental building block.
We’ve got billions of transistors on these chips, and they act like little switches. Think of them like a light switch. When the voltage applied to a transistor is above its turn-on voltage, what we call the threshold voltage, it turns on. Drop below the threshold voltage, and it turns off. You can see how you can string these together and get devices that signal zeros and ones.

If you look at your Circuits 101 textbook, it teaches you to apply a voltage well above the threshold, or turn-on, voltage so the transistor functions as a proper digital switch. Pretty much every chip out there operates that way. However, at Ambiq, with SPOT, we ignore that convention. We represent the 1 with a much lower voltage; think 0.5, 0.4, or 0.3 volts. If that voltage is below the turn-on voltage of the transistor, we call that subthreshold. If it’s at or near that turn-on voltage, it’s called near-threshold.

It turns out that by shrinking what a digital one is, you get some huge energy savings. Energy scales with the square of the voltage, so it’s quadratic, and therefore you get this huge energy reduction by operating at low voltage.
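As a rough back-of-the-envelope version of that claim, dynamic switching energy scales with capacitance times the square of the supply voltage, so using the example voltages above:

```latex
E_{\mathrm{dynamic}} \propto C V^{2}
\quad\Rightarrow\quad
\frac{E(0.4\,\mathrm{V})}{E(1.2\,\mathrm{V})}
  = \left(\frac{0.4}{1.2}\right)^{2} = \frac{1}{9}
```

Dropping the supply from 1.2 V to 0.4 V cuts switching energy roughly ninefold, before accounting for leakage and the idiosyncrasies described next.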

That comes with all kinds of idiosyncrasies, and it’s what’s kept all the other companies away. Transistors start behaving weirdly; they still operate as switches, but they become tough to manage. Our SPOT platform is about dealing with those idiosyncrasies, those subthreshold challenges, and it works.

The subthreshold and near-threshold technology has been around for decades. Ambiq was the first to really commercialize it widely, and our timing was perfect. About ten years ago, as the company was getting off the ground, the Internet of Things (IoT) was just starting to explode. Battery-powered devices were going everywhere, and there was this insatiable hunger for low power. The SPOT platform came along at the right time, and we’ve solved a lot of power problems, but the need for more compute continues to grow.

AI is popping up everywhere. We’re seeing that it’s not just in the cloud but also moving into endpoint devices like the ones we serve, such as consumer wearables and smart home devices. AI is straining the power budgets of all these devices, and that means we’ve got to continue to innovate and release new products that are lower power.

What are the environmental implications of the widespread adoption of ultra-low power technologies?

Lower power is good for the environment; it’s a green technology. If I have a given power need, low-power technology lets me reduce it, and that’s good. However, the truth is a little more complex than that.

What tends to happen is that our customers don’t necessarily take advantage of consuming less energy by using smaller batteries and charging less often. Instead, they tend to add new functions. They’ll say, “You have a more power-efficient processor? Then I’ll add more stuff to that same power budget.” So, the power footprint of the world is not really decreasing; it’s just that we’re able to get more done in that same power footprint.

It’s probably a wash in terms of environmental impact. That said, the IoT as a whole has some pretty fantastic potential for the environment. As we put sensors all over our homes and buildings, we could put climate sensors all over the world. We get a better sense of what’s going on, whether it’s tracking climate change, measuring how much energy a building is using, or noticing that the lights are on in a hallway where nobody is present.

So there’s potential to use the Internet of Things to dramatically reduce our energy usage, meaning we manage the energy consumption of buildings and homes better, but also get a better sense of what’s happening in the world.

So I think there is the potential for our technology to be used in a very positive way.
But most of our customers tend to be using it just to kind of get more out of their existing power footprint.

What are the biggest technical challenges facing the IoT industry today?

Power is one of them, and it’s one that we’re addressing diligently every day. We want to put billions of devices everywhere, and you don’t want to be changing billions of batteries. So, having a low-power platform like SPOT is really critical.

Security is a growing concern. IoT has developed very quickly and often without a proper eye toward security needs. The average person has tens of IoT devices in their home now, and they are collecting all kinds of intimate data about us: health information, movement patterns in the home. We’re seeing virtual assistants constantly capturing our speech to listen for the “Alexa” keyword or the “OK Google” keyword, and that’s not stopping, right?

There’s this insatiable appetite for more AI; deep neural networks are exploding everywhere. We’re going to see this constant need for processing in the cloud, which means we’re sending data up to the cloud.

That’s a privacy problem, right?

There are many ways to handle that. There’s a lot of good, interesting security hardware and security software popping up. However, I’m going to say that probably the most effective solution is pretty simple: don’t send as much data to the cloud. Do most of the processing at the endpoint, such as on the smartwatch, the smart thermostat, or the Echo device in your house.

There’s no need to send all the data up to the cloud; it can be processed locally. It turns out that’s a power problem, too. If, instead of sending 100% of the raw sound data from an Amazon Echo up to the cloud, we send only the 1% that’s interesting, there needs to be local processing. It must be done on the smartwatch, thermostat, Echo, or whatever device’s sensors capture the data.

That’s a power problem, especially as the neural networks supporting these use cases get bigger. Fortunately, SPOT is a great solution, and we’re doing a lot of architectural innovation to ensure our customers can run big, beefy neural networks locally on devices like these. I’m confident that we can attack that problem. However, I foresee some rocky security problems in the next few years.
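To illustrate the pattern Hanson describes, run a lightweight detector on-device and upload only the interesting snippet. In the sketch below, the scoring function, threshold, and upload client are hypothetical placeholders, not Ambiq or Amazon code.

```python
# Sketch of "process locally, upload only what's interesting."
import collections

THRESHOLD = 0.9  # confidence required before anything leaves the device
WINDOW = 16      # frames of context kept around a detection
recent = collections.deque(maxlen=WINDOW)

def score_locally(frame: bytes) -> float:
    """Stand-in for a small on-device network (e.g., keyword spotting).
    Here it simply flags loud frames so the sketch runs end to end."""
    return max(frame, default=0) / 255.0

def upload(snippet: bytes) -> None:
    """Stand-in for the device's cloud client."""
    print(f"uploading {len(snippet)} bytes, not the raw stream")

def on_audio_frame(frame: bytes) -> None:
    recent.append(frame)
    if score_locally(frame) >= THRESHOLD:
        upload(b"".join(recent))  # only this short snippet reaches the cloud

# Five quiet frames, then one loud burst triggers a single small upload.
for f in [bytes([10] * 160)] * 5 + [bytes([240] * 160)]:
    on_audio_frame(f)
```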

How do you see the role of AI and machine learning in the evolution of IoT devices?

What I think is going to happen is that we’ll see a migration of AI from purely a data center thing to the endpoints, which means wearables, smart home devices, your car, or devices with sensors.

We’re seeing our customers run lightweight activity tracking or biosignal analysis on wearables. They’re embedding virtual assistants in everything, whether it’s a wearable, hearable, or smart home appliance. In all these cases, there’s a balance between the endpoint and the cloud. The neural network can have a lightweight front end that runs locally; if it identifies something interesting, it then passes it to the cloud for further analysis.

What I’m really excited about is the potential for large language models (LLMs), such as ChatGPT, that are trained on enormous amounts of data, largely from the Internet. However, they don’t have eyes and ears; they’re not out in the real world understanding what’s happening. That’s the role of endpoint devices: your wearables, smart home devices, or smart switches. Those devices are constantly capturing information about us.

If they could run lightweight neural networks locally to identify activities or events of interest, they could ship those to the cloud to a ChatGPT-like model. Imagine if your wearable monitors your vitals, heart rate, breathing, and talking, and it identifies trends of interest and sends those up to the AI.

I’m not talking about megabytes or even gigabytes of the data it’s constantly collecting. I’m talking about sending a few little snippets, a few little observations. For example, you had a high-activity day today, or your sleep was not very good last night. You send that up to the cloud, and then you can ask it more useful questions like, “Hey, I haven’t been feeling great lately, what’s wrong?” The AI would be able to answer with something along the lines of, “I’ve been watching you for the last six months, and I see that your sleep has been irregular. You need to get more regular sleep, and here’s what you can do to fix that,” right?

There are countless examples of things like this where endpoint devices can collaborate with large language models in the cloud to achieve fantastic results.

Now, there are obvious security problems there. We just talked about how security problems are one of the major challenges facing IoT. That’s no different here, and it’s a problem that needs to be managed. However, if you do most of the processing locally, we can effectively manage the security issues. I think between the endpoint and the cloud, there’s a way to address the security problems that pop up. And I think there’s real power in what can be achieved for the end users.

How do you see ultra-low power technology evolving in the next 5-10 years?

The good news is that I see it improving with really no end in sight. We’re going to see far more compute power packed into shrinking power budgets.

Moore’s law is alive and well for the embedded world. We’re at a process node today that’s 22 nanometers. The likes of Qualcomm, Intel, and others are down below 5 nanometers, so we’ve got a long way to go to catch up to them.

Moore’s law is going to deliver all kinds of gains. That means faster processors and lower power processors. We’re also doing a ton of innovation on the architecture, circuits, and software. I don’t see an end to power improvement, certainly not in the next decade.

Just look at how far Ambiq has come in the last ten years. We have a family of SoCs called Apollo. The first one launched in 2014 and ran at 24 MHz, with a small processor and less than 1 MB of memory. Our latest Apollo4 processors have many megabytes of memory, run at nearly 200 MHz, and include GPUs and USB interfaces, all while consuming 1/8 the power of our initial product. So we’re getting dramatically faster and dramatically lower power, and that will continue.

If you just extrapolate those numbers going forward, we’re going to have an amazing amount of compute power for all your IoT devices, and that’s exciting.

I don’t 100% know exactly what we’ll do with all that compute, but I do know that I’ve got customers asking me every day for more processing power, lower power, and they’re going to be doing some pretty exciting things.

So, I’m excited about where we’re going from here.

This interview originally appeared on SafetyDetectives with Shauli Zacks on January 4, 2024.

Treating Mental Health with AR and VR

Augmented reality (AR) and virtual reality (VR) technologies have long held a foothold in entertainment, creating immersive environments for users to get lost in. The healthcare industry has taken note of AR and VR applications and is now using the same approach to diagnose and treat mental illness.

Roughly one in four American adults suffers from a mental illness, making it one of the fastest-growing health challenges in our society [1]. From talk therapy to cognitive behavioral therapy, options to treat and manage the symptoms of anxiety, depression, stress, and sleep disorders are abundant, with innovations constantly being released.

While healthcare organizations already use artificial intelligence (AI), such as AI voice analysis for disease detection and remote patient monitoring, AR and VR are two AI-based tools that are increasingly prominent in therapeutic experiences.

In this article, we will detail how AR and VR treat symptoms of mental illness, along with real-life applications. 

How AR and VR Treat Symptoms of Mental Illness 

Mental illness symptoms vary widely from physical pain like headaches to mental fatigue, brain fog, disordered sleeping, and more. AR and VR work to reduce these symptoms and provide effective therapeutic treatments for the overall illness. 

Exposure therapy 

In cases of post-traumatic stress disorder (PTSD), virtual reality can create simulated environments that help patients safely increase their exposure to phobias or traumatic situations. Exposure therapy gradually introduces an individual to the things they are afraid of, with the eventual goal of reducing that fear. Through this immersive exposure, VR and AR project visual and auditory environments that reassure patients they are in a safe place while confronting them with their fear. Research has shown success rates of 66% to 90% when virtual reality therapy is combined with cognitive behavioral therapy [2].

Mindfulness 

For social anxiety or panic disorders, VR can provide mindful and meditative stress reduction. Through guided visualization, calming music, or guided breathwork, patients experiencing an anxiety or panic attack can slow their breathing, calm their heart rate, and bring themselves back down.

Pain Reduction 

VR also immerses patients in different environments to distract them from chronic pain. This reduction in physical pain can be morale-boosting for patients suffering from serious mental illnesses and allow them to continue other treatments. For patients dealing with pain, traveling to smart buildings equipped with VR tools might not be possible, so at-home technology is best. 

Real-Life Applications

AI-enhanced wearables like fitness trackers have seen significant, steady adoption over the last few years, and VR and AR technology could be the next quickly adopted health tool. Take a look at how companies are developing and launching revolutionary products.

Apple’s Vision Pro Headset 

As one of the pioneers in VR, Apple has long been testing and developing virtual reality headsets. Its Vision Pro headset is reportedly being considered for detecting different levels of anxiety, depression, PTSD, and other types of stress [3].

XRHealth Virtual Reality Therapy

From the comfort of your own home, XRHealth promises drug-free, personalized mental health therapy through VR. Leveraging a cutting-edge headset, therapists deliver video and talk therapy through immersive experiences using different modalities, such as cognitive behavioral therapy, acceptance and commitment therapy, and psychodynamic therapies [4].

PsyTech VR 

Ideal for mindfulness practices, meditation, and an overall reduction in anxiety and stress, PsyTech VR also uses cutting-edge AR and VR technology to build therapeutic immersive experiences. It’s designed to provide interventions for anxiety disorders, phobias, and PTSD [5].


Challenges of AR and VR for Mental Health 

The future applications of AR and VR in healthcare and mental health are exciting, but like many new AI technologies, there are concerns around ethical considerations, safety, costs, and privacy. 

Privacy 

As consumers increasingly care about data, privacy, and security, it’s important that AR and VR technologies for mental health protect sensitive healthcare data. Also, as data is aggregated at scale to identify trends and patterns, anonymity should be considered to protect the identity of individuals participating in these newer technologies. 

Safety 

Like any new therapeutic treatment, thorough research must be done before AR and VR tools are released to the public for mental health treatment. Mental health patients often experience ups and downs, and AR and VR must not do more harm than good. All exposure should be thoroughly evaluated and conducted under the care of trained professionals.

Costs 

AR and VR technology for mental health will undoubtedly be expensive, such as Apple’s estimated $3,500 headset. This significant barrier to entry could create socioeconomic disparities and biased data sets, leaving some patients without critical treatments in the future. However, as more AR and VR technologies are released, prices will go down, and technology will be more affordable for the average consumer. 

The Future of AR/VR for Mental Health 

Before a mental health issue can even start to be treated, it has to be formally diagnosed, yet mental illness is often called the “silent disease.” The vast majority of mental illnesses go untreated, as a proper diagnosis can be difficult to get. Looking ahead, researchers are hopeful that AR and VR can also be used for faster, more accurate patient diagnosis before treatment starts. AI is already showing its effectiveness in healthcare diagnosis.

Also, as more companies develop AR and VR technology, cost barriers will reduce, and practitioners will feel more comfortable recommending these innovative technologies in conjunction with other treatments. New enhancements, such as voice commands or wearable solutions, will make it even easier for all patients to use AR and VR technology in a way that makes sense for them. 

How Ambiq Contributes 

As more and more people suffer from depression, sleep disorders, and anxiety, innovative technologies will create a better quality of life, reduce pain and suffering, and offer new hope for patients. Ambiq’s ultra-low power System-on-Chips (SoCs) allow battery-powered, energy-efficient IoT devices to operate at optimal battery life and performance, extending their usage. Already, Ambiq has enabled over 230 million endpoint devices capable of giving users active control over their health. These battery-powered endpoint devices can last days, weeks, and even months on a single charge. Learn more about the different applications Ambiq can help with here.

Sources:

[1] Mental Health Disorder Statistics | 2023
[2] Virtual reality exposure therapy for post-traumatic stress disorder (PTSD): a meta-analysis | August 19, 2019
[3] Apple Is Considering Treating Mental Health With $3,500 Vision Pro Augmented Reality Headset, Report Says | October 25, 2023
[4] XRHealth | 2023
[5] PsyTech | 2023

A 2020 Look Into Computer Vision

Often without realizing it, most of us now regularly engage with sophisticated computer vision technology. Authentication that used to require a password or fingerprint now needs little more than a glance at your smartphone.

Forty percent of Americans use face biometrics and facial recognition technology with at least one app per day, and adoption rises to 75% among 18- to 34-year-olds [1]. The global computer vision market is expected to grow at a compound annual growth rate of 19.6% through 2030 [2], and deep learning techniques have widened the scope of what’s possible with today’s computer vision technology.

From unlocking smartphones to walking through facial scanners at airports, computer vision technology is rapidly integrating into our everyday lives. 

What is Computer Vision? 

Computer vision is a subfield of computer science and artificial intelligence that uses computers and systems to gather meaningful information from images. Computer vision extracts data from images to make decisions. Its ultimate goal is to correctly identify objects and people and take appropriate action, such as a self-driving car avoiding a pedestrian on a walkway or a smartphone accurately identifying the user who can unlock it.

How Does Computer Vision Work? 

Computer vision technology aims to mimic the human brain’s process of recognizing visual information. Utilizing pattern recognition, it absorbs inputs, labels them as objects, and finds patterns that produce familiar images. Computer vision works to derive meaning from images while cataloging visual data from the real world. 

The History of Computer Vision 

Like many fields of artificial intelligence, the first forays into computer vision occurred decades ago. In the 1960s, researchers used algorithms to process and analyze visual data. By the 1970s, the technology had become more accurate at image processing and pattern recognition.

Over the next several decades, scientists used machine learning algorithms to power most computer vision technology, culminating in one of the largest breakthroughs of the time, the Viola-Jones face detection algorithm. This algorithm is still used today as a core machine-learning object detection framework. As technology progressed rapidly in the 2000s, convolutional neural networks enabled computers to detect objects and track movement with even greater accuracy.
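The Viola-Jones framework is still easy to try today; OpenCV, for example, ships pretrained Haar cascades for it. Below is a minimal face-detection sketch, with a placeholder image path.

```python
# Viola-Jones face detection using OpenCV's bundled Haar cascade.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)
image = cv2.imread("photo.jpg")  # placeholder input image
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# scaleFactor and minNeighbors trade detection rate against false positives.
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("faces.jpg", image)
```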

Real-Life Applications of Computer Vision 

Computer vision technology is revolutionizing many industries, from improved cancer detection to self-driving cars. Tools like facial recognition, object detection, and augmented reality offer multiple use cases for real-life applications. 

Autonomous Vehicles 

Tesla is the most well-known example of self-driving vehicles, but Hyundai has also invested in a deep-learning computer vision startup to apply the technology to its autonomous vehicles [3]. Computer vision, a core capability of autonomous vehicles, empowers self-driving cars to make sense of their surroundings; sensors and hardware gather billions of visual data points to build a picture of what is happening outside the vehicle. From stop signs to road hazards to pedestrians to other cars, computer vision algorithms improve the safety and efficiency of self-driving cars.

Cancer Detection 

AI is evolving rapidly in the healthcare industry, and cancer detection is no exception. X-ZELL is a company that uses sophisticated computer vision technology to enable same-day cancer diagnosis from imagery [4]. Computer vision uses advanced algorithms and machine learning to analyze medical images like X-rays, MRIs, and CT scans to identify potential signs of cancer with higher accuracy. Because computer vision learns from massive data sets, it can identify subtle patterns and features that might be difficult for humans to pick up on. In healthcare, this can improve patient outcomes, enhance treatments, and ultimately save lives.

Security in Schools and Public Areas 

For increased security in schools and public areas, Viso.ai combines edge computing with on-device machine learning and sophisticated vision systems [5]. From high-traffic walkways in airports to intrusion detection at universities and schools, deep learning intelligence can be merged with common surveillance cameras, using facial recognition analysis to gauge emotions and detect suspicious activity. Computer vision technology offers the opportunity to improve safety throughout public areas like schools, airports, transportation systems, and more.

Manufacturing Settings 

Manufacturing settings are full of opportunities for computer vision technology, from quality inspections to production monitoring to supply chain logistics. In quality inspections, for example, computer vision can automatically detect defects, scratches, and other anomalies. Paired with radio frequency identification (RFID), computer vision can track products across supply lines, optimizing inventory, production schedules, and delivery. From improved supply chain logistics to consistent quality for semiconductors, computer vision supports better inspection, product consistency, increased efficiency, and more.


Challenges of Computer Vision 

Computer vision and endpoint intelligence offer seemingly limitless opportunities for advancement in critical sectors. While safer vehicles and faster cancer diagnoses hardly seem problematic, the intelligence behind computer vision comes with challenges.

Privacy Concerns 

Privacy and security are top concerns, like with many artificial intelligence tools. The risk for data breaches is high, and with sensitive, confidential information stored in potentially unsecured platforms, cybercriminals are highly incentivized to attack. Consumers worry about giving too much personal data to technology companies, and with cybercrime on the rise, computer vision AI tools need to ensure they’ve shored up their defenses. 

High Costs 

Currently, computer vision technology is not cheap to implement; especially in more sophisticated use cases, the cost of purchasing hardware and software is high. Add in large data sets that need to be cleaned, stored, and maintained, and computer vision becomes even more costly. Ongoing maintenance of these systems is also expensive, and predictive maintenance is necessary to fix potential equipment defects before they become bigger issues.

Lack of Trained Experts 

While computer vision is rapidly evolving, few companies or individuals have deep expertise. As with any newer technology, it will take time for education and training to adequately catch up with real-life applications. Companies struggle to retain specialized tech talent, and computer vision is no exception. Organizations also need experts trained in the differences between artificial intelligence, machine learning, and deep learning to train systems adequately.

The Future of Computer Vision 

Computer vision technology is still in its infancy, but society has already seen its vast impact across manufacturing, education, security, retail, healthcare, the automotive industry, and more. There is enormous opportunity for consumer computer vision technology as demand for Internet of Things (IoT) devices accelerates: virtual reality headsets, augmented reality smart glasses, and more. As hardware becomes more sophisticated yet affordable, computer vision wearables and smart gadgets can trickle down to the average person. And as generative AI and deep learning accelerate, computer vision models will have more inputs from which to learn.

How Ambiq Contributes 

Computer vision technology requires an embedded chip capable of machine learning inferencing. For this technology to be practical on endpoint devices, it needs to run at low power and maximum efficiency. Ambiq’s ultra-low power System-on-Chips (SoCs) give endpoint devices optimal performance and energy efficiency, allowing inference to run locally on the device.

Our friends at Northern Mechatronics (NMI) recently demonstrated digit recognition on their flagship NM180100, enabled by Ambiq’s Apollo3 SoC. Numbers were identified and returned in less than 2 seconds. See it for yourself.
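For a sense of what that inference pass looks like in code, here is a rough desktop analogue using the TensorFlow Lite interpreter. The model file and the MNIST-style 28x28 input are assumptions; the NM180100 demo itself runs a vendor-specific embedded build.

```python
# Desktop analogue of a digit-recognition inference pass with TensorFlow Lite.
# "digits.tflite" and the input shape are assumed (MNIST-style).
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="digits.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

frame = np.zeros((1, 28, 28, 1), dtype=np.float32)  # stand-in for a camera frame
interpreter.set_tensor(inp["index"], frame)
interpreter.invoke()
print("predicted digit:", int(np.argmax(interpreter.get_tensor(out["index"]))))
```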

Sources: 

[1] New CyberLink Report Finds Over 131 Million Americans Use Facial Recognition Daily and Nearly Half of Them to Access Three Applications or More Each Day | November 22, 2022
[2] Computer Vision Market Size, Share & Trends Analysis Report By Component, By Product Type, By Application, By Vertical (Automotive, Healthcare, Retail), By Region, And Segment Forecasts, 2023 – 2030 | 2021
[3] Hyundai Invests in Deep Learning Computer Vision Startup allegro.ai | May 11, 2018
[4] X-ZELL | 2023
[5] Top 9 Applications of AI Vision in the Education Sector | 2023

Revolutionizing Recycling with AI

Recycling, when done effectively, can significantly advance environmental sustainability by conserving valuable resources, contributing to a circular economy, reducing landfill waste, and cutting the energy used to produce new materials. However, recycling progress in nations like the United States has largely stalled at a current rate of 32 percent [1] due to problems with consumer knowledge, sorting, and contamination.

Artificial intelligence (AI), machine learning (ML), robotics, and automation aim to increase the effectiveness of recycling efforts and improve the country’s chances of reaching the Environmental Protection Agency’s goal of a 50 percent recycling rate by 2030. Let’s look at common recycling problems and how AI could help. 

What Is Contamination in Recycling? 

As one of the biggest problems facing effective recycling programs, contamination happens when consumers place materials into the wrong recycling bin (such as a glass bottle into a plastic bin). Contamination can also occur when materials aren’t cleaned properly before the recycling process. 

Today’s recycling systems aren’t designed to deal well with contamination. According to Columbia University’s Climate School, single-stream recycling, where consumers place all materials into the same bin, leads to about one-quarter of the material being contaminated and therefore worthless to buyers [2].

Industry insiders also point to a related contamination problem sometimes referred to as aspirational recycling [3] or “wishcycling” [4], when consumers throw an item into a recycling bin hoping it will find its way to the correct destination somewhere down the line.

This, unfortunately, rarely happens. Here’s why: 

Recycling Breaks Down 

When the number of contaminants in a load of recycling becomes too great, the materials are sent to the landfill, even if some are suitable for recycling, because sorting out the contaminants costs extra money. Recyclable materials have value aside from their benefit to the planet; contamination reduces or eliminates that quality, giving the materials less market value and causing recycling programs to suffer or service costs to rise.

In addition, Americans throw away nearly 300,000 tons of shopping bags each year [5]. These bags can wrap around the moving parts of a sorting machine and endanger the human sorters tasked with removing them. When consumers throw non-recyclable materials into sorting bins, they can also expose workers to hazardous waste, vector-borne diseases, and other dangers.

How AI Could Help 

Fortunately, several researchers, startups, and manufacturers are developing innovations fueled by AI to improve the effectiveness of recycling programs. 

Pello Cuts the Plastic 

Pello Systems has created a system of sensors and cameras to help recyclers reduce contamination by plastic bags [6]. The system uses AI, ML, and advanced algorithms to identify plastic bags in photos of recycling bin contents and give facilities high confidence in that identification.

By identifying and removing contaminants before collection, facilities save vendor contamination fees. They can improve signage and train employees and consumers to reduce the number of plastic bags in the system. 

TrashBot Cleans Up 

The TrashBot, by Clean Robotics, is a smart “recycling bin of the future” that sorts waste at the point of disposal while giving consumers insight into proper recycling [7]. Through AI, ML, robotics, and computer vision, the TrashBot diverts each deposited item into the proper bin inside, assigning contaminated items to landfill bins and organics to their corresponding bin.

The TrashBot also features a consumer-facing screen that provides real-time, adaptable feedback and custom content reflecting the item and the recycling process. In addition to this educational feature, Clean Robotics says the TrashBot delivers data-driven reporting to its users and helps facilities reach 95 percent sorting accuracy, compared to the typical 30 percent of conventional bins.
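A minimal sketch of the point-of-disposal routing logic such a bin might implement follows; the labels, threshold, and classifier are illustrative guesses, not Clean Robotics’ actual system.

```python
# Sketch of point-of-disposal sorting: classify the item, then route it.
# classify_item() is a hypothetical stand-in for an on-device vision model.

ROUTES = {
    "recyclable": "recycling",
    "organic": "organics",
    "contaminated": "landfill",
}

def classify_item(photo: bytes) -> tuple[str, float]:
    """Stand-in returning (label, confidence) for a deposited item."""
    return "recyclable", 0.97

def route_item(photo: bytes) -> str:
    label, confidence = classify_item(photo)
    if confidence < 0.8:
        return "landfill"  # when unsure, keep the recycling stream clean
    return ROUTES[label]

print(route_item(b""))  # -> recycling
```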


Oscar Sorts It Out 

Intuitive AI, a Canadian startup, has introduced Oscar Sort, an AI-driven “smart recycling assistant” trained to identify a broad spectrum of beverage and food containers [8]. Consumers simply point their trash item at the screen, and Oscar will tell them if it’s recyclable or compostable.

Adaptable to existing waste and recycling bins, Oscar Sort can be customized to local and facility-specific recycling rules and has been installed in 300 locations, including university cafeterias, sports stadiums, and retail stores. 

AMP Cortex Doubles the Picks 

AMP Robotics has built a sorting innovation that recycling programs can place further down the line in the recycling process. Its AMP Cortex is a high-speed robotic sorting system guided by AI [9].

AMP’s AI platform uses computer vision to recognize patterns of specific recyclable materials within the typically complex waste stream of folded, smashed, and tattered objects. Its robots perform the physical tasks of sorting, picking, and placing materials, achieving what the company says is 99 percent accuracy at 80 picks per minute (the average human makes roughly 40 picks per minute).

The Outlook of AI in Recycling Management 

As AI continues to make strides in recycling management, the outlook is promising: 

AI-driven sensors and robotics will provide real-time data analytics, enabling recycling facilities to make data-based decisions for process optimization. This will likely expand into other areas, such as predictive maintenance, supply chain optimization, and adaptive recycling strategies. The widespread adoption of AI in recycling has the potential to contribute significantly to global sustainability goals, reducing environmental impact and fostering a more circular economy. 

As innovators continue to invest in AI-driven solutions, we can anticipate a transformative impact on recycling practices, accelerating our journey towards a more sustainable planet. 

How Ambiq is Contributing 

Utilizing key technologies like AI to take on the world’s larger problems, such as climate change and sustainability, is a noble task, and an energy-consuming one. Performing AI and object recognition to sort recyclables is complex and requires an embedded chip capable of handling these features with high efficiency.

Ambiq creates a wide range of system-on-chips (SoCs) that support AI features and even has a head start in optical identification support. Implementing sustainable recycling practices should itself use sustainable technology, and Ambiq excels at powering smart devices with previously unseen levels of energy efficiency that can do more with less power. Learn more about the various applications Ambiq can support.

Sources:

[1] America Recycles Day | 2023
[2] Recycling in the U.S. Is Broken. How Do We Fix It? | March 13, 2020
[3] The Waste of Aspirational Recycling | February 6, 2023
[4] What Is Wishcycling? Two Waste Experts Explain | January 21, 2022
[5] Recycling Statistics | 2023
[6] Pello Systems | 2023
[7] Clean Robotics | 2023
[8] Intuitive AI | 2023
[9] AMP Robotics | 2023
