What is Edge AI?

What is Edge AI / tinyML?

Edge AI means that AI algorithms are processed locally on a hardware device, using data (sensor data or signals) created on the device itself.

A device using Edge AI does not need a connection to work properly; it can process data and make decisions independently, without a connection.

In order to use Edge AI, you need a device with a microprocessor and sensors.

Example: A handheld power tool is by definition on the edge of the network. The Edge AI software application that runs on a microprocessor in the power tool processes the tool's data in real time, generates results, and stores them locally on the device. After working hours, the power tool connects to the internet and sends the data to the cloud for storage and further processing. A key requirement in this example is long battery life: if the power tool continuously streamed data to the cloud, the battery would be drained in no time.
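
For the technically curious, here is a minimal sketch of the pattern described above: infer locally in real time, keep results on the device, and sync to the cloud in one batch after working hours. The `read_sensor`, `model`, and `cloud` objects are hypothetical stand-ins, not Imagimob APIs.

```python
# Minimal sketch of the power-tool pattern: local inference all shift long,
# one batched upload afterwards. All collaborators are hypothetical stubs.
import time

def run_shift(read_sensor, model, cloud, shift_seconds=8 * 3600):
    results = []                       # results stay on the device
    end = time.time() + shift_seconds
    while time.time() < end:
        sample = read_sensor()         # e.g. accelerometer / current draw
        results.append(model(sample))  # on-device inference, no network
    cloud.upload(results)              # single batched transfer after hours
```

Buffering locally and uploading once is exactly what preserves the battery: the radio, typically the hungriest component, stays off during the working day.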

Why is Edge AI important?

Edge AI allows real-time operations, including data creation, decisions, and actions, where milliseconds matter. Real-time operation is important for self-driving cars, robots, and many other areas.

Reducing power consumption, and thus improving battery life, is critically important for wearable devices.

Edge AI reduces costs for data communication, because less data is transmitted.

By processing data locally, you avoid streaming and storing large amounts of data in the cloud, which makes you vulnerable from a privacy perspective.

The world is about to get a whole lot smarter.

As the new decade begins, we’re hearing predictions on everything from fully remote workforces to quantum computing. However, one emerging trend is scarcely mentioned on tech blogs – one that may be small in form but has the potential to be massive in implication. We’re talking about microcontrollers.

There are 250 billion microcontrollers in the world today. 28.1 billion units were sold in 2018 alone, and IC Insights forecasts annual shipment volume to grow to 38.2 billion by 2023.

Perhaps we are getting a bit ahead of ourselves though, because you may not know exactly what we mean by microcontrollers. A microcontroller is a small, special-purpose computer dedicated to performing one task or program within a device. For example, a microcontroller in a television controls the channel selector and speaker system. It changes those systems when it receives input from the TV remote.

Microcontrollers and the components they manage are collectively called embedded systems, since they are embedded in the devices they control. Take a look around — these embedded systems are everywhere, in nearly any modern electronic device. Your office machines, cars, medical devices, and home appliances almost certainly all have microcontrollers in them.

With all the buzz about cloud computing, mobile device penetration, artificial intelligence, and the Internet of Things (IoT) over the past few years, these microcontrollers (and the embedded systems they power) have largely been underappreciated. This is about to change.

The strong growth in microcontroller sales in recent years has been largely driven by the broad tailwinds of the IoT. Microcontrollers facilitate automation and embedded control in electronic systems, as well as the connection of sensors and applications to the IoT. These handy little devices are also exceedingly cheap, with an average price of 60 cents per unit (and dropping).

Although low in cost, the economic impact of what microcontrollers enable at the system level is massive, since the sensor data from the physical world is the lifeblood of digital transformation in industry. However, this is only part of the story.

A coalescence of several trends has made the microcontroller not just a conduit for implementing IoT applications but also a powerful, independent processing mechanism in its own right. In recent years, hardware advancements have made it possible for microcontrollers to perform calculations much faster. 

Improved hardware coupled with more efficient development standards have made it easier for developers to build programs on these devices. Perhaps the most important trend, though, has been the rise of tiny machine learning, or TinyML. It’s a technology we’ve been following since investing in a startup in this space.

Big potential

TinyML broadly encapsulates the field of machine learning technologies capable of performing on-device analytics of sensor data at extremely low power. Between hardware advancements and the TinyML community’s recent innovations in machine learning, it is now possible to run increasingly complex deep learning models (the foundation of most modern artificial intelligence applications) directly on microcontrollers.

A quick glance under the hood shows this is fundamentally possible because deep learning models are compute-bound, meaning their efficiency is limited by the time it takes to complete a large number of arithmetic operations. Advancements in TinyML have made it possible to run these models on existing microcontroller hardware.
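
As one concrete illustration of how models are shrunk to fit such hardware, the sketch below uses TensorFlow Lite's post-training int8 quantization, one common route to running deep learning on microcontrollers. The tiny model and random calibration data are placeholders of our own, not an example from the article.

```python
# Sketch: compress a small Keras model to an int8 TFLite flatbuffer, the
# format that TensorFlow Lite for Microcontrollers can execute on-device.
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(49, 40, 1)),        # e.g. audio spectrogram
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(4, activation="softmax"),  # 4 keyword classes
])

def representative_data():
    # A real project would yield a few hundred real sensor samples here.
    for _ in range(100):
        yield [np.random.rand(1, 49, 40, 1).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

open("model_int8.tflite", "wb").write(converter.convert())
```

Quantizing weights and activations to 8-bit integers cuts model size roughly fourfold and replaces floating-point math with the integer arithmetic that small MCUs handle efficiently.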

In other words, those 250 billion microcontrollers in our printers, TVs, cars, and pacemakers can now perform tasks that previously only our computers and smartphones could handle. All of our devices and appliances are getting smarter thanks to microcontrollers.

TinyML represents a collaborative effort between the embedded ultra-low power systems and machine learning communities, which traditionally have operated largely independently. This union has opened the floodgates for new and exciting applications of on-device machine learning. However, the knowledge that deep learning and microcontrollers are a perfect match has been pretty exclusive, hidden behind the walls of tech giants like Google and Apple.

This becomes more obvious when you learn that this paradigm of running modified deep learning models on microcontrollers is responsible for the "Okay Google" and "Hey Siri" functionality that has been around for years.

But why is it important that we be able to run these models on microcontrollers? Much of the sensor data generated today is discarded because of cost, bandwidth, or power constraints – or sometimes a combination of all three.

For example, take an imagery micro-satellite. Such satellites are equipped with cameras capable of capturing high resolution images but are limited by the size and number of photos they can store and how often they can transmit those photos to Earth.

As a result, such satellites have to store images at low resolution and at a low frame rate. What if we could use image detection models to save high resolution photos only if an object of interest (like a ship or weather pattern) was present in the image? While the computing resources on these micro-satellites have historically been too small to support image detection deep learning models, TinyML now makes this possible.
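
A rough sketch of that filtering idea, with a hypothetical `detect` callable standing in for the on-board model, might look like this:

```python
# Sketch: keep a full-resolution frame only when a cheap detector fires
# on a downscaled copy. The dummy detector below is purely illustrative.
import numpy as np

SCORE_THRESHOLD = 0.8

def should_keep(frame: np.ndarray, detect) -> bool:
    """Run inference on a cheap thumbnail; store the full frame only on hits."""
    thumbnail = frame[::8, ::8]        # crude 8x downscale for fast inference
    return detect(thumbnail) >= SCORE_THRESHOLD

# Usage with a toy detector that "scores" frames by mean brightness:
frame = np.random.rand(1024, 1024).astype(np.float32)
print(should_keep(frame, detect=lambda img: float(img.mean())))
```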

Another benefit of deploying deep learning models on microcontrollers is that microcontrollers use very little energy. Compared to systems that require either a direct connection to the power grid or frequent battery charges or replacements, a microcontroller can run an image recognition model continuously for a year on a single coin-cell battery.
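
A quick back-of-envelope check makes the claim concrete. Assuming a typical CR2032 coin cell of roughly 225 mAh (our assumption, not a figure from the article), the average current budget over a year works out to about 26 microamps:

```python
# Back-of-envelope energy budget for a year on one coin cell.
capacity_mah = 225.0                 # assumed CR2032 capacity
hours_per_year = 365 * 24            # 8760 hours
budget_ma = capacity_mah / hours_per_year
print(f"average current budget: {budget_ma * 1000:.1f} uA")  # ~25.7 uA
```

Modern microcontrollers can run duty-cycled inference within a budget of that order, which is exactly why always-on battery-powered ML becomes feasible.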

Furthermore, since most embedded systems are not connected to the internet, these smart embedded systems can be deployed essentially anywhere. By enabling decision-making without continuous connectivity to the internet, the ability to deploy deep learning models on embedded systems creates an opportunity for completely new types of products.

What are Current Applications of Edge AI?

Here’s a list of some real-life applications of edge artificial intelligence.

Autonomous Cars

Self-driving vehicles use edge AI devices that can process data within the same hardware. An autonomous car requires immediate data processing, such as recognizing oncoming vehicles, identifying traffic signs, and looking out for pedestrians and other road hazards, to ensure the safety of people both inside and outside the vehicle. Edge AI allows autonomous vehicles to collect and process all necessary inputs in real time.

Surveillance and Monitoring

Edge AI is also beneficial for security cameras as these no longer have to upload raw video signals to a cloud server for processing. Edge AI-capable security cameras can use machine learning (ML) algorithms to process captured images and videos locally. This process allows the devices to track and monitor several people and items directly. Footage would only be transmitted to a cloud server when necessary, thus reducing remote processing and memory consumption.
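
A bare-bones sketch of that local-first pattern, using simple frame differencing in place of a real ML detector, could look like the following (the print statement stands in for the camera's real uplink):

```python
# Sketch: analyze frames on the camera itself; transmit only on detection.
import numpy as np

MOTION_THRESHOLD = 0.05   # fraction of pixels that must change

def motion_detected(prev: np.ndarray, curr: np.ndarray) -> bool:
    changed = np.abs(curr.astype(np.int16) - prev.astype(np.int16)) > 25
    return changed.mean() > MOTION_THRESHOLD

prev = np.zeros((240, 320), dtype=np.uint8)
curr = np.random.randint(0, 256, (240, 320), dtype=np.uint8)
if motion_detected(prev, curr):
    print("motion: would buffer and upload this clip")  # uplink only now
```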

Industrial Internet of Things

The Industrial IoT (IIoT) highly depends on automating manufacturing and operational processes to increase productivity. Using edge AI allows IIoT devices to do visual inspections and carry out robotic control faster and at lower costs.

What Benefits does Edge AI Provide?

Here’s a list of the benefits that edge artificial intelligence brings to its users.

Cost Effectiveness

Data charges depend on bandwidth use. By keeping AI processing to a local machine, users can reduce data communication costs since they no longer have to transmit data to another device for analysis. Users get results faster, too.
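
To make the savings tangible, here is an illustrative calculation with assumed numbers: continuously streaming raw 16 kHz, 16-bit mono audio versus sending only short event messages after local analysis.

```python
# Rough bandwidth comparison: raw audio stream vs. edge-filtered events.
bytes_per_second = 16_000 * 2                        # 16 kHz, 16-bit mono
stream_gb_per_day = bytes_per_second * 86_400 / 1e9
event_kb_per_day = 100 * 100 / 1e3                   # 100 events x 100 bytes
print(f"streaming:   {stream_gb_per_day:.2f} GB/day")  # ~2.76 GB/day
print(f"edge events: {event_kb_per_day:.0f} KB/day")   # 10 KB/day
```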

Enhanced Security

Cloud users are often afraid of losing transmitted data along the way. Edge AI lessens the chances of data leakage or loss because the processing occurs locally. Users can thus better control or limit who has access to information.

Operational Efficiency

Real-time processing is one of the most robust features of edge AI. It allows users to collate, process, and analyze data, and then implement solutions in the fastest way possible, making devices highly useful for time-dependent applications.

TinyML is giving hardware a new life

Aluminum and iconography are no longer enough for a product to get noticed in the marketplace. Today, great products need to be useful and deliver an almost magical experience, something that becomes an extension of life. Tiny Machine Learning (TinyML) is the latest embedded software technology that moves hardware into that almost magical realm, where machines can automatically learn and grow through use, like a primitive human brain.

Until now, building machine learning (ML) algorithms for hardware meant complex mathematical models based on sample data, known as "training data," used to make predictions or decisions without being explicitly programmed to do so. And if this sounds complex and expensive to build, it is. On top of that, ML-related tasks were traditionally offloaded to the cloud, creating latency, consuming scarce power, and putting machines at the mercy of connection speeds. Combined, these constraints made computing at the edge slower, more expensive, and less predictable.

But thanks to recent advances, companies are turning to TinyML as the latest trend in building product intelligence.

Early TinyML applications

It’s easy to talk about applications in the abstract, but let’s narrow our focus to specific applications likely to be available in the coming years that would impact the way we work or live:

Mobility: If we apply TinyML to sensors ingesting real-time traffic data, we can use them to route traffic more effectively and reduce response times for emergency vehicles. Companies like Swim.AI use TinyML on streaming data to improve passenger safety and reduce congestion and emissions through efficient routing.

Smart factory: In the manufacturing sector, TinyML can prevent downtime due to equipment failure by enabling real-time decision-making. It can alert workers to perform preventative maintenance when necessary, based on equipment conditions (a minimal sketch of this idea follows this list).

Retail: By monitoring shelves in-store and sending immediate alerts as item quantities dwindle, TinyML can prevent items from going out of stock.

Agriculture: Farmers risk severe profit losses from animal illnesses. Data from livestock wearables that monitor health vitals like heart rate, blood pressure, temperature, etc. can help predict the onset of disease and epidemics.
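
As promised above, here is a minimal sketch of the smart-factory idea: flag a machine for preventative maintenance when its vibration level drifts well above a baseline recorded during normal operation. The synthetic data and the 30% threshold are purely illustrative.

```python
# Sketch: condition monitoring via RMS drift on accelerometer windows.
import numpy as np

def rms(window: np.ndarray) -> float:
    return float(np.sqrt(np.mean(window ** 2)))

baseline = rms(np.random.normal(0, 1.0, 1000))   # "healthy" recording
live = np.random.normal(0, 1.6, 1000)            # live accelerometer window
if rms(live) > 1.3 * baseline:                   # 30% drift => alert
    print("schedule preventative maintenance")
```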

Before TinyML goes mainstream …

As intriguing as TinyML may be, we are very much in the early stages, and a number of trends need to occur before it sees mainstream adoption.

Every successful ecosystem is built on engaged communities. A vibrant TinyML community will lead to faster innovation as it increases awareness and adoption. We need more investments in open-source projects supporting TinyML (like the work Google is doing around TensorFlow for broader machine learning), since open source allows each contributor to build on top of the work of others to create thorough and robust solutions.

Other core ecosystem participants and tools will also be necessary:

  • Chipset manufacturers and platforms like Qualcomm, ST, and ETA Compute can work hand-in-hand with developers to ensure chipsets are ready for the intended applications, and that platform integrations are built to facilitate rapid application development.
  • Cloud players can invest in end-to-end optimized platform solutions that allow seamless exchange and processing of data between devices and the cloud.
  • Direct support is needed from device-level software infrastructure companies such as Memfault, which is trying to improve firmware reliability, and Argosy Labs, which is tackling data security and sharing on the device level. These kinds of changes give developers more control over software deployments with greater security from nearly any device.
  • Lifecycle TinyML tools need to be built that facilitate dataset management, algorithm development, and version management and that enhance the testing and deployment lifecycle.

However, innovators are ultimately what drives change. We need more machine learning experts who have the resources to challenge the status quo and make TinyML even more accessible. Pete Warden, head of the TensorFlow mobile team, has taken on the ambitious task of building machine learning applications that run on a microcontroller for a year using only a hearing aid battery for power. We need more leaders like Pete to step up and lead breakthroughs to make TinyML a near-term reality.

Benefits of Edge AI in software testing

Before AI, testing software was a critical step in the software development lifecycle. It still is. But now AI can help with the testing itself.

AI and machine learning are being applied to software testing, defining a new era that will make the testing process faster and more accurate, according to a recent report from AZ Big Media.

The authors outline the benefits of AI in software testing as follows:

Improved automation tests. Quality assurance engineers spend a lot of time testing to ensure that new code does not destabilize existing, working code. The more features and functionality that are added, the more code that needs to be tested, which can overwhelm the QA engineers. Manual testing becomes impractical.

Test automation tools can run tests repeatedly over a long period of time. Adding AI capabilities to these tools is powerful. Machine learning techniques help the AI testbots evolve as the code changes, learning and adapting to new features. When they see changes to the code, they can determine whether a change is a bug or a new feature. The AI can also assess minor errors on a case-by-case basis, which speeds up the process.

Support with API tests, which developers use to assess the quality of interactions between the various programs that communicate with servers, databases, and other components. Testing ensures that requests are processed successfully, the connection is stable, and the user gets the correct output.

The addition of AI to this process helps analyze the functionality of connected applications and create test cases. The AI is able to analyze large data sets to identify potentially risky areas of the code.
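
For readers who want something concrete, a minimal automated API check of the kind described above might look like this pytest-style test (the endpoint and response fields are hypothetical):

```python
# Sketch: a basic API test asserting status, correctness, and shape.
import requests

def test_get_user():
    resp = requests.get("https://api.example.com/users/42", timeout=5)
    assert resp.status_code == 200    # request processed successfully
    body = resp.json()
    assert body["id"] == 42           # correct output for the input
    assert "email" in body            # expected fields are present
```

AI-assisted tools aim to generate and prioritize many such cases automatically, focusing on the riskier parts of the code.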

QA engineers use various tools and expertise to test Edge AI apps

Jason Arbon, CEO of test.ai, tells his children about the old days, when he had a car with manual window cranks, and they laugh. Soon, QA technicians will laugh in the same way at the idea of manually selecting, managing, and controlling systems under test (SUT). "AI will make it faster, better, and cheaper," said Paul Merrill, principal at Beaufort Fairmont, a software testing consultancy based in Cary, North Carolina.

test.ai offers bots that explore an application, interact with it, and extract screens, elements, and paths. It then generates an AI-based model for testing that exercises the application under test according to a schedule set by the customer. The website says, "Go Beyond Legacy Software Test Automation Tools."

The founders of Applitools, which provides a test automation platform with so-called "Visual AI," describe a test infrastructure that must support expected test results derived from the same data that trains the decision-making AI. "This is very different from our current work with systems under test," said Merrill.

He describes the experience of Angie Jones, a former senior software engineer in test at Twitter, in a recent article entitled "Test Automation for Machine Learning: A Field Report." Jones described how she systematically isolated the system's learning algorithms from the system itself, isolating the current data to reveal how the system learns and what it infers from the data it is given. Jones is now Senior Director of Developer Relations at Applitools.

Merrill asks these questions: "Are processes like these becoming best practices? Are they integrated into methods that we all use to test systems?"

On the subject of AI in testing, Applitools co-founders Moshe Milman and Adam Carmi were quoted by Merrill as saying, "First, we're going to see a trend where people become less and less mechanically involved in implementing, running, and analyzing test results, but they will still be an integral and necessary part of the testing process to approve and act on the results. This is already evident today in AI-based test products such as Applitools Eyes."

Says Merrill, “If AI can do less of the work for a tester and help them identify where to test, we need to consider BFF status.”

Milman and Carmi describe the skills needed by AI testers on the Applitools blog: "Test engineers would need different skills to build and maintain AI-based test suites that test AI-based products. The job requirements would include a stronger focus on data science skills, and test engineers would need to understand some deep learning principles."

Four approaches for AI in software testing are outlined

Four AI-driven test approaches are outlined in an account entitled "AI in Software Testing: 2021" on the website of TestingXperts, a software testing company based in Mechanicsburg, Pennsylvania.

The four approaches are: differential testing, visual testing, declarative testing, and self-healing automation.

In differential testing, QA engineers classify differences and compare application versions for each build.

Examples of products that support this include Launchable, which is based on an ML algorithm that predicts the probability of failure for each test, based on past runs and on what has changed in the source code. The tool allows the user to reorder the test suite so that tests that are likely to fail run first, and to run a dynamic subset of tests that are likely to fail, reducing a long-running test suite to minutes.
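
A toy sketch of predictive test selection in the same spirit, scoring each test by its recent failure history and running the riskiest first, might look like this (the history data and the naive scoring are made up for illustration):

```python
# Sketch: order tests by predicted failure risk so failures surface early.
test_history = {
    "test_login":    [1, 0, 1, 1, 0],   # 1 = failed on a past run
    "test_checkout": [0, 0, 0, 0, 0],
    "test_search":   [0, 1, 0, 0, 0],
}

def failure_rate(runs):                  # naive predictor of failure risk
    return sum(runs) / len(runs)

ordered = sorted(test_history,
                 key=lambda t: failure_rate(test_history[t]),
                 reverse=True)
print(ordered)                           # riskiest tests run first
```

Real tools also weigh which source files changed and how tests relate to them; failure history alone is the simplest possible signal.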

In visual testing, engineers test the look and feel of an application using image-based learning and screen comparison. Sample products include the Applitools platform with its Visual AI capabilities, including Applitools Eyes, which helps increase test coverage and reduce maintenance costs. The Ultrafast Grid is intended to help with cross-browser and cross-device testing and to accelerate functional and visual tests. The Applitools platform is said to integrate with all modern test frameworks and to work with many existing test tools, including Selenium, Appium, and Cypress.
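
Real visual-AI tools are far more sophisticated, but a bare-bones pixel-level screen comparison conveys the basic idea (the file paths and the 1% tolerance are illustrative):

```python
# Sketch: flag a build when a screenshot drifts from the approved baseline.
from PIL import Image, ImageChops

def screens_match(baseline_path: str, candidate_path: str,
                  tolerance: float = 0.01) -> bool:
    a = Image.open(baseline_path).convert("RGB")
    b = Image.open(candidate_path).convert("RGB")
    if a.size != b.size:
        return False                          # layout change, fail fast
    diff = ImageChops.difference(a, b)
    changed = sum(px != (0, 0, 0) for px in diff.getdata())
    return changed / (a.size[0] * a.size[1]) <= tolerance
```

The advantage of "Visual AI" over a raw pixel diff like this is tolerance to anti-aliasing, dynamic content, and rendering noise, which is what keeps maintenance costs down.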

In declarative testing, engineers specify the intent of the test in natural or domain-specific language, and the system decides how to conduct the test. Sample products include UiPath's Test Suite, which is used to automate a centralized testing process and helps build robots that run tests through robotic process automation. The suite includes tools for testing interfaces, managing tests, and running tests.

Tools from Tricentis likewise aim to enable Agile and DevOps teams to achieve their test automation goals, with features such as end-to-end testing of software applications. The tool includes test case design, test automation, and test data design, generation, and analysis.

In self-healing automation, the elements selected for testing are automatically adapted to changes in the user interface. Sample products include mabl, a test automation platform designed for continuous integration and continuous delivery (CI/CD). mabl crawls the app's screens and runs standard tests that are common to most applications. It uses ML algorithms to improve test execution and error detection.
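
The core "self-healing" move can be sketched in a few lines: if the primary locator for a UI element breaks after a UI change, try alternates before failing the test. The Selenium locators below are hypothetical examples, not taken from any particular product.

```python
# Sketch: locator fallback, the simplest form of self-healing UI tests.
from selenium.webdriver.common.by import By
from selenium.common.exceptions import NoSuchElementException

def find_with_fallbacks(driver, locators):
    for by, value in locators:
        try:
            return driver.find_element(by, value)  # first locator that works
        except NoSuchElementException:
            continue                               # "heal" by trying the next
    raise NoSuchElementException(f"no locator matched: {locators}")

# e.g. find_with_fallbacks(driver, [(By.ID, "buy"),
#                                   (By.CSS_SELECTOR, ".buy-btn")])
```

Commercial tools go further, using ML over element attributes and page history to pick the most likely replacement locator automatically.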

In summary: TinyML is a giant opportunity that’s just beginning to emerge. Expect to see quite a bit of movement in this space over the next year or two.

Some great articles about Edge AI for your reference

Johan Malm, PhD and AI researcher at Imagimob, has written a very good blog post about the more technical aspects of Edge AI, "Edge AI for techies". Read it here.

Johan has also written a blog post about a project with Acconeer, where we developed an application for gesture-controlled headphones using radar and Edge AI. Read it here.

Ben Dickson is an experienced software engineer and tech blogger. He contributes regularly to major tech websites such as The Next Web, PCMag.com, VentureBeat, International Business Times UK, and The Huffington Post. Read his article explaining why Edge AI is important.

Nathan Cranford has been a writer at RCR Wireless News since 2017. His previous work has been published by a myriad of news outlets, including COEUS Magazine, dailyRx News, Texas Writers Journal, and VETTA Magazine. Read his article on how to take AI from the cloud to the edge.

S. Somasegar is the managing director of Madrona Venture Group, a venture capital firm that teams with technology entrepreneurs to nurture ideas from startup to market success. Read his article with his predictions for AI and Machine Learning in 2018.

A recent (July 15, 2019) article in Forbes by Ami Gal, CEO and co-founder of SQream, explains Edge AI very well. Read his article about the cutting edge of IoT here.

In a recent report, MarketsandMarkets forecasts the global Edge AI software market to grow to USD 1.1 billion by 2023. Imagimob is listed as a major player together with 14 other companies. Read about the report here.

