When you find yourself overwhelmed by tech marketing buzzwords, it is always a sobering experience to ask an expert for clarification. What is AIoT – and why should we care? As an experienced CTO and software engineer, Hans Christian Lønstad is eminently qualified to pick the AIoT buzzword apart and put it into the proper context:
– The first time I heard about AIoT was in an advertisement from Nvidia, which is a big player in this game. AIoT is the ability to put together machine learning and edge computing, and it’s a natural development in both fields. There are many good reasons for machine learning to take place at the edge, among them reducing latency and cloud-related cost, and enhancing performance.
– For instance, the “smart camera” is currently one of the most popular applications in this area. These surveillance cameras are used to monitor crowds or traffic, or for inspection and quality control on a production line. Nvidia has a very strong foothold in this area, and they offer the possibility to process vision data on the device, instead of having to send it up to the cloud. You can even buy pre-trained models for certain use cases, like counting the number of people or cars in an image.
– However, in my opinion AIoT is just a small part of something much bigger. It’s part of the mega trend towards automation, and one of the building blocks that will enable us to design autonomous systems at a level of complexity and precision we haven’t seen before.
No magic ingredient
But according to Hans Christian Lønstad, Artificial Intelligence is not the magic ingredient that will effortlessly bring us to the next level of human/digital interaction. Far from it. In fact, he prefers to use the expression Machine Learning and leave AI to the marketing people. Because, as he points out, 99.9 per cent of AI is Machine Learning anyway.
– We’re seeing more and more low-cost edge computing hardware with facilities for machine learning computation. To be precise, what sits at the edge is the decision part of Machine Learning. It is called an “inference engine”, which is essentially a glorified matrix multiplier – an architecture increasingly supported in standard processors, mobile phone CPUs and hardware in general. The inference part of Machine Learning requires much, much lighter computational resources than the training of a Machine Learning system.
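The “glorified matrix multiplier” Lønstad describes can be sketched in a few lines. The following illustrative Python (pure standard library, with invented weights – a real edge device would load parameters trained elsewhere) shows why inference is cheap: a forward pass is just repeated matrix multiplication with a simple non-linearity in between.

```python
# Minimal sketch of an "inference engine": a forward pass through a
# tiny two-layer network. All weights here are made up for illustration;
# a real model would load parameters produced by cloud training.

def matvec(W, x):
    """Multiply a weight matrix (list of rows) by an input vector."""
    return [sum(w * v for w, v in zip(row, x)) for row in W]

def relu(vec):
    """Rectified linear unit: the typical cheap non-linearity."""
    return [max(0.0, a) for a in vec]

def infer(x, layers):
    """One forward pass: alternate matrix multiply and activation."""
    for W in layers:
        x = relu(matvec(W, x))
    return x

# Two hypothetical layers mapping 2 inputs -> 3 hidden units -> 2 classes.
layers = [
    [[0.5, -0.2], [0.1, 0.8], [-0.3, 0.4]],
    [[1.0, 0.0, -1.0], [0.2, 0.5, 0.3]],
]
scores = infer([1.0, 2.0], layers)
decision = max(range(len(scores)), key=lambda i: scores[i])
```

Every step is multiply-and-add, which is exactly what the dedicated matrix hardware in modern CPUs and accelerators speeds up.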
Edge and cloud combined
– That is why we often see a combination of edge and cloud computing that reinforces the Machine Learning. Let’s have a look at Tesla. A Tesla uses a lot of Machine Learning at the edge to respond to input from on-board cameras and sensors while driving. When the car is parked and connected to Wi-Fi, it uploads huge amounts of data to the cloud to be used as input for enhancing the Machine Learning algorithms. So you have two levels of Machine Learning: one in the cloud, the other at the edge. The training takes place in the cloud, and the actual decision-making takes place at the edge, based on models trained in the cloud.
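The two-level split Lønstad describes can be caricatured in a few lines of Python. This is not how Tesla works – the “model” below is just a midpoint threshold between two class means, a deliberately trivial stand-in for expensive cloud training – but it shows the shape of the loop: heavy fitting in the cloud, then only the fitted parameters travel to the edge, where each decision is a single cheap comparison.

```python
# Hypothetical sketch of the cloud/edge split: training is heavy and
# runs in the "cloud"; the edge receives only the resulting parameters.

def cloud_train(samples):
    """'Cloud' side: fit a threshold from labelled readings.
    The midpoint between class means stands in for real training."""
    neg = [x for x, label in samples if label == 0]
    pos = [x for x, label in samples if label == 1]
    threshold = (sum(neg) / len(neg) + sum(pos) / len(pos)) / 2
    return {"threshold": threshold}      # the "model" shipped to the edge

def edge_infer(model, reading):
    """'Edge' side: one comparison -- far cheaper than training."""
    return 1 if reading >= model["threshold"] else 0

# Data collected at the edge is uploaded in bulk when convenient...
uploaded = [(0.9, 0), (1.1, 0), (2.9, 1), (3.1, 1)]
model = cloud_train(uploaded)          # training in the cloud
# ...and the edge then makes split-second decisions locally.
alert = edge_infer(model, 2.5)
```

The asymmetry is the point: `cloud_train` touches the whole data set, while `edge_infer` needs nothing but the shipped parameters.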
Training is difficult
Actually, training a Machine Learning model is a task not to be underestimated, Lønstad explains.
– You can buy pre-trained models like the ones provided by Nvidia. The benefit is that you’re quickly up to speed with what you want to do. But there is a downside: precision is low. We are talking maybe 80 per cent correctness on pre-trained models for camera vision. That may be good enough for many applications, but in other use cases it’s unacceptable.
– If you want higher precision, and you have items with specific features that you need to put into the system, then you need to get your hands dirty and train the model yourself. You need to qualify your data and your algorithm, and this is where it gets complicated. It is a lot of work, and you need vast amounts of data.
Garbage in & out
According to Hans Christian Lønstad, the performance of a Machine Learning system depends on the data that is fed into it. The old saying “garbage in, garbage out” applies very much to Machine Learning: the quality of the output is determined by the quality of the input.
– Machine Learning is statistics. It is a statistical approach, as opposed to a conventional algorithm with some kind of direct connection between input and output. But you need a lot of high-quality data to train your Machine Learning system. And data is easily biased, which gives you systematic errors, and that is not a good thing. It’s a kind of paradox with all statistical data: if you want to reduce the variance in the result, you need to accept more bias, and vice versa. So it will never be perfect.
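The bias/variance trade-off Lønstad mentions can be demonstrated numerically. In this illustrative sketch (all numbers invented), we estimate a true value from a handful of noisy samples; shrinking the estimate toward zero introduces bias but reduces variance, and a modest amount of shrinkage lowers the total mean squared error – while too much makes things far worse.

```python
import random

# Numerical illustration of the bias/variance trade-off:
# MSE = bias^2 + variance, measured empirically over many trials.
# All quantities below are made up for the demonstration.

random.seed(0)
TRUE_MEAN = 5.0

def estimate(shrink):
    """Estimate TRUE_MEAN from 5 noisy samples, shrunk toward 0.
    shrink=0 is unbiased; larger shrink adds bias, cuts variance."""
    samples = [random.gauss(TRUE_MEAN, 4.0) for _ in range(5)]
    return (1 - shrink) * (sum(samples) / len(samples))

def mse(shrink, trials=20000):
    """Empirical mean squared error of the shrunk estimator."""
    errs = [(estimate(shrink) - TRUE_MEAN) ** 2 for _ in range(trials)]
    return sum(errs) / trials

unbiased = mse(0.0)       # no bias, full variance (theory: 3.2)
shrunk = mse(0.1)         # a little bias, noticeably less variance
overshrunk = mse(0.8)     # bias dominates: much worse overall
```

A small bias buys a variance reduction that pays off (`shrunk < unbiased`), but the trade cannot be pushed arbitrarily far – which is exactly why, as Lønstad puts it, it will never be perfect.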
– In my opinion, there is only a very, very exclusive group of companies that has access to enough high-quality data to build good Machine Learning systems. If you look at who has succeeded with Machine Learning, it’s basically the big Internet companies like Google and Facebook, which are collecting data from their users in any way they can. They have an abundance of data, and their users are giving it to them for free. In an industrial setting you won’t have the same possibilities.
Don’t get overambitious
Hans Christian Lønstad issues a warning to companies attracted to the high-flying concept of Artificial Intelligence:
– Don’t think that because companies like Facebook and other big-league players are succeeding with this, you will as well. That’s a wrong assumption, so you should be careful not to get overambitious. Without access to similar amounts of data it’s impossible to build Machine Learning systems at that level of sophistication. But you can build something that’s good enough for specific purposes; you just need to be careful to make the right choices.
– As mentioned before, there is potential for Machine Learning in an industrial setting, for instance in computer vision for quality control. But it doesn’t come for free. You need to put a lot of effort into training the systems, qualifying the data, and evaluating and developing them over time.
Not good enough
And while Hans Christian Lønstad is hard at work sticking pins in the hot air balloons of tech buzzwords, here is another one that needs deflating. In Lønstad’s opinion, Closed Loop Machine Learning won’t be as big as some people are hoping. It’s simply not good enough; you can’t use it to drive a car, for instance. If you require close to 100 per cent confidence, you can’t use machine learning, which is why it cannot be used in safety-related systems. There you won’t accept the risk of somebody getting injured or dying, even if that risk is only 2 per cent – and 98 per cent confidence is actually a very high level for machine learning.
Also, you can’t use Closed Loop Machine Learning for decisions that have legal implications for a person, for instance compensating people for something according to specific legal rights. Here, 95 per cent certainty is not enough. Moreover, in these cases you are required to document your decision and to have a trackable line of events leading up to it. A Machine Learning “black box” is unacceptable in these use cases.
Tool for decision-making
– Instead, Machine Learning can perform a lot of tasks going through vast amounts of data and finding the bits and pieces that need your attention. We are drowning in information, and Machine Learning can help you sort out what you really should look at. It can be a helpful tool for decision-making. It probably shouldn’t be the decision-maker itself, but it can assist you by narrowing down the information you have to look into. In that way we can apply it in many areas, but again, that’s statistics. Machine Learning is statistical methods, and those have been used for years.
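The triage role Lønstad describes – surfacing the items worth a human’s attention rather than making the final call – can be as simple as classic outlier scoring. A minimal sketch (the readings and the threshold are invented for illustration):

```python
# Illustrative triage: score readings statistically and flag only the
# outliers for human review. The data and threshold are made up.

def zscores(values):
    """Standard scores: how many standard deviations from the mean."""
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values) / len(values)
    std = var ** 0.5
    return [(v - mean) / std for v in values]

def needs_attention(values, threshold=2.0):
    """Return indices a human should review; the final call stays human."""
    return [i for i, z in enumerate(zscores(values)) if abs(z) > threshold]

readings = [10.1, 9.9, 10.0, 10.2, 25.0, 9.8, 10.1, 10.0]
flagged = needs_attention(readings)   # only the anomalous reading
```

The point of the design is that the system filters, and a person decides – the statistical method narrows eight readings down to the one worth looking at.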
As mentioned in the beginning, AIoT is the ability to put together machine learning and edge computing. It’s a natural development in both machine learning and edge computing, and it’s part of the mega trend towards automation. According to Hans Christian Lønstad, an important enabler in this game will be 5G:
– With 5G you can have powerful computing resources at the edge. When we talk about AIoT, we should bring in 5G and edge computing at the next level. With 5G you can have computing resources very close to the IoT devices. You can eliminate latency issues, and you won’t need to ship bulk data over the internet to cloud systems. In production facilities you can have private 5G networks, which allow you to handle the cost implications of sending more data.
Huge engineering task
– 5G is an enabler for doing more sophisticated Machine Learning at the edge. But 5G itself is not the Holy Grail, and neither is AI. All these things together will enable us to reach the next level of automation and design autonomous systems we haven’t seen before.
– These super-complex systems need to be put together. That is a huge engineering task and will involve tons of software. The solutions will be different, from application to application and from industry to industry. The technology itself may be horizontal, but the verticals applying this technology and putting it together as a system to achieve specific applications – that will require an enormous effort and big investments.