A Blog By Peter Logan, Machine Learning Solution Architect at Intel
The folks at GOTO50.ai were kind enough to invite me to put pen to paper and talk about some of the “whys”, as well as the “whats”, of the work that I, my team, and my company do in the fields of Artificial Intelligence (AI), Machine Learning (ML), and Computer Vision (CV). The “why” is pretty easy for me, as Computer Vision has been good for my inner geek. It’s a genuinely long-term and interesting area of computing in an industry that hypes one fashion after another. I was involved in the API boom of maybe a decade ago now, and not since then have I been as interested in what I’m doing.
Talking of fashions and hype, IoT seems to be settling down now, and the camera is replacing many of the sensors that had previously been feeding it data. Think about it: a supermarket or distribution centre could invest in wired shelf sensors and multiple radio technologies just to get an idea of stock levels. A camera that can see and infer can do the same job, while also finding misplaced stock, showing where people most often go, and revealing what grabs their attention. Not to mention estimating volume and space for restocking, theft detection, and product quality reporting.
I have given a slightly mundane example here because this is exactly the sort of utilitarian use case that will justify the rapid rise of the camera as the go-to sensor in future. Healthcare and safety are also seeing big growth as businesses opt to assist human judgement, benefiting patients and customers.
The Edge is the key here, and there are several reasons why it’s happening.
Firstly, because technology enables it. Mobile phones have given us camera sensors, as well as relatively cheap access to a billion transistors’ worth of computing power, in the palm of our hands. This, in turn, makes camera-based IoT devices much more affordable and accessible. The techniques in Computer Vision processing have also got a LOT better: perceptrons have given way to sigmoid neurons, shallow networks to deep ones, and discrete blocks like convolutions and, more recently, sparsity techniques have increased efficiency. The fact that early networks for inferring handwriting had tens of thousands of parameters, while today models with 100 billion parameters are commonplace, speaks volumes.
While compute hardware and neural networks have improved exponentially, the cost of transporting data has not improved at the same rate. So if our computer vision needs a decision from its inference now, and reliably so, we are always going to be somewhat isolated from the data centre. For obvious reasons we also do not want healthcare and security image data going to data centres that we cannot audit and keep track of. Take, for example, this use case that offers local processing for detecting blindness.
Developers are going to matter so much more now. For Edge AI, it’s no longer just a case of writing a script to get a device to talk to a sensor API and report back to the cloud. Data sets are going to have to be collected and curated. You’re going to have to exercise judgement in making trade-offs between efficiency and accuracy. There are choices to be made about software frameworks and the hardware they run on.
This is where GOTO50.ai, and efforts like it, are a step in the right direction, providing a place where developers can help each other with practical steps on implementing use cases out in the field. Intel has also contributed by helping developers get up to speed with OpenVINO™ (Open Visual Inference and Neural network Optimisation). https://intel.com/openvino
We have a phrase in our team at Intel: “No transistor left behind”. If you’re buying kit for an Edge AI application, it’s more than likely to have some kind of Central Processing Unit (CPU) as well as the AI or CV accelerator, and probably a Graphics Processing Unit (GPU) bundled with the CPU. What if you could use all of that silicon you have just bought for AI inferencing, and not just the accelerator? What if you could take a vast range of models in different frameworks like Caffe, ONNX, TensorFlow, and PyTorch and have them optimized to run on all of that different silicon without having to maintain separate build folders or APIs? This is what OpenVINO delivers, and it does it all with simple-to-use Python wrappers. It also gives the developer and sysadmin tools for deploying, tuning, and monitoring performance, so that every bit of performance can be squeezed out of the kit in which you have invested.
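To make the “no transistor left behind” idea concrete, here is a minimal sketch of picking an inference target from whatever silicon a box happens to have. The preference order and the commented OpenVINO calls are illustrative assumptions on my part, not an official Intel recipe — check the OpenVINO documentation for the current Python API.

```python
def choose_device(available_devices):
    """Pick an inference target from the devices the runtime reports.

    `available_devices` is a list of device name strings such as
    ["CPU", "GPU", "MYRIAD"], as reported by the OpenVINO runtime.
    """
    # Illustrative preference: dedicated accelerator first, then the
    # integrated GPU, then the CPU that is almost always present.
    for preferred in ("MYRIAD", "GPU", "CPU"):
        if preferred in available_devices:
            return preferred
    raise RuntimeError("no supported inference device found")


# With OpenVINO installed, usage looks roughly like this (indicative
# calls from the Python API of the era; verify against current docs):
#
#   from openvino.inference_engine import IECore
#   ie = IECore()
#   device = choose_device(ie.available_devices)
#   net = ie.read_network(model="model.xml", weights="model.bin")
#   exec_net = ie.load_network(network=net, device_name=device)

print(choose_device(["CPU", "GPU"]))  # GPU
print(choose_device(["CPU"]))         # CPU
```

The point of the sketch is that the model itself does not change: the same optimized network is loaded onto whichever device is present, which is exactly what lets one build target span the workbench, the proof of concept, and production hardware.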
ADLINK has been instrumental in helping developers and companies get onto the first rungs of this ladder by teaming its Vizi-AI kit with Intel’s OpenVINO™, so that you have something you can start with on the workbench, take with you into the Proof-of-Concept stage, and on into a Production environment. And when it comes to growing your commercial ideas, we’re ready to pitch in, helping you as partners with both programmes and funding.
Enjoy taking your ideas into the field, and do make use of this new GOTO50.ai community; it’s for you.
Peter Logan, @PeteL0gan
Intel Neural Compute Stick Product Manager https://intel.com/ncs
Peter will be presenting at IoT Newcastle Meetup – March 3rd 2020