Microsoft Azure: This new developer kit helps dispel the myth that AI is hard

Microsoft’s new Azure Percept Developer Kit aims to make computer vision cheaper and easier, bringing AI to more businesses.

The Azure Percept DK contains vision (right), optional audio (left) and development board (middle) modules.

Image: Microsoft

More and more sensors are being added at the edge of networks, using tools like Azure IoT Hub to connect them to cloud services, where the maximum utility can be extracted from the data they generate. But too many of these devices are bespoke, requiring significant development work to get that data into the right format and the right place.

To make the most of this growing industrial IoT, software and hardware engineers need to work with device firmware, learn real-time operating systems, and think about very low-level security. It’s a complex area that’s hard to take advantage of if you don’t have the resources to build a dedicated development team.

SEE: Natural language processing: A cheat sheet (TechRepublic)

But what if the industrial IoT were truly industrial: built to standards, with hardware that fits together like rugged Lego bricks? And what if there were a common development environment to help you connect APIs and services, so you could build your own machine learning-based IoT applications?

Discovering Azure Percept DK

That’s where Microsoft’s recently launched Azure Percept comes in. Like other Microsoft IoT services, it combines hardware, software and the Azure cloud. At the heart of the platform is a set of reference designs for edge hardware that takes advantage of machine learning. Hardware developers will be able to take these designs and create their own devices, adding their own features – for example, using a custom camera module or changing the radio. The designs could also be adapted for different industries, with different systems for warehouses or for oil rigs. Azure Percept is conceived as a family of plug-and-play IoT hardware from multiple vendors, with different designs running the same software platform.

Although the reference designs are part of the story, edge hardware needs software, so Microsoft is delivering a developer kit to kick-start the Percept ecosystem. Available from the Microsoft Store, it consists of a hub and a camera, with an optional audio sensor. The base developer kit costs $349, and the audio sensor is an additional $79. The devices are designed to fit the standard 80/20 mounting rails found in many industrial facilities, so they can be attached to existing rails or quickly installed in almost any space.

The main Percept DK module is built around NXP’s iMX8M system-on-module, with 4GB of RAM, 16GB of storage and a TPM for security. In addition to its four 64-bit Arm cores, it has extra acceleration for machine learning workloads in the shape of a dedicated Intel Movidius Myriad X vision processing unit. This allows it to offload most of Percept’s image processing from the CPU, saving both processor time and energy.

Connectivity comes via Ethernet, Wi-Fi or Bluetooth. The device runs Microsoft’s own CBL-Mariner Linux distribution, with management and update services from Azure. The camera module connects to the main carrier board via USB-C, and Microsoft suggests you can be working with images within 10 minutes of opening the box.

Getting started with Percept

You don’t need an 80/20 rail to get started, because the devices can sit next to your development machines, letting you quickly see how they work. All you need to do is plug in the power, attach the antennas, and connect the camera unit via USB. Once powered on, you can begin the initial configuration over Wi-Fi. A set of web pages guides you through connecting to a Wi-Fi network, before setting up SSH access. Once ready, the device connects to Azure, where it needs to be registered to your account, linking the Percept DK to Azure IoT Hub (either by creating a new instance or joining an existing one). You must use a standard-tier instance, because Percept isn’t supported on the free or basic tiers.
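If you’re scripting your setup, the device identity can also be created from the service side ahead of time. Here’s a minimal sketch using the azure-iot-hub Python SDK; the connection string and device ID are placeholders, not values Percept itself dictates.

```python
# pip install azure-iot-hub
# A minimal sketch of registering a device identity in an existing
# Azure IoT Hub; the connection string and device ID are placeholders.
import base64
import os

from azure.iot.hub import IoTHubRegistryManager

# Service-side connection string, from the hub's "Shared access policies" blade.
IOTHUB_CONNECTION_STRING = (
    "HostName=<your-hub>.azure-devices.net;"
    "SharedAccessKeyName=registryReadWrite;SharedAccessKey=<key>"
)
DEVICE_ID = "percept-dk-01"  # hypothetical device name

registry_manager = IoTHubRegistryManager(IOTHUB_CONNECTION_STRING)

# Generate SAS key material locally and register the device identity.
primary_key = base64.b64encode(os.urandom(32)).decode()
secondary_key = base64.b64encode(os.urandom(32)).decode()
device = registry_manager.create_device_with_sas(
    device_id=DEVICE_ID,
    primary_key=primary_key,
    secondary_key=secondary_key,
    status="enabled",
)
print(f"Registered {device.device_id} (status: {device.status})")
```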

When the Percept DK first connects to Azure, it updates itself and downloads its default software modules. You can then use Azure’s Percept Studio management and development environment to work with the hardware, initially testing streamed video against the AI vision-recognition model built into Percept.

SEE: Office 365: A Guide for Technology and Business Leaders (Free PDF) (TechRepublic)

A quick start is a definite advantage, because you can show results quickly. To help you move beyond the basic recognizer, there are sample vision models built around common business problems. You can quickly deploy tools to detect people or identify empty shelves, for example, without writing a line of code.

This low-code and no-code approach to practical AI vision is central to Percept; what matters here is what you can do in your work with machine learning and computer vision (and audio). After connecting your Percept system to Azure IoT Hub, you can use the Azure-hosted Percept Studio development tools to build your own applications, connect various APIs, and deliver code modules to your devices.

Azure Percept Studio includes numerous sample AI models, such as those for computer vision.

Image: Microsoft

Creating your first Percept applications

Getting started with Percept Studio is like working with any Azure tool: everything you create needs to be assigned to a resource group and given a pricing tier – in this case for Azure Cognitive Services, which provides the machine learning APIs that Percept uses. Once you’ve done the basic resource setup, you can quickly configure a vision solution. Start by choosing whether to detect or classify objects. You don’t need to select a target device type, as this is handled automatically by Percept Studio.
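That resource bookkeeping can also be scripted rather than clicked through in the portal. A minimal sketch with the azure-mgmt-resource Python SDK follows, where the subscription ID, group name and region are all assumptions:

```python
# pip install azure-identity azure-mgmt-resource
# A minimal sketch of creating the resource group that Percept Studio
# resources are assigned to; subscription ID, name and region are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

SUBSCRIPTION_ID = "<your-subscription-id>"

# DefaultAzureCredential picks up CLI, environment or managed identity logins.
client = ResourceManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Everything you create in Percept Studio has to live in a resource group.
rg = client.resource_groups.create_or_update(
    "rg-percept-demo",       # hypothetical group name
    {"location": "westus"},  # choose a region close to your devices
)
print(f"Resource group {rg.name} ready in {rg.location}")
```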

Then start training your model, with at least 30 images taken from the Percept camera module’s stream. You can automate this process – for example, if you’re building an application designed to monitor a space. Once the images have been captured and uploaded to Percept Studio, you can start tagging them. Tags are key to machine learning, letting you label image elements so that your application can identify specific objects or phenomena. Manually tagging a series of images and running them through a training cycle is probably the longest part of building a basic ML application.
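Because Percept’s vision workflow sits on top of Azure Custom Vision (more on that below), the same tag-and-train loop can be sketched with the Custom Vision Python SDK. Everything here (endpoint, key, project name, tag and file names) is a placeholder:

```python
# pip install azure-cognitiveservices-vision-customvision
# A sketch of the tag-and-train loop with the Azure Custom Vision SDK;
# endpoint, key, project, tag and file names are placeholders.
from azure.cognitiveservices.vision.customvision.training import CustomVisionTrainingClient
from azure.cognitiveservices.vision.customvision.training.models import (
    ImageFileCreateBatch,
    ImageFileCreateEntry,
)
from msrest.authentication import ApiKeyCredentials

ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com/"
credentials = ApiKeyCredentials(in_headers={"Training-key": "<training-key>"})
trainer = CustomVisionTrainingClient(ENDPOINT, credentials)

project = trainer.create_project("shelf-monitor")          # hypothetical project
empty_tag = trainer.create_tag(project.id, "empty-shelf")  # label to recognize

# Upload captured frames, tagging each one; at least 30 images per tag.
entries = []
for i in range(30):
    with open(f"frame_{i:02d}.jpg", "rb") as image:
        entries.append(ImageFileCreateEntry(
            name=f"frame_{i:02d}.jpg",
            contents=image.read(),
            tag_ids=[empty_tag.id],
        ))
trainer.create_images_from_files(project.id, ImageFileCreateBatch(images=entries))

iteration = trainer.train_project(project.id)  # kick off a training cycle
print(f"Training iteration {iteration.id}: {iteration.status}")
```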

Percept Studio offers tools for testing models and retraining them as needed. Don’t expect to get things right the first time; you can improve your model by giving it more examples to work with. Once you’re satisfied with the results, you can deploy your model to your Percept devices and run it.
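Since the trained model lives in Custom Vision, you can also sanity-check it from code before pushing it to the hardware. Here’s a sketch using the Custom Vision prediction SDK, again with placeholder IDs:

```python
# pip install azure-cognitiveservices-vision-customvision
# A sketch of testing a published classification model before deployment;
# endpoint, key, project ID and published iteration name are placeholders.
from azure.cognitiveservices.vision.customvision.prediction import CustomVisionPredictionClient
from msrest.authentication import ApiKeyCredentials

ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com/"
credentials = ApiKeyCredentials(in_headers={"Prediction-key": "<prediction-key>"})
predictor = CustomVisionPredictionClient(ENDPOINT, credentials)

with open("test_frame.jpg", "rb") as image:
    results = predictor.classify_image(
        "<project-id>", "<published-iteration>", image.read()
    )

# Low-confidence predictions are a hint that the model needs more
# tagged examples and another training cycle.
for prediction in results.predictions:
    print(f"{prediction.tag_name}: {prediction.probability:.1%}")
```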

Percept is capable of a lot because it’s built on top of Azure’s Custom Vision tools, which are part of its Cognitive Services machine learning suite. There’s an additional suite of development tools that can be downloaded to build more complex solutions, along with a GitHub repository to help you get started. This gives you access to the software used to run the AI module, as well as tools to help you train and deploy your own neural networks.

Microsoft is trying something quite ambitious with Percept: providing both a reference design for AI sensor hardware and the tools to build applications around it. There’s a myth that artificial intelligence is hard, and it’s clearly one of the myths the Percept team wants to help dispel. Codeless solutions get you started fast, ready to deploy on relatively inexpensive hardware, while more complex, custom neural networks can be built on your own hardware. It’s an effective mix that should grow along with you as you gain experience with both computer vision and audio processing.
