There are some challenges in setting up a Jetson Nano device to carry out custom learning on your own dataset. The challenges stem from incompatibilities between PyTorch, JetPack, TensorRT, and other components in your environment. The purpose of this blog is to clear the air and provide a simple step-by-step guide on how to set up the environment correctly the first time.
Follow the instructions in the link above to flash the image to your SD card.
Preparing the Jetson Nano device
Once the image has been flashed to your SD card and you have booted up the Jetson Nano device, you can proceed with the following steps.
1. Run the Software Updater: This ensures that any components updated after the official image was released receive the latest updates.
2. Validate the JetPack version: Once you have run the Software Updater, it is time to validate that you have the correct version of JetPack installed. For that, clone https://github.com/jetsonhacks/jetsonUtilities and download the code so you can run jetsonInfo.py.
There are a few things to make note of. The JetPack version is 4.4, which is compatible with TensorRT version 7.1. In my experience, most problems related to environment setup for custom learning stem from incompatibility between these two components.
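As a quick sanity check before training, that pairing can be encoded in a few lines of Python. This is a hedged sketch: the table below contains only the JetPack 4.4 / TensorRT 7.1 combination discussed in this post, and the function name is ours; extend the table for other releases you have verified.

```python
# Known-good JetPack/TensorRT pairings; only the combination from this
# post is included (extending it to other releases is up to you).
KNOWN_PAIRINGS = {
    "4.4": "7.1",  # JetPack 4.4 ships with TensorRT 7.1
}

def is_known_good(jetpack_version, tensorrt_version):
    """Return True if this JetPack/TensorRT combination is known to work."""
    return KNOWN_PAIRINGS.get(jetpack_version) == tensorrt_version

print(is_known_good("4.4", "7.1"))  # True
print(is_known_good("4.4", "8.0"))  # False
```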
3. Install protobuf: In my experience, I found it helpful to install protobuf before installing any other components.
Here is the command:
pip install protobuf   # for Python 2.x
pip3 install protobuf  # for Python 3.x
Note: If pip is not installed, install it using the following commands:
sudo apt install python-pip   # for Python 2.x
sudo apt install python3-pip  # for Python 3.x
Since you will likely be using Python 3 for training your dataset and running AI tests, I highly recommend installing protobuf and pip for both Python 2.x and Python 3.x.
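To confirm protobuf is actually visible to the interpreter you plan to train with, a quick check like the one below can help (run it with both python and python3; the helper name is ours):

```python
import importlib.util

def has_module(name):
    """Return True if `name` can be imported by this interpreter."""
    try:
        return importlib.util.find_spec(name) is not None
    except ModuleNotFoundError:
        # Raised when a parent package (e.g. `google`) is itself missing.
        return False

# protobuf's importable package is google.protobuf.
print("protobuf available:", has_module("google.protobuf"))
```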
Here is the set of commands, as documented by Dustin Franklin.
$ sudo apt-get update
$ sudo apt-get install git cmake libpython3-dev python3-numpy
$ git clone --recursive https://github.com/dusty-nv/jetson-inference
$ cd jetson-inference
$ mkdir build
$ cd build
$ cmake ../
$ make -j$(nproc)
$ sudo make install
$ sudo ldconfig
During the steps in Dustin's set of commands, you will see the option to install PyTorch. If you are planning to train your dataset using PyTorch, then you need to install it here.
Note: During the installation you will get the option to install PyTorch 1.6.0 for Python 3.6, as shown below:
Since PyTorch is installed for Python 3.6, I used python3 for my custom learning.
In this article we have seen, from start to finish, how to set up your Jetson Nano environment to carry out custom learning for your models using your own datasets. The goal of this article is to address the compatibility issues that cause failed attempts to carry out custom learning of your models.
In this tutorial we will see how to deploy an IoT Edge module on an NVIDIA Jetson Nano device, send a direct method request to the newly created module, and get its response. This tutorial can also be used on any Linux VM that the Azure IoT Edge runtime supports: just deploy the Azure IoT Edge runtime and use that VM as your device.
Azure IoT Hub. Steps to create Azure IoT Hub can be found here
IoT Edge Device registered. Details can be found here
Docker image repository: the container images will be pushed here. This tutorial requires access to an already created Docker image repository with a user name and password. Details on how to create an Azure Container Registry can be found here
VS Code
Azure IoT Tools for VS Code: this should be configured and connected to your IoT Hub. More details can be found here
Preconfigured NVIDIA Jetson device with the IoT Edge runtime installed and configured to use Azure IoT Edge. More details can be found here. Or a Linux virtual machine with the Azure IoT Edge runtime installed and configured. More details can be found here. In this case the Linux virtual machine will act as the device.
Steps
Setting up the environment
Step 1. Create device identity as shown below:
az iot hub device-identity create --device-id [your-azure-iot-edge-device] --edge-enabled --hub-name [your-azure-iot-hub_name]
Step 2. In VS Code, open the command palette and run the command "Azure IoT Edge: New IoT Edge Solution".
Step 3. Choose the location for solution files.
Step 4. Choose a name for the solution. NvidiaJetsonEdgeSolution was used for this tutorial.
Step 5. On "Select module template" question, choose "Python Module".
Step 6. Enter the name for "Python Module". For this tutorial "RequestProcessorModule" was chosen.
Step 7. For the "Provide Docker image repository" prompt, enter your pre-existing image registry followed by the name of your module's repository. Example: [your-docker-image-registry].azurecr.io/requestprocessormodule. After this step, VS Code will open a new window with the following view:
Step 8. Open the .env file and enter the user name and password for your docker image registry as shown below:
Step 9. On VS Code open the command palette and enter command "Azure IoT Edge: Set Default Target Platform for Edge Solution".
Step 10. Select "arm64v8" or the architecture that matches your device. You can find out the architecture of your device by running the following command on the device:
$ uname -m
aarch64
In this case "aarch64" corresponds to "arm64v8". Once the architecture is set the settings.json file would look like:
Adding code
Step 1. Open "main.py"
Step 2. Replace the code with the code mentioned below:
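The full main.py listing is embedded from a gist and is not reproduced here. As a rough, hedged sketch of what such a module contains, the core is a direct method handler. The function below captures only the request/response logic; the method name "ProcessRequest" and the payload shape are assumptions. In the real main.py, this logic would be registered on an IoTHubModuleClient created with create_from_edge_environment() from the azure-iot-device SDK.

```python
import json

def handle_direct_method(method_name, payload):
    """Return an (status, response_payload) pair for a direct method call.

    In the actual module this would run inside the SDK's method-request
    callback and be wrapped in a MethodResponse before being sent back.
    """
    if method_name == "ProcessRequest":
        return 200, {"result": "Processed: " + json.dumps(payload)}
    # Unknown method names are rejected with an HTTP-style 404 status.
    return 404, {"result": "Unknown method: " + method_name}

status, body = handle_direct_method("ProcessRequest", {"value": 42})
print(status)  # 200
```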
Step 1. Right-click "deployment.template.json" and select "Build and Push IoT Edge Solution" as shown below:
Step 2. The result of the above step is a new folder called "config". The folder contains a deployment JSON file corresponding to the default target platform selected in Step 10 under the "Setting up the environment" section. For our NVIDIA Jetson Nano device the architecture is arm64v8, as shown below:
Step 3. [Optional] Remove the SimulatedTemperatureSensor module. If you open the "deployment.arm64v8.json" file under the config folder, you will notice that it contains both the "RequestProcessorModule" and the "SimulatedTemperatureSensor" module. This means that if you deploy this deployment JSON file to the device, you will end up with the additional SimulatedTemperatureSensor module. If you do not want that module, simply remove its section as shown below:
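For orientation, the entry to delete sits under the $edgeAgent modules section and looks roughly like the fragment below; the exact image tag and options in your generated file may differ, so verify against your own manifest:

```json
"SimulatedTemperatureSensor": {
  "version": "1.0",
  "type": "docker",
  "status": "running",
  "restartPolicy": "always",
  "settings": {
    "image": "mcr.microsoft.com/azureiotedge-simulated-temperature-sensor:1.0",
    "createOptions": {}
  }
}
```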
Step 4. Open the "AZURE IOT HUB" section under the "Explorer" panel on VS Code.
Step 5. Select the target IoT Edge Device and right click.
Step 6. Select "Create Deployment for Single Device" menu item as shown below:
Step 7. This will open a dialog asking you to select the Edge Deployment Manifest. Select the deployment manifest file under the config folder that corresponds to your device architecture, as shown below:
This will result in the deployment of your edge module as shown below:
Step 8. Head over to the Azure Portal and navigate to the IoT Edge device. This will show the newly created IoT Edge module as shown below:
Step 9. To view the newly created IoT Edge module on the device, open the device terminal and run the following command:
$ sudo iotedge list
This will show the newly created IoT Edge module as shown below:
Step 10. To view the log entries from the newly created IoT Edge module, run the following command on the device terminal:
$ sudo iotedge logs RequestProcessorModule
This will show the following result:
Test
Step 1. Head over to Azure Portal, select the IoT Edge device, click the "RequestProcessorModule".
Step 2. On the IoT Edge Module Details page, select "Direct Method" link. This will open up "Direct Method" page that is used to test.
Step 3. Execute the test as shown below:
Step 4. Head over to the device terminal and run the following command:
$ sudo iotedge logs RequestProcessorModule
This will show the following result:
Conclusion
In this tutorial we have seen how easy it is to create a new Azure IoT Edge module using Python and deploy it using VS Code.
The CopyTags.ps1 script allows you to copy tags between resource groups. The source resource group is the one the tags are copied from, and the target resource group is the one the tags are copied to.
Features
The script only copies tags that are present in the source resource group and missing from the target resource group.
If a tag with the same name exists in both the source and target resource groups, the script leaves the target's value untouched. This lets you use the script without worrying about overwriting valuable existing tag information.
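The no-overwrite rule above is the heart of the script. CopyTags.ps1 itself is PowerShell; as a language-agnostic sketch (the function name here is hypothetical), the merge behaves like this:

```python
def merge_tags(source_tags, target_tags):
    """Copy tags from source that are missing in target; never overwrite."""
    merged = dict(target_tags)
    for name, value in source_tags.items():
        if name not in merged:  # existing target tags always win
            merged[name] = value
    return merged

# "owner" already exists in the target, so only "env" is copied over.
print(merge_tags({"env": "prod", "owner": "alice"}, {"owner": "bob"}))
```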
Usage
The script takes two parameters: sourceResourceGroup and targetResourceGroup.
Example:
.\CopyTags.ps1 "[Source Resource Group Name]" "[Target Resource Group Name]"
Steps
Log in to your subscription using PowerShell. Example: Connect-AzAccount -Subscription "[Your subscription name]"