Friday, September 4, 2020

Custom learning using Jetson Nano - How to setup the environment

There are some challenges in setting up a Jetson Nano device for custom learning on your own dataset. The challenges stem from incompatibilities between PyTorch, JetPack, TensorRT and other components in the environment. The purpose of this blog is to clear the air and provide a simple step-by-step guide on how to set up the environment right the first time.

Here are the steps:

Writing image to SD card

There are several SD card images available on different sites. The one that works best is the image provided at the official getting-started page: https://developer.nvidia.com/embedded/learn/get-started-jetson-nano-devkit#write. Download the image from that page.


Follow the instructions at that link to flash the image to your SD card.
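Before flashing, it is worth verifying the downloaded image archive against the checksum published on the download page, so a corrupted download does not cost you a flashing cycle. Here is a minimal sketch using only Python's standard library; the file name and expected digest are placeholders, not the real values:

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Compute the SHA-256 digest of a file without loading it all into memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    # Placeholder file name and digest -- substitute the values for the
    # image you actually downloaded from NVIDIA's page.
    image = "jetson-nano-sd-card-image.zip"
    expected = "replace-with-the-published-checksum"
    actual = sha256_of(image)
    print("OK" if actual == expected else "Checksum mismatch: " + actual)
```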

Preparing the Jetson Nano device

Once the image has been flashed to your SD card and you have booted up the Jetson Nano device, you can proceed with the following steps.

1. Run the Software Updater: This ensures that any components updated after the official image was released have the latest versions.


2. Validate the JetPack version: Once you have run the Software Updater, validate that the correct version of JetPack is installed. For that, clone https://github.com/jetsonhacks/jetsonUtilities so that you can run jetsonInfo.py.

Run the following commands to clone the repository and print the device information:

git clone https://github.com/jetsonhacks/jetsonUtilities
cd jetsonUtilities
./jetsonInfo.py


There are a few things to make note of in the output. The JetPack version should be 4.4, which is compatible with TensorRT 7.1. In my experience, most of the problems with environment setup for custom learning stem from an incompatibility between these two components.
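That dependency can be made explicit with a small lookup table. The sketch below contains only the pairing verified in this article (JetPack 4.4 with TensorRT 7.1); any other entries would need to be filled in from NVIDIA's release notes:

```python
# Known-good JetPack -> TensorRT pairings. Only the combination used in
# this article is listed; extend the table from NVIDIA's release notes.
JETPACK_TENSORRT = {
    "4.4": "7.1",
}

def compatible(jetpack, tensorrt):
    """Return True if this TensorRT version is the one that ships with that JetPack."""
    return JETPACK_TENSORRT.get(jetpack) == tensorrt

print(compatible("4.4", "7.1"))  # the setup described in this article
```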

3. Install protobuf: In my experience, it helps to install protobuf before installing any other components.
Here is the command:

pip install protobuf   # for python 2.x
pip3 install protobuf  # for python 3.x

Note:
If pip is not installed, install it with the following commands:
sudo apt install python-pip   # for python 2.x
sudo apt install python3-pip  # for python 3.x

Since you will likely be using python3 for training your dataset and running AI tests, I highly recommend installing pip and protobuf for both python 2.x and python 3.x.
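To confirm that protobuf actually landed in the interpreter you plan to train with, a quick stdlib-only check can be run with both python and python3 (protobuf installs as the google.protobuf package):

```python
import importlib.util

def module_available(name):
    """Return True if the named module can be imported in this interpreter."""
    try:
        return importlib.util.find_spec(name) is not None
    except ModuleNotFoundError:
        # find_spec raises if a parent package (e.g. 'google') is missing
        return False

print("protobuf installed:", module_available("google.protobuf"))
```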

4. Set up jetson-inference: The last step is to set up jetson-inference. The steps are clearly described in Dustin Franklin's GitHub repo at https://github.com/dusty-nv/jetson-inference/blob/master/docs/building-repo-2.md
Here is the set of commands as given there:
$ sudo apt-get update
$ sudo apt-get install git cmake libpython3-dev python3-numpy
$ git clone --recursive https://github.com/dusty-nv/jetson-inference
$ cd jetson-inference
$ mkdir build
$ cd build
$ cmake ../
$ make -j$(nproc)
$ sudo make install
$ sudo ldconfig

While running Dustin's set of commands, you will see the option to install PyTorch. If you plan to train your dataset with PyTorch, you will need to install it.

Note: During the installation you will get the option to install PyTorch 1.6.0 for Python 3.6.


Since PyTorch is installed for Python 3.6, I used python3 for my custom learning.
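Once PyTorch is installed, it is worth sanity-checking that python3 can see it and that CUDA is usable on the Nano. Here is a defensive sketch that reports the result instead of crashing, so it also runs on a machine where PyTorch is absent:

```python
def check_pytorch():
    """Report whether PyTorch is importable and whether CUDA is available.

    Returns an (installed, cuda_ok) tuple instead of raising, so the script
    runs even on a machine without PyTorch.
    """
    try:
        import torch
    except ImportError:
        return (False, False)
    return (True, torch.cuda.is_available())

installed, cuda_ok = check_pytorch()
print("PyTorch installed:", installed)
print("CUDA available:", cuda_ok)
```

On a correctly set up Jetson Nano, both lines should print True.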

Once the steps are complete, you can head off and train models on your own datasets. Before that, I recommend testing your environment using the steps in Dustin Franklin's post at https://github.com/dusty-nv/jetson-inference/blob/master/docs/pytorch-cat-dog.md

Conclusion

In this article we have seen, from start to finish, how to set up your Jetson Nano environment to carry out custom learning for your models using your own datasets. The goal of this article is to address the compatibility issues that cause failed attempts at custom learning.