Donkey 4.x is now official (#723)
* Update nano install docs - python3-opencv vs libopencv-python

* change slack to discord on readme

* Donkeycar 4.x release.  (#644)

## Major Improvements

- New Datastore.
- More ways to pre-process image data for training. 
- Use the 2.x version of Tensorflow. 
- Lots of other minor improvements.

* Improvements to the car app and handling of KerasPilot parts: (#648)

* Improvements to the car app and handling of KerasPilot parts:
* Created a simpler web server / joystick car app by modifying and renaming the basic_web template into the basic template, and switched this on as the default. Also renamed the target from 'manage.py' to 'drive.py', because only driving and no training is included
* Simplified the handling of uint8 and float32 numpy image arrays. KerasPilot.run() now expects uint8 data, transforms it into float32 and delegates to its children. The corresponding rescaling step has been removed from the car app
* smaller updates/fixes to environment and config files
* added support for tflite pilots in makemovie
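The uint8-to-float32 handling described above can be sketched as a small base-class pattern. This is an illustrative sketch, not the real KerasPilot API: the class and method names here are hypothetical stand-ins for the run()/delegate split the bullet describes.

```python
import numpy as np

# Hypothetical sketch of the run()/inference split: run() accepts uint8
# images straight from the camera or tub, normalises to float32 once,
# then delegates to the subclass. Names are illustrative, not the real API.
class PilotBase:
    def run(self, img_arr):
        assert img_arr.dtype == np.uint8, "run() expects uint8 input"
        norm = img_arr.astype(np.float32) / 255.0
        return self.inference(norm)

    def inference(self, norm_img):
        raise NotImplementedError


class MeanPilot(PilotBase):
    # Toy subclass: derives a "steering" value from the image mean.
    def inference(self, norm_img):
        return float(norm_img.mean()), 0.5
```

With this split, subclasses only ever see normalised float32 data, so the rescaling step no longer needs to live in the car app.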

* Improvements to the car app and handling of KerasPilot parts:
* Added docstring to KerasPilot methods
* Fixed bad merge for model types in complete config

* Add configuration to allow creating sub directories for each tub (like the legacy tub) or by default store all recordings in the data directory directly. (#649)

* Update setup.py with progress module for pi (#650)

* Update setup.py

* Update setup.py


* Add testing for training: (#651)

* added example tub data as tar.gz in tests with 1000 records
* check validation data size in train
* created new test_train.py
* added support for read-only tubs, required to create a read-only tub in a tmp dir
* changed loss in categorical model to equally weight throttle and steering
* changed file mode on scripts to u+x

* Rename `drive.py` to `manage.py` to preserve CLI compat. (#656)

* Train still prompts you to move to the new entry point.
* Also fixes #655

* Cleanup exit handlers in the new datastore. (#657)

* Minor datastore_v2 improvements. (#658)

* Add __exit__ handlers again.
* Use `os.linesep` to deal with line separators consistently.
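The newline-handling changes above can be illustrated with a small sketch. This is not the actual datastore code; the function names are made up. The point is to open files with `newline=''` so Python performs no translation, and to write one fixed separator everywhere so catalogs are byte-identical across operating systems.

```python
import json

# Fixed record separator, chosen explicitly rather than relying on the
# platform's os.linesep at read time.
NEWLINE = '\n'

def write_catalog(path, records):
    # newline='' disables universal-newline translation on write.
    with open(path, 'w', newline='') as f:
        for rec in records:
            f.write(json.dumps(rec) + NEWLINE)

def read_catalog(path):
    # newline='' on read means lines split only on the separator we wrote.
    with open(path, 'r', newline='') as f:
        return [json.loads(line) for line in f if line.strip()]
```

Being explicit this way avoids the Windows-vs-Unix line-ending mismatches the subsequent commits fix.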

* Be explicit about newline characters (#660)

* Be explicit about newline characters #2 (#661)

* Remove the extra flush(). Windows is less forgiving (#662)

* Remove redundant code (#663)

* Windows Doc Updates (#668)

Added more options for installing Donkey Car on Windows:

- Anaconda
- Native
- Windows Subsystem for Linux (WSL) - experimental

These options give more flexibility and could make deployment easier for some individuals. I personally always install Donkey Car natively onto the system Python. I will be looking more into WSL going forward, as it has some interesting benefits.

* Integrate osx into travis. (#665)

* add a --user flag for MakeMovie so that you can select whether or not to draw the user-input line on the video (default: true)

* updated args.draw_user_input

* update to include recommendations

* added in brake functionality for simulator only

Also fixes a typo in the drive-mode return, which checked pilot/throttle instead of brake

* fix autorope/issue #671 (#672)

* Jonathans changes from issue #634 and PR #646 (#676)

* Bump version to 4.0.1 (#677)

* Fix template docopt, as it's called manage.py and not drive.py

* Allow overriding WEB_CONTROL_PORT from env variable
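The environment-variable override above can be sketched in a few lines. This is illustrative only; the default of 8887 here is an assumption for the example, not verified against the actual config.

```python
import os

# Take WEB_CONTROL_PORT from the environment when present, otherwise
# fall back to the configured default (8887 is an assumed value here).
def get_web_control_port(default=8887):
    return int(os.environ.get('WEB_CONTROL_PORT', default))
```

This pattern lets several cars on one host run their web controllers on different ports without editing config files.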

* Add simulator support to basic.py (#682)

* Update cfg_basic.py with simulator parameters

* Switch to using memory-mapped files when reading. (#691)

* This makes reading roughly 100x faster.
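A minimal sketch of the memory-mapped read path (not the actual datastore code): the OS pages the file in on demand, and repeated reads avoid per-read syscall overhead, which is where the speedup on large catalogs comes from.

```python
import mmap

# Read all lines from a file through a read-only memory map.
def read_lines_mmap(path):
    with open(path, 'rb') as f:
        with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
            return mm.read().decode('utf-8').splitlines()
```

For sequential scans of record catalogs, the mapped region can also be sliced at arbitrary offsets without seeking, which suits index-based lookup.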

* new: mqtt telemetry support (#688)

* A configurable training pipeline. (#693)

* Implement a Lazy transformable pipeline.
* Implement basic batching. However, this will need to be improved
  further for models with multiple outputs.
* Replace the old `Sequence` implementation used with a new `Pipeline`.

Test: Ran end to end tests.
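The lazy transformable pipeline above can be sketched in pure Python. All names here are hypothetical, not the real donkeycar classes: transforms are composed eagerly but applied only on iteration, and the result stays sized like its source.

```python
# Minimal sketch of a lazy, sized, transformable pipeline.
class Pipeline:
    def __init__(self, records, transforms=None):
        self.records = records
        self.transforms = list(transforms or [])

    def map(self, fn):
        # Returns a new lazy pipeline; nothing is computed yet.
        return Pipeline(self.records, self.transforms + [fn])

    def __len__(self):
        return len(self.records)

    def __iter__(self):
        for rec in self.records:
            for fn in self.transforms:
                rec = fn(rec)
            yield rec
```

Usage: `list(Pipeline([1, 2, 3]).map(lambda x: x * 2))` evaluates the transform only at iteration time, so large record sets never need to be materialised up front.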

* The telemetry MQTT test was breaking with a timeout on the connection to the server.

* Change the public MQTT server name to one that is listening

* Override inference in TensorRT as a `pass`, as it is not needed here. (#698)

* Need to implement inference to run on the normalised image (#699)

* Fix sequence iterators (#704)

* Change training pipeline from tf.Sequence to tf.data (#701)

* Improve pipeline use: move from building a list of pipelines of single transforms to building a single pipeline with a list of transforms (looping through functions to go from TubRecord -> image -> augment -> normalise -> x, and TubRecord -> y).

Fixed TfmIterators and TfmIterables.
* Iterables are the containers and are sized - these are the user objects
* Iterators are protocol objects to allow iteration, they have no logic and are local to the Iterables
* build/map_pipeline both return sized Iterables
* removed all batch logic, this is not required
* left the generator-based pipeline code in place (commented out), as it is simpler code

Using new small temporary pipeline generator
* this keeps the TubSequence lazy and avoids rolling out the pipeline into a list
* added a test to check consistency of the pipeline
* remove empty (after moved) augmentation file
* removed augmentation from old tub (as it's not needed and we removed the old augmentation)

New pipeline changes:
* moved augmentation into its own class that is used above and can be used as a threaded or non-threaded part
* moved train functionality out of template and added 'donkey train', train.py just a simple dummy script for backward compatibility

* Address code reviews:
* Re-base on current dev to use un-altered sequence.py
* Add iterator consistency test to pipeline tests
* Undo changes in fast_stretch.py
* better tf shape manipulation
* small code improvements in training.py
* remove sleep in augment part

* Address code reviews:
* Add clearing of tubrecord list and minor renamings
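The "single pipeline with a list of transforms" idea above amounts to function composition: loop through a list of functions to go from record to training input. The sketch below uses made-up stand-ins for the real transforms.

```python
# Compose a list of single-step transforms into one record -> x function.
def compose(*fns):
    def apply(record):
        out = record
        for fn in fns:
            out = fn(out)
        return out
    return apply

load_image = lambda rec: rec['image']              # TubRecord -> image
augment = lambda img: img                          # placeholder augmentation
normalise = lambda img: [p / 255.0 for p in img]   # image -> x

record_to_x = compose(load_image, augment, normalise)
```

One composed function per output (x and y) replaces the earlier chain of one pipeline object per transform.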

* Add support for multi-dimensional model input and make the x, y interface symmetrical on the model/training interface: (#707)

- x/y_transform extract x, y as numpy arrays or floats out of the record
- x/y_translate convert the numpy arrays or floats into tf-readable dictionaries used in tf.data
* Simplify model interface by implementing output_types() directly in the base class using output_shapes() dictionary.
* Adding developer guide for own model development
* Updated donkey command documentation
* Improve asserts and type hints in keras.py
* Added missing __init__.py in parts module.
* Add cool ascii text for donkey init and update yml and setup files including mypy
* Remove model training test from Travis and change the test to relative convergence. This avoids random failures in CI.
* Added test of tf.data as used in the training pipeline through re-implementation of data transformation from tub records to tf expected dictionaries, for all currently supported models.

* Minor changes for 4.1 in tub conversion script and developer doc (#708)

* Minor changes for 4.1
* Update conversion script to translate discontinuous data.
* In developer guide add disclaimer for version and correct intra-page links.

* Update doc with donkey train command.
* Update doc with developer section for building own models in donkey 4.1
* Integrate changes from PR feedback

* Incorporate PR feedback
* Add empty records concept to tub
* Minor updates to conversion script

* Incorporate PR feedback
* Add empty record type into conversion script

* docs: fix simple typo, unfarmilar -> unfamiliar (#714)

There is a small typo in docs/guide/host_pc/setup_windows.md.

Should read `unfamiliar` rather than `unfarmilar`.

* Added the ability to train PyTorch models (#706)

* Added in PyTorch and PyTorch Lightning to train a DC model

Successfully able to train a ResNet18-based model using PyTorch
Lightning.

* Removed hard-coded max number of epochs (used for debugging)

* Added an inference transform to ResNet18 to convert PIL -> tensor

* Unsqueezed input tensor during inference for batch dimension

* Reshaped ResNet output from (1, 2) to (2,)

* Added the ability to resume training from a checkpoint

* Added helper print message when tensorboard logging is enabled

* Updated docopt arguments for train.py. Made checkpoint optional

* Changed TorchTubDataset from sub-classing Dataset to IterableDataset

This was done in response to #706 (comment)
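The map-style vs iterable-style distinction behind that change can be shown without torch. The class names below are made up: an iterable-style dataset streams records in order via `__iter__`, instead of serving random access by index via `__getitem__`, which suits streaming tub data.

```python
# Map-style: sized, random access by index (the old TorchTubDataset shape).
class MapStyleTub:
    def __init__(self, records):
        self.records = records

    def __len__(self):
        return len(self.records)

    def __getitem__(self, index):
        return self.records[index]


# Iterable-style: streams records; no __getitem__, so no random access.
class IterableTub:
    def __init__(self, records):
        self.records = records

    def __iter__(self):
        yield from self.records
```

In PyTorch the same split is `torch.utils.data.Dataset` vs `torch.utils.data.IterableDataset`; the DataLoader drives an iterable dataset by iterating rather than sampling indices.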

* Renamed load_image_arr to load_image. Updated load_pil_image

load_pil_image will now handle converting the image to greyscale
(vs. this being done in load_image).

* Updated environments for Mac and Ubuntu. Set Python=3.7

* Updated installation documentation. Added script to setup Nano

Updated the installation instructions for Ubuntu, Mac, and Windows.
Clarified a common issue that occurs when running pip install -e .[pc]
with ZSH.

Also added a script to setup the Jetson Nano and updated the documentation
for the Nano (it previously was installing tensorflow 1.x).

* Added torch flag to setup.py to install pytorch

* Moved pytorch training into base.py and removed from train.py

* Moved Jetson Nano python package installation into requirements.txt

* Formatted with PEP8 to clean up pytorch code

* Updated docs to provide work-around for ZSH pip install -e .[pc]

* Removed duplicate dependencies in conda env files

* ResNet18 torch model now returns training loss history

* Added test file for PyTorch training

Still need to make sure this passes Travis CI.

* Added lightning_logs to .gitignore

* You can now specify the default AI framework to use in config.py

This reduces the number of command line arguments you are required
to provide.
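A hedged sketch of how a config-driven framework default might look: the framework comes from config.py unless overridden on the command line. The setting name `DEFAULT_AI_FRAMEWORK` and its values here are assumptions for illustration, not verified against the actual config.

```python
# Assumed config setting (illustrative): which training backend to use
# when no --framework argument is given on the command line.
DEFAULT_AI_FRAMEWORK = 'tensorflow'  # or 'pytorch'

def resolve_framework(cli_value=None):
    # Command-line value wins; otherwise fall back to the config default.
    return cli_value if cli_value else DEFAULT_AI_FRAMEWORK
```

With a default in config, `donkey train` only needs a framework flag when deviating from the user's usual setup.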

* get_model_by_type for PyTorch now lazy imports ResNet18

* Added help message to torch_train. Got rid of linear model type

* Updated pytorch tests and fixed some syntax errors

* ResNet18 example input shape updated to be (B, 3, 224, 224)

Also now passing output_shape to load_resnet18 to modify how many
output classes are used

* No longer pinning requirement versions for Jetson Nano

* Fixed formatting in setup.py

* update configuration so that IMAGE_H and IMAGE_W are passed through to the simulator (#674)

Co-authored-by: Jordan Whited <[email protected]>
Co-authored-by: Tawn Kramer <[email protected]>
Co-authored-by: wallarug <[email protected]>
Co-authored-by: sctse999 <[email protected]>
Co-authored-by: DGarbanzo <[email protected]>
Co-authored-by: Craig <[email protected]>
Co-authored-by: DocGarbanzo <[email protected]>
Co-authored-by: Meir Tseitlin <[email protected]>
Co-authored-by: Tim Gates <[email protected]>
Co-authored-by: Eric Wiener <[email protected]>
11 people authored Jan 4, 2021
1 parent 1e6d8a3 commit 22e5542
Showing 89 changed files with 5,052 additions and 3,400 deletions.
4 changes: 4 additions & 0 deletions .gitignore
@@ -23,3 +23,7 @@ build

# codecov
htmlcov/

# PyTorch
lightning_logs
tb_logs
43 changes: 19 additions & 24 deletions .travis.yml
@@ -1,37 +1,33 @@
---
language: python
# this is required for python 3.7
dist: xenial
# This list should match the versions listed in setup.py
python:
- 3.6
- 3.7
# travis pipeline
dist: focal

os:
- linux
- osx

install:
- sudo apt-get update -qq
- wget https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh -O miniconda.sh
- if [ "$TRAVIS_OS_NAME" = "linux" ]; then
sudo apt-get update -qq;
wget https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh -O miniconda.sh;
fi
- if [ "$TRAVIS_OS_NAME" = "osx" ]; then
wget https://repo.continuum.io/miniconda/Miniconda3-latest-MacOSX-x86_64.sh -O miniconda.sh;
fi
- bash miniconda.sh -b -p $HOME/miniconda
- export PATH="$HOME/miniconda/bin:$PATH"
- source "$HOME/miniconda/etc/profile.d/conda.sh"
- hash -r
- conda config --set always_yes yes --set changeps1 no
- conda config --append channels conda-forge
- conda update -q conda
# Useful for debugging any issues with conda
- conda info -a
- conda create -q -n test-environment-$TRAVIS_PYTHON_VERSION python=$TRAVIS_PYTHON_VERSION
- source activate test-environment-$TRAVIS_PYTHON_VERSION
- pip install --upgrade pip
- hash -r
- conda info -a
- pip install -e .[tf]
- pip install -e .[pc]
- pip install -e .[dev]
- pip install -e .[ci]
- pip install -e .[mm1]
- echo "pip freeze virtualenv=test-environment python=${TRAVIS_PYTHON_VERSION}"
- pip freeze > test-environment-${TRAVIS_PYTHON_VERSION}.txt
- cat test-environment-${TRAVIS_PYTHON_VERSION}.txt
- if [ "$TRAVIS_OS_NAME" = "osx" ]; then conda env create -f install/envs/mac.yml ; fi
- if [ "$TRAVIS_OS_NAME" = "linux" ]; then conda env create -f install/envs/ubuntu.yml ; fi
- conda activate donkey
- pip install -e .
- conda env export > environment.yml
- cat environment.yml

script:
- pytest -v donkeycar/tests --cov=./
@@ -65,7 +61,6 @@ jobs:
- stage: lint
name: black
language: python
python: 3.6
install:
- pip install --upgrade pip
- hash -r
8 changes: 3 additions & 5 deletions README.md
@@ -12,7 +12,7 @@ community contributions.
#### Quick Links
* [Donkeycar Updates & Examples](http://donkeycar.com)
* [Build instructions and Software documentation](http://docs.donkeycar.com)
* [Slack / Chat](https://donkey-slackin.herokuapp.com/)
* [Discord / Chat](https://discord.gg/PN6kFeA)

![donkeycar](./docs/assets/build_hardware/donkey2.png)

@@ -37,7 +37,7 @@ The donkey car is controlled by running a sequence of events
import time
from donkeycar import Vehicle
from donkeycar.parts.cv import CvCam
from donkeycar.parts.datastore import TubWriter
from donkeycar.parts.datastore_v2 import TubWriter
V = Vehicle()

IMAGE_W = 160
@@ -53,9 +53,7 @@ while cam.run() is None:
time.sleep(1)

#add tub part to record images
tub = TubWriter(path='./dat',
inputs=['image'],
types=['image_array'])
tub = TubWriter(path='./dat', inputs=['image'], types=['image_array'])
V.add(tub, inputs=['image'], outputs=['num_records'])

#start the drive loop at 10 Hz
1 change: 1 addition & 0 deletions docs/guide/host_pc/setup_mac.md
@@ -37,6 +37,7 @@ conda env create -f install/envs/mac.yml
conda activate donkey
pip install -e .[pc]
```
Note: if you are using ZSH (you'll know if you are), you won't be able to run `pip install -e .[pc]`. You'll need to escape the brackets and run `pip install -e .\[pc\]`.

* Tensorflow GPU

13 changes: 12 additions & 1 deletion docs/guide/host_pc/setup_ubuntu.md
@@ -42,19 +42,30 @@ conda env create -f install/envs/ubuntu.yml
conda activate donkey
pip install -e .[pc]
```
Note: if you are using ZSH (you'll know if you are), you won't be able to run `pip install -e .[pc]`. You'll need to escape the brackets and run `pip install -e .\[pc\]`.

* Optional Install Tensorflow GPU - only for NVidia Graphics cards

You should have an NVidia GPU with the latest drivers. Conda will handle installing the correct CUDA and cuDNN libraries for the version of tensorflow you are using.

```bash
conda install tensorflow-gpu==1.13.1
conda install tensorflow-gpu==2.2.0
```

* Optional Install Coral edge tpu compiler

If you have a Google Coral Edge TPU, you may wish to compile models. You will need to install the edgetpu_compiler executable. Follow [their instructions](https://coral.withgoogle.com/docs/edgetpu/compiler/).

* Optionally configure PyTorch to use GPU - only for NVidia Graphics cards

If you have an NVidia card, you should update to the latest drivers and [install the CUDA SDK](https://www.tensorflow.org/install/gpu#windows_setup).

```bash
conda install cudatoolkit=<CUDA Version> -c pytorch
```

You should replace `<CUDA Version>` with your CUDA version. Any version above 10.0 should work. You can find out your CUDA version by running `nvcc --version` or `nvidia-smi`. If the versions given by these two commands don't match, go with the version given by `nvidia-smi`.

* Create your local working dir:

```bash
145 changes: 142 additions & 3 deletions docs/guide/host_pc/setup_windows.md
@@ -1,4 +1,14 @@
## Install Donkeycar on Windows
# Windows

Windows provides a few different methods for setting up and installing Donkey Car.

1. Miniconda
2. Native
3. Windows Subsystem for Linux (WSL) - Experimental

If you are unfamiliar with these options or unsure which to choose, please use option 1 above.

## Install Donkeycar on Windows (miniconda)

![donkey](/assets/logos/windows_logo.png)

@@ -33,19 +43,30 @@ conda env remove -n donkey
* Create the Python anaconda environment

```bash
conda env create -f install\envs\windows.yml
conda env create -f install\envs\ubuntu.yml
conda activate donkey
pip install -e .[pc]
```
Note: if you are using ZSH (you'll know if you are), you won't be able to run `pip install -e .[pc]`. You'll need to escape the brackets and run `pip install -e .\[pc\]`.

* Optionally Install Tensorflow GPU - only for NVidia Graphics cards

If you have an NVidia card, you should update to the latest drivers and [install the CUDA SDK](https://www.tensorflow.org/install/gpu#windows_setup).

```bash
conda install tensorflow-gpu==1.13.1
conda install tensorflow-gpu==2.2.0
```

* Optionally configure PyTorch to use GPU - only for NVidia Graphics cards

If you have an NVidia card, you should update to the latest drivers and [install the CUDA SDK](https://www.tensorflow.org/install/gpu#windows_setup).

```bash
conda install cudatoolkit=<CUDA Version> -c pytorch
```

You should replace `<CUDA Version>` with your CUDA version. Any version above 10.0 should work. You can find out your CUDA version by running `nvcc --version` or `nvidia-smi`.

* Create your local working dir:

```bash
@@ -57,5 +78,123 @@ donkey createcar --path ~/mycar
> Python libraries
----
### Next let's [install software on Donkeycar](/guide/install_software/#step-2-install-software-on-donkeycar)

---

## Install Donkeycar on Windows (native)

![donkey](/assets/logos/windows_logo.png)

* Install [Python 3.6 (or later)](https://www.python.org/downloads/)

* Install [Git Bash](https://gitforwindows.org/). During install make sure you check Git to run 'from command line and also from 3rd-party software'.

* Open Command Prompt as Administrator via the Start Menu (cmd.exe | right-click | run as administrator)

* Change to a folder that you would like to use for all your projects

```shell
mkdir projects
cd projects
```

* Get the latest donkey from Github.

```bash
git clone https://github.com/autorope/donkeycar
cd donkeycar
git checkout master
```

> NOTE: The `dev` branch has the latest (unstable) version of donkeycar with experimental features.
* Install Donkeycar into Python

```
pip3 install -e .[pc]
```

* Recommended for GPU Users: Install Tensorflow GPU - only for NVIDIA Graphics cards

If you have an NVIDIA card, you should update to the latest drivers and [install the CUDA SDK](https://www.tensorflow.org/install/gpu#windows_setup).

```bash
pip3 install tensorflow
```

* Create your local working dir:

```bash
donkey createcar --path \Users\<username>\projects\mycar --template complete
```

> **Templates**
> There are a number of different templates to choose from in Donkey Car.
> basic | complete
> You can find all the templates in the [donkeycar/donkeycar/templates](https://github.com/autorope/donkeycar/tree/dev/donkeycar/templates) folder
---
### Next let's [install software on Donkeycar](/guide/install_software/#step-2-install-software-on-donkeycar)
---


## Install Donkeycar on Windows (WSL)

The Windows Subsystem for Linux (WSL) lets developers run a GNU/Linux environment, including most command-line tools, utilities, and applications, directly on Windows, unmodified, without the overhead of a traditional virtual machine or dual-boot setup.

* Install [Windows Subsystem for Linux](https://docs.microsoft.com/en-us/windows/wsl/install-win10).
1. Turn on Windows 10 "Windows Subsystem for Linux" Feature (Settings > Apps > Programs and Features > Turn Windows features on or off)
2. Download a Linux Distribution from the Microsoft Store (recommend [Ubuntu](https://www.microsoft.com/en-us/p/ubuntu/9nblggh4msv6?activetab=pivot:overviewtab) Latest)
3. Open the Ubuntu App and configure.

* Open the Ubuntu App to get a prompt window via Start Menu | Ubuntu

* Install `git` using `sudo apt install git`

* Install `python3` using `sudo apt install python3`

* Change to a directory that you would like to use as the head of all your projects.

```bash
mkdir projects
cd projects
```

* Get the latest donkey from Github.

```bash
git clone https://github.com/autorope/donkeycar
cd donkeycar
git checkout master
```

> NOTE: The `dev` branch has the latest (unstable) version of donkeycar with experimental features.
* Install Donkeycar into Python

```
pip3 install -e .[pc]
```

* Experimental Support - GPU Users: Install Tensorflow GPU - only for NVIDIA Graphics cards

If you have an NVIDIA card, you should update to the latest drivers and [install the CUDA SDK](https://www.tensorflow.org/install/gpu#windows_setup).

```bash
pip3 install tensorflow
```

* Create your local working dir:

```bash
donkey createcar --path /path/to/projects/mycar --template complete
```

> **Templates**
> There are a number of different templates to choose from in Donkey Car.
> basic | complete
> You can find all the templates in the [donkeycar/donkeycar/templates](https://github.com/autorope/donkeycar/tree/dev/donkeycar/templates) folder
---
### Next let's [install software on Donkeycar](/guide/install_software/#step-2-install-software-on-donkeycar)
59 changes: 43 additions & 16 deletions docs/guide/robot_sbc/setup_jetson_nano.md
@@ -16,10 +16,50 @@ Visit the official [Nvidia Jetson Nano Getting Started Guide](https://developer.

ssh into your vehicle. Use the terminal on Ubuntu or Mac, or [PuTTY](https://www.chiark.greenend.org.uk/~sgtatham/putty/latest.html) on Windows.

Note: you can either proceed with this tutorial or, if you have JetPack 4.4 installed, use a script to automate the setup. The script is located in `donkeycar/install/nano/install-jp44.sh`. You will need to edit line #3 of the script and replace the default password with your password. This script installs all Git repositories into a ~/projects directory. If you wish to use a different directory, you will need to change this as well (replace all instances of ~/projects with your desired folder path).

First install some packages with `apt-get`.
```bash
sudo apt-get update
sudo apt-get upgrade
sudo apt-get install build-essential python3 python3-dev python3-pip python3-pandas python3-h5py libhdf5-serial-dev hdf5-tools nano ntp
sudo apt-get update -y
sudo apt-get upgrade -y
sudo apt-get install -y libhdf5-serial-dev hdf5-tools libhdf5-dev zlib1g-dev zip libjpeg8-dev liblapack-dev libblas-dev gfortran
sudo apt-get install -y python3-dev python3-pip
sudo apt-get install -y libxslt1-dev libxml2-dev libffi-dev libcurl4-openssl-dev libssl-dev libpng-dev libopenblas-dev
sudo apt-get install -y git
sudo apt-get install -y openmpi-doc openmpi-bin libopenmpi-dev libopenblas-dev
```

Next, you will need to install packages with `pip`:
```bash
sudo -H pip3 install -U pip testresources setuptools
sudo -H pip3 install -U futures==3.1.1 protobuf==3.12.2 pybind11==2.5.0
sudo -H pip3 install -U cython==0.29.21
sudo -H pip3 install -U numpy==1.19.0
sudo -H pip3 install -U future==0.18.2 mock==4.0.2 h5py==2.10.0 keras_preprocessing==1.1.2 keras_applications==1.0.8 gast==0.3.3
sudo -H pip3 install -U grpcio==1.30.0 absl-py==0.9.0 py-cpuinfo==7.0.0 psutil==5.7.2 portpicker==1.3.1 six requests==2.24.0 astor==0.8.1 termcolor==1.1.0 wrapt==1.12.1 google-pasta==0.2.0
sudo -H pip3 install -U scipy==1.4.1
sudo -H pip3 install -U pandas==1.0.5
sudo -H pip3 install -U gdown

# This will install tensorflow as a system package
sudo -H pip3 install --pre --extra-index-url https://developer.download.nvidia.com/compute/redist/jp/v44 tensorflow==2.2.0+nv20.6
```

Finally, you can install PyTorch:
```bash
# Install PyTorch v1.7 - torchvision v0.8.1
wget https://nvidia.box.com/shared/static/wa34qwrwtk9njtyarwt5nvo6imenfy26.whl -O torch-1.7.0-cp36-cp36m-linux_aarch64.whl
sudo -H pip3 install ./torch-1.7.0-cp36-cp36m-linux_aarch64.whl

# Install PyTorch Vision
sudo apt-get install libjpeg-dev zlib1g-dev libpython3-dev libavcodec-dev libavformat-dev libswscale-dev

# You can replace the following line with wherever you want to store your Git repositories
mkdir -p ~/projects; cd ~/projects
git clone --branch v0.8.1 https://github.com/pytorch/vision torchvision
cd torchvision
export BUILD_VERSION=0.8.1
sudo python3 setup.py install
```

Optionally, you can install the RPi.GPIO clone for Jetson Nano from [here](https://github.com/NVIDIA/jetson-gpio). This is not required for the default setup, but can be useful if using LEDs or other GPIO-driven devices.
@@ -50,19 +90,6 @@ git checkout master
pip install -e .[nano]
```

For Jetpack 4.2,
```
pip install --extra-index-url https://developer.download.nvidia.com/compute/redist/jp/v42 tensorflow-gpu==1.15.0+nv19.11
```

For Jetpack 4.4,
```
pip install --pre --extra-index-url https://developer.download.nvidia.com/compute/redist/jp/v44 'tensorflow<2'
```


Note: This last command can take some time to compile grpcio.

## Step 5: Install PyGame (Optional)

If you plan to use a USB camera, you will also want to setup pygame:
