This repository has been archived by the owner on Jul 16, 2024. It is now read-only.

Reorganized documentation and added new articles #297

Merged
merged 35 commits on Jan 3, 2024
Commits
42bedaa
added updates docs
mdurrani808 Nov 23, 2023
6067bd4
fix most broken links
mdurrani808 Nov 23, 2023
9ae1388
Fix broken links (#299)
mdurrani808 Dec 21, 2023
bea749d
initial docs for sim overhaul [WIP] (#283)
jheidegger Dec 21, 2023
85729b9
Add build instructions for deploy (#298)
gerth2 Dec 29, 2023
f38f3aa
Update source/docs/programming/photonlib/robot-pose-estimator.rst
mdurrani808 Dec 30, 2023
9769290
Update source/docs/apriltag-pipelines/3D-tracking.rst
mdurrani808 Dec 30, 2023
f0958d2
Update source/docs/apriltag-pipelines/about-apriltags.rst
mdurrani808 Dec 30, 2023
36ad2ed
Update source/docs/apriltag-pipelines/coordinate-systems.rst
mdurrani808 Dec 30, 2023
a0e14f2
Update source/docs/apriltag-pipelines/coordinate-systems.rst
mdurrani808 Dec 30, 2023
18956b5
Update source/docs/apriltag-pipelines/coordinate-systems.rst
mdurrani808 Dec 30, 2023
aedd7d7
Apply suggestions from code review
mdurrani808 Dec 30, 2023
8db2646
added updates docs
mdurrani808 Nov 23, 2023
a118f6a
fix most broken links
mdurrani808 Nov 23, 2023
f5d4563
Update source/docs/programming/photonlib/robot-pose-estimator.rst
mdurrani808 Dec 30, 2023
167b369
Update source/docs/apriltag-pipelines/3D-tracking.rst
mdurrani808 Dec 30, 2023
7c67c54
Update source/docs/apriltag-pipelines/about-apriltags.rst
mdurrani808 Dec 30, 2023
3cda294
Update source/docs/apriltag-pipelines/coordinate-systems.rst
mdurrani808 Dec 30, 2023
8ff6e88
Update source/docs/apriltag-pipelines/coordinate-systems.rst
mdurrani808 Dec 30, 2023
841ed63
Update source/docs/apriltag-pipelines/coordinate-systems.rst
mdurrani808 Dec 30, 2023
dc2d3ea
Apply suggestions from code review
mdurrani808 Dec 30, 2023
f5835e7
Merge branch 'organize' of https://github.com/mdurrani808/photonvisio…
mdurrani808 Dec 31, 2023
e3e56bb
Actually fixed merge conflicts..
mdurrani808 Dec 31, 2023
c993e4b
added supportedhardware to index
mdurrani808 Jan 1, 2024
2d760d5
fixed warnings
mdurrani808 Jan 1, 2024
d6e056e
fix simulation docs
mdurrani808 Jan 1, 2024
f673b5a
fix read the docs build
mdurrani808 Jan 1, 2024
bd35f8a
try 2
mdurrani808 Jan 1, 2024
c3c7ee8
first pass at lint
mdurrani808 Jan 1, 2024
0304bda
can never get lint on the first try
mdurrani808 Jan 1, 2024
1d5f780
fix calib video
mdurrani808 Jan 1, 2024
dc62f49
remove redundant hardware page
mdurrani808 Jan 1, 2024
7b760a2
fix the errors
mdurrani808 Jan 1, 2024
73704fe
matt fixes
mdurrani808 Jan 2, 2024
e3b2359
added updated wording
mdurrani808 Jan 3, 2024
32 changes: 0 additions & 32 deletions azure-pipelines.yml

This file was deleted.

Binary file modified requirements.txt
32 changes: 0 additions & 32 deletions source/azure-pipelines.yml

This file was deleted.

1 change: 0 additions & 1 deletion source/conf.py
@@ -79,7 +79,6 @@ def setup(app):
"sidebar_hide_name": True,
"light_logo": "assets/PhotonVision-Header-onWhite.png",
"dark_logo": "assets/PhotonVision-Header-noBG.png",
"announcement": "If you are new to PhotonVision, click <a href=https://docs.photonvision.org/en/latest/docs/getting-started/installation/index.html>here!</a>.",

"light_css_variables": {
"font-stack": '-apple-system, BlinkMacSystemFont, avenir next, avenir, segoe ui, helvetica neue, helvetica, Ubuntu, roboto, noto, arial, sans-serif;',
@@ -1,5 +1,22 @@
AprilTag Tuning
===============
2D AprilTag Tuning / Tracking
=============================

Tracking AprilTags
------------------

Before you get started tracking AprilTags, ensure that you have followed the previous sections on installation, wiring, and networking. Next, open the Web UI, go to the top right card, and switch to the "AprilTag" or "Aruco" type. You should see a screen similar to the one below.

.. image:: images/apriltag.png
:align: center

|

You are now able to detect and track AprilTags in 2D (yaw, pitch, roll, etc.). In order to get 3D data from your AprilTags, please see :ref:`here <docs/apriltag-pipelines/3D-tracking:3D Tracking>`.
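
As a rough sketch of consuming this 2D data in robot code, PhotonLib (Java) exposes the best target's yaw and pitch. The camera name below is an assumption -- use the name shown in the Web UI.

.. code-block:: java

   import org.photonvision.PhotonCamera;
   import org.photonvision.targeting.PhotonTrackedTarget;

   // Name is hypothetical; it must match the camera name in the PhotonVision UI.
   PhotonCamera camera = new PhotonCamera("frontCamera");

   // In a periodic method of your robot code:
   var result = camera.getLatestResult();
   if (result.hasTargets()) {
       PhotonTrackedTarget target = result.getBestTarget();
       double yawDegrees = target.getYaw();     // horizontal offset to the target
       double pitchDegrees = target.getPitch(); // vertical offset to the target
       int tagId = target.getFiducialId();      // the detected AprilTag's ID
   }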

Tuning AprilTags
----------------

AprilTag pipelines come with reasonable defaults to get you up and running with tracking. However, in order to optimize your performance and accuracy, you must tune your AprilTag pipeline using the settings below. Note that the settings below are different between the AprilTag and Aruco detectors but the concepts are the same.

.. image:: images/apriltag-tune.png
:scale: 45 %
Expand All @@ -8,38 +25,42 @@ AprilTag Tuning
|

Target Family
-------------
Target families are defined by two numbers (before and after the h). The first number is the number of bits the tag is able to encode (which means more tags are available in the respective family) and the second is the hamming distance. Hamming distance describes the ability for error correction while identifying tag ids. A high hamming distance generally means that it will be easier for a tag to be identified even if there are errors. However, as hamming distance increases, the number of available tags decreases. The 2023 FRC game will be using 16h5 tags, which can be found `here <https://github.com/AprilRobotics/apriltag-imgs/tree/master/tag16h5>`_. PhotonVision also supports the usage of 36h11 tags.
^^^^^^^^^^^^^

Target families are defined by two numbers (before and after the h). The first number is the number of bits the tag is able to encode (which means more tags are available in the respective family) and the second is the hamming distance. Hamming distance describes the ability for error correction while identifying tag ids. A high hamming distance generally means that it will be easier for a tag to be identified even if there are errors. However, as hamming distance increases, the number of available tags decreases. The 2024 FRC game will be using 36h11 tags, which can be found `here <https://github.com/AprilRobotics/apriltag-imgs/tree/master/tag36h11>`_.

Decimate
--------
^^^^^^^^

Decimation (also known as down-sampling) is the process of reducing the sampling frequency of a signal (in our case, the image). Increasing decimate will lead to an increased detection rate while decreasing detection distance. We recommend keeping this at the default value.

Blur
----
^^^^
This controls the sigma of the Gaussian blur applied before tag detection. In clearer terms, increasing blur will make the image blurrier, and decreasing it will make it closer to the original image. We strongly recommend that you keep blur to a minimum (0) due to its high performance cost, unless you have an extremely noisy image.


Threads
-------
^^^^^^^

Threads refers to the threads within your coprocessor's CPU. The theoretical maximum is device dependent, but we recommend that users stick to one less than the number of CPU threads that your coprocessor has. Increasing threads will increase performance at the cost of increased CPU load, higher temperatures, etc. It may take some experimentation to find the optimal value for your system.

Refine Edges
------------
^^^^^^^^^^^^

The edges of each polygon are adjusted to "snap to" strong color differences surrounding it. It is recommended to use this in tandem with decimate, as it can increase the quality of the initial estimate.

Pose Iterations
---------------
^^^^^^^^^^^^^^^

Pose iterations represents the number of iterations the AprilTag algorithm performs to converge on its pose solution(s). A smaller number between 0-100 is recommended. A smaller number of iterations causes a noisier set of poses when looking at the tag straight on, while higher values much more consistently stick to a (potentially wrong) pair of poses. WPILib contains many useful filter classes to account for a noisy tag reading; a sketch is shown below.
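
For example, a minimal sketch of smoothing a noisy reading with a WPILib filter (the tap count is an assumption to tune for your setup):

.. code-block:: java

   import edu.wpi.first.math.filter.LinearFilter;

   // Moving-average filter over the last 5 samples.
   LinearFilter yawFilter = LinearFilter.movingAverage(5);

   // Call once per robot loop with the newest reading.
   double smoothedYaw = yawFilter.calculate(target.getYaw());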

Max Error Bits
--------------
^^^^^^^^^^^^^^

Max error bits, also known as hamming distance, is the number of positions at which corresponding pieces of the data / tag differ. Put more simply, this is the number of bits (think of these as squares in the tag) that need to be changed / corrected in the tag for it to be correctly detected. A higher value means that more tags will be detected, while a lower value cuts out tags that could be "questionable" in terms of detection.

We recommend a value of 0 for the 16h5 and 7+ for the 36h11 family.

Decision Margin Cutoff
-----------------------
^^^^^^^^^^^^^^^^^^^^^^
The decision margin cutoff is how much “margin” the detector has left before it rejects a tag; increasing this rejects poorer tags. We recommend you keep this value around 30.
16 changes: 16 additions & 0 deletions source/docs/apriltag-pipelines/3D-tracking.rst
@@ -0,0 +1,16 @@
3D Tracking
===========

3D AprilTag tracking will allow you to track the XYZ position and orientation of a tag in the camera frame. This is useful for robot pose estimation and other applications like autonomous scoring. In order to use 3D tracking, you must first :ref:`calibrate your camera <docs/calibration/calibration:Calibrating Your Camera>`. Once you have, you need to enable 3D mode in the UI, and you will now be able to get 3D pose information from the tag! For information on getting and using this information in your code, see :ref:`the programming reference <docs/programming/index:Programming Reference>`.
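
As a rough illustration, a PhotonLib (Java) sketch for reading the 3D pose might look like the following; it assumes a calibrated camera in 3D mode and a `camera` object set up as described in the programming reference.

.. code-block:: java

   import edu.wpi.first.math.geometry.Transform3d;

   var result = camera.getLatestResult();
   if (result.hasTargets()) {
       // Camera-to-tag transform: translation in meters, plus a 3D rotation.
       Transform3d cameraToTag = result.getBestTarget().getBestCameraToTarget();
       double distanceMeters = cameraToTag.getTranslation().getNorm();
   }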

Ambiguity
---------
Translating from 2D to 3D using data from the calibration and the four tag corners can lead to "pose ambiguity", where it appears that the AprilTag pose is flipping between two different poses. You can read more about this issue `here <https://docs.wpilib.org/en/stable/docs/software/vision-processing/apriltag/apriltag-intro.html#d-to-3d-ambiguity>`_.

VIDEO HERE

There are a few steps you can take to resolve/mitigate this issue:

1. Mount cameras at oblique angles so it is less likely that the tag will be seen straight on.
2. Use the :ref:`MultiTag system <docs/apriltag-pipelines/multitag:MultiTag Localization>` in order to combine the corners from multiple tags to get a more accurate and unambiguous pose.
3. Reject all tag poses where the ambiguity ratio (available via PhotonLib) is greater than 0.2, as in the sketch below.
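
A minimal sketch of step 3, assuming a PhotonLib `camera` object (treating a negative ambiguity value as "unavailable" is an assumption):

.. code-block:: java

   var result = camera.getLatestResult();
   if (result.hasTargets()) {
       var target = result.getBestTarget();
       double ambiguity = target.getPoseAmbiguity();
       if (ambiguity >= 0.0 && ambiguity < 0.2) {
           // Pose is unambiguous enough to use for estimation.
           var cameraToTag = target.getBestCameraToTarget();
       }
   }
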
13 changes: 13 additions & 0 deletions source/docs/apriltag-pipelines/about-apriltags.rst
Original file line number Diff line number Diff line change
@@ -0,0 +1,13 @@
About AprilTags
===============

.. image:: images/pv-apriltag.png
:align: center
:scale: 20 %

AprilTags are a type of visual fiducial marker commonly used within robotics and computer vision applications. Visual fiducial markers are artificial landmarks added to a scene to allow "localization" (finding your current position) via images. In simpler terms, they act as known points of reference that you can use to find your current location. They are similar to QR codes in that they encode information; however, they hold much less data. This has the added benefit of making them much easier to track from long distances and at low resolutions. By placing AprilTags in known locations around the field and detecting them using PhotonVision, you can easily get full field localization / pose estimation. Alternatively, you can use AprilTags the same way you used retroreflective tape, simply using them to turn to the goal without any pose estimation.

A more technical explanation can be found in the `WPILib documentation <https://docs.wpilib.org/en/latest/docs/software/vision-processing/apriltag/apriltag-intro.html>`_.

.. note:: You can get FIRST's `official PDF of the targets used in 2023 here <https://firstfrc.blob.core.windows.net/frc2023/FieldAssets/TeamVersions/AprilTags-UserGuideandImages.pdf>`_.

34 changes: 34 additions & 0 deletions source/docs/apriltag-pipelines/coordinate-systems.rst
@@ -0,0 +1,34 @@
Coordinate Systems
==================

Field and Robot Coordinate Frame
--------------------------------

PhotonVision follows the WPILib conventions for the robot and field coordinate-systems, as defined `here <https://docs.wpilib.org/en/stable/docs/software/advanced-controls/geometry/coordinate-systems.html>`_.

You define the camera to robot transform in the robot coordinate frame.
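
As a minimal sketch, such a transform can be expressed with WPILib's geometry classes (x forward, y left, z up; meters and radians). The numbers and the pitch sign are assumptions for illustration.

.. code-block:: java

   import edu.wpi.first.math.geometry.Rotation3d;
   import edu.wpi.first.math.geometry.Transform3d;
   import edu.wpi.first.math.geometry.Translation3d;
   import edu.wpi.first.math.util.Units;

   // Camera mounted 0.3 m forward of robot center and 0.5 m up,
   // tilted upward by 15 degrees (sign per WPILib conventions).
   Transform3d robotToCamera = new Transform3d(
       new Translation3d(0.3, 0.0, 0.5),
       new Rotation3d(0.0, Units.degreesToRadians(-15.0), 0.0));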

Camera Coordinate Frame
-----------------------

The camera coordinate system is defined as follows, relative to the camera sensor itself:

* The origin is the center.
* The x-axis points to the left (when looking at the camera sensor from the front).
* The y-axis points up.
* The z-axis points out.

AprilTag Coordinate Frame
-------------------------

The AprilTag coordinate system is defined as follows, relative to the center of the AprilTag itself:

* The origin is the center.
* The x-axis points to the right when looking at the tag straight on.
* The y-axis points upwards.


.. image:: images/apriltag-coords.png
:align: center
:scale: 50%
:alt: AprilTag Coordinate System
15 changes: 15 additions & 0 deletions source/docs/apriltag-pipelines/detector-types.rst
@@ -0,0 +1,15 @@
AprilTag Pipeline Types
=======================

PhotonVision offers two different AprilTag pipeline types based on different implementations of the underlying algorithm. Each one has its advantages / disadvantages, which are detailed below.

.. note:: Both of these pipeline types detect AprilTag markers; they are simply two different algorithms for doing so.

AprilTag
--------

The AprilTag pipeline type is based on the `AprilTag <https://april.eecs.umich.edu/software/apriltag.html>`_ library from the University of Michigan, and we recommend it for most use cases. It is (to our understanding) the most accurate pipeline type, but it is also ~2x slower than AruCo. This was the pipeline type used by teams in the 2023 season and is well tested.

AruCo
-----
The AruCo pipeline is based on the `AruCo <https://docs.opencv.org/4.8.0/d9/d6a/group__aruco.html>`_ implementation from OpenCV. It runs at ~2x the framerate and ~2x lower latency than the AprilTag pipeline type, but it is less accurate. We recommend this pipeline type for teams that need to run at a higher framerate or have a lower-powered device. This pipeline type is new for the 2024 season and is not as well tested as AprilTag.
11 changes: 11 additions & 0 deletions source/docs/apriltag-pipelines/index.rst
@@ -0,0 +1,11 @@
AprilTag Detection
==================

.. toctree::

about-apriltags
detector-types
2D-tracking-tuning
3D-tracking
multitag
coordinate-systems
2 changes: 2 additions & 0 deletions source/docs/apriltag-pipelines/multitag.rst
@@ -0,0 +1,2 @@
MultiTag Localization
=====================
@@ -37,14 +37,14 @@ Accurate camera calibration is required in order to get accurate pose measuremen
Following the ideas above should help in getting an accurate calibration.

Calibration Steps
=================
-----------------

Your camera can be calibrated using either the utility built into PhotonVision, which performs all the calculations on your coprocessor, or using a website such as `calibdb <https://calibdb.net/>`, which uses a USB webcam connected to your laptop. The integrated calibration utility is currently the only one that works with ribbon-cable CSI cameras or Limelights, but for USB webcams, calibdb is the preferred option.
Your camera can be calibrated using either the utility built into PhotonVision, which performs all the calculations on your coprocessor, or using a website such as `calibdb <https://calibdb.net/>`_, which uses a USB webcam connected to your laptop. The integrated calibration utility is currently the only one that works with ribbon-cable CSI cameras or Limelights, but for USB webcams, calibdb is the preferred option.

Calibrating using calibdb
-------------------------

Calibdb uses a modified chessboard/aruco marker combination target called `ChArUco targets <https://docs.opencv.org/3.4/df/d4a/tutorial_charuco_detection.html>`. The website currently only supports Chrome browser.
Calibdb uses a modified chessboard/aruco marker combination target called `ChArUco targets. <https://docs.opencv.org/4.8.0/df/d4a/tutorial_charuco_detection.html>`_ The website currently only supports Chrome browser.

Download and print out (or display on a monitor) the calibration by clicking Show Pattern. Click "Calibrate" and align your camera with the ghost overlay of the calibration board. The website automatically calculates the next position and displays it for you. When complete, download the calibration (do **not** use the OpenCV format). Reconnect your camera to your coprocessor and navigate to the PhotonVision web interface's camera tab. Ensure the correct camera is selected, and click the "Import from CalibDB" button. Your calibration data will be automatically saved and applied!

@@ -82,7 +82,7 @@ Now, we'll capture images of our chessboard from various angles. The most import
Accessing Calibration Images
----------------------------

For advanced users, these calibrations can be later accessed by :ref:`exporting your config directory <docs/hardware/config:Directory Structure>` and viewing the camera's config.json file. Furthermore, the most recent snapshots will be saved to the calibImgs directory. The example images below are from `the calibdb website <https://calibdb.net>` -- focus on how the target is oriented, as the same general tips for positioning apply for chessboard targets as for ChArUco.
For advanced users, these calibrations can be later accessed by :ref:`exporting your config directory <docs/additional-resources/config:Directory Structure>` and viewing the camera's config.json file. Furthermore, the most recent snapshots will be saved to the calibImgs directory. The example images below are from `the calibdb website <https://calibdb.net>` -- focus on how the target is oriented, as the same general tips for positioning apply for chessboard targets as for ChArUco.

.. image:: images/calibImgs.png
:width: 600
2 changes: 1 addition & 1 deletion source/docs/examples/aimingatatarget.rst
@@ -13,7 +13,7 @@ Knowledge and Equipment Needed
Code
-------

Now that you have properly set up your vision system and have tuned a pipeline, you can now aim your robot/turret at the target using the data from PhotonVision. This data is reported over NetworkTables and includes: latency, whether there is a target detected or not, pitch, yaw, area, skew, and target pose relative to the robot. This data will be used/manipulated by our vendor dependency, PhotonLib. The documentation for the Network Tables API can be found :ref:`here <docs/programming/nt-api:Getting Target Information>` and the documentation for PhotonLib :ref:`here <docs/programming/photonlib/adding-vendordep:What is PhotonLib?>`.
Now that you have properly set up your vision system and have tuned a pipeline, you can now aim your robot/turret at the target using the data from PhotonVision. This data is reported over NetworkTables and includes: latency, whether there is a target detected or not, pitch, yaw, area, skew, and target pose relative to the robot. This data will be used/manipulated by our vendor dependency, PhotonLib. The documentation for the Network Tables API can be found :ref:`here <docs/additional-resources/nt-api:Getting Target Information>` and the documentation for PhotonLib :ref:`here <docs/programming/photonlib/adding-vendordep:What is PhotonLib?>`.

For this simple example, only yaw is needed.
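
As a minimal sketch (not the full example), the yaw can drive a proportional controller so the robot turns toward the target; `kP`, the `camera` object, and the `drive` object are assumptions to adapt to your robot:

.. code-block:: java

   import edu.wpi.first.math.controller.PIDController;

   PIDController turnController = new PIDController(0.05, 0.0, 0.0);

   // In teleopPeriodic():
   var result = camera.getLatestResult();
   double forwardSpeed = 0.0; // e.g., from the driver's joystick
   double rotationSpeed = 0.0;
   if (result.hasTargets()) {
       // Drive yaw toward zero so the robot faces the target.
       rotationSpeed = -turnController.calculate(result.getBestTarget().getYaw(), 0.0);
   }
   drive.arcadeDrive(forwardSpeed, rotationSpeed);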

19 changes: 0 additions & 19 deletions source/docs/examples/apriltag.rst

This file was deleted.

1 change: 0 additions & 1 deletion source/docs/examples/index.rst
@@ -7,6 +7,5 @@ Code Examples
aimingatatarget
gettinginrangeofthetarget
aimandrange
apriltag
simaimandrange
simposeest