diff --git a/docs/source/cad_resources/index.rst b/docs/source/cad_resources/index.rst index a0f1c27d..b84cd7a5 100644 --- a/docs/source/cad_resources/index.rst +++ b/docs/source/cad_resources/index.rst @@ -32,7 +32,7 @@ Software for Intermediate Users * :doc:`Autodesk Fusion 360 ` (Free to *FIRST* teams) (desktop) * :doc:`Dassault Systemes 3DEXPERIENCE ` (Free to *FIRST* teams) (cloud) * :doc:`PTC OnShape ` (Free to *FIRST* teams) (Cloud) -* `Trimble SketchUp `__ (Free plans available) (desktop) +* `Trimble SketchUp `__ (Free plans available) (desktop) Software for Professional Users diff --git a/docs/source/conf.py b/docs/source/conf.py index aea2acf9..96d29d90 100644 --- a/docs/source/conf.py +++ b/docs/source/conf.py @@ -272,6 +272,7 @@ "https://ftc-ml.firstinspires.org", r'https://github.com/.*#', r'https://www.solidworks.com/', + r'https://sketchup.com/', r'https://april.eecs.umich.edu/' ] diff --git a/docs/source/ftc_ml/implement/android_studios/android-studios.rst b/docs/source/ftc_ml/implement/android_studios/android-studios.rst index 9ca11df7..65aca802 100644 --- a/docs/source/ftc_ml/implement/android_studios/android-studios.rst +++ b/docs/source/ftc_ml/implement/android_studios/android-studios.rst @@ -1,6 +1,13 @@ Android Studio ================ +.. warning:: + This tutorial is outdated due to the TensorFlow updates for the + VisionPortal. We are working on updating it; please bear with us + in the meantime. For more information on TensorFlow + for Java, see the VisionPortal + :ref:`TensorFlow Processor Initialization `. + It is assumed that you already know how to use Android Studio. If not, be sure to check out the :ref:`Android Studio Guide ` document before proceeding.
diff --git a/docs/source/ftc_ml/implement/blocks/blocks.rst b/docs/source/ftc_ml/implement/blocks/blocks.rst index caadab2d..a263224d 100644 --- a/docs/source/ftc_ml/implement/blocks/blocks.rst +++ b/docs/source/ftc_ml/implement/blocks/blocks.rst @@ -1,6 +1,13 @@ Blocks ======= +.. warning:: + This tutorial is outdated due to the TensorFlow updates for the + VisionPortal. We are working on updating it; please bear with us + in the meantime. For more information on using + custom TensorFlow models with Blocks, see the tutorial + `Custom TFOD Model with Blocks `__. + It is assumed that you already know how to use Blocks. If not, be sure to check out the :ref:`Blocks Programming Guide ` diff --git a/docs/source/ftc_ml/implement/obj/obj.rst b/docs/source/ftc_ml/implement/obj/obj.rst index 44d74338..d28bbafc 100644 --- a/docs/source/ftc_ml/implement/obj/obj.rst +++ b/docs/source/ftc_ml/implement/obj/obj.rst @@ -1,6 +1,13 @@ OnBot Java (OBJ) ================= +.. warning:: + This tutorial is outdated due to the TensorFlow updates for the + VisionPortal. We are working on updating it; please bear with us + in the meantime. For more information on TensorFlow + for Java, see the VisionPortal + :ref:`TensorFlow Processor Initialization `. + It is assumed that you already know how to use OnBot Java. If not, be sure to check out the :ref:`OnBot Java Guide ` diff --git a/docs/source/index.rst b/docs/source/index.rst index b0c7aa7b..a60f8daf 100644 --- a/docs/source/index.rst +++ b/docs/source/index.rst @@ -245,6 +245,16 @@ to see why. AprilTags + .. div:: col-sm pl-1 pr-1 + + .. button-ref:: programming_resources/vision/tensorflow_cs_2023/tensorflow-cs-2023 + :ref-type: doc + :color: black + :outline: + :expand: + + TensorFlow + .. div:: col-sm pl-1 pr-1 + ..
button-ref:: programming_resources/index diff --git a/docs/source/programming_resources/index.rst b/docs/source/programming_resources/index.rst index f7af0c5e..8b1d7f16 100644 --- a/docs/source/programming_resources/index.rst +++ b/docs/source/programming_resources/index.rst @@ -80,6 +80,21 @@ Topics for programming with AprilTags Understanding AprilTag Values <../apriltag/understanding_apriltag_detection_values/understanding-apriltag-detection-values> AprilTag Test Images <../apriltag/opmode_test_images/opmode-test-images> +TensorFlow Programming +~~~~~~~~~~~~~~~~~~~~~~ + +Topics for programming with TensorFlow Object Detection (TFOD) + +.. toctree:: + :maxdepth: 1 + :titlesonly: + + vision/tensorflow_cs_2023/tensorflow-cs-2023 + vision/tensorflow_pp_2022/tensorflow_pp_2022 + vision/tensorflow_ff_2021/tensorflow-ff-2021 + vision/blocks_tfod_opmode/blocks-tfod-opmode + vision/java_tfod_opmode/java-tfod-opmode + Vision Programming ~~~~~~~~~~~~~~~~~~~ @@ -90,10 +105,6 @@ Learning more about using vision :titlesonly: vision/vision_overview/vision-overview - vision/tensorflow_pp_2022/tensorflow_pp_2022 - vision/blocks_tfod_opmode/blocks-tfod-opmode - vision/java_tfod_opmode/java-tfod-opmode - vision/tensorflow_ff_2021/tensorflow-ff-2021 vision/webcam_controls/index Camera Calibration diff --git a/docs/source/programming_resources/tutorial_specific/blocks/blocks_reference/Blocks-Reference-Material.rst b/docs/source/programming_resources/tutorial_specific/blocks/blocks_reference/Blocks-Reference-Material.rst index 7f69c3e3..929a373e 100644 --- a/docs/source/programming_resources/tutorial_specific/blocks/blocks_reference/Blocks-Reference-Material.rst +++ b/docs/source/programming_resources/tutorial_specific/blocks/blocks_reference/Blocks-Reference-Material.rst @@ -32,12 +32,12 @@ Technology Forum ~~~~~~~~~~~~~~~~ Registered teams can create user accounts on the FIRST Tech Challenge -forum. 
Teams can use the forum to ask questions and receive support from -the FIRST Tech Challenge community. +Community forum. Teams can use the forum to ask questions and receive +support from the FIRST Tech Challenge community. The technology forum can be found at the following address: -https://ftcforum.firstinspires.org/forum/ftc-technology?156-FTC-Technology +- https://ftc-community.firstinspires.org REV Robotics Expansion Hub Documentation ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ diff --git a/docs/source/programming_resources/tutorial_specific/onbot_java/onbot_java_reference/OnBot-Java-Reference-Info.rst b/docs/source/programming_resources/tutorial_specific/onbot_java/onbot_java_reference/OnBot-Java-Reference-Info.rst index cac62c46..76a82a7b 100644 --- a/docs/source/programming_resources/tutorial_specific/onbot_java/onbot_java_reference/OnBot-Java-Reference-Info.rst +++ b/docs/source/programming_resources/tutorial_specific/onbot_java/onbot_java_reference/OnBot-Java-Reference-Info.rst @@ -36,7 +36,7 @@ the FIRST Tech Challenge community. 
The technology forum can be found at the following address: -https://ftcforum.firstinspires.org/forum/ftc-technology?156-FTC-Technology +- https://ftc-community.firstinspires.org REV Robotics Expansion Hub Documentation ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode/blocks-tfod-opmode.rst b/docs/source/programming_resources/vision/blocks_tfod_opmode/blocks-tfod-opmode.rst index 1765506a..fb11fbaf 100644 --- a/docs/source/programming_resources/vision/blocks_tfod_opmode/blocks-tfod-opmode.rst +++ b/docs/source/programming_resources/vision/blocks_tfod_opmode/blocks-tfod-opmode.rst @@ -1,308 +1,289 @@ -Blocks Sample Op Mode for TensorFlow Object Detection -======================================================== +Blocks Sample OpMode for TFOD +============================= -Creating the Op Mode -~~~~~~~~~~~~~~~~~~~~ +Introduction +------------ -You can use the sample “ConceptTensorFlowObjectDetection” as a template -to create your own Blocks op mode that uses the TensorFlow technology to -“look for” any game elements, and determine the relative location of any -identified elements. +This tutorial describes the FTC Blocks Sample OpMode for TensorFlow +Object Detection (TFOD). This Sample, called +“ConceptTensorFlowObjectDetection”, can recognize one or more official +game elements and provide their visible size and position. -- If you are using a **webcam** connected to the Robot Controller - device, select “ConceptTensorFlowObjectDetectionWebcam” as the sample - op mode from the dropdown list on the Create New Op Mode dialog box. -- If you are using an Android smartphone’s **built-in camera**, select - “ConceptTensorFlowObjectDetection” as the sample op mode from the - dropdown list on the Create New Op Mode dialog box. +For the 2023-2024 game CENTERSTAGE, the game element is a hexagonal +white **Pixel**. The FTC SDK software contains a TFOD model of this +object, ready for recognition.
That model was created with the +:doc:`Machine Learning Toolchain <../../../ftc_ml/index>`. -Press “OK” to create the new op mode. +For extra points, teams may instead use their own custom TFOD models of +**Team Props**. That option is described +`here `__. -.. figure:: images/blocksConceptTensorFlowWebcam.png +Creating the OpMode +------------------- + +At the FTC Blocks browser interface, click on the “Create New OpMode” +button to display the Create New OpMode dialog box. + +Specify a name for your new OpMode. Select +“ConceptTensorFlowObjectDetection” as the Sample OpMode that will be the +template for your new OpMode. + +If no webcam is configured for your REV Control Hub, the dialog box will +display a warning message (shown here). You can ignore this warning +message if you will use the built-in camera of an Android RC phone. +Click “OK” to create your new OpMode. + +.. figure:: images/030-Create-New-OpMode.png :align: center + :width: 75% + :alt: Creating a new OpMode - Create an op mode with ConceptTensorFlowObjectDetection - as its template. + Creating a New OpMode -Your new op mode should appear in the editing pane of the Blocks -Development Tool screen. +The new OpMode should appear in edit mode in your browser. -.. figure:: images/005_Blocks_TFOD_webcam_open.png +.. figure:: images/040-Sample-OpMode.png :align: center + :width: 75% + :alt: Sample OpMode - Your newly created op mode will have the ConceptTensorFlowObjectDetection - blocks included. + Sample OpMode -Initializing the System -~~~~~~~~~~~~~~~~~~~~~~~ +By default, the Sample OpMode assumes you are using a webcam, configured +as “Webcam 1”. If you are using the built-in camera on your Android RC +phone, change the USE_WEBCAM Boolean from ``true`` to ``false`` (green +arrow above). -Let’s take a look at the initial blocks in the op mode. The first block -in the op mode (excluding the comment blocks) initializes the Vuforia -library on the Android Robot Controller. 
This is needed because the -TensorFlow Lite library will receive image data from the Vuforia -library. Also, in the screenshot below, the Vuforia system will use an -externally connected webcam named “Webcam 1” (which should match the -camera name in your robot’s configuration file). +Adjusting the Zoom Factor +------------------------- -.. figure:: images/010_Blocks_TFOD_webcam_init.png - :align: center +If the object to be recognized will be more than roughly 2 feet (61 cm) +from the camera, you might want to set the digital zoom factor to a +value greater than 1. This tells TensorFlow to use an artificially +magnified portion of the image, which may offer more accurate +recognitions at greater distances. - Initialize the Vuforia and TensorFlow libraries. - -You can initialize both the Vuforia and the TensorFlow libraries in the -same op mode. This is useful, for example, if you would like to use the -TensorFlow library to recognize the Duck and then use the Vuforia -library to help the robot autonomously navigate on the game field. - -Note that in this example the ObjectTracker parameter is set to true for -this block, so an *object tracker* will be used, in addition to the -TensorFlow interpreter, to keep track of the locations of detected -objects. The object tracker *interpolates* object recognitions so that -results are smoother than they would be if the system were to solely -rely on the TensorFlow interpreter. - -Also note that the Minimum Confidence level is set to 70%. This means -that the TensorFlow library needs to have a confidence level of 70% or -higher in order to consider an object as being detected in its field of -view. You can adjust this parameter to a higher value if you would like -the system to be more selective in identifying an object. - -The confidence level for a detected target will be displayed near the -bounding box of the identified object (when the object tracker is -enabled) on the Robot Controller. 
For example, a value of “0.92” -indicates a 92% confidence that the object has been identified -correctly. - -When an object is identified by the TensorFlow library, the op mode can -read the “Left”, “Right”, “Top” and “Bottom” values associated with the -detected object. These values correspond to the location of the left, -right, top and bottom boundaries of the detection box for that object. -These values are in pixel coordinates of the image from the camera. - -The origin of the coordinate system is in the upper left-hand corner of -the image. The horizontal (x) coordinate value increases as you move -from the left to the right of the image. The vertical (y) coordinate -value increases as you move from the top to the bottom of the image. - -.. figure:: images/landscapeCoordinate.png +.. figure:: images/150-setZoom.png :align: center + :width: 75% + :alt: Setting Zoom - The origin of the image coordinate system is located in upper left hand - corner. + Setting the Zoom Factor -In the landscape image above, the approximate coordinate values for the -Left, Top, Right, and Bottom boundaries are 455, 191, 808, and 547 -respectively (pixel coordinates). The width and height for the landscape -image above is 1280 and 720 respectively. +Pull out the **``setZoom``** Block, found in the toolbox or palette +called “Vision”, under “TensorFlow” and “TfodProcessor” (see green oval +above). Change the magnification value as desired (green arrow). -Activating TensorFlow -~~~~~~~~~~~~~~~~~~~~~ +On REV Control Hub, the “Vision” menu appears only when the active robot +configuration contains a webcam, even if not plugged in. -In this example, the op mode activates the TensorFlow object detector -before waiting for the start command from the Driver Station. This is -done so that the user can access the “Camera Stream” preview from the -Driver Station menu while it waits for the start command. 
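The pixel arithmetic described here (box boundaries to width, height, and center, with the origin at the image's top left corner) can be checked with a short standalone Python sketch. This is an illustration only, not FTC SDK or Blocks code; the helper name is invented, and the numbers are the approximate boundaries quoted for the landscape image:

```python
def box_geometry(left, top, right, bottom):
    """Width, height, and center of a detection box, in pixel
    coordinates with the origin at the image's top-left corner."""
    width = right - left
    height = bottom - top
    center = ((left + right) / 2, (top + bottom) / 2)
    return width, height, center

# Approximate Left, Top, Right, Bottom boundaries from the text:
print(box_geometry(455, 191, 808, 547))
# (353, 356, (631.5, 369.0))
```

The same arithmetic applies whichever camera is used, since both report detections in pixel coordinates of the captured frame.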
Also note that -in this example, the op mode does not activate the Vuforia tracking -feature, it only activates TensorFlow object detection. If you want to -incorporate Vuforia image detection and tracking you will also need to -activate (and later deactivate when you are done) the Vuforia tracking -feature. +This ``setZoom`` Block can be placed in the INIT section of your OpMode, -.. figure:: images/020_Blocks_TFOD_webcam_activate.png - :align: center +- immediately after the call to the ``initTfod`` Function, or +- as the very last Block inside the ``initTfod`` Function. + +This Block is **not** part of the Processor Builder pattern, so the Zoom +factor can be set to other values during the OpMode, if desired. - Activate TensorFlow Object Detection. +The “zoomed” region can be observed in the DS preview (Camera Stream) +and the RC preview (LiveView), surrounded by a greyed-out area that is +**not evaluated** by the TFOD Processor. -Setting the Zoom Factor -~~~~~~~~~~~~~~~~~~~~~~~ +Other Adjustments +----------------- -When TensorFlow receives an image from the robot’s camera, the library -downgrades the resolution of the image (presumably to achieve a higher -detection rate). As a result, if a target is at a distance of around 24” -(61cm) or more, the detection accuracy of the system tends to diminish. -This degradation can occur, even if you have a very accurate inference -model. +The Sample OpMode uses a default **minimum confidence** level of 75%. +The TensorFlow Processor needs to have a confidence level of 75% or +higher, to consider an object as “recognized” in its field of view. -You can specify a zoom factor in your op mode to offset the effect of -this automatic scaling by the TensorFlow library. If you specify a zoom -factor, the image will be cropped by this factor and this artificially -magnified image will be passed to the TensorFlow library. The net result -is that the robot is able to detect and track an object at a -significantly larger distance. 
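The effect of a zoom factor on the evaluated image can be sketched numerically: with a centered crop, a factor of 2 leaves only the middle half of each dimension for TensorFlow to examine. This standalone Python sketch is illustrative only (not FTC SDK or Blocks code), and the 640x480 frame size is an assumption:

```python
def zoom_region(width, height, zoom):
    """Return (x, y, w, h) of the centered region left for evaluation
    by a digital zoom factor; pixels outside it are greyed out."""
    if zoom < 1:
        raise ValueError("zoom factor must be >= 1")
    w, h = width / zoom, height / zoom
    x, y = (width - w) / 2, (height - h) / 2
    return x, y, w, h

# A 640x480 stream with a zoom factor of 2 evaluates only the
# central 320x240 window:
print(zoom_region(640, 480, 2))  # (160.0, 120.0, 320.0, 240.0)
```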
The webcams and built-in Android cameras -that are typically used by teams have high enough resolution to -allow TensorFlow to “see” an artificially magnified target clearly. +You can see the object name and actual confidence (as a **decimal**, +e.g. 0.75) near the Bounding Box, in the Driver Station preview (Camera +Stream) and Robot Controller preview (Liveview). -.. figure:: images/030_Blocks_TFOD_webcam_zoom.png +.. figure:: images/160-min-confidence.png :align: center + :width: 75% + :alt: Setting Minimum Confidence + + Setting the Minimum Confidence - Set Zoom Factor +Pull out the **``setMinResultConfidence``** Block, found in the toolbox +or palette called “Vision”, under “TensorFlow” and “TfodProcessor”. +Adjust this parameter to a higher value if you would like the processor +to be more selective in identifying an object. -If a zoom factor has been set, then the Camera Stream preview on the -Driver Station will show the cropped area that makes up the artificially -magnified image. +Another option is to define, or clip, a **custom area for TFOD +evaluation**, unlike ``setZoom`` which is always centered. -.. figure:: images/035_TFOD.png +.. figure:: images/170-clipping-margins.png :align: center - - Camera stream preview indicating magnified area (at a distance of about 4 feet or 1.2 meters). + :width: 75% + :alt: Setting Clipping Margins + Setting Clipping Margins -Iterating and Processing List of Recognized Objects -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +From the same Blocks palette, pull out the **``setClippingMargins``** +Block. Adjust the four margins as desired, in units of pixels. -The op mode will then iterate until a Stop command is received. At the -beginning of each iteration, the op mode will check with the object -detector to see how many objects it recognizes in its field of view. In -the screenshot below, the variable “recognitions” is set to a list of -objects that were recognized using the TensorFlow technology. 
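The effect of a minimum-confidence threshold can be illustrated with a standalone Python sketch. Plain dictionaries stand in for recognition objects here; this is not FTC SDK or Blocks code, just the filtering idea behind a 75% (0.75) threshold:

```python
MIN_RESULT_CONFIDENCE = 0.75  # the Sample's default threshold

def confident_only(recognitions, threshold=MIN_RESULT_CONFIDENCE):
    """Drop recognitions whose confidence falls below the threshold,
    mimicking the effect of raising the minimum confidence."""
    return [r for r in recognitions if r["confidence"] >= threshold]

recs = [
    {"label": "Pixel", "confidence": 0.92},
    {"label": "Pixel", "confidence": 0.60},
]
print(confident_only(recs))  # keeps only the 0.92 recognition
```

Raising the threshold trades fewer false positives for a greater chance of missing a real object, which is why it is worth tuning against your own camera and lighting.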
+These Blocks can be placed in the INIT section of your OpMode, -.. figure:: images/040_Blocks_TFOD_webcam_loop.png - :align: center +- immediately after the call to the ``initTfod`` Function, or +- as the very last Blocks inside the ``initTfod`` Function. - The op mode gets a list of recognized objects with each iteration of the - while loop. +As with ``setZoom``, these Blocks are **not** part of the Processor +Builder pattern, so they can be set to other values during the OpMode, +if desired. -If the list is empty (i.e., if no objects were detected) the op mode -sends a telemetry message to the Driver Station indicating that no items -were detected. +Command Flow in this Sample +--------------------------- -If the list is not empty, then the op mode iterates through the list and -calls a function “displayInfo” to display information via telemetry -about each detected object. +After the ``waitForStart`` Block, this OpMode contains the main program +loop: -Modifying the Sample Op Mode to Indicate Duck Detected -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +.. figure:: images/180-main-loop.png + :align: center + :width: 75% + :alt: Main Loop -This sample op mode uses TensorFlow blocks for the Freight Frenzy -season. Let’s modify the op mode so it will set a variable to indicate -whether a Duck was detected, and show a Telemetry message accordingly. -Using the Blocks editor, under Variables, create a new variable -“isDuckDetected”. Initialize it to “false”, just before the “for each -item” block that will examine the list of recognitions. + OpMode Main Loop -.. figure:: images/050_Blocks_TFOD_webcam_variable.png - :align: center +This loop repeatedly calls a Blocks Function called +**``telemetryTfod``**. That Function is the heart of the OpMode, seeking +and evaluating recognized TFOD objects, and displaying DS Telemetry +about those objects. It will be discussed below, in the next section. - Reset the variable to false with each cycle of the “while” loop. 
+The main loop also allows the user to press the ``Dpad Down`` button on +the gamepad, to temporarily stop the streaming session. This +``.stopStreaming`` Block pauses the flow and processing of camera +frames, thus **conserving CPU resources**. -Next, use the Blocks editor to modify the function “displayInfo” as -follows. If the label reads “Duck” then set the variable isDuckDetected -to “true”, and send a telemetry message to indicate a Duck has been -recognized. Otherwise, or ELSE, set the variable to “false” and don’t -display the message. +Pressing the ``Dpad Up`` button (``.resumeStreaming``) allows the +processing to continue. The on-and-off actions can be observed in the RC +preview (LiveView), described further below. -.. figure:: images/060_Blocks_TFOD_webcam_detected.png - :align: center +These two commands appear here in this Sample OpMode, to spread +awareness of one tool for managing CPU and bandwidth resources. The FTC +VisionPortal offers over 10 such controls, :ref:`described here +`. - Set variable and show message if Duck detected. +Processing TFOD Recognitions +---------------------------- -Save the op mode and re-run it. The op mode should display the new -message, if a Duck is detected. Note that if TensorFlow detects multiple -objects, the order of the detected objects can change with each -iteration of your op mode. +The Function called **``telemetryTfod``** is the heart of the OpMode, +seeking and evaluating recognized TFOD objects, and displaying DS +Telemetry about those objects. -.. figure:: images/070_TFOD-Sample-Webcam-DS-Telemetry.png +.. figure:: images/190-telemetryTfod.png :align: center + :width: 75% + :alt: Telemetry TFOD - The modified op mode should show a telemetry message if the Duck is detected. - -You can continue modifying this sample op mode, to suit your team’s -autonomous strategy. For example, you might want to store (in a -Variable) which Barcode position had the Duck. 
+ Telemetry TFOD -Also, you must decide how the loop should actually stop repeating, -assuming the Duck’s position is discovered. (It now loops until Stop is -pressed.) For example, the loop could stop after the camera has viewed -all 3 Barcode positions. Or, if the camera’s view includes more than one -Barcode position, perhaps the Duck’s bounding box location can provide -the info you need. +The first Block uses the TFOD Processor to gather and store all +recognitions in a List, called ``myTfodRecognitions``. -In any case, when the op mode exits the loop, your new Variable should -hold the location of the Duck, which tells you the preferred scoring -level on the Alliance Shipping Hub. You op mode can continue running, -using that information. +The green “FOR Loop” iterates through that List, handling each item, one +at a time. Here the “handling” is simply displaying certain TFOD fields +to DS Telemetry. -Important Note Regarding Image Orientation -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +For competition, you want to do more than display Telemetry, and you +want to exit the main loop at some point. These code modifications are +discussed in another section below. -If you are using a webcam with your Robot Controller, then the camera -orientation is fixed in landscape mode. However, if you are using a -smartphone camera, the system will interpret images based on the phone’s -orientation (Portrait or Landscape) at the time that the TensorFlow -object detector is created and initialized. +Testing the OpMode +------------------ -Note that for Freight Frenzy, the default TensorFlow inference model is -optimized for a camera in landscape mode. This means that it is better -to orient your camera in landscape mode if you use this default -inference model because you will get more reliable detections. +Click the “Save OpMode” button, then run the OpMode from the Driver +Station. 
The Robot Controller should use the CENTERSTAGE TFOD model to +recognize and track the white Pixel. -If you execute the TensorFlowObjectDetection ``.initialize`` block while -the phone is in Portrait mode, then the images will be processed in -Portrait mode. +For a preview during the INIT phase, touch the Driver Station’s 3-dot +menu and select **Camera Stream**. -.. figure:: images/tfodPortrait.png +.. figure:: images/200-Sample-DS-Camera-Stream.png :align: center + :width: 75% + :alt: Sample DS Camera Stream - If you initialize the detector in Portrait mode, then the images are - processed in Portrait mode. + Sample DS Camera Stream -The “Left” and “Right” values of an object’s bounding box correspond to -horizontal coordinate values, while the “Top” and “Bottom” values of an -object’s bounding box correspond to vertical coordinate values. +Camera Stream is not live video; tap to refresh the image. Use the small +white arrows at lower right to expand or revert the preview size. To +close the preview, choose 3-dots and Camera Stream again. -.. figure:: images/tfodBoundaries.png +After touching the DS START button, the OpMode displays Telemetry for +any recognized Pixel(s): + +.. figure:: images/210-Sample-DS-Telemetry.png :align: center + :width: 75% + :alt: Sample DS Telemetry + + Sample DS Telemetry - The “Left” and “Top” boundaries of a detection box when the image is in - Portrait mode. +The above Telemetry shows the label name, and TFOD confidence level. It +also gives the **center location** and **size** (in pixels) of the +Bounding Box, which is the colored rectangle surrounding the recognized +object. -If you want to use your smartphone in Landscape mode, then make sure -that your phone is in Landscape mode when the TensorFlow object detector -is initialized. You may find that the Landscape mode is preferable for -this season’s game since it offers a wider field of view. +The pixel origin (0, 0) is at the top left corner of the image. -.. 
figure:: images/tfodLandscape.png +Before and after touching DS START, the Robot Controller provides a +video preview called **LiveView**. + +.. figure:: images/240-Sample-RC-LiveView.png :align: center + :width: 75% + :alt: Sample RC LiveView + + Sample RC LiveView - The system can also be run in Landscape mode. +For Control Hub (with no built-in screen), plug in an HDMI monitor or +learn about ```scrcpy`` `__. The +above image is a LiveView screenshot via ``scrcpy``. -If the phone is in Landscape mode when the object detector is -initialized, then the images will be interpreted in Landscape mode. +If you don’t have a physical Pixel on hand, try pointing the camera at +this image: -.. figure:: images/tfodBoundariesLandscape.png +.. figure:: images/300-Sample-Pixel.png :align: center + :width: 75% + :alt: Sample Pixel - The “Left” and “Top” boundaries of a detection box when the image is in Landscape mode. + Sample Pixel -Note that Android devices can be locked into Portrait Mode so that the -screen image will not rotate even if the phone is held in a Landscape -orientation. If your phone is locked in Portrait Mode, then the -TensorFlow object detector will interpret all images as Portrait images. -If you would like to use the phone in Landscape mode, then you need to -make sure your phone is set to “Auto-rotate” mode. In Auto-rotate mode, -if the phone is held in a Landscape orientation, then the screen will -auto rotate to display the contents in Landscape form. +Modifying the Sample +-------------------- -.. figure:: images/autorotate.png - :align: center +In this Sample OpMode, the main loop ends only upon touching the DS Stop +button. For competition, teams should **modify this code** in at least +two ways: - Auto-rotate must be enabled in order to operate in Landscape mode. 
+- for a significant recognition, take action or store key information – + inside the FOR loop -Deactivating TensorFlow -~~~~~~~~~~~~~~~~~~~~~~~ +- end the main loop based on your criteria, to continue the OpMode -When the example op mode is no longer active (i.e. when the user has -pressed the square Stop button on the Driver Station) the op mode will -attempt to deactivate the TensorFlow library before it’s done. It’s -important to deactivate the library to free up system resources. +As an example, you might set a Boolean variable ``isPixelDetected`` to +``true``, if a significant recognition has occurred. -.. figure:: images/080_Blocks_TFOD_webcam_deactivate.png - :align: center +You might also evaluate and store which randomized Spike Mark (red or +blue tape stripe) holds the white Pixel. + +Regarding the main loop, it could end after the camera views all three +Spike Marks, or after your code provides a high-confidence result. If +the camera’s view includes more than one Spike Mark position, perhaps +the white Pixel’s **Bounding Box** size and location could be useful. +Teams should consider how long to seek an acceptable recognition, and +what to do otherwise. - Deactivate TensorFlow +In any case, the OpMode should exit the main loop and continue running, +using any stored information. +Best of luck this season! 
+============
-===================
+Questions, comments and corrections to westsiderobotics@verizon.net
-Updated 10/20/21
diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0010-create-sample.png b/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0010-create-sample.png
deleted file mode 100644
index 166b0cab..00000000
Binary files a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0010-create-sample.png and /dev/null differ
diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0020-create-sample-webcam.png b/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0020-create-sample-webcam.png
deleted file mode 100644
index 374aef25..00000000
Binary files a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0020-create-sample-webcam.png and /dev/null differ
diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0030-sample-open-half.png b/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0030-sample-open-half.png
deleted file mode 100644
index 3bfb42aa..00000000
Binary files a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0030-sample-open-half.png and /dev/null differ
diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0030-sample-open.png b/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0030-sample-open.png
deleted file mode 100644
index 109ddbaf..00000000
Binary files a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0030-sample-open.png and /dev/null differ
diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0040-TFOD-menu.png b/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0040-TFOD-menu.png
deleted file mode 100644
index 0c22d4a7..00000000
Binary files a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0040-TFOD-menu.png and /dev/null differ
diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0050-Vuforia-initialize.png b/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0050-Vuforia-initialize.png
deleted file mode 100644
index 55f4e866..00000000
Binary files a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0050-Vuforia-initialize.png and /dev/null differ
diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0052-Vuforia-init-monitor-false.png b/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0052-Vuforia-init-monitor-false.png
deleted file mode 100644
index cc0c07b8..00000000
Binary files a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0052-Vuforia-init-monitor-false.png and /dev/null differ
diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/005_Blocks_TFOD_webcam_open.png b/docs/source/programming_resources/vision/blocks_tfod_opmode/images/005_Blocks_TFOD_webcam_open.png
deleted file mode 100644
index 78c0060b..00000000
Binary files a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/005_Blocks_TFOD_webcam_open.png and /dev/null differ
diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0060-Vuforia-webcam-init-monitor.png b/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0060-Vuforia-webcam-init-monitor.png
deleted file mode 100644
index 80cc2307..00000000
Binary files a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0060-Vuforia-webcam-init-monitor.png and /dev/null differ
diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0062-Vuforia-webcam-init-monitor-false.png b/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0062-Vuforia-webcam-init-monitor-false.png
deleted file mode 100644
index cf7029c3..00000000
Binary files a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0062-Vuforia-webcam-init-monitor-false.png and /dev/null differ
diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0070-TFOD-initialize.png b/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0070-TFOD-initialize.png
deleted file mode 100644
index d7e13369..00000000
Binary files a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0070-TFOD-initialize.png and /dev/null differ
diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0100-RC-TFOD-preview.png b/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0100-RC-TFOD-preview.png
deleted file mode 100644
index 448ec373..00000000
Binary files a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0100-RC-TFOD-preview.png and /dev/null differ
diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/010_Blocks_TFOD_webcam_init.png b/docs/source/programming_resources/vision/blocks_tfod_opmode/images/010_Blocks_TFOD_webcam_init.png
deleted file mode 100644
index 3121f8d6..00000000
Binary files a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/010_Blocks_TFOD_webcam_init.png and /dev/null differ
diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0110-DS-TFOD-preview-half.png b/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0110-DS-TFOD-preview-half.png
deleted file mode 100644
index 125b0236..00000000
Binary files a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0110-DS-TFOD-preview-half.png and /dev/null differ
diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0110-DS-TFOD-preview.png b/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0110-DS-TFOD-preview.png
deleted file mode 100644
index 1189b913..00000000
Binary files a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0110-DS-TFOD-preview.png and /dev/null differ
diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0112-DS-TFOD-preview-alternate.png b/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0112-DS-TFOD-preview-alternate.png
deleted file mode 100644
index 4962fda9..00000000
Binary files a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0112-DS-TFOD-preview-alternate.png and /dev/null differ
diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0130-RC-axis-system.png b/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0130-RC-axis-system.png
deleted file mode 100644
index 8caed6af..00000000
Binary files a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0130-RC-axis-system.png and /dev/null differ
diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0160-TFOD-initialize.png b/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0160-TFOD-initialize.png
deleted file mode 100644
index 2111deb0..00000000
Binary files a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0160-TFOD-initialize.png and /dev/null differ
diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0170-waitForStart.png b/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0170-waitForStart.png
deleted file mode 100644
index 12dcdf2a..00000000
Binary files a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0170-waitForStart.png and /dev/null differ
diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0180-repeat-while.png b/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0180-repeat-while.png
deleted file mode 100644
index a7efcd9f..00000000
Binary files a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0180-repeat-while.png and /dev/null differ
diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0190-get-Recognitions-1.png b/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0190-get-Recognitions-1.png
deleted file mode 100644
index d6750768..00000000
Binary files a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0190-get-Recognitions-1.png and /dev/null differ
diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0200-get-Recognitions-2.png b/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0200-get-Recognitions-2.png
deleted file mode 100644
index cbdc7a77..00000000
Binary files a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0200-get-Recognitions-2.png and /dev/null differ
diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/020_Blocks_TFOD_webcam_activate.png b/docs/source/programming_resources/vision/blocks_tfod_opmode/images/020_Blocks_TFOD_webcam_activate.png
deleted file mode 100644
index b7fbd0ec..00000000
Binary files a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/020_Blocks_TFOD_webcam_activate.png and /dev/null differ
diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0210-IF-no-recognitions.png b/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0210-IF-no-recognitions.png
deleted file mode 100644
index dbf2e888..00000000
Binary files a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0210-IF-no-recognitions.png and /dev/null differ
diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0220-ELSE-recognitions-1.png b/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0220-ELSE-recognitions-1.png
deleted file mode 100644
index 27352953..00000000
Binary files a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0220-ELSE-recognitions-1.png and /dev/null differ
diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0230-ELSE-recognitions-2.png b/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0230-ELSE-recognitions-2.png
deleted file mode 100644
index 2b780775..00000000
Binary files a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0230-ELSE-recognitions-2.png and /dev/null differ
diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0240-ELSE-recognitions-3.png b/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0240-ELSE-recognitions-3.png
deleted file mode 100644
index 8f607a7a..00000000
Binary files a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0240-ELSE-recognitions-3.png and /dev/null differ
diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0250-ELSE-recognitions-4.png b/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0250-ELSE-recognitions-4.png
deleted file mode 100644
index 63f89dca..00000000
Binary files a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0250-ELSE-recognitions-4.png and /dev/null differ
diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0260-Telemetry-update.png b/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0260-Telemetry-update.png
deleted file mode 100644
index abfbcd09..00000000
Binary files a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0260-Telemetry-update.png and /dev/null differ
diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0270-deactivate-TFOD.png b/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0270-deactivate-TFOD.png
deleted file mode 100644
index f36aa060..00000000
Binary files a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0270-deactivate-TFOD.png and /dev/null differ
diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/030-Create-New-OpMode.png b/docs/source/programming_resources/vision/blocks_tfod_opmode/images/030-Create-New-OpMode.png
new file mode 100644
index 00000000..1b543afd
Binary files /dev/null and b/docs/source/programming_resources/vision/blocks_tfod_opmode/images/030-Create-New-OpMode.png differ
diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0300-displayInfo.png b/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0300-displayInfo.png
deleted file mode 100644
index d2a17344..00000000
Binary files a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0300-displayInfo.png and /dev/null differ
diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/030_Blocks_TFOD_webcam_zoom.png b/docs/source/programming_resources/vision/blocks_tfod_opmode/images/030_Blocks_TFOD_webcam_zoom.png
deleted file mode 100644
index 2af87d87..00000000
Binary files a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/030_Blocks_TFOD_webcam_zoom.png and /dev/null differ
diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0310-recognition-label.png b/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0310-recognition-label.png
deleted file mode 100644
index a7c0a71a..00000000
Binary files a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0310-recognition-label.png and /dev/null differ
diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0320-recognition-left-top.png b/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0320-recognition-left-top.png
deleted file mode 100644
index 921753aa..00000000
Binary files a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0320-recognition-left-top.png and /dev/null differ
diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0330-recognition-right-bottom.png b/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0330-recognition-right-bottom.png
deleted file mode 100644
index 63abbbb4..00000000
Binary files a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0330-recognition-right-bottom.png and /dev/null differ
diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/035_TFOD.png b/docs/source/programming_resources/vision/blocks_tfod_opmode/images/035_TFOD.png
deleted file mode 100644
index c14991ea..00000000
Binary files a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/035_TFOD.png and /dev/null differ
diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/040-Sample-OpMode.png b/docs/source/programming_resources/vision/blocks_tfod_opmode/images/040-Sample-OpMode.png
new file mode 100644
index 00000000..ac557b83
Binary files /dev/null and b/docs/source/programming_resources/vision/blocks_tfod_opmode/images/040-Sample-OpMode.png differ
diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0400-DS-TFOD-recognition-circle-half.png b/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0400-DS-TFOD-recognition-circle-half.png
deleted file mode 100644
index 6a3b1944..00000000
Binary files a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0400-DS-TFOD-recognition-circle-half.png and /dev/null differ
diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0400-DS-TFOD-recognition-circle.png b/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0400-DS-TFOD-recognition-circle.png
deleted file mode 100644
index 23f59c59..00000000
Binary files a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0400-DS-TFOD-recognition-circle.png and /dev/null differ
diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/040_Blocks_TFOD_webcam_loop.png b/docs/source/programming_resources/vision/blocks_tfod_opmode/images/040_Blocks_TFOD_webcam_loop.png
deleted file mode 100644
index 78b17581..00000000
Binary files a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/040_Blocks_TFOD_webcam_loop.png and /dev/null differ
diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0450-RC-screenshot-Stones.png b/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0450-RC-screenshot-Stones.png
deleted file mode 100644
index 44cb3607..00000000
Binary files a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0450-RC-screenshot-Stones.png and /dev/null differ
diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0452-RC-screenshot-Stones-zoom.png b/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0452-RC-screenshot-Stones-zoom.png
deleted file mode 100644
index db0424ec..00000000
Binary files a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0452-RC-screenshot-Stones-zoom.png and /dev/null differ
diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0460-DS-screenshot-preview-half.png b/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0460-DS-screenshot-preview-half.png
deleted file mode 100644
index fe674297..00000000
Binary files a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0460-DS-screenshot-preview-half.png and /dev/null differ
diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0460-DS-screenshot-preview.png b/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0460-DS-screenshot-preview.png
deleted file mode 100644
index ad26c9e2..00000000
Binary files a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0460-DS-screenshot-preview.png and /dev/null differ
diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0462-DS-screenshot-preview-zoom.png b/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0462-DS-screenshot-preview-zoom.png
deleted file mode 100644
index f49a4581..00000000
Binary files a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0462-DS-screenshot-preview-zoom.png and /dev/null differ
diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0470-DS-screenshot-data-half.png b/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0470-DS-screenshot-data-half.png
deleted file mode 100644
index 250711ab..00000000
Binary files a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0470-DS-screenshot-data-half.png and /dev/null differ
diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0470-DS-screenshot-data.png b/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0470-DS-screenshot-data.png
deleted file mode 100644
index e9c32258..00000000
Binary files a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0470-DS-screenshot-data.png and /dev/null differ
diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0500-pseudo-first.png b/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0500-pseudo-first.png
deleted file mode 100644
index 713fc2c4..00000000
Binary files a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0500-pseudo-first.png and /dev/null differ
diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/050_Blocks_TFOD_webcam_variable.png b/docs/source/programming_resources/vision/blocks_tfod_opmode/images/050_Blocks_TFOD_webcam_variable.png
deleted file mode 100644
index 351ef8bf..00000000
Binary files a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/050_Blocks_TFOD_webcam_variable.png and /dev/null differ
diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0510-pseudo-analyze-half.png b/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0510-pseudo-analyze-half.png
deleted file mode 100644
index 1bb2cf98..00000000
Binary files a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0510-pseudo-analyze-half.png and /dev/null differ
diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0510-pseudo-analyze.png b/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0510-pseudo-analyze.png
deleted file mode 100644
index e33d0604..00000000
Binary files a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0510-pseudo-analyze.png and /dev/null differ
diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0520-get-recognitions.png b/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0520-get-recognitions.png
deleted file mode 100644
index 8f23ca51..00000000
Binary files a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0520-get-recognitions.png and /dev/null differ
diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0530-analyze-recognitions.png b/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0530-analyze-recognitions.png
deleted file mode 100644
index 6e624c2a..00000000
Binary files a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0530-analyze-recognitions.png and /dev/null differ
diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0540-get-box-data.png b/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0540-get-box-data.png
deleted file mode 100644
index a39d9af3..00000000
Binary files a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0540-get-box-data.png and /dev/null differ
diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0550-central-process.png b/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0550-central-process.png
deleted file mode 100644
index c4db74ad..00000000
Binary files a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0550-central-process.png and /dev/null differ
diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0560-loop-to-target.png b/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0560-loop-to-target.png
deleted file mode 100644
index d0b22a16..00000000
Binary files a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0560-loop-to-target.png and /dev/null differ
diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0580-pseudo-initialize.png b/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0580-pseudo-initialize.png
deleted file mode 100644
index bb514a0f..00000000
Binary files a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0580-pseudo-initialize.png and /dev/null differ
diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0590-pseudo-deactivate.png b/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0590-pseudo-deactivate.png
deleted file mode 100644
index bb1fb6d0..00000000
Binary files a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0590-pseudo-deactivate.png and /dev/null differ
diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0600-pseudo-complete.png b/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0600-pseudo-complete.png
deleted file mode 100644
index 3770a33e..00000000
Binary files a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/0600-pseudo-complete.png and /dev/null differ
diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/060_Blocks_TFOD_webcam_detected.png b/docs/source/programming_resources/vision/blocks_tfod_opmode/images/060_Blocks_TFOD_webcam_detected.png
deleted file mode 100644
index 3075ade0..00000000
Binary files a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/060_Blocks_TFOD_webcam_detected.png and /dev/null differ
diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/070_TFOD-Sample-Webcam-DS-Telemetry.png b/docs/source/programming_resources/vision/blocks_tfod_opmode/images/070_TFOD-Sample-Webcam-DS-Telemetry.png
deleted file mode 100644
index 76c196f0..00000000
Binary files a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/070_TFOD-Sample-Webcam-DS-Telemetry.png and /dev/null differ
diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/080_Blocks_TFOD_webcam_deactivate.png b/docs/source/programming_resources/vision/blocks_tfod_opmode/images/080_Blocks_TFOD_webcam_deactivate.png
deleted file mode 100644
index 2d48742b..00000000
Binary files a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/080_Blocks_TFOD_webcam_deactivate.png and /dev/null differ
diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/150-setZoom.png b/docs/source/programming_resources/vision/blocks_tfod_opmode/images/150-setZoom.png
new file mode 100644
index 00000000..6ed836b8
Binary files /dev/null and b/docs/source/programming_resources/vision/blocks_tfod_opmode/images/150-setZoom.png differ
diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/160-min-confidence.png b/docs/source/programming_resources/vision/blocks_tfod_opmode/images/160-min-confidence.png
new file mode 100644
index 00000000..f05d34af
Binary files /dev/null and b/docs/source/programming_resources/vision/blocks_tfod_opmode/images/160-min-confidence.png differ
diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/170-clipping-margins.png b/docs/source/programming_resources/vision/blocks_tfod_opmode/images/170-clipping-margins.png
new file mode 100644
index 00000000..2fec67b4
Binary files /dev/null and b/docs/source/programming_resources/vision/blocks_tfod_opmode/images/170-clipping-margins.png differ
diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/180-main-loop.png b/docs/source/programming_resources/vision/blocks_tfod_opmode/images/180-main-loop.png
new file mode 100644
index 00000000..10e18f77
Binary files /dev/null and b/docs/source/programming_resources/vision/blocks_tfod_opmode/images/180-main-loop.png differ
diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/190-telemetryTfod.png b/docs/source/programming_resources/vision/blocks_tfod_opmode/images/190-telemetryTfod.png
new file mode 100644
index 00000000..cd07d688
Binary files /dev/null and b/docs/source/programming_resources/vision/blocks_tfod_opmode/images/190-telemetryTfod.png differ
diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/200-Sample-DS-Camera-Stream.png b/docs/source/programming_resources/vision/blocks_tfod_opmode/images/200-Sample-DS-Camera-Stream.png
new file mode 100644
index 00000000..5b26217c
Binary files /dev/null and b/docs/source/programming_resources/vision/blocks_tfod_opmode/images/200-Sample-DS-Camera-Stream.png differ
diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/210-Sample-DS-Telemetry.png b/docs/source/programming_resources/vision/blocks_tfod_opmode/images/210-Sample-DS-Telemetry.png
new file mode 100644
index 00000000..a920c3bc
Binary files /dev/null and b/docs/source/programming_resources/vision/blocks_tfod_opmode/images/210-Sample-DS-Telemetry.png differ
diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/240-Sample-RC-LiveView.png b/docs/source/programming_resources/vision/blocks_tfod_opmode/images/240-Sample-RC-LiveView.png
new file mode 100644
index 00000000..731ca5ea
Binary files /dev/null and b/docs/source/programming_resources/vision/blocks_tfod_opmode/images/240-Sample-RC-LiveView.png differ
diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/300-Sample-Pixel.png b/docs/source/programming_resources/vision/blocks_tfod_opmode/images/300-Sample-Pixel.png
new file mode 100644
index 00000000..8680e269
Binary files /dev/null and b/docs/source/programming_resources/vision/blocks_tfod_opmode/images/300-Sample-Pixel.png differ
diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/autorotate.png b/docs/source/programming_resources/vision/blocks_tfod_opmode/images/autorotate.png
deleted file mode 100644
index ced6443f..00000000
Binary files a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/autorotate.png and /dev/null differ
diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/blocksConceptTensorFlowWebcam.png b/docs/source/programming_resources/vision/blocks_tfod_opmode/images/blocksConceptTensorFlowWebcam.png
deleted file mode 100644
index d7ed77df..00000000
Binary files a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/blocksConceptTensorFlowWebcam.png and /dev/null differ
diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/blocksInit.png b/docs/source/programming_resources/vision/blocks_tfod_opmode/images/blocksInit.png
deleted file mode 100644
index 48c815d9..00000000
Binary files a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/blocksInit.png and /dev/null differ
diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/blocksMyExample.png b/docs/source/programming_resources/vision/blocks_tfod_opmode/images/blocksMyExample.png
deleted file mode 100644
index 9ef815ce..00000000
Binary files a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/blocksMyExample.png and /dev/null differ
diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/blocksTensorFlowActivate.png b/docs/source/programming_resources/vision/blocks_tfod_opmode/images/blocksTensorFlowActivate.png
deleted file mode 100644
index c49fbb4d..00000000
Binary files a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/blocksTensorFlowActivate.png and /dev/null differ
diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/blocksTensorFlowDeactivate.png b/docs/source/programming_resources/vision/blocks_tfod_opmode/images/blocksTensorFlowDeactivate.png
deleted file mode 100644
index 4f6a7560..00000000
Binary files a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/blocksTensorFlowDeactivate.png and /dev/null differ
diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/landscapeCoordinate.png b/docs/source/programming_resources/vision/blocks_tfod_opmode/images/landscapeCoordinate.png
deleted file mode 100644
index 0df9cc9d..00000000
Binary files a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/landscapeCoordinate.png and /dev/null differ
diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/magnifiedArea.png b/docs/source/programming_resources/vision/blocks_tfod_opmode/images/magnifiedArea.png
deleted file mode 100644
index 3cbe4340..00000000
Binary files a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/magnifiedArea.png and /dev/null differ
diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/modifiedBlocksExample.png b/docs/source/programming_resources/vision/blocks_tfod_opmode/images/modifiedBlocksExample.png
deleted file mode 100644
index e9dbe146..00000000
Binary files a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/modifiedBlocksExample.png and /dev/null differ
diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/onbotConceptTensorFlow.png b/docs/source/programming_resources/vision/blocks_tfod_opmode/images/onbotConceptTensorFlow.png
deleted file mode 100644
index a839b977..00000000
Binary files a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/onbotConceptTensorFlow.png and /dev/null differ
diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/onbotMyExample.png b/docs/source/programming_resources/vision/blocks_tfod_opmode/images/onbotMyExample.png
deleted file mode 100644
index 955041fb..00000000
Binary files a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/onbotMyExample.png and /dev/null differ
diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/otherTargetZones.png b/docs/source/programming_resources/vision/blocks_tfod_opmode/images/otherTargetZones.png
deleted file mode 100644
index 9c075ecb..00000000
Binary files a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/otherTargetZones.png and /dev/null differ
diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/quadAndSingle.png b/docs/source/programming_resources/vision/blocks_tfod_opmode/images/quadAndSingle.png
deleted file mode 100644
index 633795da..00000000
Binary files a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/quadAndSingle.png and /dev/null differ
diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/randomization.png b/docs/source/programming_resources/vision/blocks_tfod_opmode/images/randomization.png
deleted file mode 100644
index 473c476d..00000000
Binary files a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/randomization.png and /dev/null differ
diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/setZoom.png b/docs/source/programming_resources/vision/blocks_tfod_opmode/images/setZoom.png
deleted file mode 100644
index 1c020598..00000000
Binary files a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/setZoom.png and /dev/null differ
diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/singleAndQuad.png b/docs/source/programming_resources/vision/blocks_tfod_opmode/images/singleAndQuad.png
deleted file mode 100644
index 171e32f4..00000000
Binary files a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/singleAndQuad.png and /dev/null differ
diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/targetZoneA.png b/docs/source/programming_resources/vision/blocks_tfod_opmode/images/targetZoneA.png
deleted file mode 100644
index 5d7bbe81..00000000
Binary files a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/targetZoneA.png and /dev/null differ
diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/tfodBoundaries.png b/docs/source/programming_resources/vision/blocks_tfod_opmode/images/tfodBoundaries.png
deleted file mode 100644
index cb03f52f..00000000
Binary files a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/tfodBoundaries.png and /dev/null differ
diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/tfodBoundariesLandscape.png b/docs/source/programming_resources/vision/blocks_tfod_opmode/images/tfodBoundariesLandscape.png
deleted file mode 100644
index e322ef35..00000000
Binary files a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/tfodBoundariesLandscape.png and /dev/null differ
diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/tfodBoundariesLandscape.tmp b/docs/source/programming_resources/vision/blocks_tfod_opmode/images/tfodBoundariesLandscape.tmp
deleted file mode 100644
index 971fadd1..00000000
Binary files a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/tfodBoundariesLandscape.tmp and /dev/null differ
diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/tfodLandscape.png b/docs/source/programming_resources/vision/blocks_tfod_opmode/images/tfodLandscape.png
deleted file mode 100644
index a766aa83..00000000
Binary files a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/tfodLandscape.png and /dev/null differ
diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/tfodPortrait.png b/docs/source/programming_resources/vision/blocks_tfod_opmode/images/tfodPortrait.png
deleted file mode 100644
index 37df92c4..00000000
Binary files a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/tfodPortrait.png and /dev/null differ
diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/whileLoop.png b/docs/source/programming_resources/vision/blocks_tfod_opmode/images/whileLoop.png
deleted file mode 100644
index 217c89cc..00000000
Binary files a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/whileLoop.png and /dev/null differ
diff --git a/docs/source/programming_resources/vision/java_tfod_opmode/java-tfod-opmode.rst b/docs/source/programming_resources/vision/java_tfod_opmode/java-tfod-opmode.rst
index accf8d7f..d0552c74 100644
--- a/docs/source/programming_resources/vision/java_tfod_opmode/java-tfod-opmode.rst
+++ b/docs/source/programming_resources/vision/java_tfod_opmode/java-tfod-opmode.rst
@@ -1,6 +1,13 @@
 Java Sample Op Mode for TFOD
 =============================
 
+.. warning::
+   This Tutorial is outdated due to the TensorFlow updates for the
+   VisionPortal. We are working on updating this tutorial, please
+   bear with us as we update it. For more information on TensorFlow
+   for Java, see the VisionPortal
+   :ref:`TensorFlow Processor Initialization `.
+
 Creating the Op Mode
 ~~~~~~~~~~~~~~~~~~~~
diff --git a/docs/source/programming_resources/vision/tensorflow_cs_2023/images/TrainingBlownOut.psd b/docs/source/programming_resources/vision/tensorflow_cs_2023/images/TrainingBlownOut.psd
new file mode 100644
index 00000000..35dcb8c0
Binary files /dev/null and b/docs/source/programming_resources/vision/tensorflow_cs_2023/images/TrainingBlownOut.psd differ
diff --git a/docs/source/programming_resources/vision/tensorflow_cs_2023/images/angled_pixel_detection.png b/docs/source/programming_resources/vision/tensorflow_cs_2023/images/angled_pixel_detection.png
new file mode 100644
index 00000000..68c5d9ed
Binary files /dev/null and b/docs/source/programming_resources/vision/tensorflow_cs_2023/images/angled_pixel_detection.png differ
diff --git a/docs/source/programming_resources/vision/tensorflow_cs_2023/images/easypixeldetect.png b/docs/source/programming_resources/vision/tensorflow_cs_2023/images/easypixeldetect.png
new file mode 100644
index 00000000..3ce36736
Binary files /dev/null and b/docs/source/programming_resources/vision/tensorflow_cs_2023/images/easypixeldetect.png differ
diff --git a/docs/source/programming_resources/vision/tensorflow_cs_2023/images/lowanglepixel.png b/docs/source/programming_resources/vision/tensorflow_cs_2023/images/lowanglepixel.png
new file mode 100644
index 00000000..f198bb39
Binary files /dev/null and b/docs/source/programming_resources/vision/tensorflow_cs_2023/images/lowanglepixel.png differ
diff --git a/docs/source/programming_resources/vision/tensorflow_cs_2023/images/negatives.png b/docs/source/programming_resources/vision/tensorflow_cs_2023/images/negatives.png
new file mode 100644
index 00000000..29344769
Binary files /dev/null and b/docs/source/programming_resources/vision/tensorflow_cs_2023/images/negatives.png differ
diff --git a/docs/source/programming_resources/vision/tensorflow_cs_2023/images/pixel.png b/docs/source/programming_resources/vision/tensorflow_cs_2023/images/pixel.png
new file mode 100644
index 00000000..8d8d9fde
Binary files /dev/null and b/docs/source/programming_resources/vision/tensorflow_cs_2023/images/pixel.png differ
diff --git a/docs/source/programming_resources/vision/tensorflow_cs_2023/images/pixel.psd b/docs/source/programming_resources/vision/tensorflow_cs_2023/images/pixel.psd
new file mode 100644
index 00000000..7b4eb93a
Binary files /dev/null and b/docs/source/programming_resources/vision/tensorflow_cs_2023/images/pixel.psd differ
diff --git a/docs/source/programming_resources/vision/tensorflow_cs_2023/images/pixeldetect1.png b/docs/source/programming_resources/vision/tensorflow_cs_2023/images/pixeldetect1.png
new file mode 100644
index 00000000..74d083b5
Binary files /dev/null and b/docs/source/programming_resources/vision/tensorflow_cs_2023/images/pixeldetect1.png differ
diff --git a/docs/source/programming_resources/vision/tensorflow_cs_2023/images/pixeldetect2.png b/docs/source/programming_resources/vision/tensorflow_cs_2023/images/pixeldetect2.png
new file mode 100644
index 00000000..a8e8ae3c
Binary files /dev/null and b/docs/source/programming_resources/vision/tensorflow_cs_2023/images/pixeldetect2.png differ
diff --git a/docs/source/programming_resources/vision/tensorflow_cs_2023/images/pixeldetect3.png b/docs/source/programming_resources/vision/tensorflow_cs_2023/images/pixeldetect3.png
new file mode 100644
index 00000000..ce416f07
Binary files /dev/null and b/docs/source/programming_resources/vision/tensorflow_cs_2023/images/pixeldetect3.png differ
diff --git a/docs/source/programming_resources/vision/tensorflow_cs_2023/images/pixeldetect4.png b/docs/source/programming_resources/vision/tensorflow_cs_2023/images/pixeldetect4.png
new file mode 100644
index 00000000..146fd300
Binary files /dev/null and b/docs/source/programming_resources/vision/tensorflow_cs_2023/images/pixeldetect4.png differ
diff --git
a/docs/source/programming_resources/vision/tensorflow_cs_2023/images/pixelnodetect1.png b/docs/source/programming_resources/vision/tensorflow_cs_2023/images/pixelnodetect1.png new file mode 100644 index 00000000..cc6a1aed Binary files /dev/null and b/docs/source/programming_resources/vision/tensorflow_cs_2023/images/pixelnodetect1.png differ diff --git a/docs/source/programming_resources/vision/tensorflow_cs_2023/images/pixelnodetect2.png b/docs/source/programming_resources/vision/tensorflow_cs_2023/images/pixelnodetect2.png new file mode 100644 index 00000000..9b81d145 Binary files /dev/null and b/docs/source/programming_resources/vision/tensorflow_cs_2023/images/pixelnodetect2.png differ diff --git a/docs/source/programming_resources/vision/tensorflow_cs_2023/images/ribsexposed.png b/docs/source/programming_resources/vision/tensorflow_cs_2023/images/ribsexposed.png new file mode 100644 index 00000000..2493c0e0 Binary files /dev/null and b/docs/source/programming_resources/vision/tensorflow_cs_2023/images/ribsexposed.png differ diff --git a/docs/source/programming_resources/vision/tensorflow_cs_2023/images/trainingblownout.png b/docs/source/programming_resources/vision/tensorflow_cs_2023/images/trainingblownout.png new file mode 100644 index 00000000..c649d3d6 Binary files /dev/null and b/docs/source/programming_resources/vision/tensorflow_cs_2023/images/trainingblownout.png differ diff --git a/docs/source/programming_resources/vision/tensorflow_cs_2023/tensorflow-cs-2023.rst b/docs/source/programming_resources/vision/tensorflow_cs_2023/tensorflow-cs-2023.rst new file mode 100644 index 00000000..20d2c8b4 --- /dev/null +++ b/docs/source/programming_resources/vision/tensorflow_cs_2023/tensorflow-cs-2023.rst @@ -0,0 +1,462 @@ +TensorFlow for CENTERSTAGE presented by RTX +=========================================== + +What is TensorFlow? 
+~~~~~~~~~~~~~~~~~~~ + +*FIRST* Tech Challenge teams can use `TensorFlow Lite +`__, a lightweight version of Google’s +`TensorFlow `__ machine learning technology that +is designed to run on mobile devices such as an Android smartphone or the `REV +Control Hub `__. A *trained +TensorFlow model* was developed to recognize the white ``Pixel`` game piece used in +the **2023-2024 CENTERSTAGE presented by RTX** challenge. + +.. figure:: images/pixel.png + :align: center + :alt: CENTERSTAGE Pixel + :height: 400px + + This season’s TFOD model can recognize a white Pixel + +TensorFlow Object Detection (TFOD) has been integrated into the control system +software to identify a white ``Pixel`` during a match. The SDK (version +9.0) contains TFOD Sample OpModes and Detection Models that can +recognize the white ``Pixel`` at various poses (but not all). + +Also, *FIRST* Tech Challenge teams can use the :doc:`Machine Learning Toolchain +<../../../ftc_ml/index>` tool to train their own TFOD models. This allows teams +to recognize custom objects they place on Spike Marks in place of white ``Pixels`` +prior to the start of the match (also known as *Team Game Elements*). This +training should take into account conditions such as distance from +camera to target, angle, lighting, and especially backgrounds. Teams can +receive technical support using the Machine Learning Toolchain through the +`Machine Learning Forum `__. + +How Might a Team Use TensorFlow This Season? +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +For this season’s challenge the field is randomized during the Pre-Match stage. +This randomization determines whether the white ``Pixel`` is placed on +the Left, Center, or Right Spike Mark. During Autonomous, robots must +independently determine which of the three Spike Marks (Left, Center, Right) +the white ``Pixel`` was placed on.
To do this, robots using a webcam or a camera on +a Robot Controller smartphone can inspect Spike Mark locations to determine if +a white ``Pixel`` is present. Once the robot has correctly identified which Spike +Mark the white ``Pixel`` is on, the robot can then perform additional +actions based on that position that will yield additional points. + +Teams also have the opportunity to replace the white ``Pixel`` with an object of +their own creation, within a few guidelines specified in the Game Manual. This +object, or Team Game Element, can be optimized to help the team identify it +more easily, and custom TensorFlow inference models can be created to facilitate +recognition. As the field is randomized, the team's Team Game Element will be +placed on the Spike Marks as the white ``Pixel`` would have been, and the team must +identify and use the Team Game Element just as if it were a white ``Pixel`` on +a Spike Mark. + +Sample OpModes +~~~~~~~~~~~~~~ + +Teams have the option of using a custom inference model with the *FIRST* Tech +Challenge software or using the game-specific default model provided. As noted +above, the *FIRST* Machine Learning Toolchain is a streamlined tool for training +your own TFOD models. + +The *FIRST* Tech Challenge software (Robot Controller App and Android Studio +Project) includes Sample OpModes (Blocks and Java versions) that demonstrate +how to use **the default inference model**. These tutorials use examples from +previous *FIRST* Tech Challenge seasons, but the process they demonstrate +applies to any season. + +- :doc:`Blocks Sample OpMode for TensorFlow Object Detection <../blocks_tfod_opmode/blocks-tfod-opmode>` +- :doc:`Java Sample OpMode for TFOD <../java_tfod_opmode/java-tfod-opmode>` + +Using the Sample OpModes, teams can practice identifying white ``Pixels`` placed +on Spike Marks.
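As an illustration of the "which Spike Mark?" decision described above, here is a minimal, hedged Java sketch. The class name, the one-third/two-thirds split, and the frame width are hypothetical assumptions, not part of the SDK; in a real OpMode the bounding-box edges would come from a TFOD ``Recognition`` object's ``getLeft()``/``getRight()`` methods.

```java
/**
 * Hypothetical helper: maps the horizontal center of a TFOD detection's
 * bounding box to one of the three Spike Mark positions. The thirds-based
 * split is an assumption and must be tuned for your camera placement.
 */
public class SpikeMarkClassifier {

    public enum SpikeMark { LEFT, CENTER, RIGHT }

    public static SpikeMark classify(double leftPx, double rightPx, double frameWidthPx) {
        // Horizontal center of the detection's bounding box
        double centerX = (leftPx + rightPx) / 2.0;
        if (centerX < frameWidthPx / 3.0)       return SpikeMark.LEFT;
        if (centerX < 2.0 * frameWidthPx / 3.0) return SpikeMark.CENTER;
        return SpikeMark.RIGHT;
    }

    public static void main(String[] args) {
        // A detection whose bounding box spans x = 260..380 in a 640 px wide frame
        System.out.println(classify(260, 380, 640)); // center = 320 -> CENTER
    }
}
```

In practice the robot would call something like this once a detection labeled as the game piece is found, then branch its Autonomous routine on the returned position.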
The sample OpMode ``ConceptTensorFlowObjectDetectionEasy`` is +a simple OpMode for detecting a ``Pixel`` - it is intentionally simplified so that +beginner teams can perform basic ``Pixel`` detection. + +.. figure:: images/easypixeldetect.png + :align: center + :alt: Pixel Detection + :width: 75% + + Example Detection of a Pixel + +Note that if a detection falls below the +minimum confidence threshold, it will not be shown, so it is important +to set the minimum confidence threshold appropriately. + +.. note:: + The default minimum confidence threshold provided in the Sample OpMode (75%) + is only provided as an example; depending on local conditions (lighting, + image wear, etc.) it may be necessary to lower the minimum confidence in + order to increase TensorFlow's likelihood of seeing all possible image + detections. However, due to its simplified nature it is not possible to + change the minimum confidence using the ``Easy`` OpMode. Instead, you will + have to use the normal OpMode. + +Notes on Training the CENTERSTAGE Model +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +The ``Pixel`` game piece posed an interesting challenge for TensorFlow Object +Detection (TFOD). As is warned in the Machine Learning Toolchain documentation, +TFOD is not very good at recognizing and differentiating simple geometric +shapes, nor at distinguishing between specific colors; instead, TFOD is good at +detecting *patterns*. TFOD needs to be able to recognize a unique *pattern*, +and while there is a small amount of patterning in the ribbing of the +``Pixel``, in various lighting conditions it is doubtful how much of the ribbing +will be visible. Even in the image at the top of this document, the +ribbing can only be seen because of the specific shadows on the game piece.
Even in optimal testing environments, it was difficult to +capture video that highlighted the ribbing clearly enough for +TensorFlow to use it for pattern recognition. This underscored that optimal +``Pixel`` characteristics cannot be guaranteed for TFOD in unknown lighting +environments. + +Another challenge with training the model had to do with how the ``Pixel`` +looks at different pose angles. When the camera is a scant few inches +from the floor, the ``Pixel`` can almost look like a solid object; at times +there may be sufficient shadows to see that there is a hole in the center of +the object, but not always. However, if the camera was several inches off the +floor, the ``Pixel`` looked different, as the mat or colored tape could be +seen through the hole in the middle of the object. This confused the neural +network and made it extremely difficult to train, and the resulting models +eventually recognized any "sufficiently light colored blob" as a ``Pixel``. +This was not exactly ideal. + +Even with the best of images, the Machine Learning algorithms had a difficult +time determining what *was* a ``Pixel`` and what wasn't. What ended up working +was providing NOT ONLY images of the ``Pixel`` in different poses, but also +several white objects that WERE NOT a ``Pixel``. This was fundamental to +helping TensorFlow train itself to understand that "All ``Pixels`` are White +Objects, but not all White Objects are ``Pixels``." + +To provide some additional context, here are a few examples of labeled +frames that illustrate the challenges and techniques in dealing with the +``Pixel`` game piece. + +.. only:: html + + .. grid:: 1 2 2 2 + :gutter: 2 + + .. grid-item-card:: + :class-header: sd-bg-dark font-weight-bold sd-text-white + :class-body: sd-text-left body + + Training Frame 1 + + ^^^ + + ..
figure:: images/trainingblownout.png + :align: center + :alt: Pixel that's saturated + :width: 100 % + + +++ + + Pixel Saturation (No Ribs) + + .. grid-item-card:: + :class-header: sd-bg-dark font-weight-bold sd-text-white + :class-body: sd-text-left body + + (Rejected) Training Frame 2 + + ^^^ + + .. figure:: images/lowanglepixel.png + :align: center + :alt: Pixel at low angle + :width: 100 % + + +++ + + Camera Too Low (White Blob) + + .. grid-item-card:: + :class-header: sd-bg-dark font-weight-bold sd-text-white + :class-body: sd-text-left body + + Training Frame 3 + + ^^^ + + .. figure:: images/ribsexposed.png + :align: center + :alt: Rare good image + :width: 100 % + + +++ + + Actual Good Image with Ribbing (Rare) + + .. grid-item-card:: + :class-header: sd-bg-dark font-weight-bold sd-text-white + :class-body: sd-text-left body + + Training Frame 4 + + ^^^ + + .. figure:: images/negatives.png + :align: center + :alt: Pixel with non-pixel objects + :width: 100 % + + +++ + + Pixel with non-Pixel Objects + +.. only:: latex + + .. list-table:: Examples of Challenging Scenarios + :class: borderless + + * - .. image:: images/trainingblownout.png + - .. image:: images/lowanglepixel.png + * - .. image:: images/ribsexposed.png + - .. image:: images/negatives.png + + +Using the Default CENTERSTAGE Model +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +The previous section described how the height of the camera above the floor +has a huge effect on how the ``Pixel`` is seen; too low, and the object can look +like a single "blob" of color; too high, and the object will look similar to +a white donut. When training the model, it was decided that the donut approach was +the best - train the model to recognize the ``Pixel`` from above to provide a +clear and consistent view of the ``Pixel``. Toss in some angled shots as well, along +with some additional objects just to give TensorFlow some perspective, and +a model is born.
**But wait, how does that affect detection of the Pixel from the +robot's starting configuration?** + +In CENTERSTAGE, using the default CENTERSTAGE model, it is unlikely that a +robot will be able to get a consistent detection of a white ``Pixel`` from the +starting location. In order to get a good detection, the robot's camera needs +to be placed fairly high up, and angled down to be able to see the gray tile, +blue tape, or red tape peeking out of the center of the ``Pixel``. Thanks to +the center structure on the field this season, it's doubtful that a team will +want to have an exceptionally tall robot - likely no more than 14 inches tall, +but most will want to be under 12 inches to be safe (depending on your strategy +- please don't let this article define your game strategy!). With a camera mounted +that low, the shallow angle to the ``Pixel`` in the starting configuration makes +a consistent detection from there unlikely. + +Here are several images of detected and non-detected ``Pixels``. Notice that +the camera must be able to see through the center of the object to what's under the +``Pixel`` in order for the object to be detected as a ``Pixel``. + +.. only:: html + + .. grid:: 1 2 2 2 + :gutter: 2 + + .. grid-item-card:: + :class-header: sd-bg-dark font-weight-bold sd-text-white + :class-body: sd-text-left body + + Non-Detected Pixel #1 + + ^^^ + + .. figure:: images/pixelnodetect1.png + :align: center + :alt: Pixel Not Detected 1 + :width: 100 % + + +++ + + Pixel Not Detected, Angle Too Low + + .. grid-item-card:: + :class-header: sd-bg-dark font-weight-bold sd-text-white + :class-body: sd-text-left body + + Non-Detected Pixel #2 + + ^^^ + + .. figure:: images/pixelnodetect2.png + :align: center + :alt: Pixel Not Detected 2 + :width: 100 % + + +++ + + Pixel Not Detected, Angle Too Low + + .. grid-item-card:: + :class-header: sd-bg-dark font-weight-bold sd-text-white + :class-body: sd-text-left body + + Detected Pixel #1 + + ^^^ + + ..
figure:: images/pixeldetect1.png + :align: center + :alt: Pixel Detected 1 + :width: 100 % + + +++ + + Pixel Detected, Min Angle + + .. grid-item-card:: + :class-header: sd-bg-dark font-weight-bold sd-text-white + :class-body: sd-text-left body + + Detected Pixel #2 + + ^^^ + + .. figure:: images/pixeldetect2.png + :align: center + :alt: Pixel Detected 2 + :width: 100 % + + +++ + + Pixel Detected, Better Angle + + .. grid-item-card:: + :class-header: sd-bg-dark font-weight-bold sd-text-white + :class-body: sd-text-left body + + Detected Pixel #3 + + ^^^ + + .. figure:: images/pixeldetect3.png + :align: center + :alt: Pixel Detected 3 + :width: 100 % + + +++ + + Pixel Detected, Min Angle on Tape + + .. grid-item-card:: + :class-header: sd-bg-dark font-weight-bold sd-text-white + :class-body: sd-text-left body + + Detected Pixel #4 + + ^^^ + + .. figure:: images/pixeldetect4.png + :align: center + :alt: Pixel Detected 4 + :width: 100 % + + +++ + + Pixel Detected, Top-Down View + +.. only:: latex + + .. list-table:: Examples of Detected and Non-Detected Pixels + :class: borderless + + * - .. image:: images/pixelnodetect1.png + - .. image:: images/pixelnodetect2.png + * - .. image:: images/pixeldetect1.png + - .. image:: images/pixeldetect2.png + * - .. image:: images/pixeldetect3.png + - .. image:: images/pixeldetect4.png + +Therefore, there are two options for detecting the ``Pixel``: + +1. The camera can be on a retractable/moving system, so that the camera is elevated to + a desirable height during the start of Autonomous, and then retracts before the robot + moves around. + +2. The robot will have to drive closer to the Spike Marks in order to be able to + properly detect the ``Pixels``. + +For the second option (driving closer), the camera's field of view might pose a +challenge if it's desirable for all three Spike Marks to always be in view. If +using a Logitech C270 camera, switching to a Logitech C920 with a wider field +of view might help to some degree.
This completely depends on the height of the +camera and how far the robot must be driven in order to properly recognize a +``Pixel``. Teams can also simply choose to point their webcam at the CENTER and +LEFT Spike Marks, for example, and drive closer to those targets; if a +``Pixel`` is not detected on either, then by process of elimination it must be on the +RIGHT Spike Mark. + +Selecting objects for the Team Prop +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Selecting objects to use for your custom Team Prop can seem daunting. Questions +swirl like "What shapes are going to be recognized best?", "If I cannot have +multiple colors, how do I make patterns?", and "How do I make this easier on myself?". +Hopefully this section will help you understand a little more about TensorFlow +and how to get the most out of it. + +First, it's important to note that TensorFlow has the following quirks/behaviors: + +- In order to run TensorFlow on mobile phones, *FIRST* Tech Challenge uses a very small core + model resolution. This means the image is downscaled from the high-definition + webcam image to one that is only 300x300 pixels. As a result, medium and + small objects within the webcam images may be reduced to very small, + indistinguishable clusters of pixels in the target image. Keep the objects in + the view of the camera large, and train for a wide range of image sizes. +- TensorFlow is not very good at differentiating simple geometric shapes. TensorFlow + Object Detection is an object classifier, and similar geometric shapes will + classify similarly. At present, humans are much better at differentiating geometric + shapes than neural-net algorithms like TensorFlow. +- TensorFlow is great at pattern detection, but that means that within the footprint + of the object you need one or more repeating or unique patterns. The larger the + pattern, the easier it will be for TensorFlow to detect it at a + distance. + +So what kinds of patterns are good for TensorFlow?
Let's explore a few examples: + +1. Consider the shape of a `chess board Rook +`__. + The Rook itself is mostly uniform all around; no matter how you rotate the + object, it looks more or less the same. Not much patterning there. However, + the top of the Rook is distinctive and patterned. Exaggerating the + "battlements", the square-shaped parts of the top of the Rook, can provide + unique patterning that TensorFlow can distinguish. + +2. Consider the outline of a `chess Knight +`__, + as the "head" of the Knight is facing to the right or to the left. That + profile is very distinguishable as the head of a horse. That specific animal + is one that `model zoos +`__ + have been optimized for, so it's definitely a shape that TensorFlow can be + trained to recognize. + +3. Consider the patterning in a fancy `wrought-iron fence +`__. If made + thick enough, those repeating patterns can be recognized by a TensorFlow + model. Like the chess board Rook, it might be wise to make the object round + so that the pattern is similar and repeats no matter how the object is + rotated. If allowed, having multiple shades of color can also help create + more distinctive patterning on the object (e.g. multiple shades of red; you will + likely need to consult the `Q&A `__). + +4. TensorFlow can be used to + `Detect Plants `__, + even though all of the plants are a single color. Similar techniques can be reverse-engineered + (make objects with different "patterns" similar to plants) to create an object that + can be detected and differentiated from other objects on the game field. + +Hopefully this gives you quite a few ideas for how to approach this challenge!
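To make the 300x300 downscaling caveat above concrete, here is a small, hedged calculation. The class and method names are illustrative, and the 640 px frame width is an assumed webcam resolution, not a value from the SDK:

```java
/**
 * Illustrates how much an object shrinks when a webcam frame is scaled
 * down to the 300x300 model input used for TFOD inference.
 */
public class DownscaleDemo {

    // Width, in pixels, that an object occupies after the frame is resized to 300 px wide.
    static double scaledWidth(double objectWidthPx, double frameWidthPx) {
        return objectWidthPx * (300.0 / frameWidthPx);
    }

    public static void main(String[] args) {
        // A Team Prop 100 px wide in an assumed 640 px wide webcam frame...
        System.out.println(scaledWidth(100, 640)); // 46.875
        // ...while a 40 px pattern feature shrinks to under 19 px and may be lost.
        System.out.println(scaledWidth(40, 640));  // 18.75
    }
}
```

The takeaway matches the quirks list earlier: features that look comfortably large in the webcam preview can become just a handful of pixels in the model input, so patterns on a Team Prop should be large and bold.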
+ +Using Custom TensorFlow models in Blocks and Java +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Instructions on using Custom TensorFlow Models in +:ref:`Blocks `, +:ref:`OnBot-Java `, +and :ref:`Android Studio ` can be found +in the :doc:`FTC-ML documentation <../../../ftc_ml/index>`, in the +:doc:`Implementing in Robot Code <../../../ftc_ml/implement/index>` section. + diff --git a/docs/source/programming_resources/vision/tensorflow_ff_2021/tensorflow-ff-2021.rst b/docs/source/programming_resources/vision/tensorflow_ff_2021/tensorflow-ff-2021.rst index 5dae8544..562d7a7d 100644 --- a/docs/source/programming_resources/vision/tensorflow_ff_2021/tensorflow-ff-2021.rst +++ b/docs/source/programming_resources/vision/tensorflow_ff_2021/tensorflow-ff-2021.rst @@ -81,7 +81,7 @@ located. Click on the following links to learn more about these sample Op Modes. - :ref:`Blocks TensorFlow Object Detection - Example ` + Example ` - :ref:`Java TensorFlow Object Detection Example `