diff --git a/docs/source/ai/innovation_corner/images/new_prompt.png b/docs/source/ai/innovation_corner/images/new_prompt.png new file mode 100644 index 00000000..355e85d5 Binary files /dev/null and b/docs/source/ai/innovation_corner/images/new_prompt.png differ diff --git a/docs/source/ai/innovation_corner/innovation-corner.rst b/docs/source/ai/innovation_corner/innovation-corner.rst new file mode 100644 index 00000000..633c9c79 --- /dev/null +++ b/docs/source/ai/innovation_corner/innovation-corner.rst @@ -0,0 +1,248 @@ +AI Innovation Corner +==================== + +These articles are part of the *FIRST* Tech Challenge AI Innovation Corner. +This is a place where we'll post custom and curated articles relevant to +*FIRST* Tech Challenge as it relates to AI and its impact on our daily +lives and the world around us. We would like to thank Google for their +generous contributions to *FIRST* Tech Challenge to increase access to +our program in underserved communities and for providing +sponsorship and occasional technical direction for this content. + +Articles are ordered on this page chronologically, with the newest content +at the top of the page expanded. Just click to expand any other articles +you'd like to see. + +.. dropdown:: Week of 10/21/2024 "AI for Social Media" + :open: + + .. _ai_for_social_media: + + **AI for Social Media** + + This week in the AI Innovation Corner we’re going to explore using AI + to help you take one of the first steps that every *FIRST* Tech Challenge + team needs to take: increasing your team’s social media presence. + Specifically, we’re going to focus on social media campaigns and team + marketing, which can be critical to increasing the visibility of your + team in the community. This can help you with recruiting team members + and mentors, spreading awareness of your team and *FIRST*, and taking one + great step towards fundraising (which we’ll cover in more detail + in a future article).
+ + If you’ve never started a social media campaign for a *FIRST* Tech Challenge + team, you might have some ideas about how to get started, but you need some + direction and some more “concrete” examples pertaining to where to start, + what to create content about, and even where to post/host your content. + Gemini can be a great start in your brainstorming process. This simple + prompt can yield loads of valuable feedback: + + * *I have a high school FIRST Tech Challenge Team. What are things I need to + consider when starting my first social media campaign?* + + Once you’ve got a presence and start publishing content, you’ll engage with + lots of folks through social media. It can sometimes be difficult to + find the right way to interact with the community at large, and responders + who love (or are critical of) your content. AI to the rescue! AI can help + you find different ways of interacting - though remember, the ultimate + responsibility for what you post is yours; AI should only be used as a + means of brainstorming and/or refining what you want to say. + + * *Write a friendly, one-sentence response to the following social media + comment that shows appreciation for the comment and encourages the user to + turn on post notifications for more content from our FIRST Tech Challenge + team.* + + Can’t think of prompts to ask/use, or are you looking for some inspiration? + AI can help you there as well. You can ask AI to help you with building AI + prompts (asking the AI what to ask the AI is a pro tip!), brainstorming ways + to use AI for social media, and ways to accentuate what you do on social + media. Be sure to keep “ *FIRST* Tech Challenge ” in your prompts so that the AI + relates what you’re asking to the competition and the suggestions it + makes are relevant to your team and your activities.
+ + * *Can you tell me some common AI prompts for generating social media posts + for my FIRST Tech Challenge team?* + + Finally, it’s important to track your social media progress in order to + determine what’s going well, optimize your strategy for content in the + future, and determine where you’re getting the most “bang” for your + content-production “buck.” We can use the following prompt to ask AI to give + us some tips on the best ways to track our online footprint: + + * *How can I track my team's progress on social media?* + + Remember, social media is a marathon, not a sprint. It requires careful + planning, continuous improvement, consistency in creating content, and the + desire to build your team’s brand. Hard work and determination can pay off, + and AI can help guide you along the way! + +.. dropdown:: Week of 09/30/2024 "AI Competition Manual Assistant" + + .. _competition_manual_assistant: + + **AI Competition Manual Assistant** + + In our first article, Google AI Studio was introduced as a + tool to interact with Google’s Gemini AI. Gemini is one of several flagship + Large Language Models (LLMs) that have been meticulously trained on massive + amounts of text data to learn the patterns and relationships between units + of language - these models have actually learned how to recognize text-based + language, read and understand data, and synthesize what they learned to + predict and interpret future data. This is similar to the process humans use + to learn and understand the world around us! In Google AI Studio, users + can interact with the Gemini AI through “prompts” to perform tasks for them. + Prompts are instructions or queries given to an AI in order to generate a + response - the quality of the response is often directly related to the + quality of the prompt.
Through these prompts, Gemini can provide responses + based on the massive dataset that it has been pre-trained with, or users can + also provide additional documents, text, or media that the AI has never seen + before. These multimodal prompts, or prompts that include multiple types of + content, can be very beneficial in interacting with an AI using content that + is specific to a niche area like *FIRST* Tech Challenge. Can you think of ways + to put this ability to good use in *FIRST* Tech Challenge? + + In *FIRST* Tech Challenge, one of the first tasks teams have to do is to read + and understand the *FIRST* Tech Challenge Competition Manual. This can be a + very painstaking task, and even a skilled reader can miss subtle nuances + provided by the manual. However, an AI can break down and analyze the manual + in a matter of seconds, usually preserving the nuance provided in the + document. Users can then interact with the AI that has analyzed the + Competition Manual, and prompt the AI to provide insights - these questions + might involve locating specific information in the Competition + Manual, summarizing important rules or processes, or even asking the + AI to make a best guess. Through a process known as “role playing,” the user + can prompt the AI to take on a role or persona and direct the AI to follow + specific rules as it interacts with the user in subsequent prompts. The + remainder of this article is a tutorial on how to set up a “role playing” + session with the Google Gemini AI through Google AI Studio to analyze and + answer questions based on the *FIRST* Tech Challenge 2024-2025 Competition + Manual for the INTO THE DEEP presented by RTX season. While some of the + nuanced elements (like AI prompting) are only covered briefly in this + article, we’ll cover them in much more depth in future articles.
+ + Creating an AI expert using Google AI Studio is fairly straightforward - the + hard part is creating the proper prompt, and there we’ve got you covered. + + **Step 1** - First, log into `Google AI Studio + `_. You can do this by clicking the “Sign in + to Google AI Studio” button on the Google AI Studio home + page. You will need a Google account in order to do this - getting one is + left as an exercise to the reader. The Google account is used to store your + Google AI Studio prompt sessions and any content you upload to the model, + and to track usage of the Gemini APIs. + + **Step 2** - Let’s download the *FIRST* Tech Challenge Competition Manual to your + local computer. You can always find the latest Competition Manual PDF at the + following link: + + * https://ftc-resources.firstinspires.org/file/ftc/game/manual + + **Step 3** - Towards the top of the left navigation pane, there + is a circle with a plus inside it, with the text “Create new prompt” next to + it. Clicking on this button will start a new prompt - though if you’re + using Google AI Studio for the first time it’s likely a new prompt is + already open. + + Now that we have a new prompt, you can give the prompt a name. This will + allow the prompt to be saved in your "My Library" so you can come back and + interact with the prompt later without having to recreate the prompt session + every time. + + In the bottom center of the workspace is a text field where you can enter + your prompt (it has a default prompt of “Type something”). BEFORE we enter + our prompt, we want to add our Competition Manual PDF document. To add the + document, click the “Plus” icon to the right of the prompt area. This will + give you several options; choose “Upload to Drive”. You can either click the + “Browse” button to browse for the PDF of the Competition Manual that you + downloaded, or you can drag the file into the window.
This adds the + Competition Manual to your prompt. It may take a minute or two to upload the + PDF, so please be patient. + + .. figure:: images/new_prompt.* + :align: center + :alt: Google AI Studio Screen + :width: 75% + + Creating a prompt in Google AI Studio + + **Step 4** - Now that we have our document uploaded, we want to enter our + prompt. This prompt directs the AI in how to manage its responses, what + information to use when developing a response, and sets up the role that the + AI will attempt to play. Enter the following prompt and press the “Run” + button: + + * *You are a helpful AI assistant providing answers to questions about the + provided PDF. Do not use any prior knowledge; you have everything you need + to answer questions in the one PDF provided. Cite all references.* + + Once the AI processes the initial prompt, we can then ask questions, which + the AI will answer using the Competition Manual. Depending on the question, it + may take the AI anywhere from several seconds to a couple of minutes to answer - + be patient! Here are several questions you can ask (remember to press the + “Run” button after asking each question): + + Sample questions: + + * How many SAMPLES is a ROBOT allowed to CONTROL at a time? + * What are the different ways to score points? + * How large can a ROBOT be in its STARTING CONFIGURATION? + * Which awards are best for advancement? + * How do I write a strong engineering portfolio? + + Some prompts that require a lot of complex understanding or strategy can yield + results that are not correct, especially if there is information “understood + but not supplied.” For example, the following prompts provide some correct and + some incorrect information: + + Examples of difficult questions: + + * What is the maximum score for an alliance? + * Can ROBOTS pick up an opposing ALLIANCE'S SAMPLES? + * How many matches does a team play at an event?
+ + This example was specific to *FIRST* Tech Challenge, but this process can be + used for virtually any document or media. Using AI as an analysis assistant + can help you summarize news articles, find specific instructions in user + manuals, review books, and more! Remember that the quality of the responses + the AI provides is directly related to the quality of the prompt provided - + even so, the AI isn’t always going to be able to provide correct answers, so + it’s up to you to verify the correctness of all answers provided by an AI. + +.. dropdown:: Week of 09/09/2024 "AI Innovation Corner - Google AI Studio" + + .. _googleAIstudio: + + **AI Innovation Corner - Google AI Studio** + + This first article launched as part of the *Tech Tips of the Week*, but is + the official first article for the AI Innovation Corner. + + This week’s Tech Tip of the Week launches a new initiative in *FIRST* Tech + Challenge, an AI Innovation Corner. Generative AI has taken the world by + storm, becoming commonplace in everything from personal assistants and + search engines to recipe curation, music innovation, and vehicle maintenance! + Machine Learning AI has been a part of *FIRST* Tech Challenge in some way for + the past six years, and we’re now transitioning to help teams learn how to + use and incorporate Generative AI in their *FIRST* Tech Challenge experience + (while we’re learning ourselves!). + + The first step (or *FIRST* step?) to getting the most out of AI is choosing a + model. What do I mean by model? Every AI is a neural network that has been + trained on specific knowledge, giving it the ability to do specific things based + on that knowledge. Each version of this neural network is stored in a “model”. + Each company has different models available for different purposes, + though most models are variations on their flagship model (Gemini from Google, + GPT-4o from OpenAI, Claude from Anthropic, and so on).
Each company has + different web-based and API interfaces for interacting with their models, and + everyone has their favorite. In *FIRST* Tech Challenge, the standard tool we use + is `Google AI Studio `__ to interact with Gemini. + + Google AI Studio is free to use, but requires a Google account to access - + virtually all models require a login or API token of some kind to use. Google + AI Studio is our favorite for its list of examples (Prompt Gallery) and its + easy to use interface to save prompt sessions and resume them later. With + Google AI Studio, you also can select the specific model you want to use, and + when available you can choose to use preview versions of up and coming models. + + diff --git a/docs/source/apriltag/vision_portal/apriltag_localization/apriltag-localization.rst b/docs/source/apriltag/vision_portal/apriltag_localization/apriltag-localization.rst new file mode 100644 index 00000000..7c18c425 --- /dev/null +++ b/docs/source/apriltag/vision_portal/apriltag_localization/apriltag-localization.rst @@ -0,0 +1,411 @@ +AprilTag Localization +===================== + +Introduction +------------ + +In *FIRST Tech Challenge* (FTC), **localization** uses sensor inputs to +determine the robot's current place **on the game field**. + +Since 2023, an FTC OpMode can read the **pose** (position and orientation) of +an AprilTag, **relative to the camera**. An OpMode can also read that +AprilTag's **global** pose (on the FTC game field), stored as metadata. + +.. figure:: images/05-ITD-tags.png + :align: center + :width: 85% + :alt: Field Locations of AprilTags in INTO THE DEEP + + Field locations of AprilTags in INTO THE DEEP + +This means it's possible to calculate the camera's **global** pose -- namely +its position and orientation on the game field. + +Furthermore, if the camera's pose is specified in the robot's reference frame, +then an OpMode can determine the **global pose of the robot** (on the game +field). + +.. 
figure:: images/06-Res-Q-field-axes.png + :align: center + :width: 75% + :alt: Field Coordinate System + + FTC Field Coordinate System + +This **localization** is a calculation to determine the robot's global position +and rotation, based on sensing one or more fixed landmarks -- AprilTags in this +case. + +This capability is provided in 2024 with FTC SDK version 10.0, including a +Sample OpMode, thanks to `Dryw Wade `_. This +tutorial describes how to use that OpMode. + +Configuration +------------- + +*Skip this section if ...* + +* *the active robot configuration already contains "Webcam 1", or* +* *using the built-in camera of an Android phone as Robot Controller.* + +Before starting the programming, REV Control Hub users should make a robot +configuration that includes the USB webcam to be used for AprilTag +localization. + +For now, use the default webcam name, "Webcam 1". If a different name is +preferred, edit the Sample OpMode to agree with the exact webcam name in the +robot configuration. + +**Save and activate** that configuration; its name should appear on the paired +Driver Station screen. + +Open the Sample OpMode +---------------------- + +To learn about opening the Sample OpMode, select and read the Blocks **or** +Java section below: + +.. tab-set:: + .. tab-item:: Blocks + :sync: blocks + + On a laptop or desktop computer connected via Wi-Fi to the Robot + Controller, open the Chrome browser. Go to the REV Control Hub's address + http://192.168.43.1:8080 (or http://192.168.49.1:8080 for Android RC + phone) and click the Blocks tab. + + Click ``Create New OpMode``\ , enter a new name such as + "AprilTagLocalization_Darlene_v01", and select the Sample OpMode + ``ConceptAprilTagLocalization``. + + If using the built-in camera of an RC phone, change ``true`` to ``false`` + at the OpMode's first Block called ``set USE_WEBCAM``. + + Save the OpMode, time to try it! + + .. 
tab-item:: Java + :sync: java + + Open your choice of OnBot Java or Android Studio. + + In the ``teamcode`` folder, add/create a new OpMode with a name such as + "AprilTagLocalization_Oscar_v01.java", and select the Sample OpMode + ``ConceptAprilTagLocalization.java``. + + If using the built-in camera of an RC phone, change ``true`` to ``false`` + at about line 71 (\ ``USE_WEBCAM``\ ). + + Click "Build", time to try it! + +Run the Sample OpMode +--------------------- + +On the Driver Station, select the TeleOp OpMode that you just saved or built. + +Aim the camera at an AprilTag from the current FTC game. + +.. figure:: images/07-full-tag-11.png + :align: center + :width: 85% + :alt: Full AprilTag Image + + Full AprilTag Image + +For real results, testing should be done on an FTC game field with one or more +legal AprilTags posted in their correct positions. + +For simulated/casual testing, use a loose paper AprilTag of the correct size. +Or it may be on a computer screen, with the image zoomed to the **correct +physical size** (4 x 4 inches, in this example): + +.. figure:: images/08-tag-11.png + :align: center + :width: 85% + :alt: Partial AprilTag Sheet + + Partial AprilTag Sheet + +**Touch INIT only.** No telemetry will appear, but at this moment the DS +**Camera Stream** preview can be accessed. See the next section re. previews. + +After using the preview to aim at the AprilTag, touch the DS Start arrow. The +OpMode should give Telemetry showing the **localization results**: + +.. figure:: images/10-DS-screen.png + :align: center + :width: 75% + :alt: Driver Station Sample Output + + Driver Station Sample Output + +These details will be covered in a later section. In this example, the camera +is 12 inches directly in front of AprilTag #11 from INTO THE DEEP. + +Slowly move the camera around, keeping the AprilTag fully in the camera's view. +The telemetry will update with the camera's location on the field. + +It's working! 
Your OpMode can determine the **global pose** of the camera. A +later section describes how to get the global **robot pose**\ , based on the +camera's placement on the robot. + +*Skip the next two sections, if you already know how to use FTC previews.* + +DS Preview +---------- + +Before describing the telemetry data, this page offers two sections on seeing +the camera's view of the AprilTag with **previews**. Previewing is essential +for working with robot vision. + +On the Driver Station (DS), remain in INIT -- don't touch the Start button. + +At the top right corner, touch the 3-dots menu, then ``Camera Stream``. This +shows the camera's view; tap the image to refresh it. + +.. figure:: images/20-CameraStream.png + :align: center + :width: 85% + :alt: DS Camera Stream + + Example of DS Camera Stream + +For a BIG preview, touch the arrows at the bottom right corner. + +Or, select Camera Stream again, to return to the previous screen and its +Telemetry. + +RC Preview +---------- + +The Robot Controller (RC) device also makes a preview, called ``LiveView``. +This is full video, and is shown automatically on the screen of an RC phone. + +.. figure:: images/30-LiveView.png + :align: center + :width: 85% + :alt: Control Hub Preview + + Control Hub Preview + +The above preview is from a REV Control Hub. + +It has no physical screen, so you must plug in an HDMI monitor **or** use +open-source `scrcpy `_ (called +"screen copy") to see the preview on a laptop or computer that's connected via +Wi-Fi to the Control Hub. + +Basic Telemetry Data +-------------------- + +Let's look closer at the DS telemetry: + +.. figure:: images/40-telemetry.png + :align: center + :width: 85% + :alt: DS Telemetry Example + + DS Telemetry Example + +In this example, the camera is 12 inches directly in front of AprilTag #11 from +INTO THE DEEP. + +.. 
figure:: images/45-ITD-tag-numbers.png + :align: center + :width: 85% + :alt: Tag Locations for INTO THE DEEP + + Specific Tag Locations for INTO THE DEEP + + The center of AprilTag #11 is at position X = -72 inches from the center of the + field. This telemetry gives the camera's X position as (approximately) -60 + inches, namely 12 inches in front of that tag. + + The center of AprilTag #11 is at position Y = 48 inches from the center of the + field. This telemetry gives the camera's Y position as (approximately) 48 + inches, namely directly aligned (horizontally) with that tag. + + The center of AprilTag #11 is at position Z = 5.9 inches (above the mat). + This telemetry gives the camera's Z position as (approximately) 5.9 inches, + namely directly aligned (vertically) with that tag. + + The camera lens is parallel to the AprilTag, so the Pitch, Roll and Yaw values + should be orthogonal (0 or a multiple of 90 degrees). This telemetry confirms + the parallel orientation, with PRY values (approximately) 0 or 90 degrees. + + Reference Frames + ---------------- + + In the above example, the yaw angle is given as (approximately) -90 degrees. + But the camera is facing in the negative X direction, thus has a heading or yaw + angle of -180 degrees in the official FTC `field coordinate system + `_ + : + + .. figure:: images/50-field-axes.png + :align: center + :width: 85% + :alt: FTC Field Coordinate System + + FTC Field Coordinate System + + This sample OpMode uses a reference frame (coordinate system) that may be + different from what you expect from other FTC navigation applications, + including `IMU or robot axes + `_\ + , odometry device axes, and the FTC field system (shown above). These + differences typically result in basic and obvious changes in axis direction, + axis swapping, and orthogonal angles (90-degree increments). + + Learn and incorporate these differences into your OpMode, for the given + scenario of your AprilTag localization.
Manually adjust values as needed to +accomplish your specific navigation goals. + +**Evaluate the accuracy and reliability of AprilTag navigation**\ , with and +without offsets, smoothing and other adjustments. Some FTC teams use multiple +data sources for navigation. Base your robot strategy on capabilities +**demonstrated** through extensive testing and refinement. + +Camera Placement on Robot +------------------------- + +The Sample OpMode provides fields to specify the location and orientation of +the camera on the robot. The returned data will then represent the global +**robot pose** rather than the camera's pose. + +Subject to the reference frame caveat noted above, do your best to follow these +commented instructions, in the Blocks and Java Sample OpModes: + +.. + + Setting these values requires a definition of the axes of the camera and robot: + + **Camera axes:** + + * *Origin location:* Center of the lens + * *Axes orientation:* +x right, +y down, +z forward (from camera's perspective) + + **Robot axes:** (this is typical, but you can define this however you want) + + * *Origin location:* Center of the robot at field height + * *Axes orientation:* +x right, +y forward, +z upward + + **Position:** + + * If all values are zero (no translation), that implies the camera is at the + center of the robot. Suppose your camera is positioned 5 inches to the + left, 7 inches forward, and 12 inches above the ground - you would need to + set the position to (-5, 7, 12). + + **Orientation:** + + * If all values are zero (no rotation), that implies the camera is pointing + straight up. In most cases, you'll need to set the pitch to -90 degrees + (rotation about the x-axis), meaning the camera is horizontal. Use a yaw + of 0 if the camera is pointing forwards, +90 degrees if it's pointing + straight left, -90 degrees for straight right, etc. You can also set the + roll to +/-90 degrees if it's vertical, or 180 degrees if it's + upside-down. 
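To make these commented instructions concrete, here is a short sketch of the worked example above - a hypothetical camera mounted 5 inches to the left of robot center, 7 inches forward, and 12 inches above the field, horizontal and facing forwards. The ``Position`` and ``YawPitchRollAngles`` constructors match those used in the Java tab below; the mounting values themselves are illustrative assumptions, not values from the Sample OpMode:

```java
import org.firstinspires.ftc.robotcore.external.navigation.AngleUnit;
import org.firstinspires.ftc.robotcore.external.navigation.DistanceUnit;
import org.firstinspires.ftc.robotcore.external.navigation.Position;
import org.firstinspires.ftc.robotcore.external.navigation.YawPitchRollAngles;

// Hypothetical mounting: 5 inches left (x = -5), 7 inches forward (y = 7),
// 12 inches above the field (z = 12). The final argument is an
// acquisition timestamp, left as 0 for a fixed camera position.
Position cameraPosition = new Position(DistanceUnit.INCH, -5, 7, 12, 0);

// A pitch of -90 degrees tips the camera from "pointing straight up" down
// to horizontal; yaw 0 faces it forwards; roll 0 keeps it right-side up.
// Argument order is (unit, yaw, pitch, roll, acquisitionTime).
YawPitchRollAngles cameraOrientation =
        new YawPitchRollAngles(AngleUnit.DEGREES, 0, -90, 0, 0);
```

These two objects would then be passed to ``.setCameraPose(cameraPosition, cameraOrientation)`` on the ``AprilTagProcessor.Builder``.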
+ +To see the commands for setting **camera pose** (on the robot), select and read +the Blocks **or** Java section below: + +.. tab-set:: + .. tab-item:: Blocks + :sync: blocks + + .. figure:: images/60-camera-pose.png + :align: center + :width: 85% + :alt: Camera Pose Blocks + + The third Block called ``.setCameraPose`` can be found in the toolbox + under Vision/AprilTag/AprilTagProcessor.Builder. + + .. tab-item:: Java + :sync: java + + These lines show that the camera placement on the robot becomes part of + the AprilTag Processor, through the Java Builder pattern. + + .. code-block:: java + + import org.firstinspires.ftc.robotcore.external.navigation.Position; + import org.firstinspires.ftc.robotcore.external.navigation.YawPitchRollAngles; + . + Position cameraPosition = new Position(DistanceUnit.INCH, 0, 0, 0, 0); + YawPitchRollAngles cameraOrientation = new YawPitchRollAngles(AngleUnit.DEGREES, 0, -90, 0, 0); + . + AprilTagProcessor aprilTag = new AprilTagProcessor.Builder() + .setCameraPose(cameraPosition, cameraOrientation) + .build(); + +Reading Global Pose +------------------- + +To see the commands for reading **global robot pose** data, select and read the +Blocks **or** Java section below: + +.. tab-set:: + .. tab-item:: Blocks + :sync: blocks + + These green Blocks can be assigned to position Variables, for later use. + + .. figure:: images/70-robot-position.png + :align: center + :width: 85% + :alt: Robot Position Blocks + + Robot Position Blocks + + These green Blocks can be assigned to orientation Variables, for later + use. + + .. figure:: images/75-robot-orientation.png + :align: center + :width: 85% + :alt: Robot Orientation Blocks + + Robot Orientation Blocks + + .. tab-item:: Java + :sync: java + + These lines demonstrate assigning position and orientation values to + variables, for later use. These are typically "instant" values inside a + ``for`` loop, as used in the Sample OpMode. + + .. 
code-block:: java + + import org.firstinspires.ftc.vision.apriltag.AprilTagDetection; + . + AprilTagDetection detection; + . + double myX = detection.robotPose.getPosition().x; + double myY = detection.robotPose.getPosition().y; + double myZ = detection.robotPose.getPosition().z; + . + double myPitch = detection.robotPose.getOrientation().getPitch(AngleUnit.DEGREES); + double myRoll = detection.robotPose.getOrientation().getRoll(AngleUnit.DEGREES); + double myYaw = detection.robotPose.getOrientation().getYaw(AngleUnit.DEGREES); + +Summary +------- + +The 2024 FTC software allows **robot localization** using a camera and fixed +AprilTags on the field. This is done by combining three elements: + + +* basic AprilTag pose data, +* the tag's built-in metadata, and +* the camera's pose on the robot. + +AprilTag localization uses a reference frame (coordinate system) that may +differ from others, such as IMU or robot axes, odometry device axes, and the +FTC field system. Adjust as needed. + +Evaluate this navigation tool against other choices, and plan a robot strategy +based on demonstrated capability. + +Best of luck as you develop FTC robot navigation to reach your goals! 
+ diff --git a/docs/source/apriltag/vision_portal/apriltag_localization/images/05-ITD-tags.png b/docs/source/apriltag/vision_portal/apriltag_localization/images/05-ITD-tags.png new file mode 100644 index 00000000..e346f5c6 Binary files /dev/null and b/docs/source/apriltag/vision_portal/apriltag_localization/images/05-ITD-tags.png differ diff --git a/docs/source/apriltag/vision_portal/apriltag_localization/images/06-Res-Q-field-axes.png b/docs/source/apriltag/vision_portal/apriltag_localization/images/06-Res-Q-field-axes.png new file mode 100644 index 00000000..c1889547 Binary files /dev/null and b/docs/source/apriltag/vision_portal/apriltag_localization/images/06-Res-Q-field-axes.png differ diff --git a/docs/source/apriltag/vision_portal/apriltag_localization/images/07-full-tag-11.png b/docs/source/apriltag/vision_portal/apriltag_localization/images/07-full-tag-11.png new file mode 100644 index 00000000..11fe23af Binary files /dev/null and b/docs/source/apriltag/vision_portal/apriltag_localization/images/07-full-tag-11.png differ diff --git a/docs/source/apriltag/vision_portal/apriltag_localization/images/08-tag-11.png b/docs/source/apriltag/vision_portal/apriltag_localization/images/08-tag-11.png new file mode 100644 index 00000000..ae9b5998 Binary files /dev/null and b/docs/source/apriltag/vision_portal/apriltag_localization/images/08-tag-11.png differ diff --git a/docs/source/apriltag/vision_portal/apriltag_localization/images/10-DS-screen.png b/docs/source/apriltag/vision_portal/apriltag_localization/images/10-DS-screen.png new file mode 100644 index 00000000..f342f14d Binary files /dev/null and b/docs/source/apriltag/vision_portal/apriltag_localization/images/10-DS-screen.png differ diff --git a/docs/source/apriltag/vision_portal/apriltag_localization/images/20-CameraStream.png b/docs/source/apriltag/vision_portal/apriltag_localization/images/20-CameraStream.png new file mode 100644 index 00000000..d836b8d1 Binary files /dev/null and 
b/docs/source/apriltag/vision_portal/apriltag_localization/images/20-CameraStream.png differ diff --git a/docs/source/apriltag/vision_portal/apriltag_localization/images/30-LiveView.png b/docs/source/apriltag/vision_portal/apriltag_localization/images/30-LiveView.png new file mode 100644 index 00000000..518692c7 Binary files /dev/null and b/docs/source/apriltag/vision_portal/apriltag_localization/images/30-LiveView.png differ diff --git a/docs/source/apriltag/vision_portal/apriltag_localization/images/40-telemetry.png b/docs/source/apriltag/vision_portal/apriltag_localization/images/40-telemetry.png new file mode 100644 index 00000000..f260edf4 Binary files /dev/null and b/docs/source/apriltag/vision_portal/apriltag_localization/images/40-telemetry.png differ diff --git a/docs/source/apriltag/vision_portal/apriltag_localization/images/45-ITD-tag-numbers.png b/docs/source/apriltag/vision_portal/apriltag_localization/images/45-ITD-tag-numbers.png new file mode 100644 index 00000000..f6b5d4c8 Binary files /dev/null and b/docs/source/apriltag/vision_portal/apriltag_localization/images/45-ITD-tag-numbers.png differ diff --git a/docs/source/apriltag/vision_portal/apriltag_localization/images/50-field-axes.png b/docs/source/apriltag/vision_portal/apriltag_localization/images/50-field-axes.png new file mode 100644 index 00000000..c4540c52 Binary files /dev/null and b/docs/source/apriltag/vision_portal/apriltag_localization/images/50-field-axes.png differ diff --git a/docs/source/apriltag/vision_portal/apriltag_localization/images/60-camera-pose.png b/docs/source/apriltag/vision_portal/apriltag_localization/images/60-camera-pose.png new file mode 100644 index 00000000..8b00927b Binary files /dev/null and b/docs/source/apriltag/vision_portal/apriltag_localization/images/60-camera-pose.png differ diff --git a/docs/source/apriltag/vision_portal/apriltag_localization/images/70-robot-position.png 
b/docs/source/apriltag/vision_portal/apriltag_localization/images/70-robot-position.png new file mode 100644 index 00000000..8d44bdde Binary files /dev/null and b/docs/source/apriltag/vision_portal/apriltag_localization/images/70-robot-position.png differ diff --git a/docs/source/apriltag/vision_portal/apriltag_localization/images/75-robot-orientation.png b/docs/source/apriltag/vision_portal/apriltag_localization/images/75-robot-orientation.png new file mode 100644 index 00000000..7f6da8b5 Binary files /dev/null and b/docs/source/apriltag/vision_portal/apriltag_localization/images/75-robot-orientation.png differ diff --git a/docs/source/conf.py b/docs/source/conf.py index 3842eb44..9f602b75 100644 --- a/docs/source/conf.py +++ b/docs/source/conf.py @@ -323,9 +323,10 @@ def setup(app): # Set Cookie Banner to disabled by default cookiebanner_enabled = False +html_context = dict() + # Configure for local official-esque builds if(os.environ.get("LOCAL_DOCS_BUILD") == "true"): - html_context = dict() html_context['display_lower_left'] = True html_context['current_version'] = version @@ -346,6 +347,11 @@ def setup(app): cookiebanner_enabled = True extensions.append('sphinx_sitemap') html_baseurl = os.environ.get("FTCDOCS_URL", default="") + html_context['display_github'] = True + html_context['github_user'] = 'FIRST-Tech-Challenge' + html_context['github_repo'] = 'ftcdocs' + html_context['github_version'] = 'main/docs/source/' + # Configure RTD Theme html_theme_options = { diff --git a/docs/source/index.rst b/docs/source/index.rst index 206f7038..eed00b8a 100644 --- a/docs/source/index.rst +++ b/docs/source/index.rst @@ -43,6 +43,7 @@ to see why. game_specific_resources/blog/blog tech_tips/tech-tips + ai/innovation_corner/innovation-corner manuals/game_manuals/game_manuals Game Q&A Forum game_specific_resources/playing_field_resources/playing_field_resources @@ -83,6 +84,7 @@ to see why. 
VisionPortal Overview Webcams for VisionPortal Understanding AprilTag Values + AprilTag Localization AprilTag Test Images .. toctree:: diff --git a/docs/source/programming_resources/index.rst b/docs/source/programming_resources/index.rst index 4c4ecfc9..ba6c300d 100644 --- a/docs/source/programming_resources/index.rst +++ b/docs/source/programming_resources/index.rst @@ -79,6 +79,7 @@ Topics for programming with AprilTags VisionPortal Overview <../apriltag/vision_portal/visionportal_overview/visionportal-overview> Webcams for VisionPortal Understanding AprilTag Values <../apriltag/understanding_apriltag_detection_values/understanding-apriltag-detection-values> + AprilTag Localization <../apriltag/vision_portal/apriltag_localization/apriltag-localization> AprilTag Test Images <../apriltag/opmode_test_images/opmode-test-images> TensorFlow Programming diff --git a/docs/source/programming_resources/tutorial_specific/android_studio/installing_android_studio/Installing-Android-Studio.rst b/docs/source/programming_resources/tutorial_specific/android_studio/installing_android_studio/Installing-Android-Studio.rst index 9609fcf6..6d832c24 100644 --- a/docs/source/programming_resources/tutorial_specific/android_studio/installing_android_studio/Installing-Android-Studio.rst +++ b/docs/source/programming_resources/tutorial_specific/android_studio/installing_android_studio/Installing-Android-Studio.rst @@ -24,15 +24,21 @@ to verify that your system satisfies the list of minimum requirements: * `MacOS `__ * `Linux `__ -Java Development Kit -~~~~~~~~~~~~~~~~~~~~ -Earlier versions of Android Studio required that the user install the -Java Development Kit software separately. Current versions of Android -Studio incorporate the Java development software as part of the entire -install package. It is no longer necessary (or recommended) to install -the Java Development Kit separately. 
Instead, it is recommended that you -use the Java Development Kit that is included with Android Studio. +.. caution:: + + With the introduction of **Android Studio Ladybug**, the JDK that is packaged with + Android Studio is incompatible with the FtcRobotController workspace. If you install + or update an existing installation to Android Studio Ladybug, you will need to install + JDK 17 separately. + + Upon initial load of the FtcRobotController workspace using Android Studio Ladybug, + an error will be displayed during the Gradle sync, and Android Studio will recommend that + you upgrade Gradle. Do not upgrade Gradle. + + For more detailed instructions see: Configuring + + Downloading and Installing Android Studio ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ @@ -64,3 +70,76 @@ Once the setup package has downloaded, launch the application and follow the on-screen instructions to install Android Studio. +Configuring Android Studio (Ladybug and later) +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +.. note:: + + See the Caution above for why this is necessary. + +.. note:: + + Android Studio Ladybug updates the underlying JetBrains IntelliJ version such that + the interface is a VSCode look-alike. The screenshots in this documentation use the + JetBrains/Android Studio Classic UI, which is no longer supported natively by JetBrains. + To follow along, users should install the `Classic UI `__ plugin. + + +#. `Install JDK 17 `__ + if you do not already have it installed independently of Android Studio. + For example, if you were using Android Studio’s bundled JDK, installing Ladybug will + unhelpfully overwrite your old bundled JDK version. Note there’s a bug in the Settings → Build Tools → Gradle dialog + that may make you think your old version of the JDK is there, but it is not. You must use an unbundled version of the JDK. + +#. 
Go to File -> Settings and under Build, Execution, Deployment -> Build Tools -> Gradle use the Add JDK from disk option + to select the newly installed JDK 17. In the image below, take careful note of the directory paths + for the options labeled jbr-17 and jbr-21. Note that they are the same. This is the aforementioned UI bug, + and that is Android Studio overwriting your old JDK. In this image you’ll see I’ve selected the JDK that + was installed independently. + +.. image:: images/AndroidStudioSelectJdk.png + :align: center + +| + +Do Not Upgrade Gradle +~~~~~~~~~~~~~~~~~~~~~ + +If you have upgraded Android Studio from an earlier version to Ladybug, or you did not install and +configure the JDK prior to loading an FtcRobotController workspace, then Android Studio may present an +error and recommend that you upgrade Gradle. + +.. image:: images/AndroidStudioUpgradeGradle.png + :align: center + +| + +Do not do this. The FtcRobotController build is incompatible with the upgraded Gradle version. If you do, you +will be presented with another, even more indecipherable, error. + +To recover, you need to roll back the changes that Android Studio made upon that click. +To do that, select Git -> Uncommitted Changes -> Show Shelf. + +.. image:: images/AndroidStudioRecoverUpgrade.png + :align: center + +| + +That will show the changes you have in your workspace. You want to roll back the four Gradle files shown in the +following image. You can either select the Changes checkbox to select all files, or individually select the +Gradle files. Note that if you have changes in your workspace that haven’t been committed, be +careful not to select those files, or you may lose work. + +.. image:: images/AndroidStudioRollback.png + :align: center + +| + +Once you have the proper files selected, click the Rollback button. + +Resync, and you should be back at the error that prompted the Gradle upgrade recommendation in the first place. +From there, follow the instructions above to install JDK 17.
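As an aside, the same fix can also be sketched outside the IDE dialog: Gradle's standard `org.gradle.java.home` property in `gradle.properties` forces the build to use a specific JVM regardless of which JDK the IDE bundles. The JDK path below is a hypothetical example; point it at wherever your standalone JDK 17 actually lives.

```shell
# Sketch only: pin the Gradle build to a standalone JDK 17 via gradle.properties.
# The path below is a hypothetical example -- replace it with your real JDK 17 location.
JDK17_HOME="/usr/lib/jvm/jdk-17"

# Append the property to the workspace's gradle.properties file.
echo "org.gradle.java.home=${JDK17_HOME}" >> gradle.properties

# Confirm the property was recorded.
grep "org.gradle.java.home" gradle.properties
```

This is equivalent in effect to selecting the JDK in the Settings dialog, and has the advantage of surviving IDE updates that replace the bundled JDK.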
+ + + + diff --git a/docs/source/programming_resources/tutorial_specific/android_studio/installing_android_studio/images/AndroidStudioRecoverUpgrade.png b/docs/source/programming_resources/tutorial_specific/android_studio/installing_android_studio/images/AndroidStudioRecoverUpgrade.png new file mode 100644 index 00000000..168bf022 Binary files /dev/null and b/docs/source/programming_resources/tutorial_specific/android_studio/installing_android_studio/images/AndroidStudioRecoverUpgrade.png differ diff --git a/docs/source/programming_resources/tutorial_specific/android_studio/installing_android_studio/images/AndroidStudioRollback.png b/docs/source/programming_resources/tutorial_specific/android_studio/installing_android_studio/images/AndroidStudioRollback.png new file mode 100644 index 00000000..e6c02ba5 Binary files /dev/null and b/docs/source/programming_resources/tutorial_specific/android_studio/installing_android_studio/images/AndroidStudioRollback.png differ diff --git a/docs/source/programming_resources/tutorial_specific/android_studio/installing_android_studio/images/AndroidStudioSelectJdk.png b/docs/source/programming_resources/tutorial_specific/android_studio/installing_android_studio/images/AndroidStudioSelectJdk.png new file mode 100644 index 00000000..cbfc4266 Binary files /dev/null and b/docs/source/programming_resources/tutorial_specific/android_studio/installing_android_studio/images/AndroidStudioSelectJdk.png differ diff --git a/docs/source/programming_resources/tutorial_specific/android_studio/installing_android_studio/images/AndroidStudioUpgradeGradle.png b/docs/source/programming_resources/tutorial_specific/android_studio/installing_android_studio/images/AndroidStudioUpgradeGradle.png new file mode 100644 index 00000000..2a8fa617 Binary files /dev/null and b/docs/source/programming_resources/tutorial_specific/android_studio/installing_android_studio/images/AndroidStudioUpgradeGradle.png differ