diff --git a/website/docs/ARK/mps/request_mps/mps_cli_guide.mdx b/website/docs/ARK/mps/request_mps/mps_cli_guide.mdx
index 1080dd73e..5c979dda0 100644
--- a/website/docs/ARK/mps/request_mps/mps_cli_guide.mdx
+++ b/website/docs/ARK/mps/request_mps/mps_cli_guide.mdx
@@ -79,240 +79,155 @@ Project Aria MPS CLI settings can be customized via the mps.ini file. This file
- Setting
- |
- Description
- |
- Default Value
- |
+ Setting |
+ Description |
+ Default Value |
- General settings
- |
+ General settings |
- log_dir
- |
- Where log files are saved for each run. The filename is the timestamp from when the request tool started running.
- |
- /tmp/logs/projectaria/mps/
- |
+ log_dir |
+ Where log files are saved for each run. The filename is the timestamp from when the request tool started running. |
+ /tmp/logs/projectaria/mps/ |
- status_check_interval
- |
- How long the MPS CLI waits to check the status of data during the Processing stage.
- |
- 30 secs
- |
+ status_check_interval |
+ How long the MPS CLI waits between status checks during the Processing stage. |
+ 30 secs |
- HASH
- |
+ HASH |
- concurrent_hashes
- |
- Maximum number of files that can be concurrently hashed
- |
- 4
- |
+ concurrent_hashes |
+ Maximum number of files that can be concurrently hashed |
+ 4 |
- chunk_size
- |
- Chunk size to use for hashing
- |
- 10MB
- |
+ chunk_size |
+ Chunk size to use for hashing |
+ 10MB |
- Encryption
- |
+ Encryption |
- chunk_size
- |
- Chunk size to use for encryption
- |
- 50MB
- |
+ chunk_size |
+ Chunk size to use for encryption |
+ 50MB |
- concurrent_encryptions
- |
- Maximum number of files that can be concurrently encrypted
- |
- 4
- |
+ concurrent_encryptions |
+ Maximum number of files that can be concurrently encrypted |
+ 4 |
- delete_encrypted_files
- |
- Whether to delete the encrypted files after upload is done. If you set this to false local disk usage will double, due to an encrypted copy of each file.
- |
- True.
- |
+ delete_encrypted_files |
+ Whether to delete the encrypted files after upload is done. If you set this to false, local disk usage will double, because an encrypted copy of each file is kept. |
+ True |
- Health Check
- |
+ Health Check |
- concurrent_health_checks
- |
- Maximum number of VRS file healthchecks that can be run concurrently
- |
- 2
- |
+ concurrent_health_checks |
+ Maximum number of VRS file health checks that can be run concurrently |
+ 2 |
- Uploads
- |
+ Uploads |
- backoff
- |
- The exponential back off factor for retries during failed uploads. The wait time between successive retries will increase with this factor.
- |
- 1.5
- |
+ backoff |
+ The exponential backoff factor for retries during failed uploads. The wait time between successive retries increases by this factor. |
+ 1.5 |
- interval
- |
- Base delay between retries.
- |
- 20 secs
- |
+ interval |
+ Base delay between retries. |
+ 20 secs |
- retries
- |
- Maximum number of retries before giving up.
- |
- 10
- |
+ retries |
+ Maximum number of retries before giving up. |
+ 10 |
- concurrent_uploads
- |
- Maximum number of concurrent uploads.
- |
- 4
- |
+ concurrent_uploads |
+ Maximum number of concurrent uploads. |
+ 4 |
- max_chunk_size
- |
- Maximum chunk size that can be used during uploads.
- |
- 100 MB
- |
+ max_chunk_size |
+ Maximum chunk size that can be used during uploads. |
+ 100 MB |
- min_chunk_size
- |
- The minimum upload chunk size.
- |
- 5 MB
- |
+ min_chunk_size |
+ The minimum upload chunk size. |
+ 5 MB |
- smoothing_window_size
- |
- Size of the smoothing window to adjust the chunk size. This value defines the number of uploaded chunks that will be used to determine the next chunk size.
- |
- 10
- |
+ smoothing_window_size |
+ Size of the smoothing window to adjust the chunk size. This value defines the number of uploaded chunks that will be used to determine the next chunk size. |
+ 10 |
- target_chunk_upload_secs
- |
- Target time to upload a single chunk. If the chunks in a smoothing window take longer, we reduce the chunk size. If it takes less time, we increase the chunk size.
- |
- 3 secs
- |
+ target_chunk_upload_secs |
+ Target time to upload a single chunk. If the chunks in a smoothing window take longer, the chunk size is reduced; if they take less time, it is increased. |
+ 3 secs |
- GraphQL (Query the MPS backend for MPS Status)
- |
+ GraphQL (Query the MPS backend for MPS Status) |
- backoff
- |
- This the exponential back off factor for retries for failed queries. The wait time between successive retries will increase with this factor
- |
- 1.5
- |
+ backoff |
+ The exponential backoff factor for retries of failed queries. The wait time between successive retries increases by this factor. |
+ 1.5 |
- interval
- |
- Base delay between retries
- |
- 4 secs
- |
+ interval |
+ Base delay between retries |
+ 4 secs |
- retries
- |
- Maximum number of retries before giving up
- |
- 3
- |
+ retries |
+ Maximum number of retries before giving up |
+ 3 |
- Download
- |
+ Download |
- backoff
- |
- This the exponential back off factor for retries during failed downloads. The wait time between successive retries will increase with this factor.
- |
- 1.5
- |
+ backoff |
+ The exponential backoff factor for retries during failed downloads. The wait time between successive retries increases by this factor. |
+ 1.5 |
- interval
- |
- Base delay between retries
- |
- 20 secs
- |
+ interval |
+ Base delay between retries |
+ 20 secs |
- retries
- |
- Maximum number of retries before giving up
- |
- 10
- |
+ retries |
+ Maximum number of retries before giving up |
+ 10 |
- chunk_size
- |
- The chunk size to use for downloads
- |
- 10MB
- |
+ chunk_size |
+ The chunk size to use for downloads |
+ 10MB |
- concurrent_downloads
- |
- Number of concurrent downloads
- |
- 10
- |
+ concurrent_downloads |
+ Number of concurrent downloads |
+ 10 |
- delete_zip
- |
- The server will send the results in a zip file. This flag controls whether to delete the zip file after extraction or not
- |
- True
- |
+ delete_zip |
+ The server sends the results in a zip file. This flag controls whether the zip file is deleted after extraction. |
+ True |
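
As a quick illustration of the settings table above, here is a minimal sketch of inspecting a few `mps.ini` values with Python's standard `configparser`. The file location and the section names (`UPLOAD`, `DOWNLOAD`) are assumptions inferred from the table groupings, not confirmed by the MPS CLI source; check your local `mps.ini` for the exact layout.

```python
# Minimal sketch: read a few mps.ini values with the standard library.
# The path and section names below are assumptions; adjust to your setup.
from configparser import ConfigParser
from pathlib import Path

config = ConfigParser()
config.read(Path.home() / ".projectaria" / "mps.ini")  # assumed location of mps.ini

# Fall back to the documented defaults when a key or section is absent.
upload_retries = config.getint("UPLOAD", "retries", fallback=10)
upload_backoff = config.getfloat("UPLOAD", "backoff", fallback=1.5)
download_chunk_size = config.get("DOWNLOAD", "chunk_size", fallback="10MB")

print(f"upload retries={upload_retries}, backoff={upload_backoff}, "
      f"download chunk size={download_chunk_size}")
```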
diff --git a/website/docs/ARK/sw_release_notes.mdx b/website/docs/ARK/sw_release_notes.mdx
index e79388581..c74b2a3c2 100644
--- a/website/docs/ARK/sw_release_notes.mdx
+++ b/website/docs/ARK/sw_release_notes.mdx
@@ -308,7 +308,7 @@ MPS requests using the Desktop app have been slightly restructured, you no longe
The Streaming button in the dashboard has been renamed to Preview, to better reflect the capability provided by the Desktop app. Use the [Client SDK with CLI](/ARK/sdk/sdk.mdx) to stream data.
-Desktop app logs are now stored in ~/.aria/logs/aria_desktop_app_{date}_{time}.log
+Desktop app logs are now stored in `~/.aria/logs/aria_desktop_app_{date}_{time}.log`
* Please note, the streaming preview available through the Desktop app is optimized for Profile 12.
diff --git a/website/docs/ARK/troubleshooting/desktop_app_logs.mdx b/website/docs/ARK/troubleshooting/desktop_app_logs.mdx
index 9dca76156..c474e888f 100644
--- a/website/docs/ARK/troubleshooting/desktop_app_logs.mdx
+++ b/website/docs/ARK/troubleshooting/desktop_app_logs.mdx
@@ -32,7 +32,7 @@ open /Applications/Aria.app --args --log-output
```
3. The Aria Desktop app should then open with logging enabled
-4. The logs will be stored in ~/.aria/logs/aria_desktop_app_{date}_{time}.log
+4. The logs will be stored in `~/.aria/logs/aria_desktop_app_{date}_{time}.log`
5. Logs will continue to be added to this file until you quit the app
6. If you generate logs at a later time, they will be appended to the end of these logs
@@ -48,6 +48,6 @@ open /Applications/Aria.app --args --log-output
```
3. The Aria Desktop app should then open with logging enabled.
-4. The logs will be stored in The logs will be stored in ~/.aria/logs/aria_desktop_app_{date}_{time}.log
+4. The logs will be stored in `~/.aria/logs/aria_desktop_app_{date}_{time}.log`
5. Logs will continue to be added to this file until you quit the app
6. If you generate logs at a later time, they will be appended to the end of these logs
diff --git a/website/docs/data_utilities/core_code_snippets/data_provider.mdx b/website/docs/data_utilities/core_code_snippets/data_provider.mdx
index 08bbdde81..9ef7e6619 100644
--- a/website/docs/data_utilities/core_code_snippets/data_provider.mdx
+++ b/website/docs/data_utilities/core_code_snippets/data_provider.mdx
@@ -111,9 +111,9 @@ Project Aria data has four kinds of TimeDomain entries. We strongly recommend al
* TimeDomain.TIME_CODE - for multiple devices
You can also search using three different time query options:
-* TimeQueryOptions.BEFORE (default): last data with t <= t_query
-* TimeQueryOptions.AFTER : first data with t >= t_query
-* TimeQueryOptions.CLOSEST : the data where |t - t_query| is smallest
+* TimeQueryOptions.BEFORE (default): last data with `t <= t_query`
+* TimeQueryOptions.AFTER : first data with `t >= t_query`
+* TimeQueryOptions.CLOSEST : the data where `|t - t_query|` is smallest
```python
for stream_id in provider.get_all_streams():
@@ -133,9 +133,9 @@ for stream_id in provider.get_all_streams():
* TimeDomain::TimeCode - for multiple devices
You can also search using three different time query options:
-* TimeQueryOptions::Before : last data with t <= t_query
-* TimeQueryOptions::After : first data with t >= t_query
-* TimeQueryOptions::Closest : the data where |t - t_query| is smallest
+* TimeQueryOptions::Before : last data with `t <= t_query`
+* TimeQueryOptions::After : first data with `t >= t_query`
+* TimeQueryOptions::Closest : the data where `|t - t_query|` is smallest
```cpp
for (const auto& streamId : provider.getAllStreams()) {
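
The TimeQueryOptions bullets above describe the BEFORE/AFTER/CLOSEST semantics; the following is a minimal Python sketch of passing one of these options to the data provider. The recording path and stream ID are placeholders, and the calls reflect the projectaria_tools Python API as documented; verify against your installed version.

```python
# Minimal sketch (placeholder path and stream ID) of querying data by time.
from projectaria_tools.core import data_provider
from projectaria_tools.core.sensor_data import TimeDomain, TimeQueryOptions
from projectaria_tools.core.stream_id import StreamId

provider = data_provider.create_vrs_data_provider("recording.vrs")  # placeholder path
stream_id = StreamId("214-1")  # RGB camera stream

# Query the RGB frame closest to one second after the stream's first timestamp.
query_ns = provider.get_first_time_ns(stream_id, TimeDomain.DEVICE_TIME) + 1_000_000_000
image_data = provider.get_image_data_by_time_ns(
    stream_id, query_ns, TimeDomain.DEVICE_TIME, TimeQueryOptions.CLOSEST
)
```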
diff --git a/website/docs/open_datasets/aria_everyday_activities_dataset/aea_data_format.mdx b/website/docs/open_datasets/aria_everyday_activities_dataset/aea_data_format.mdx
index 1d280ec7d..06fb84142 100644
--- a/website/docs/open_datasets/aria_everyday_activities_dataset/aea_data_format.mdx
+++ b/website/docs/open_datasets/aria_everyday_activities_dataset/aea_data_format.mdx
@@ -40,34 +40,22 @@ Table 2: