
Commit d86ada4

Piotr Brzyski authored and facebook-github-bot committed
Docs - update modules (#131)
Summary: Pull Request resolved: #131

1. Update all packages to their most recent versions.
2. Add missing dependencies.
3. Add `cheerio` module's version override until a new version of `cmfcmf/docusaurus-search-local` is released (see cmfcmf/docusaurus-search-local#218).
4. Fix MDX pages failing to compile with Docusaurus 3.5.2.

Reviewed By: SeaOtocinclus

Differential Revision: D61535847

fbshipit-source-id: e52aefc9a2eade45202e618235e799804b13ace9
1 parent f387d23 commit d86ada4
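For context on item 3 of the summary: npm expresses this kind of transitive-dependency pin with an `overrides` entry in package.json (Yarn uses `resolutions`). A minimal sketch — the pinned version below is an illustrative assumption, not the value taken from this commit:

```json
{
  "overrides": {
    "cheerio": "1.0.0-rc.12"
  }
}
```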

File tree

6 files changed: +141 −222 lines changed

website/docs/ARK/mps/request_mps/mps_cli_guide.mdx

Lines changed: 85 additions & 170 deletions

@@ -79,240 +79,155 @@ Project Aria MPS CLI settings can be customized via the mps.ini file. This file

Every change in this hunk is the same reflow: each table cell that previously spread its content and the closing </td> across separate lines is collapsed onto a single line (presumably the MDX compile fix from item 4 of the summary). The settings table after the change:

<table>
  <tr>
    <td><strong>Setting</strong></td>
    <td><strong>Description</strong></td>
    <td><strong>Default Value</strong></td>
  </tr>
  <tr>
    <td colspan="3" ><strong>General settings</strong></td>
  </tr>
  <tr>
    <td><code>log_dir</code></td>
    <td>Where log files are saved for each run. The filename is the timestamp from when the request tool started running.</td>
    <td><code>/tmp/logs/projectaria/mps/</code></td>
  </tr>
  <tr>
    <td><code>status_check_interval</code></td>
    <td>How long the MPS CLI waits to check the status of data during the Processing stage.</td>
    <td>30 secs</td>
  </tr>
  <tr>
    <td colspan="3" > <strong>HASH</strong></td>
  </tr>
  <tr>
    <td><code>concurrent_hashes</code></td>
    <td>Maximum number of files that can be concurrently hashed</td>
    <td>4</td>
  </tr>
  <tr>
    <td><code>chunk_size</code></td>
    <td>Chunk size to use for hashing</td>
    <td>10MB</td>
  </tr>
  <tr>
    <td colspan="3" ><strong>Encryption</strong></td>
  </tr>
  <tr>
    <td><code>chunk_size</code></td>
    <td>Chunk size to use for encryption</td>
    <td>50MB</td>
  </tr>
  <tr>
    <td><code>concurrent_encryptions</code></td>
    <td>Maximum number of files that can be concurrently encrypted</td>
    <td>4</td>
  </tr>
  <tr>
    <td><code>delete_encrypted_files</code></td>
    <td>Whether to delete the encrypted files after upload is done. If you set this to false local disk usage will double, due to an encrypted copy of each file.</td>
    <td>True.</td>
  </tr>
  <tr>
    <td colspan="3" ><strong>Health Check</strong></td>
  </tr>
  <tr>
    <td><code>concurrent_health_checks</code></td>
    <td>Maximum number of VRS file healthchecks that can be run concurrently</td>
    <td>2</td>
  </tr>
  <tr>
    <td colspan="3" ><strong>Uploads</strong></td>
  </tr>
  <tr>
    <td><code>backoff</code></td>
    <td>The exponential back off factor for retries during failed uploads. The wait time between successive retries will increase with this factor.</td>
    <td>1.5</td>
  </tr>
  <tr>
    <td><code>interval</code></td>
    <td>Base delay between retries.</td>
    <td>20 secs</td>
  </tr>
  <tr>
    <td><code>retries</code></td>
    <td>Maximum number of retries before giving up.</td>
    <td>10</td>
  </tr>
  <tr>
    <td><code>concurrent_uploads</code></td>
    <td>Maximum number of concurrent uploads.</td>
    <td>4</td>
  </tr>
  <tr>
    <td><code>max_chunk_size</code></td>
    <td>Maximum chunk size that can be used during uploads.</td>
    <td>100 MB</td>
  </tr>
  <tr>
    <td><code>min_chunk_size</code></td>
    <td>The minimum upload chunk size.</td>
    <td>5 MB</td>
  </tr>
  <tr>
    <td><code>smoothing_window_size</code></td>
    <td>Size of the smoothing window to adjust the chunk size. This value defines the number of uploaded chunks that will be used to determine the next chunk size.</td>
    <td>10</td>
  </tr>
  <tr>
    <td><code>target_chunk_upload_secs</code></td>
    <td>Target time to upload a single chunk. If the chunks in a smoothing window take longer, we reduce the chunk size. If it takes less time, we increase the chunk size.</td>
    <td>3 secs</td>
  </tr>
  <tr>
    <td colspan="3" ><strong>GraphQL (Query the MPS backend for MPS Status)</strong></td>
  </tr>
  <tr>
    <td><code>backoff</code></td>
    <td>This the exponential back off factor for retries for failed queries. The wait time between successive retries will increase with this factor</td>
    <td>1.5</td>
  </tr>
  <tr>
    <td><code>interval</code></td>
    <td>Base delay between retries</td>
    <td>4 secs</td>
  </tr>
  <tr>
    <td><code>retries</code></td>
    <td>Maximum number of retries before giving up</td>
    <td>3</td>
  </tr>
  <tr>
    <td colspan="3" ><strong>Download</strong></td>
  </tr>
  <tr>
    <td><code>backoff</code></td>
    <td>This the exponential back off factor for retries during failed downloads. The wait time between successive retries will increase with this factor.</td>
    <td>1.5</td>
  </tr>
  <tr>
    <td><code>interval</code></td>
    <td>Base delay between retries</td>
    <td>20 secs</td>
  </tr>
  <tr>
    <td><code>retries</code></td>
    <td>Maximum number of retries before giving up</td>
    <td>10</td>
  </tr>
  <tr>
    <td><code>chunk_size</code></td>
    <td>The chunk size to use for downloads</td>
    <td>10MB</td>
  </tr>
  <tr>
    <td><code>concurrent_downloads</code></td>
    <td>Number of concurrent downloads</td>
    <td>10</td>
  </tr>
  <tr>
    <td><code>delete_zip</code></td>
    <td>The server will send the results in a zip file. This flag controls whether to delete the zip file after extraction or not</td>
    <td>True</td>
  </tr>
</table>
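The file these settings live in is plain INI. As a minimal sketch of overriding a few of the documented defaults — the section names follow the table's group headings and are assumptions, not taken from this commit:

```ini
; Hypothetical mps.ini excerpt -- section names are assumed,
; not confirmed by this diff. Interval values are in seconds.
[GENERAL]
log_dir = /tmp/logs/projectaria/mps/
status_check_interval = 30

[UPLOAD]
backoff = 1.5
interval = 20
retries = 10
concurrent_uploads = 4
```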

website/docs/ARK/sw_release_notes.mdx

Lines changed: 1 addition & 1 deletion

@@ -308,7 +308,7 @@ MPS requests using the Desktop app have been slightly restructured, you no longe
 
 The Streaming button in the dashboard has been renamed to Preview, to better reflect the capability provided by the Desktop app. Use the [Client SDK with CLI](/ARK/sdk/sdk.mdx) to stream data.
 
-Desktop app logs are now stored in ~/.aria/logs/aria_desktop_app_{date}_{time}.log
+Desktop app logs are now stored in `~/.aria/logs/aria_desktop_app_{date}_{time}.log`
 
 * Please note, the streaming preview available through the Desktop app is optimized for Profile 12.

website/docs/ARK/troubleshooting/desktop_app_logs.mdx

Lines changed: 2 additions & 2 deletions

@@ -32,7 +32,7 @@ open /Applications/Aria.app --args --log-output
 ```
 
 3. The Aria Desktop app should then open with logging enabled
-4. The logs will be stored in ~/.aria/logs/aria_desktop_app_{date}_{time}.log
+4. The logs will be stored in `~/.aria/logs/aria_desktop_app_{date}_{time}.log`
 5. Logs will continue to be added to this file until you quit the app
 6. If you generate logs at a later time, they will be appended to the end of these logs

@@ -48,6 +48,6 @@ open /Applications/Aria.app --args --log-output
 ```
 
 3. The Aria Desktop app should then open with logging enabled.
-4. The logs will be stored in The logs will be stored in ~/.aria/logs/aria_desktop_app_{date}_{time}.log
+4. The logs will be stored in The logs will be stored in `~/.aria/logs/aria_desktop_app_{date}_{time}.log`
 5. Logs will continue to be added to this file until you quit the app
 6. If you generate logs at a later time, they will be appended to the end of these logs
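Because the filename embeds `{date}_{time}`, each run of the app writes a new log file. A small sketch for following the most recent one (assumes a POSIX shell and that at least one log already exists):

```sh
# List logs newest-first, keep the first, and follow it.
ls -t ~/.aria/logs/aria_desktop_app_*.log | head -n 1 | xargs tail -f
```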

website/docs/data_utilities/core_code_snippets/data_provider.mdx

Lines changed: 6 additions & 6 deletions

@@ -111,9 +111,9 @@ Project Aria data has four kinds of TimeDomain entries. We strongly recommend al
 * TimeDomain.TIME_CODE - for multiple devices
 
 You can also search using three different time query options:
-* TimeQueryOptions.BEFORE (default): last data with t <= t_query
-* TimeQueryOptions.AFTER : first data with t >= t_query
-* TimeQueryOptions.CLOSEST : the data where |t - t_query| is smallest
+* TimeQueryOptions.BEFORE (default): last data with `t <= t_query`
+* TimeQueryOptions.AFTER : first data with `t >= t_query`
+* TimeQueryOptions.CLOSEST : the data where `|t - t_query|` is smallest
 
 ```python
 for stream_id in provider.get_all_streams():

@@ -133,9 +133,9 @@ for stream_id in provider.get_all_streams():
 * TimeDomain::TimeCode - for multiple devices
 
 You can also search using three different time query options:
-* TimeQueryOptions::Before : last data with t <= t_query
-* TimeQueryOptions::After : first data with t >= t_query
-* TimeQueryOptions::Closest : the data where |t - t_query| is smallest
+* TimeQueryOptions::Before : last data with `t <= t_query`
+* TimeQueryOptions::After : first data with `t >= t_query`
+* TimeQueryOptions::Closest : the data where `|t - t_query|` is smallest
 
 ```cpp
 for (const auto& streamId : provider.getAllStreams()) {
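To make the three query options concrete, a hedged Python sketch using projectaria_tools — `create_vrs_data_provider`, `get_first_time_ns`, and `get_image_data_by_time_ns` match the library's Python bindings as I understand them, but treat the file path and stream ID as illustrative assumptions:

```python
from projectaria_tools.core import data_provider
from projectaria_tools.core.sensor_data import TimeDomain, TimeQueryOptions
from projectaria_tools.core.stream_id import StreamId

# Path and stream are illustrative; "214-1" is Aria's RGB camera stream.
provider = data_provider.create_vrs_data_provider("recording.vrs")
stream_id = StreamId("214-1")

# Query one second after the first RGB sample, in the device clock.
t_query = provider.get_first_time_ns(stream_id, TimeDomain.DEVICE_TIME) + 1_000_000_000

# BEFORE: last sample with t <= t_query (the default);
# AFTER:  first sample with t >= t_query;
# CLOSEST: the sample minimizing |t - t_query|.
image = provider.get_image_data_by_time_ns(
    stream_id, t_query, TimeDomain.DEVICE_TIME, TimeQueryOptions.CLOSEST
)
```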
