Summary:
Pull Request resolved: #131
1. Update all packages to their most recent versions.
2. Add missing dependencies.
3. Add a version override for the `cheerio` module until a new version of `cmfcmf/docusaurus-search-local` is released (see cmfcmf/docusaurus-search-local#218).
4. Fix MDX pages failing to compile with Docusaurus 3.5.2.
Reviewed By: SeaOtocinclus
Differential Revision: D61535847
fbshipit-source-id: e52aefc9a2eade45202e618235e799804b13ace9
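The override mentioned in item 3 can be expressed via the `overrides` field in `package.json`. This is a sketch of the mechanism only; the exact version pinned by the PR is not stated here, so the version below is illustrative:

```json
{
  "overrides": {
    "cheerio": "1.0.0-rc.12"
  }
}
```

With Yarn, the equivalent field is `resolutions`.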
<tr>
<td><code>log_dir</code></td>
<td>Where log files are saved for each run. The filename is the timestamp from when the request tool started running.</td>
<td><code>/tmp/logs/projectaria/mps/</code></td>
</tr>
<tr>
<td><code>status_check_interval</code></td>
<td>How long the MPS CLI waits between checks on the status of data during the Processing stage.</td>
<td>30 secs</td>
</tr>
<tr>
<td colspan="3"><strong>HASH</strong></td>
</tr>
<tr>
<td><code>concurrent_hashes</code></td>
<td>Maximum number of files that can be concurrently hashed.</td>
<td>4</td>
</tr>
<tr>
<td><code>chunk_size</code></td>
<td>Chunk size to use for hashing.</td>
<td>10 MB</td>
</tr>
<tr>
<td colspan="3"><strong>Encryption</strong></td>
</tr>
<tr>
<td><code>chunk_size</code></td>
<td>Chunk size to use for encryption.</td>
<td>50 MB</td>
</tr>
<tr>
<td><code>concurrent_encryptions</code></td>
<td>Maximum number of files that can be concurrently encrypted.</td>
<td>4</td>
</tr>
<tr>
<td><code>delete_encrypted_files</code></td>
<td>Whether to delete the encrypted files after the upload is done. If set to false, local disk usage will double because an encrypted copy of each file is kept.</td>
<td>True</td>
</tr>
<tr>
<td><code>concurrent_health_checks</code></td>
<td>Maximum number of VRS file health checks that can be run concurrently.</td>
<td>2</td>
</tr>
<tr>
<td colspan="3"><strong>Uploads</strong></td>
</tr>
<tr>
<td><code>backoff</code></td>
<td>The exponential back-off factor for retries during failed uploads. The wait time between successive retries increases by this factor.</td>
<td>1.5</td>
</tr>
<tr>
<td><code>interval</code></td>
<td>Base delay between retries.</td>
<td>20 secs</td>
</tr>
<tr>
<td><code>retries</code></td>
<td>Maximum number of retries before giving up.</td>
<td>10</td>
</tr>
<tr>
<td><code>concurrent_uploads</code></td>
<td>Maximum number of concurrent uploads.</td>
<td>4</td>
</tr>
<tr>
<td><code>max_chunk_size</code></td>
<td>Maximum chunk size that can be used during uploads.</td>
<td>100 MB</td>
</tr>
<tr>
<td><code>min_chunk_size</code></td>
<td>The minimum upload chunk size.</td>
<td>5 MB</td>
</tr>
<tr>
<td><code>smoothing_window_size</code></td>
<td>Size of the smoothing window used to adjust the chunk size. This value defines the number of uploaded chunks that are used to determine the next chunk size.</td>
<td>10</td>
</tr>
<tr>
<td><code>target_chunk_upload_secs</code></td>
<td>Target time to upload a single chunk. If the chunks in a smoothing window take longer, the chunk size is reduced; if they take less time, it is increased.</td>
<td>3 secs</td>
</tr>
<tr>
<td colspan="3"><strong>GraphQL (Query the MPS backend for MPS Status)</strong></td>
</tr>
<tr>
<td><code>backoff</code></td>
<td>The exponential back-off factor for retries of failed queries. The wait time between successive retries increases by this factor.</td>
<td>1.5</td>
</tr>
<tr>
<td><code>interval</code></td>
<td>Base delay between retries.</td>
<td>4 secs</td>
</tr>
<tr>
<td><code>retries</code></td>
<td>Maximum number of retries before giving up.</td>
<td>3</td>
</tr>
<tr>
<td colspan="3"><strong>Download</strong></td>
</tr>
<tr>
<td><code>backoff</code></td>
<td>The exponential back-off factor for retries during failed downloads. The wait time between successive retries increases by this factor.</td>
<td>1.5</td>
</tr>
<tr>
<td><code>interval</code></td>
<td>Base delay between retries.</td>
<td>20 secs</td>
</tr>
<tr>
<td><code>retries</code></td>
<td>Maximum number of retries before giving up.</td>
<td>10</td>
</tr>
<tr>
<td><code>chunk_size</code></td>
<td>The chunk size to use for downloads.</td>
<td>10 MB</td>
</tr>
<tr>
<td><code>concurrent_downloads</code></td>
<td>Number of concurrent downloads.</td>
<td>10</td>
</tr>
<tr>
<td><code>delete_zip</code></td>
<td>The server sends the results in a zip file. This flag controls whether to delete the zip file after extraction.</td>
<td>True</td>
</tr>
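The HASH settings in the table above describe chunked file hashing. A minimal sketch of the technique, assuming SHA-256 (the table does not specify which hash algorithm the MPS CLI actually uses):

```python
import hashlib


def hash_file(path: str, chunk_size: int = 10 * 1024 * 1024) -> str:
    """Hash a file in chunk_size pieces (default 10 MB, matching the
    table's chunk_size) so large VRS files are never loaded fully into
    memory. SHA-256 is an illustrative choice, not confirmed by the docs."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()
```

Running `concurrent_hashes` of these in parallel (e.g. via a thread pool) bounds both memory use and disk contention.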
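The `backoff`, `interval`, and `retries` settings together define an exponential back-off schedule. A sketch of the implied wait times (function name is illustrative; this is not the MPS CLI's actual code):

```python
def retry_delays(interval: float, backoff: float, retries: int) -> list[float]:
    """Wait time before each successive retry: interval * backoff ** attempt."""
    return [interval * backoff ** attempt for attempt in range(retries)]


# Upload defaults from the table: interval=20 secs, backoff=1.5, retries=10.
print(retry_delays(20, 1.5, 10)[:4])  # [20.0, 30.0, 45.0, 67.5]
```

The same schedule applies to GraphQL queries and downloads, just with their own `interval` and `retries` defaults.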
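The interplay of `smoothing_window_size`, `target_chunk_upload_secs`, `min_chunk_size`, and `max_chunk_size` can be sketched as an adaptive controller. All names and the halve/double step are assumptions for illustration, not the MPS CLI's actual implementation:

```python
from collections import deque


class ChunkSizer:
    """Average the last smoothing_window_size chunk-upload times and nudge
    the chunk size toward target_chunk_upload_secs, clamped between
    min_chunk_size and max_chunk_size. Illustrative sketch only."""

    def __init__(self, min_chunk_size=5 * 2**20, max_chunk_size=100 * 2**20,
                 smoothing_window_size=10, target_chunk_upload_secs=3.0):
        self.min = min_chunk_size
        self.max = max_chunk_size
        self.target = target_chunk_upload_secs
        self.times = deque(maxlen=smoothing_window_size)
        self.chunk_size = min_chunk_size  # start conservatively

    def record(self, upload_secs: float) -> int:
        """Record one chunk's upload time and return the next chunk size."""
        self.times.append(upload_secs)
        avg = sum(self.times) / len(self.times)
        if avg > self.target:      # chunks too slow: shrink
            self.chunk_size = max(self.min, self.chunk_size // 2)
        elif avg < self.target:    # chunks fast: grow
            self.chunk_size = min(self.max, self.chunk_size * 2)
        return self.chunk_size
```

Fast chunks grow the next chunk size toward `max_chunk_size`; slow chunks shrink it toward `min_chunk_size`, so throughput adapts to the available bandwidth.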
Changed file: website/docs/ARK/sw_release_notes.mdx (1 addition, 1 deletion)
@@ -308,7 +308,7 @@ MPS requests using the Desktop app have been slightly restructured, you no longe

 The Streaming button in the dashboard has been renamed to Preview, to better reflect the capability provided by the Desktop app. Use the [Client SDK with CLI](/ARK/sdk/sdk.mdx) to stream data.

-Desktop app logs are now stored in ~/.aria/logs/aria_desktop_app_{date}_{time}.log
+Desktop app logs are now stored in `~/.aria/logs/aria_desktop_app_{date}_{time}.log`

 * Please note, the streaming preview available through the Desktop app is optimized for Profile 12.