We use the same partitioned MNIST dataset as FedAvg[[4]](#4) to evaluate the correctness of FedLab. The rounds for FedAvg to achieve 97% test accuracy on MNIST using 2NN with E=5, as reported in [[4]](#4) / by FedLab:
<table>
<tr>
<td rowspan="2">Sample ratio</td>
<td colspan="2">IID</td>
<td colspan="2">Non-IID</td>
</tr>
<tr>
<td>B=∞</td>
<td>B=10</td>
<td>B=∞</td>
<td>B=10</td>
</tr>
<tr>
<td>0.0</td>
<td>1455 / 1293</td>
<td>316 / 77</td>
<td>4278 / 1815</td>
<td>3275 / 1056</td>
</tr>
<tr>
<td>0.1</td>
<td>1474 / 1230</td>
<td>87 / 43</td>
<td>1796 / 2778</td>
<td>664 / 439</td>
</tr>
<tr>
<td>0.2</td>
<td>1658 / 1234</td>
<td>77 / 37</td>
<td>1528 / 2805</td>
<td>619 / 427</td>
</tr>
<tr>
<td>0.5</td>
<td>-- / 1229</td>
<td>75 / 36</td>
<td>-- / 3034</td>
<td>443 / 474</td>
</tr>
<tr>
<td>1.0</td>
<td>-- / 1284</td>
<td>70 / 35</td>
<td>-- / 3154</td>
<td>380 / 507</td>
</tr>
</table>
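For reference, the server-side step behind these round counts is FedAvg's dataset-size-weighted average of client models. The sketch below is a minimal illustration with plain NumPy parameter dicts, not FedLab's actual serializer or aggregator API; all names are illustrative:

```python
# Minimal FedAvg aggregation sketch: the server averages client
# parameters weighted by each client's local dataset size.
import numpy as np

def fedavg_aggregate(client_models, client_sizes):
    """Weighted average of client parameter dicts (illustrative only)."""
    total = float(sum(client_sizes))
    return {
        k: sum(n / total * m[k] for m, n in zip(client_models, client_sizes))
        for k in client_models[0]
    }

# Two toy clients, each holding a single parameter tensor:
a = {"w": np.array([0.0, 0.0])}
b = {"w": np.array([1.0, 1.0])}
avg = fedavg_aggregate([a, b], client_sizes=[1, 3])  # weights 1/4 and 3/4
```

With sizes 1 and 3, client `b` contributes three quarters of the average, which is how FedAvg accounts for unbalanced local datasets.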
### Computation Efficiency

Time cost in 100 rounds (50 clients sampled per round) under different acceleration settings. 1M-10P stands for the simulation running on 1 machine with 4 GPUs and 10 processes; 2M-10P stands for the simulation running on 2 machines with 4 GPUs and 10 processes (5 processes on each machine).
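The shape of such a single-machine multi-process round can be sketched with Python's standard `multiprocessing` pool. This is not FedLab's launcher; `local_update` is a hypothetical stand-in for a client's local training:

```python
# Sketch of a 1M-10P style round: 50 sampled clients' local computations
# spread over a pool of 10 worker processes (names are illustrative).
from multiprocessing import Pool

def local_update(client_id):
    # Placeholder for one client's local training; returns a toy value.
    return client_id * 0.1

def run_round(n_clients=50, n_procs=10):
    # One FL round: farm the sampled clients out to the worker pool.
    with Pool(processes=n_procs) as pool:
        return pool.map(local_update, range(n_clients))

if __name__ == "__main__":
    results = run_round()  # 50 client updates computed by 10 processes
```

Across multiple machines (2M-10P), the same idea is typically realized with a distributed communication backend instead of a local pool.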
We provide a few performance baselines in communication-efficient federated learning, including QSGD and top-k. In the experiment setting, we choose $\alpha = 0.5$ in the label Dirichlet partition of MNIST with 100 clients. We run 200 rounds of FedAvg with sample ratio 0.1 (10 clients per round), where each client performs 5 local epochs of full-batch SGD with learning rate 0.1. We report the top-1 test accuracy and the communication volume during training.
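The two compressors named above can be sketched on a flat gradient vector: top-k keeps only the k largest-magnitude entries, while QSGD stochastically quantizes each entry to one of s uniform levels of the vector's norm. A minimal NumPy sketch, not FedLab's compressor API:

```python
# Top-k sparsification and QSGD stochastic quantization on a flat vector.
import numpy as np

def topk_compress(g, k):
    """Keep the k largest-magnitude entries, zero the rest."""
    out = np.zeros_like(g)
    idx = np.argsort(np.abs(g))[-k:]
    out[idx] = g[idx]
    return out

def qsgd_quantize(g, s, rng):
    """Unbiased stochastic quantization to s levels (QSGD-style)."""
    norm = np.linalg.norm(g)
    if norm == 0:
        return np.zeros_like(g)
    level = np.abs(g) / norm * s
    lower = np.floor(level)
    prob = level - lower                       # probability of rounding up
    q = lower + (rng.random(g.shape) < prob)   # stochastic rounding
    return np.sign(g) * norm * q / s

rng = np.random.default_rng(0)
g = np.array([0.1, -2.0, 0.3, 1.5])
sparse = topk_compress(g, k=2)        # only -2.0 and 1.5 survive
quant = qsgd_quantize(g, s=256, rng=rng)
```

Top-k reduces communication by sending only k values plus their indices; QSGD sends a norm, signs, and small integer levels, and the stochastic rounding keeps the estimate unbiased.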