@@ -1302,65 +1302,72 @@ remaining to do.
Concurrent Requests
-------------------

-Thanks to responses being lazy, requests are always managed concurrently.
-On a fast enough network, the following code makes 379 requests in less than
-half a second when cURL is used::
+Symfony's HTTP client makes asynchronous HTTP requests by default. This means
+you don't need to configure anything special to send multiple requests in parallel
+and process them efficiently.

+Here's a practical example that fetches metadata about several Symfony
+components from the Packagist API in parallel::
+
+    $packages = ['console', 'http-kernel', '...', 'routing', 'yaml'];

    $responses = [];
-    for ($i = 0; $i < 379; ++$i) {
-        $uri = "https://http2.akamai.com/demo/tile-$i.png";
-        $responses[] = $client->request('GET', $uri);
+    foreach ($packages as $package) {
+        $uri = sprintf('https://repo.packagist.org/p2/symfony/%s.json', $package);
+        // send all requests concurrently (they won't block until the response content is read)
+        $responses[$package] = $client->request('GET', $uri);
    }

-    foreach ($responses as $response) {
-        $content = $response->getContent();
-        // ...
+    $results = [];
+    // iterate through the responses and read their content
+    foreach ($responses as $package => $response) {
+        // reading the content is what blocks until the response arrives
+        $results[$package] = $response->toArray();
    }

-As you can read in the first "for" loop, requests are issued but are not consumed
-yet. That's the trick when concurrency is desired: requests should be sent
-first and be read later on. This will allow the client to monitor all pending
-requests while your code waits for a specific one, as done in each iteration of
-the above "foreach" loop.
+As you can see, the requests are sent in the first loop, but their responses
+aren't consumed until the second one. This is the key to achieving parallel and
+concurrent execution: dispatch all requests first, and read them later.
+This allows the client to handle all pending responses efficiently while your
+code waits only when necessary.

.. note::

-    The maximum number of concurrent requests that you can perform depends on
-    the resources of your machine (e.g. your operating system may limit the
-    number of simultaneous reads of the file that stores the certificates
-    file). Make your requests in batches to avoid these issues.
+    The maximum number of concurrent requests depends on your system's resources
+    (e.g. the operating system might limit the number of simultaneous connections
+    or access to certificate files). To avoid hitting these limits, consider
+    processing requests in batches.

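+A minimal sketch of this batching approach (assuming ``$client`` is an
+``HttpClientInterface`` instance and ``$urls`` is a list of URLs to fetch;
+the batch size of 50 is an arbitrary example value)::
+
+    foreach (array_chunk($urls, 50) as $batch) {
+        $responses = [];
+        foreach ($batch as $url) {
+            $responses[] = $client->request('GET', $url);
+        }
+
+        // reading the contents completes this batch before the next one starts
+        foreach ($responses as $response) {
+            $content = $response->getContent();
+            // ...
+        }
+    }
+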
Multiplexing Responses
~~~~~~~~~~~~~~~~~~~~~~

-If you look again at the snippet above, responses are read in requests' order.
-But maybe the 2nd response came back before the 1st? Fully asynchronous operations
-require being able to deal with the responses in whatever order they come back.
+In the previous example, responses are read in the same order as the requests
+were sent. However, it's possible that, for instance, the second response arrives
+before the first. To handle such cases efficiently, you need fully asynchronous
+processing, which allows responses to be handled in whatever order they arrive.

-In order to do so, the
-:method:`Symfony\\Contracts\\HttpClient\\HttpClientInterface::stream`
-accepts a list of responses to monitor. As mentioned
+To achieve this, the
+:method:`Symfony\\Contracts\\HttpClient\\HttpClientInterface::stream` method
+can be used to monitor a list of responses. As mentioned
:ref:`previously <http-client-streaming-responses>`, this method yields response
-chunks as they arrive from the network. By replacing the "foreach" in the
-snippet with this one, the code becomes fully async::
+chunks as soon as they arrive over the network. Replacing the standard ``foreach``
+loop with the following version enables true asynchronous behavior::

    foreach ($client->stream($responses) as $response => $chunk) {
        if ($chunk->isFirst()) {
-            // headers of $response just arrived
-            // $response->getHeaders() is now a non-blocking call
+            // the $response headers just arrived
+            // $response->getHeaders() is now non-blocking
        } elseif ($chunk->isLast()) {
-            // the full content of $response just completed
-            // $response->getContent() is now a non-blocking call
+            // the full $response body has been received
+            // $response->getContent() is now non-blocking
        } else {
-            // $chunk->getContent() will return a piece
-            // of the response body that just arrived
+            // $chunk->getContent() returns a piece of the body that just arrived
        }
    }

.. tip::

-    Use the ``user_data`` option combined with ``$response->getInfo('user_data')``
-    to track the identity of the responses in your foreach loops.
+    Use the ``user_data`` option along with ``$response->getInfo('user_data')``
+    to identify each response during streaming.

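+For example, the ``user_data`` option can carry the package name from the
+earlier Packagist example, so each response can be identified while streaming
+(a sketch reusing the ``$client`` and ``$packages`` variables from above)::
+
+    $responses = [];
+    foreach ($packages as $package) {
+        $uri = sprintf('https://repo.packagist.org/p2/symfony/%s.json', $package);
+        $responses[] = $client->request('GET', $uri, [
+            'user_data' => $package,
+        ]);
+    }
+
+    foreach ($client->stream($responses) as $response => $chunk) {
+        if ($chunk->isLast()) {
+            // retrieve the value passed to the user_data option
+            $package = $response->getInfo('user_data');
+            // ...
+        }
+    }
+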
Dealing with Network Timeouts
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~