@@ -1314,65 +1314,72 @@ remaining to do.
 Concurrent Requests
 -------------------

-Thanks to responses being lazy, requests are always managed concurrently.
-On a fast enough network, the following code makes 379 requests in less than
-half a second when cURL is used::
+Symfony's HTTP client makes asynchronous HTTP requests by default. This means
+you don't need to configure anything special to send multiple requests in parallel
+and process them efficiently.

+Here's a practical example that fetches metadata about several Symfony
+components from the Packagist API in parallel::
+
+    $packages = ['console', 'http-kernel', '...', 'routing', 'yaml'];
     $responses = [];
-    for ($i = 0; $i < 379; ++$i) {
-        $uri = "https://http2.akamai.com/demo/tile-$i.png";
-        $responses[] = $client->request('GET', $uri);
+    foreach ($packages as $package) {
+        $uri = sprintf('https://repo.packagist.org/p2/symfony/%s.json', $package);
+        // send all requests concurrently (they won't block until response content is read)
+        $responses[$package] = $client->request('GET', $uri);
     }

-    foreach ($responses as $response) {
-        $content = $response->getContent();
-        // ...
+    $results = [];
+    // iterate through the responses and read their contents
+    foreach ($responses as $package => $response) {
+        // process the response data (toArray() decodes the JSON payload)
+        $results[$package] = $response->toArray();
     }

-As you can read in the first "for" loop, requests are issued but are not consumed
-yet. That's the trick when concurrency is desired: requests should be sent
-first and be read later on. This will allow the client to monitor all pending
-requests while your code waits for a specific one, as done in each iteration of
-the above "foreach" loop.
+As you can see, the requests are sent in the first loop, but their responses
+aren't consumed until the second one. This is the key to achieving concurrency:
+dispatch all requests first and read them later. This allows the client to
+handle all pending responses efficiently, while your code only waits when necessary.

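+For comparison, here is a minimal sketch of what *not* to do: reading each
+response right after sending its request (reusing the ``$client`` and
+``$packages`` variables from the example above) forces the requests to
+complete one after another instead of concurrently::
+
+    $results = [];
+    foreach ($packages as $package) {
+        $uri = sprintf('https://repo.packagist.org/p2/symfony/%s.json', $package);
+        $response = $client->request('GET', $uri);
+
+        // calling toArray() here blocks until this response is fully received,
+        // so the next request isn't sent before the previous one has finished
+        $results[$package] = $response->toArray();
+    }
+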
 .. note::

-    The maximum number of concurrent requests that you can perform depends on
-    the resources of your machine (e.g. your operating system may limit the
-    number of simultaneous reads of the file that stores the certificates
-    file). Make your requests in batches to avoid these issues.
+    The maximum number of concurrent requests depends on your system's resources
+    (e.g. the operating system might limit the number of simultaneous connections
+    or access to certificate files). To avoid hitting these limits, consider
+    processing requests in batches, as shown in the sketch below.

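+The following is one possible way to batch the requests from the previous
+example (the batch size of 50 is an arbitrary value, not a recommendation)::
+
+    $results = [];
+    foreach (array_chunk($packages, 50) as $batch) {
+        $responses = [];
+        foreach ($batch as $package) {
+            $uri = sprintf('https://repo.packagist.org/p2/symfony/%s.json', $package);
+            $responses[$package] = $client->request('GET', $uri);
+        }
+
+        // reading the contents here completes the current batch before the next one starts
+        foreach ($responses as $package => $response) {
+            $results[$package] = $response->toArray();
+        }
+    }
+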
 Multiplexing Responses
 ~~~~~~~~~~~~~~~~~~~~~~

-If you look again at the snippet above, responses are read in requests' order.
-But maybe the 2nd response came back before the 1st? Fully asynchronous operations
-require being able to deal with the responses in whatever order they come back.
+In the previous example, responses are read in the same order as the requests
+were sent. However, it's possible that, for instance, the second response arrives
+before the first. To handle such cases efficiently, you need fully asynchronous
+processing, which lets you deal with responses in whatever order they arrive.

-In order to do so, the
-:method:`Symfony\\Contracts\\HttpClient\\HttpClientInterface::stream`
-accepts a list of responses to monitor. As mentioned
+To achieve this, the
+:method:`Symfony\\Contracts\\HttpClient\\HttpClientInterface::stream` method
+can be used to monitor a list of responses. As mentioned
 :ref:`previously <http-client-streaming-responses>`, this method yields response
-chunks as they arrive from the network. By replacing the "foreach" in the
-snippet with this one, the code becomes fully async::
+chunks as soon as they arrive over the network. Replacing the standard ``foreach``
+loop with the following version enables true asynchronous behavior::

     foreach ($client->stream($responses) as $response => $chunk) {
         if ($chunk->isFirst()) {
-            // headers of $response just arrived
-            // $response->getHeaders() is now a non-blocking call
+            // the $response headers just arrived
+            // $response->getHeaders() is now non-blocking
         } elseif ($chunk->isLast()) {
-            // the full content of $response just completed
-            // $response->getContent() is now a non-blocking call
+            // the full $response body has been received
+            // $response->getContent() is now non-blocking
         } else {
-            // $chunk->getContent() will return a piece
-            // of the response body that just arrived
+            // $chunk->getContent() returns a piece of the body that just arrived
         }
     }

 .. tip::

-    Use the ``user_data`` option combined with ``$response->getInfo('user_data')``
-    to track the identity of the responses in your foreach loops.
+    Use the ``user_data`` option along with ``$response->getInfo('user_data')``
+    to identify each response during streaming, as in the sketch below.

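+Here is a minimal sketch of that technique, reusing the ``$client`` and
+``$packages`` variables from the first example to tag each response with the
+name of its package::
+
+    $results = [];
+    $responses = [];
+    foreach ($packages as $package) {
+        $uri = sprintf('https://repo.packagist.org/p2/symfony/%s.json', $package);
+        $responses[] = $client->request('GET', $uri, [
+            // any value stored here can be read back later via getInfo('user_data')
+            'user_data' => $package,
+        ]);
+    }
+
+    foreach ($client->stream($responses) as $response => $chunk) {
+        if ($chunk->isLast()) {
+            // identify the response through the value passed in "user_data"
+            $package = $response->getInfo('user_data');
+            $results[$package] = $response->toArray();
+        }
+    }
+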
 Dealing with Network Timeouts
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~