The naive solution is to read the entire file into memory and convert it there. However, this requires a lot of memory for large inputs (e.g. a 5 GiB file), so I would rather not do that.
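For illustration, a minimal sketch of that naive approach, assuming a hypothetical `sample.csv` and a latin-1 to utf-8 conversion (both are assumptions, not from the issue):

```python
import pathlib

# Naive approach: read the entire file into memory, then convert it.
# A small sample file stands in for the real (potentially 5 GiB) input;
# the filename and encodings are illustrative assumptions.
sample = pathlib.Path("sample.csv")
sample.write_text("id,label\n1,foo\n", encoding="latin-1")

text = sample.read_text(encoding="latin-1")   # whole file held in memory
payload = text.encode("utf-8")                # second full copy, as bytes
print(len(payload))
```

For a 5 GiB file, both `text` and `payload` live in memory at once, which is exactly the cost we want to avoid.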
We could implement a streaming conversion ourselves, but determining the total size of the resulting byte string without processing the entire input is not straightforward. That may still be acceptable, since it is only linear effort. If we cannot provide the size of the stream, requests switches to chunked transfer encoding, and I am not sure whether the Data Attribute Recommendation service supports that.
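The "linear effort" option could look roughly like the following: one full pass over the input just to sum up the encoded size, assuming the source can be re-read afterwards for the actual upload (the sample data, encodings, and chunk size are illustrative assumptions):

```python
import codecs
import io

def encoded_size(stream, src_encoding="latin-1", dst_encoding="utf-8",
                 chunk_size=64 * 1024):
    """One linear pass over `stream` to compute the transcoded byte size
    without holding the whole input in memory."""
    size = 0
    byte_chunks = iter(lambda: stream.read(chunk_size), b"")
    for text in codecs.iterdecode(byte_chunks, src_encoding):
        size += len(text.encode(dst_encoding))
    return size

# "é" is 1 byte in latin-1 but 2 bytes in utf-8, so the size changes.
data = "id,label\n1,f\u00e9e\n".encode("latin-1")
print(encoded_size(io.BytesIO(data)))
```

With the size known up front, it could be passed as a `Content-Length` header so requests does not have to fall back to chunked transfer encoding.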
Another option is the codecs.iterdecode function. It returns an iterator, which again causes requests to use chunked transfer encoding.
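A minimal sketch of what that would look like (the sample data, chunk size, and encodings are assumptions; the `requests.post` line in the comment is illustrative and not executed here):

```python
import codecs
import io

# codecs.iterdecode turns an iterable of byte chunks into an iterable of
# decoded text chunks without materializing the whole file. Passing such an
# iterable as `data=` to requests.post() makes requests use chunked
# transfer encoding, since the total length is unknown up front.
raw = io.BytesIO("id,label\n1,foo\n".encode("latin-1"))
chunks = iter(lambda: raw.read(4), b"")         # tiny chunks, just for the demo
decoded = codecs.iterdecode(chunks, "latin-1")  # lazy: one chunk at a time

# e.g. requests.post(url, data=(t.encode("utf-8") for t in decoded))
result = "".join(decoded)
print(result)
```

iterdecode uses an incremental decoder internally, so chunk boundaries falling inside a multi-byte character are handled correctly.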
Right now, we only allow binary input, which requires additional work on the caller's side compared to just opening a file or passing text.