It would be great if the upload could consume an AsyncGenerator as the stream for the upload body. It seems that in https://github.com/talkiq/gcloud-aio/blob/master/storage/gcloud/aio/storage/storage.py#L250 there is an explicit check that the uploaded body (stream) must be of type io.IOBase.

At least aiohttp will accept an AsyncGenerator out of the box, so the change would not be that big, though there might be an issue with requests. Could this be worked around somehow?

My actual use case is that I'm reading a byte stream from a socket and I need to upload that stream as a file to GCS. The current implementation requires either a tempfile or a BytesIO buffer in memory. It would be nice if the buffering were handled by the network stack instead.
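For reference, here is a minimal sketch of the aiohttp behaviour referenced above: an async generator can be passed directly as the request body, so chunks are streamed without buffering the whole payload in memory. The URL and the chunk source below are placeholders, not part of gcloud-aio.

```python
import asyncio
import aiohttp


async def byte_chunks():
    # Stand-in for reading chunks off a socket; each yielded bytes
    # object is sent as part of the streamed request body.
    for chunk in (b"first chunk", b"second chunk"):
        yield chunk


async def main():
    async with aiohttp.ClientSession() as session:
        # aiohttp accepts an async generator as `data` and streams it.
        async with session.post("https://example.com/upload",
                                data=byte_chunks()) as resp:
            print(resp.status)


asyncio.run(main())
```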
We'd definitely be open to accepting a PR implementing this suggestion! I suspect we'd need to ensure that the requests version of this library strips out the relevant check -- might require a change to the py3to2 code generation.
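A rough idea of what relaxing that check could look like on the aiohttp side, while keeping the existing io.IOBase path intact; the helper name and structure here are hypothetical and do not mirror gcloud-aio's actual internals.

```python
import io
from typing import Any


def _validate_upload_stream(stream: Any) -> None:
    # Hypothetical helper: keep accepting io.IOBase, additionally
    # allow any async iterable (e.g. an AsyncGenerator), reject the rest.
    if isinstance(stream, io.IOBase):
        return
    if hasattr(stream, '__aiter__'):
        # aiohttp can stream any async iterable as the request body,
        # so no in-memory buffering (tempfile/BytesIO) is needed.
        return
    raise TypeError(
        f'unsupported upload stream type: {type(stream)!r}; '
        'expected io.IOBase or an async iterable')
```

On the sync (requests) side, the async-iterable branch would presumably have to be stripped out by the py3to2 code generation step mentioned above.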