HTTP compression support for Azure Storage Services (via Accept-Encoding/Content-Encoding fields)
Add support for 'Accept-Encoding' and 'Content-Encoding' fields in HTTP requests to Azure Storage Services, with gzip and deflate as the supported compression schemes. This should help reduce network payload by 10x-50x in some cases, such as saving/loading bulk records to Azure Table Storage.
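For illustration only, a hypothetical sketch of the request shape this would enable; the endpoint, table name, and payload below are placeholders, and Azure Table Storage does not accept gzip-compressed request bodies today:

```python
# Hypothetical: what a bulk insert could look like if the service honored
# Content-Encoding on requests. The URL and payload are placeholders.
import gzip
import json
import urllib.request

records = [{"PartitionKey": "pk", "RowKey": str(i), "Value": i} for i in range(100)]
body = json.dumps(records).encode("utf-8")
compressed = gzip.compress(body)  # repetitive JSON often shrinks 10x or more

request = urllib.request.Request(
    "https://example.table.core.windows.net/mytable",  # placeholder endpoint
    data=compressed,
    headers={
        "Content-Type": "application/json",
        "Content-Encoding": "gzip",          # body is gzip-compressed
        "Accept-Encoding": "gzip, deflate",  # ask for a compressed response
    },
    method="POST",
)
# urllib.request.urlopen(request)  # not sent: the service rejects compressed bodies today
```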

Thank you for your feedback. Providing this functionality is on our backlog but there is no ETA we can share at this time. We will provide updates when they become available. For any further questions, or to discuss your specific scenario, send us an email at azurestoragefeedback@microsoft.com.
15 comments
-
smoser commented
I opened an issue for azure-sdk-for-go to support compressing PUT requests: https://github.com/Azure/azure-sdk-for-go/issues/4263
-
Anonymous commented
This can be done using a Logic App to watch a container for any added blob, then un-zip that blob to another container. There is, however, a 50 MB limit on this operation in Logic Apps.
-
Anonymous commented
We seriously need this feature. We are a new customer and have 78k PDFs to upload. Uploading a zip and having it decompress in the blob service would be ideal.
-
Patrick Jonas commented
Thank you to you and the team for the support; the time and effort you commit to the networking you do is great. It keeps me signed in to my account.
-
HS commented
+1 This would be a great feature. It would help in onboarding new customers to Azure. One of the big hassles is uploading huge VHD files. If we can zip them, upload them and unzip them directly onto the storage account, it will save a lot of time.
-
Will Hancock commented
Yeah, this is an awesome feature of S3, especially if you have lots of files, maybe thousands of tiny images.
I think on that basis you should revisit... or we'll look at advising our client to move to AWS.
-
Poojan Kothari commented
I have a scenario where clients upload password-protected zip files to Azure Blob Storage, and I want to process them on the blob itself to get input streams out of them. Any suggestions?
-
Justin Chase commented
AWS supports this when creating Lambda functions, by the way. You upload a zip to S3 and then point Lambda at that zip; it unpacks it onto the file storage for the machine. Very fast compared to uploading each file independently.
-
Paul Smullen commented
This would be a great feature: an API command to tell Azure to unzip to a given location. Right now, if a large zip (e.g. 1 GB) has been uploaded, I need to download it, unzip it, and re-upload it. Very inefficient...
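A minimal sketch of that workaround as it stands today (assuming the azure-storage-blob v12 Python SDK; the connection string, container names, and blob name are placeholders):

```python
# Download the zip, unpack it in memory, and re-upload each entry as its own blob.
# All names below are placeholders.
import io
import zipfile
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient.from_connection_string("<connection-string>")
source = service.get_container_client("uploads")
target = service.get_container_client("extracted")

data = source.download_blob("archive.zip").readall()
with zipfile.ZipFile(io.BytesIO(data)) as archive:
    for name in archive.namelist():
        if not name.endswith("/"):  # skip directory entries
            target.upload_blob(name=name, data=archive.read(name), overwrite=True)
```

For archives around 1 GB you would stream entries to temporary files rather than hold everything in memory, which is exactly the kind of plumbing a server-side unzip would remove.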
-
Peter Taylor commented
Or, better, go for compliance with the HTTP/1.1 RFC and use Transfer-Encoding.
-
Greg Galloway commented
If you upload a gzip file and set Content-Encoding to gzip, web browsers will automatically decompress it, as described here:
http://stackoverflow.com/questions/23263129/does-windows-azure-blob-storage-support-serving-compressed-files-similar-to-amaz
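As a rough sketch of that approach (assuming the azure-storage-blob v12 Python SDK; the connection string, container, and blob names are placeholders), you gzip the content yourself and set Content-Encoding at upload time:

```python
# Sketch: store pre-compressed content and mark it with Content-Encoding: gzip
# so browsers decompress it transparently. Names below are placeholders.
import gzip
from azure.storage.blob import BlobServiceClient, ContentSettings

service = BlobServiceClient.from_connection_string("<connection-string>")
blob = service.get_blob_client(container="web", blob="data.json")

with open("data.json", "rb") as handle:
    payload = gzip.compress(handle.read())

blob.upload_blob(
    payload,
    overwrite=True,
    content_settings=ContentSettings(
        content_type="application/json",
        content_encoding="gzip",  # served back as stored; clients that honor it inflate
    ),
)
```

The service returns the bytes exactly as stored, so only clients that honor the Content-Encoding header (browsers, most HTTP libraries) will see decompressed content.
-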
Bart Czernicki commented
This is huge... inserting in batches of 100 causes a lot of chatter/large responses from Azure Table Storage. For us this adds up to 500 MB/hour of bandwidth on inserts; not a huge price concern, but still a lot.
Another alternative is to raise the batch limit to 1,000 or so for transferring large amounts of data. 100 is not enough.
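To make the chatter concrete, here is a minimal sketch of the chunking the 100-entity transaction limit forces today (assuming the azure-data-tables Python SDK; the connection string, table name, and entities are placeholders):

```python
# Table Storage transactions accept at most 100 entities, all sharing one
# PartitionKey, so 1,000 entities cost 10 round trips. Names are placeholders.
from azure.data.tables import TableClient

client = TableClient.from_connection_string("<connection-string>", table_name="records")

entities = [{"PartitionKey": "batch1", "RowKey": str(i), "Value": i} for i in range(1000)]

for start in range(0, len(entities), 100):
    chunk = entities[start:start + 100]
    client.submit_transaction([("upsert", entity) for entity in chunk])
```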
-
Shawn Weisfeld commented
I have to upload 2.2 billion small images (10 KB each). I built something like this using a worker role; however, it would be great if it were built into the framework.
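For reference, a rough sketch of the kind of custom parallel uploader such a job needs today (assuming the azure-storage-blob v12 Python SDK; the connection string, container name, and local path are placeholders):

```python
# Upload many small files concurrently with a thread pool. Names are placeholders.
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path
from azure.storage.blob import ContainerClient

container = ContainerClient.from_connection_string("<connection-string>", "images")

def upload(path: Path) -> None:
    with path.open("rb") as handle:
        container.upload_blob(name=path.name, data=handle, overwrite=True)

files = list(Path("local-images").glob("*.jpg"))
with ThreadPoolExecutor(max_workers=32) as pool:
    list(pool.map(upload, files))  # consume the iterator so errors surface
```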
-
Charles Lamanna commented
This would decrease our network costs by 10x or more, an amount that starts to add up since we transfer a fairly large amount of data given our size.
-
che commented
Azure Blob Storage is great for hosting lots of small files, like those needed to support DeepZoom or Pivot. However, uploading thousands of small files one by one is very slow. It would be great if I could upload a zip file and have Azure unzip it into blob storage. Zip compression might also help reduce upload bandwidth in certain situations.