Update: Microsoft will be moving away from UserVoice sites on a product-by-product basis throughout the 2021 calendar year. We will leverage 1st party solutions for customer feedback.

Storage

  1. Support Reserved Capacity for Premium Blobs

Please support reserved capacity for premium blobs.

    31 votes  ·  1 comment  ·  Blobs
  2. Filter set wildcard match in storage lifecycle management

    The same feature as the prefix filter, but with wildcard support, to match files that should be deleted.

    30 votes  ·  2 comments  ·  Blobs
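
    Lifecycle management rules currently accept only a literal prefix filter. Until wildcards are supported server-side, a workaround is to emulate the match client-side: derive the longest literal prefix from the pattern, list blobs under that prefix, and apply the wildcard with `fnmatch`. A hedged sketch of that filtering logic (the actual listing/deletion SDK calls are omitted):

    ```python
    import fnmatch
    import re

    def literal_prefix(pattern):
        """Return the longest literal prefix before the first wildcard character."""
        m = re.search(r"[*?\[]", pattern)
        return pattern if m is None else pattern[:m.start()]

    def match_blobs(blob_names, pattern):
        """Emulate a wildcard filter: narrow by literal prefix, then fnmatch."""
        prefix = literal_prefix(pattern)
        return [n for n in blob_names
                if n.startswith(prefix) and fnmatch.fnmatchcase(n, pattern)]

    names = ["logs/2021/app-01.log", "logs/2021/app-02.tmp", "data/app-03.log"]
    print(match_blobs(names, "logs/*/app-*.log"))
    ```

    Using the prefix to narrow the listing keeps the number of names fetched close to what a server-side wildcard would return.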
  3. Raise the default storage signed version for anonymous requests to above 2011-08-18

    My customer uses Blob storage to serve anonymous requests for their video site. They noticed that users on Chrome or Android could not adjust the playback position, because the low default service version (2009-09-19) does not support the Range HTTP header.
    Is it possible to raise the default version from 2009-09-19 to at least 2011-08-18, so that customers would not need to set the default version manually in this case?

    30 votes  ·  0 comments  ·  Blobs
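
    Until the default is raised, account owners can set `DefaultServiceVersion` themselves via the documented Set Blob Service Properties operation. A minimal sketch of the XML request body that operation accepts, built here with the standard library (the signed HTTP call itself is omitted):

    ```python
    import xml.etree.ElementTree as ET

    def service_properties_body(default_version):
        """Build the Set Blob Service Properties body with DefaultServiceVersion."""
        root = ET.Element("StorageServiceProperties")
        ET.SubElement(root, "DefaultServiceVersion").text = default_version
        return ET.tostring(root, xml_declaration=True, encoding="utf-8")

    body = service_properties_body("2011-08-18")
    print(body.decode("utf-8"))
    ```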
  4. Lifecycle Management to support other blob types than just BlockBlob

    It is too restrictive to offer lifecycle management only for BlockBlob. For example, we use AppendBlob for some logging and cannot auto-delete those blobs after a period of time. It would be good to be able to specify retention policies on those types as well.

    27 votes  ·  3 comments  ·  Blobs
  5. Support blob updates on Event Grid

    It would be great if you could support blob updates on Event Grid. A blob change means we need to reprocess the blob.

    27 votes  ·  0 comments  ·  Blobs
  6. WORM compliance at object level

    Please add a feature to support WORM policies at the object level. Currently this is supported only at the container level. This has great potential, and AWS already supports it at both the object and bucket level.

    24 votes  ·  0 comments  ·  Blobs
  7. Support chunked transfer encoding on Blob PUT

    Currently the PUT API for block blobs requires a Content-Length header. This essentially makes the API unusable for clients that use async pipelining to start sending the blob bytes while they are being received or generated, and therefore before the total content length is known. Rather, it requires the caller to buffer the entire blob locally, determine the length, and then call the PUT API with that length. This is inefficient. When using an async client SDK, like the Java Storage SDK v10, knowing the entire content length ahead of time should not be necessary. See https://github.com/Azure/azure-storage-java/issues/336, for example.

    23 votes  ·  3 comments  ·  Blobs
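
    Until chunked transfer is supported, the common workaround is the Put Block / Put Block List pattern: stream fixed-size chunks (each with a known Content-Length), assign equal-length base64 block IDs, then commit the block list. A sketch of the client-side chunking and ID generation (the upload calls themselves are omitted):

    ```python
    import base64
    import io

    def iter_blocks(stream, block_size=4 * 1024 * 1024):
        """Yield (block_id, chunk) pairs; each chunk has a known length."""
        index = 0
        while True:
            chunk = stream.read(block_size)
            if not chunk:
                break
            # Block IDs must be base64 and equal length within a blob.
            block_id = base64.b64encode(
                f"block-{index:08d}".encode("ascii")).decode("ascii")
            yield block_id, chunk
            index += 1

    data = io.BytesIO(b"x" * 10_000_000)  # content whose total length is unknown upfront
    blocks = list(iter_blocks(data))
    print(len(blocks), [len(c) for _, c in blocks])
    ```

    This only needs the length of each block ahead of time, not the length of the whole blob, which is what makes it usable with async pipelines.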
  8. Support for multiple byte ranges on blob read/write

    We need random read (and later write) access to thousands of discrete ranges (each in the order of a few KBs) within very large binary blobs (in the order of 100s of GB). The current APIs force us to submit a single request for each such range. One negative aspect is billing, of course, but the main problem is the client-side and network loads for handling all these requests!

    We would like to ask for the byte range support to be extended to multiple ranges (e.g. "bytes=from0-to0, from1-to1, ...").

    The API should of course specify a maximum number of ranges…

    23 votes  ·  1 comment  ·  Blobs
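
    HTTP already defines multi-range syntax for the Range header (RFC 7233); the request here is for the Blob service to honor it. A small helper that builds such a header value from inclusive (start, end) pairs:

    ```python
    def multi_range_header(ranges):
        """Build an RFC 7233 multi-range header value from inclusive (start, end) pairs."""
        for start, end in ranges:
            if start < 0 or end < start:
                raise ValueError(f"invalid range: {start}-{end}")
        return "bytes=" + ", ".join(f"{s}-{e}" for s, e in ranges)

    print(multi_range_header([(0, 1023), (4096, 8191)]))
    ```

    As the idea notes, a server honoring this would also need to cap the number of ranges per request.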
  9. hotlink prevention

    Prevent others from consuming my Azure resources, via HTTP Referer header validation.
    This is a common scenario for blogs, websites, etc.

    Scott Hanselman has even written a blog post about it: http://www.hanselman.com/blog/BlockingImageHotlinkingLeechingAndEvilSploggersWithIISUrlRewrite.aspx

    22 votes  ·  1 comment  ·  Blobs
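
    Blob Storage itself cannot do this today, so the check has to live in a front end (the IIS rewrite rule in Hanselman's post is one example). The core validation it describes is a Referer allowlist; a minimal sketch, assuming blank referers (direct visits) are allowed and subdomains of allowed hosts are acceptable:

    ```python
    from urllib.parse import urlparse

    def referer_allowed(referer, allowed_hosts):
        """Allow empty Referer (direct visits) or referers from allowlisted hosts."""
        if not referer:
            return True  # many sites choose to allow blank referers
        host = urlparse(referer).hostname or ""
        return host in allowed_hosts or any(
            host.endswith("." + h) for h in allowed_hosts)

    allowed = {"example.com"}
    print(referer_allowed("https://example.com/post/1", allowed))
    print(referer_allowed("https://evil.net/leech", allowed))
    ```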
  10. Limit the size of files in a blob storage container/account

    There should be a config option to limit the maximum size of any given file in a blob storage container or account. When I allow a client to upload directly to my blob storage with a SAS, they could technically upload a file of any size, and since they're not going through my app, there's no way for me to control this. I want to be able to say "any file in this container or account may be no larger than 10 MB", or whatever.

    20 votes  ·  0 comments  ·  Blobs
  11. Support GZIP & DEFLATE Content-Encoding for List Blobs Responses

    For collections with large numbers of blobs, the XML response payload easily approaches 100s of MBs. In our testing, each block of 5000 objects resulted in a ~2.8MB response. This is without any other flags set (i.e. no snapshots, no metadata, no uncommitted blobs). The response XML is highly compressible. In our testing we saw a 93% size reduction. (2.8MB -> 210KB). This would be a huge improvement in transfer performance, and a large cost savings in egress bandwidth situations.

    20 votes  ·  3 comments  ·  Blobs
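
    The claimed compressibility is easy to reproduce, since List Blobs XML is highly repetitive. A sketch that gzips a synthetic listing of 5000 entries (the element names loosely mirror the real response; exact sizes will differ from the figures quoted above):

    ```python
    import gzip

    # Build a synthetic List Blobs-style XML payload with 5000 entries.
    entries = "".join(
        f"<Blob><Name>container/path/object-{i:06d}.dat</Name>"
        f"<Properties><Last-Modified>Tue, 01 Jan 2019 00:00:00 GMT</Last-Modified>"
        f"<Content-Length>1048576</Content-Length></Properties></Blob>"
        for i in range(5000)
    )
    xml = f"<EnumerationResults><Blobs>{entries}</Blobs></EnumerationResults>".encode()

    compressed = gzip.compress(xml)
    ratio = 1 - len(compressed) / len(xml)
    print(f"{len(xml)} -> {len(compressed)} bytes ({ratio:.0%} reduction)")
    ```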
  12. Add capability for bulk deletion of blobs

    Right now, deleting a container is the only way a number of blobs can be deleted together. The drawback, however, is that the container name cannot be reused for some time. This causes inconvenience for a number of use cases.

    Proposal: there should at least be a way to empty a container in one API call.

    17 votes  ·  2 comments  ·  Blobs
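
    The Blob Batch API (introduced after this idea was filed) accepts multiple delete subrequests per call, with a documented cap of 256 subrequests at the time of writing. Bulk deletion then reduces to chunking the name list; a sketch of that chunking (the batch call itself is omitted):

    ```python
    def chunked(items, size=256):
        """Split a list into chunks no larger than `size` (the batch subrequest limit)."""
        return [items[i:i + size] for i in range(0, len(items), size)]

    names = [f"blob-{i}" for i in range(600)]
    batches = chunked(names)
    print([len(b) for b in batches])
    ```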
  13. Server-side delete/rename of virtual folders

    Currently, if I have millions of documents inside a virtual folder, there is no way to delete them without iterating through everything, which is extremely time consuming (weeks to process deletions).

    For example, if you have multiple containers, which in turn have multiple virtual folders with millions of blobs, moving, renaming, or deleting the "folder" is impossible without painfully slow iteration through everything.

    Thanks!

    15 votes  ·  0 comments  ·  Blobs
  14. Multiple IP Support for storageAccount networkAcl ARM Templates

    In the Microsoft.Storage/storageAccounts Azure Resource Manager definition format, there is currently no way to specify multiple IPs in a single IpRule of the networkAcls config. As such, it is difficult to maintain whitelisted IPs in the "variables" or "parameters" section of my ARM template. Could multiple-IP support be added?

    14 votes  ·  1 comment  ·  Blobs

    Thanks for the valid suggestion. Your feedback is now open for the user community to upvote, which allows us to effectively prioritize your request against our existing feature list and gives us insight into the potential impact of implementing the suggested feature.
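
    Until an IpRule accepts multiple addresses, the usual pattern is one rule per address, generated from a parameter array (an ARM `copy` loop does this in-template). The sketch below shows the same expansion in Python, purely to illustrate the shape `networkAcls` expects:

    ```python
    import json

    def network_acls(ip_list):
        """Expand a list of IPs/CIDRs into one ipRule each, as networkAcls expects."""
        return {
            "defaultAction": "Deny",
            "ipRules": [{"value": ip, "action": "Allow"} for ip in ip_list],
        }

    acls = network_acls(["203.0.113.5", "198.51.100.0/24"])
    print(json.dumps(acls, indent=2))
    ```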

  15. Add UTF-8 support for blob metadata, to enable searching metadata in non-English languages

    At the moment, blob metadata can be indexed for search, but metadata uploads support ONLY ASCII characters. https://github.com/Azure/azure-cosmosdb-dotnet/issues/552

    14 votes  ·  1 comment  ·  Blobs
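
    Until UTF-8 metadata is supported, a common workaround is to percent-encode values down to ASCII before upload and decode after download. Note the stored bytes change, so any search index sees the encoded form, which is the limitation this idea is really about. A sketch:

    ```python
    from urllib.parse import quote, unquote

    def encode_metadata(meta):
        """Percent-encode metadata values down to ASCII for upload."""
        return {k: quote(v, safe="") for k, v in meta.items()}

    def decode_metadata(meta):
        """Reverse the encoding after download."""
        return {k: unquote(v) for k, v in meta.items()}

    original = {"title": "отчёт-2019"}  # non-ASCII value the service rejects today
    encoded = encode_metadata(original)
    print(encoded)
    ```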
  16. 14 votes  ·  0 comments  ·  Blobs
  17. Add special ParquetBlobStorage allowing filtering

    Parquet is the de facto standard for storing big data.

    However, we usually do not need all the data stored in Parquet hives/files.

    It would be great if there were a ParquetBlobStorage, an extension of Blob Storage, that:

    a) understands the Parquet format, i.e. reads only the end of the file in order to read the schema;

    b) allows selective column/row reading, so there is no need to copy the full blob when we need just a couple of columns or a group of rows.

    14 votes  ·  1 comment  ·  Blobs
  18. Increase the size of a BLOB Storage Account to an indefinite amount

    I'm currently working on a solution which uses multiple storage accounts that will continue to grow indefinitely.
    The 100 TB limit means that I have to continuously list the blobs in my container to work out the total size used, and then determine whether I need to create a new storage account on the fly. This is cumbersome and time consuming for the type of performance the business requires.

    It would be better to remove the account size limitation, which would simplify the solution considerably and remove all the blob-listing look-ups, which seem counterintuitive. The only…

    13 votes  ·  3 comments  ·  Blobs

    Thank you for your feedback. We are currently working on increasing the scale limits of storage accounts. See the following post for updated capacity limits (currently at 5 PB for a single storage account): https://azure.microsoft.com/en-in/blog/announcing-larger-higher-scale-storage-accounts/. For any further questions, or to discuss your specific scenario, send us an email at azurestoragefeedback@microsoft.com.

  19. More detailed error messages

    We ran into an issue where data we tried to store wouldn't fit in the page blob. The error message was "(416) The page range specified is invalid."

    At first it wasn't clear whether we were trying to start writing beyond the end of the blob, the data didn't fit, or something else was the culprit. Eventually we figured it out.

    It would be nice if the error message provided more details: page size, the size we tried to save, the location within the file we tried to write to, etc. This would have shown us the issue right away. Instead we had to spend a…

    12 votes  ·  2 comments  ·  Blobs

    Thank you for your feedback. We are currently working on providing this functionality and will provide updates when they become available. See the following article for the latest: https://docs.microsoft.com/en-us/rest/api/storageservices/status-and-error-codes2. Note that for REST API version 2017-07-29 and later, failed API operations also return the storage error code string in a response header. For any further questions, or to discuss your specific scenario, send us an email at azurestoragefeedback@microsoft.com.
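
    The 416 in this idea typically stems from page blob alignment rules: ranges must start on a 512-byte boundary, end one byte before one, and stay within the blob. A client-side validator that produces the kind of detailed message the idea asks for:

    ```python
    PAGE = 512  # page blobs operate on 512-byte pages

    def check_page_range(start, end, blob_size):
        """Return a detailed explanation if a page range is invalid, else None."""
        if start % PAGE != 0:
            return f"start {start} is not aligned to {PAGE}-byte pages"
        if (end + 1) % PAGE != 0:
            return f"end {end} must be one byte before a {PAGE}-byte boundary"
        if end >= blob_size:
            return f"range end {end} exceeds blob size {blob_size}"
        return None

    print(check_page_range(0, 511, 1024))     # valid range
    print(check_page_range(0, 600, 1024))     # misaligned end
    print(check_page_range(512, 2047, 1024))  # past the end of the blob
    ```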

  20. Container snapshot

    Could you please allow us to create a snapshot of a container as a whole? It is quite difficult to create a snapshot of each blob when we have a large number of blobs.

    12 votes  ·  0 comments  ·  Blobs