Storage

  1. Reduce the minimum maximum bandwidth on the Azure Backup Services Backup Agent to less than 512 kbps

    The Backup Agent for Azure Backup Services includes a feature whereby the maximum bandwidth consumed by the agent can be capped, with separate limits for working and non-working hours.

    This is a useful feature, but the lowest value the cap can be set to is 512 kbps. This is higher than the upstream bandwidth available on many ADSL connections used by small businesses, rendering the feature useless for them. I suggest the minimum be reduced, say to 128 kbps or even 0 kbps.
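
    The kind of cap being asked for is typically implemented as a token-bucket throttle. A minimal sketch (all names illustrative, not the agent's actual implementation) showing why a 128 kbps floor is perfectly workable:

    ```python
    import time

    class TokenBucket:
        """Token-bucket throttle; rate_bps is the bandwidth cap in bits per second."""

        def __init__(self, rate_bps):
            self.rate_bytes = rate_bps / 8.0   # refill rate in bytes per second
            self.capacity = self.rate_bytes    # allow up to one second of burst
            self.tokens = self.capacity
            self.last = time.monotonic()

        def delay_for(self, nbytes):
            """Return how long to sleep before sending nbytes, consuming tokens."""
            now = time.monotonic()
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.rate_bytes)
            self.last = now
            self.tokens -= nbytes
            if self.tokens >= 0:
                return 0.0
            return -self.tokens / self.rate_bytes

    # At a hypothetical 128 kbps cap, a 16 KB chunk consumes one second of budget.
    bucket = TokenBucket(128_000)
    first = bucket.delay_for(16_000)   # bucket starts full: no delay
    second = bucket.delay_for(16_000)  # bucket now empty: wait roughly 1 s
    ```

    Nothing about the mechanism breaks at lower rates; the floor is a product choice, not a technical one.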

    54 votes
    under review  ·  7 comments
  2. Add metadata support to storage queue messages

    Please add the capability to add custom metadata to storage queue messages so that we can more easily implement additional communication features on top of them (correlation, header based routing, ...)
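
    Until native metadata exists, a common client-side workaround is to wrap the message body in a JSON envelope that carries the metadata. A minimal sketch (field names are assumptions, not any official schema):

    ```python
    import json

    def wrap_message(body, **metadata):
        """Wrap a queue message body in an envelope carrying custom metadata.

        Correlation IDs and routing headers ride inside the payload until the
        service supports message metadata natively.
        """
        return json.dumps({"meta": metadata, "body": body})

    def unwrap_message(raw):
        """Split a wrapped message back into (metadata, body)."""
        envelope = json.loads(raw)
        return envelope["meta"], envelope["body"]

    raw = wrap_message("order accepted", correlation_id="abc-123", route="billing")
    meta, body = unwrap_message(raw)
    ```

    The drawback, and the reason for the suggestion, is that the envelope eats into the message size limit and every consumer must know the wrapping convention.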

    51 votes
    under review  ·  2 comments  ·  Queues
  3. AzCopy should support filtering of table entities

    Just like a Pattern parameter for copying blobs, there should be a similar parameter for copying table entities.

    It should be possible to filter on both partition and row keys.

    I personally find it quite rare that I need to copy an entire table.
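
    The requested filter amounts to a PartitionKey prefix match plus a RowKey range. A rough sketch of the selection logic (entity shape and parameter names are illustrative, not AzCopy's):

    ```python
    def filter_entities(entities, pk_prefix=None, rk_from=None, rk_to=None):
        """Select table entities by PartitionKey prefix and RowKey range --
        the kind of filter the suggestion asks AzCopy to support natively."""
        out = []
        for e in entities:
            if pk_prefix is not None and not e["PartitionKey"].startswith(pk_prefix):
                continue
            if rk_from is not None and e["RowKey"] < rk_from:
                continue
            if rk_to is not None and e["RowKey"] > rk_to:
                continue
            out.append(e)
        return out

    data = [
        {"PartitionKey": "eu-west", "RowKey": "0001"},
        {"PartitionKey": "eu-west", "RowKey": "0500"},
        {"PartitionKey": "us-east", "RowKey": "0002"},
    ]
    subset = filter_entities(data, pk_prefix="eu", rk_to="0400")
    ```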

    51 votes
    3 comments  ·  AzCopy
  4. Add support for gzip and/or deflate on Table Storage

    I noticed that the Storage team has gone to great lengths to reduce bandwidth usage by switching to JSON. Take this to the next logical step and add support for the Accept-Encoding header in the client libraries and have the server return content gzipped or deflated. JSON compresses quite nicely, especially if the entities returned from a query are similar, which they nearly always will be if you're querying on PartitionKey and RowKey.
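
    The compressibility claim is easy to demonstrate with the standard library: a batch of similar entities, as a PartitionKey/RowKey range query would return, gzips to a small fraction of its size (the entity shape below is illustrative):

    ```python
    import gzip
    import json

    # Build a batch of similar entities, mimicking a range query result.
    entities = [
        {"PartitionKey": "device-01", "RowKey": f"{i:08d}", "Temp": 20.0 + i % 5}
        for i in range(1000)
    ]
    payload = json.dumps({"value": entities}).encode("utf-8")

    compressed = gzip.compress(payload)
    ratio = len(compressed) / len(payload)   # well under 30% for repetitive JSON
    ```

    The repeated property names and near-identical values are exactly what DEFLATE's dictionary coding exploits.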

    50 votes
    2 comments  ·  Tables
  5. Like (Contains) Operator for Azure Table

    It would be very useful to have a Like (Contains) operator to query on RowKeys or PartitionKeys; if it's not possible for PartitionKeys, it could be supported for RowKeys only.
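
    Because the table service only supports lexical comparison operators on keys, a Contains currently has to be done client-side after the query returns, which is what makes it expensive. A sketch of that workaround (names illustrative):

    ```python
    def contains_filter(entities, needle, key="RowKey"):
        """Client-side stand-in for a Contains operator on RowKey/PartitionKey.

        Every candidate entity must be transferred before it can be tested,
        which is the cost a server-side operator would avoid.
        """
        return [e for e in entities if needle in e[key]]

    rows = [
        {"PartitionKey": "logs", "RowKey": "2024-01-15_error_42"},
        {"PartitionKey": "logs", "RowKey": "2024-01-15_info_43"},
    ]
    matches = contains_filter(rows, "error")
    ```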

    40 votes
    under review  ·  0 comments  ·  Tables
  6. Provide "ForceSerializeAttribute" for TableEntity-derived properties and fields

    In a sense, it would be the opposite of the existing "IgnorePropertyAttribute": forcing the serialization of decorated fields and properties (even if they are private).
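
    The idea is opt-in serialization of members that the default rules skip. A language-neutral sketch of the behaviour in Python (the .NET attribute itself doesn't exist; `force` here stands in for what a ForceSerializeAttribute would mark):

    ```python
    def serialize(entity, force=()):
        """Serialize an entity's attributes, including normally-skipped private
        fields named in `force` -- the opt-in a ForceSerializeAttribute would give."""
        out = {}
        for name, value in vars(entity).items():
            private = name.startswith("_")
            if private and name not in force:
                continue  # mirrors today's default: private members are ignored
            out[name] = value
        return out

    class Order:
        def __init__(self):
            self.PartitionKey = "orders"
            self.RowKey = "42"
            self._internal_state = "reserved"  # private, but we want it persisted

    plain = serialize(Order())                              # drops _internal_state
    forced = serialize(Order(), force=("_internal_state",)) # keeps it
    ```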

    39 votes
    0 comments  ·  Tables
  7. Remove or dramatically increase the max number of stored access policies per container, file share, table, or queue

    Today the maximum number of stored access policies you can have is five. This is far too low: we use blob storage in a data-sharing scenario between providers and consumers, and we need policies to be able to revoke issued SAS tokens in this scenario. Five policies means we can only control five revocable SAS keys. We need many more.

    39 votes
    0 comments  ·  General
  8. 35 votes
    1 comment  ·  Tables
  9. Please give us Count(), TakeLast(int i), Skip(int i)

    There are many scenarios where Count() is useful. When I write logs to Azure Tables, most of the time I want to retrieve the last n entries. It would be great to have something like TakeLast(int i) or Skip(int i) for doing something like Skip(count - take).Take(take).
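
    Without server-side support, a TakeLast means streaming the whole result set and keeping only the tail. A minimal client-side sketch (illustrative, not a library API):

    ```python
    from collections import deque

    def take_last(entities, n):
        """Client-side TakeLast: stream the query results, keep only the final n.

        Every entity still crosses the wire, which is exactly the transfer cost
        a server-side Count()/TakeLast() would eliminate.
        """
        return list(deque(entities, maxlen=n))

    log_rows = (f"entry-{i:04d}" for i in range(10_000))
    tail = take_last(log_rows, 3)
    ```

    For log-style tables, the usual mitigation is a descending RowKey (e.g. an inverted timestamp) so that the newest rows sort first and a plain Take(n) suffices.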

    32 votes
    0 comments  ·  Tables
  10. 29 votes
    2 comments
    under review  ·  Jeff Irwin responded

    Can you please provide more information about what you’d like to use this for in Windows Azure Storage?

  11. Manage SAS Token by Name and Include in Audit Logs

    Give SAS tokens a name when generating them, then:
    - allow a report/table of all generated tokens
    - allow revocation of existing tokens (or modification of their access)
    - use the SAS token name in storage audit logs

    At the moment, the storage access logs do not show any useful information about who made a request, and this is critical to a practical audit function.

    28 votes
    2 comments  ·  General

    Thank you for your feedback. Currently you can use a stored access policy to manage revocation of an existing token. You are also able to track requests made using an existing stored access policy in the storage account logs. See https://docs.microsoft.com/en-us/azure/storage/common/storage-dotnet-shared-access-signature-part-1#controlling-a-sas-with-a-stored-access-policy for more details. For any further questions, or to discuss your specific scenario, send us an email at azurestoragefeedback@microsoft.com.

  12. Let's sunset "One of the request inputs is not valid." to never be seen again

    This is the most useless message ever. Also, let the valid inputs succeed and report which ones were bad.

    26 votes
    under review  ·  0 comments  ·  Tables
  13. We would like to audit/log delete operations on Azure blob storage containers. Currently we can only do this at a Storage Account layer.

    We are simply looking for more granularity with our storage logging in Azure. If someone were to view/delete our blob containers, we would like to see these operations logged and have the ability to alert on them.

    25 votes
    2 comments  ·  Blobs
  14. Increase the maximum message size of 64k. I know, Sb queues support 256k, but they are too darn slow. We don't need all of the features of

    Please increase the max message size limit of 64 KB. I know SB queues support 256 KB, but we don't need all the extra features and they are far too slow for our needs. We are in fact moving from SB queues to Storage queues now because of performance issues, and the only thing holding us back is the reduced message size. 256 KB would be perfect and would make for nice parity between the queue types.
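
    The standard workaround for oversized messages is the claim-check pattern: stash the payload in blob storage and send only a pointer through the queue. A minimal sketch (the in-memory `blob_store` dict stands in for a real blob container):

    ```python
    import uuid

    MAX_QUEUE_MESSAGE = 64 * 1024  # current Storage queue limit in bytes
    blob_store = {}                # stand-in for a blob container

    def enqueue(payload: bytes):
        """Claim-check workaround for the 64 KB limit: oversized payloads go to
        blob storage and only a reference travels through the queue."""
        if len(payload) <= MAX_QUEUE_MESSAGE:
            return ("inline", payload)
        blob_id = str(uuid.uuid4())
        blob_store[blob_id] = payload
        return ("blob-ref", blob_id.encode())

    def dequeue(message):
        """Resolve a message back to its payload, fetching the blob if needed."""
        kind, data = message
        return data if kind == "inline" else blob_store[data.decode()]

    small = enqueue(b"x" * 100)      # fits inline
    big = enqueue(b"y" * 200_000)    # too big: spilled to the blob store
    ```

    It works, but it doubles the round trips for large messages and leaves orphaned blobs to clean up, which is why a native 256 KB limit would be preferable.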

    25 votes
    under review  ·  1 comment  ·  Queues
  15. Improve exception messages

    "An error occurred while processing this request."

    That message doesn't say anything. A suggestion would be to append the inner exception message. It makes debugging a lot easier.

    ----

    "One of the request inputs is not valid"

    That message doesn't say anything at all, and I can't figure out why I get it. It's thrown on SaveChanges().

    23 votes
    under review  ·  3 comments  ·  Tables
  16. hotlink prevention

    Prevent others from consuming my Azure resources via HTTP_REFERER header validation.
    This is a common scenario for blogs, websites, etc.

    Scott Hanselman has even written a blog post about it: http://www.hanselman.com/blog/BlockingImageHotlinkingLeechingAndEvilSploggersWithIISUrlRewrite.aspx
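
    The check itself is simple, which is part of the appeal. A sketch of the decision logic a blob endpoint could apply (domains and function names are illustrative):

    ```python
    from urllib.parse import urlparse

    ALLOWED_HOSTS = {"example.com", "www.example.com"}  # illustrative domains

    def referer_allowed(referer_header):
        """Decide whether to serve a blob based on the Referer header.

        Empty referers are allowed (direct visits, privacy-stripping proxies);
        anything else must come from a whitelisted host. The header is trivially
        spoofable, so this deters casual hotlinking, not determined abuse.
        """
        if not referer_header:
            return True
        host = urlparse(referer_header).hostname
        return host in ALLOWED_HOSTS

    ok = referer_allowed("https://www.example.com/post/1")
    blocked = referer_allowed("https://evil.example.net/leech")
    ```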

    22 votes
    1 comment  ·  Blobs
  17. Allow batch insert and upsert of rows with different partition keys

    Really self-explanatory. This is currently a big bottleneck in my system because I must make up to a hundred separate round trips where I could otherwise have made a single one.

    This would also make it a lot easier to choose a proper RowKey/PartitionKey architecture for systems, where both performant insertions and retrievals are important.
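
    Today's constraint means clients must group operations by partition key before batching, which is the round-trip multiplication being described. A sketch of that client-side dance (`BATCH_LIMIT` reflects the 100-operation cap on entity group transactions; other names are illustrative):

    ```python
    from itertools import groupby

    BATCH_LIMIT = 100  # entity group transactions cap at 100 operations

    def plan_batches(entities):
        """Group inserts into per-partition batches: one round trip per
        partition key, which a cross-partition batch would collapse to one."""
        batches = []
        ordered = sorted(entities, key=lambda e: e["PartitionKey"])
        for _, group in groupby(ordered, key=lambda e: e["PartitionKey"]):
            rows = list(group)
            for i in range(0, len(rows), BATCH_LIMIT):
                batches.append(rows[i:i + BATCH_LIMIT])
        return batches

    data = [{"PartitionKey": f"user-{i % 3}", "RowKey": str(i)} for i in range(9)]
    batches = plan_batches(data)   # 3 partitions -> 3 round trips instead of 1
    ```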

    19 votes
    under review  ·  2 comments  ·  Tables
  18. Support GZIP & DEFLATE Content-Encoding for List Blobs Responses

    For collections with large numbers of blobs, the XML response payload easily approaches 100s of MBs. In our testing, each block of 5000 objects resulted in a ~2.8MB response. This is without any other flags set (i.e. no snapshots, no metadata, no uncommitted blobs). The response XML is highly compressible. In our testing we saw a 93% size reduction. (2.8MB -> 210KB). This would be a huge improvement in transfer performance, and a large cost savings in egress bandwidth situations.

    20 votes
    3 comments  ·  Blobs
  19. 19 votes
    under review  ·  1 comment  ·  Queues
  20. Make the $logs container read only

    We need to be sure that all audit logs are present and cannot be modified or deleted. Currently it is possible to delete audit log files from the $logs container.

    18 votes
    1 comment  ·  General
