How can we improve Azure Storage?

SAS tokens for blobs with restricted file types and maximum file size

When issuing a Shared Access Signature (SAS token), there should be optional parameters for a maximum file size and a whitelist of allowed file extensions, so that clients can upload only files whose extension appears in the token's whitelist and whose size does not exceed the maximum specified in the SAS token.
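Until such parameters exist in the SAS format itself, applications can approximate them by issuing their own signed "upload ticket" alongside (or instead of) a direct SAS, and having an upload proxy enforce the size and extension claims before forwarding the data to Blob Storage. The sketch below is illustrative only: the function names, the ticket format, and the signing key are assumptions of this workaround, not part of any Azure SDK or the SAS specification.

```python
import base64
import hashlib
import hmac
import json
import time

# Hypothetical application-side signing key (NOT an Azure account key).
SECRET = b"app-upload-ticket-key"

def issue_upload_ticket(blob_name, max_bytes, allowed_exts, ttl_seconds=300):
    """Issue a signed ticket carrying the limits a SAS token cannot express today."""
    claims = {
        "blob": blob_name,
        "maxBytes": max_bytes,
        "exts": sorted(allowed_exts),
        "exp": int(time.time()) + ttl_seconds,
    }
    payload = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode()
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return payload + "." + sig  # payload is base64 (never contains "."), sig is hex

def validate_upload(ticket, blob_name, size_bytes):
    """Proxy-side check run before the upload is forwarded to Blob Storage."""
    payload, _, sig = ticket.partition(".")
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False  # ticket was tampered with or signed by someone else
    claims = json.loads(base64.urlsafe_b64decode(payload))
    ext = "." + blob_name.rsplit(".", 1)[-1].lower() if "." in blob_name else ""
    return (
        claims["blob"] == blob_name
        and time.time() < claims["exp"]
        and size_bytes <= claims["maxBytes"]
        and ext in claims["exts"]
    )
```

The obvious drawback, and the reason people are voting for native support, is that this puts a proxy back in the upload path, which is exactly what handing clients a SAS token is meant to avoid.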

40 votes
    Rajesh shared this idea

    7 comments

      • Rob Eisenberg commented

        I'd like to see this feature request re-opened and re-evaluated. There are a few reasons for this:

        * The number of customers requesting this isn't clear from this ticket alone, because the same feature has been requested through several separate tickets; more customers are asking for this than this one count suggests. For example, see here: https://feedback.azure.com/forums/217298-storage/suggestions/31031605-provide-ability-to-limit-the-size-of-the-payload-b
        * This feature would help address a security concern raised in your own documentation. See best practice #7 here: https://docs.microsoft.com/en-us/azure/storage/common/storage-dotnet-shared-access-signature-part-1#best-practices-when-using-sas If this feature were implemented, item #7 from that list could be mitigated entirely.
        * There is a massive rise in edge computing, single-page apps, decentralized/hybrid apps, and collaborative apps based on CRDTs and IPFS, all of which could be built on Azure Storage more safely if this feature were implemented.

        Please consider re-opening this as it is critical for security in many important scenarios, enough to make me consider other cloud competitors.

      • Robert Abraham commented

        My company would also make good use of this feature. Is it infeasible, or just unprioritized at the moment? If the latter, I really should be able to vote for this request, to help capture the demand for it.

      • JT commented

        Can we combine this suggestion with the identical suggestion "Limit the number of times SAS can be used"?

      • Ted van der Veen commented

        Sure, we could work around this with a custom API call to a web role that issues a SAS token valid for only a few seconds at the moment the client code submits to the storage backend, but that does not explicitly restrict the client to a single submission (or any other maximum limit).

      • Bill Wilder commented

        This would have a large and important impact on customers who are afraid to expose storage directly to clients, even when protected by a shared access signature (the Valet Key pattern), because they could know that their expenses are capped. Otherwise there is a real risk of economic DoS, intended or not.

      • Ted van der Veen commented

        When issuing a Shared Access Signature (SAS token) there should be an (optional) parameter for the maximum number of transactions the client can perform.
        At the moment if we grant someone for example insert or add access to the Queue storage service, a potentially malicious client can insert a virtually unlimited number of messages into the queue during the lifespan of the SAS token. We need a way to limit the number of (CRUD) transactions.
        Transactions are already logged and counted for billing purposes, so this may not be too complex to implement.
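Until the service offers such a cap natively, the only option is to count operations yourself at a front-end proxy, which reintroduces the middle tier a SAS is supposed to remove. The sketch below is hypothetical: the class and method names are not any Azure API, and a real multi-instance deployment would need the counts in shared storage (e.g. a table or cache) rather than process memory.

```python
import threading

class TokenUseLimiter:
    """Illustrative proxy-side counter: cap how many operations a token may perform."""

    def __init__(self):
        self._lock = threading.Lock()   # serialize check-and-increment
        self._counts = {}               # token id -> operations performed so far

    def try_consume(self, token_id, max_uses):
        """Record one use and return True if the token is still under its cap."""
        with self._lock:
            used = self._counts.get(token_id, 0)
            if used >= max_uses:
                return False  # token has exhausted its transaction budget
            self._counts[token_id] = used + 1
            return True
```

A proxy would call try_consume with the token's signature (or another stable id) before forwarding each queue insert, rejecting the request once the budget is spent.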
