Storage

  1. 26 votes  ·  5 comments  ·  Files

    Hi folks,

    I have this update marked as “started”: we have 100 TiB available through premium, and through limited regions for standard (a public announcement of this is coming soon; the capability should already be available).

    For File Sync, we test by number of items (files/directories), as we don’t really care about file share size (it doesn’t affect the performance of sync, just upload/download). With v7, our current tested threshold (not a hard limit) is 50 million objects. This testing was done on ~20 TiB shares, but you should be able to go larger without issue.

    We plan to continue to scale up this tested threshold with future versions of the agent as we continue to test.

    Thanks,

    Will Gries
    Program Manager, Azure Files

  2. Make it possible to send Storage Analytics ($logs) to Log Analytics

    Currently Storage accounts can generate detailed logs which are stored under $logs in the storage account. But it is not possible to link this to Log Analytics. This functionality would enable monitoring data access, which is sometimes a regulatory requirement.

    108 votes  ·  3 comments  ·  General

    Work on this integration has started. Log Analytics will be one of the export options in diagnostic settings, with public preview estimated by Nov CY2019. Until this built-in connection is offered, you can refer to https://azure.microsoft.com/en-us/blog/query-azure-storage-analytics-logs-in-azure-log-analytics/ to build a short-term solution for exporting logs to Log Analytics.

    For any further questions, or to discuss your specific scenario, send us an email at azurestoragefeedback@microsoft.com.
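    Until the built-in export ships, the interim approach in the linked post boils down to reading the semicolon-delimited $logs entries and pushing them to Log Analytics yourself. A minimal sketch of the parsing step (the field names and sample line here are illustrative; the full 1.0 log format defines many more fields):

```python
# Field names for the leading columns of a Storage Analytics log entry.
# Illustrative subset only -- the real 1.0 format has many more fields.
FIELDS = ["version", "request_start_time", "operation_type",
          "request_status", "http_status_code"]

def parse_log_entry(line: str) -> dict:
    """Split one semicolon-delimited $logs line into a dict keyed by field name."""
    values = line.strip().split(";")
    return dict(zip(FIELDS, values))

# Example entry (abbreviated): version, timestamp, operation, status, HTTP code
sample = "1.0;2019-07-31T18:00:00.0000000Z;GetBlob;Success;200;5;5"
entry = parse_log_entry(sample)
```

    Each parsed dict could then be posted to a Log Analytics workspace as a custom log record.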

  3. Allow setting Archive tier at the account and container levels

    Currently the archive tier can be set only at the blob level. There are plenty of use cases for having entire storage accounts or containers for archival, where setting the tier for each blob is tedious and non-value-adding.

    136 votes  ·  9 comments  ·  Blobs

    Thank you for your feedback. We are currently in public preview of blob storage lifecycle management. The feature set offers a rich, rule-based policy engine which you can use to transition your data to the best access tier and to expire data at the end of its lifecycle. This framework can be used to match and tier blobs based on prefix, enabling batch archiving of an account, containers, or even virtual directories. Having talked to a number of customers, we concluded that this solution addresses many scenarios where the need for account- and container-level archiving exists. See our post on the Azure Blog to get started: https://azure.microsoft.com/en-us/blog/azure-blob-storage-lifecycle-management-public-preview/.

    For any further questions, or to discuss your specific scenario, send us an email at DLMFeedback@microsoft.com.
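    For reference, a lifecycle management rule that batch-archives a whole container might look like the following sketch. The rule name, prefix, and day count are placeholders, and the schema reflects the preview as described in the linked post:

```python
import json

# Sketch of a lifecycle management policy: move every block blob under the
# hypothetical "archive-container/" prefix to the Archive tier one day after
# last modification. All names and values here are illustrative.
policy = {
    "rules": [
        {
            "name": "archiveWholeContainer",
            "enabled": True,
            "type": "Lifecycle",
            "definition": {
                "filters": {
                    "blobTypes": ["blockBlob"],
                    "prefixMatch": ["archive-container/"],
                },
                "actions": {
                    "baseBlob": {
                        "tierToArchive": {"daysAfterModificationGreaterThan": 1}
                    }
                },
            },
        }
    ]
}

print(json.dumps(policy, indent=2))
```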

  4. Global File Locking for Azure File Sync

    For Azure File Sync on multiple geographically diverse servers, lock files across the enterprise to eliminate conflicts.

    For example, a user in Chicago opens a spreadsheet on a local server, and that server is running Azure File Sync to a share that is also mounted by a server in Atlanta also running Azure File Sync. The file lock for the spreadsheet should be global, preventing write access in Atlanta when opened in Chicago.

    See Panzura.com for reference.

    1,033 votes  ·  39 comments  ·  Files
  5. Appliance for Azure File Sync

    I want to use Azure File Sync on an easy, small server: no RAID, less storage, lower price, minimal installation, minimal IT pro involvement; something I can order from an online store or a PC shop and have Azure Files running within a few minutes.

    I hope for an appliance for using Azure Files storage with Azure File Sync.

    38 votes  ·  4 comments  ·  Files
  6. Please fix standalone Azure Storage Explorer to not change DateTime fields

    I'm using the standalone Azure Storage Explorer with a Table Storage database and when I double-click a row to see the fields and then click Update, Azure Storage Explorer changes the DateTime fields even though I didn't make any changes. It's adding 6 hours each time.

    It's easy to reproduce: open a table with a DateTime field, double-click a row, don't change anything - just click Update. Then open the same row and you'll see the time has changed. Click update again. Open the row again - the time changed again (adding 6 hours each time)!

    While searching for help…

    1 vote  ·  2 comments  ·  Storage Explorer
  7. Allow user-based access to Blob Containers (for support employees)

    For auditing purposes and to prevent data corruption, we want to give our support employees a user-centric, read-only access to Blob Containers in order to be able to investigate possible data corruptions (caused by bugs in systems).

    This is not possible now because the security architecture of Blob Service does not even know the concept of users or roles.

    SAS is not a secure enough mechanism, because it gives access to anyone who has the link, and you can't track who's actually using it.

    310 votes  ·  10 comments  ·  Blobs

    Thank you for your feedback. Currently we are in public preview of Azure Active Directory authentication for storage. This feature set allows you to use Azure’s role-based access control framework to grant specific permissions to users, groups, and applications, down to the scope of an individual blob container or queue. You can see the public preview announcement here: https://azure.microsoft.com/en-us/blog/announcing-the-preview-of-aad-authentication-for-storage/

    For any further questions, or to discuss your specific scenario, send us an email at azurestoragefeedback@microsoft.com.

  8. More detailed error messages

    We've run into an issue where data we tried to store wouldn't fit in the blob file. The error message was (416) The page range specified is invalid.

    At first it wasn't clear whether we were trying to start saving beyond the page blob, the data didn't fit, or something else was the culprit. Eventually we figured it out.

    It would be nice if the error message provided more details: page size, the size we tried to save, the location within the file we tried to save to, etc. This would've shown us the issue right away. Instead we had to spend a…

    12 votes  ·  2 comments  ·  Blobs

    Thank you for your feedback. We are currently working on providing this functionality and will provide updates when they become available. See the following article for the latest: https://docs.microsoft.com/en-us/rest/api/storageservices/status-and-error-codes2. Note that for REST API version 2017-07-29 and later, failed API operations also return the storage error code string in a response header. For any further questions, or to discuss your specific scenario, send us an email at azurestoragefeedback@microsoft.com.
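    For context on why Put Page returns 416: page blob writes must be aligned to 512-byte pages and must fit within the blob's current size. A sketch of the kind of diagnostic message the poster is asking for (the function and its wording are hypothetical, not part of any SDK):

```python
PAGE_SIZE = 512  # page blobs are read and written in 512-byte pages

def explain_page_range(start: int, length: int, blob_size: int) -> str:
    """Return 'ok', or a human-readable reason a Put Page range would be
    rejected with 416 -- the richer detail the poster wished the service
    had surfaced. All checks here mirror documented page blob constraints."""
    if start % PAGE_SIZE != 0:
        return f"start offset {start} is not {PAGE_SIZE}-byte aligned"
    if length <= 0 or length % PAGE_SIZE != 0:
        return f"length {length} is not a positive multiple of {PAGE_SIZE}"
    if start + length > blob_size:
        return (f"range end {start + length} exceeds the blob size "
                f"{blob_size}; resize the blob first")
    return "ok"
```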

  9. Provide an actual "read-only" role for Azure File Service

    The reader role available in RBAC doesn't really do what you think it'd do when applied to Azure File Service. A user with this role can't even see the Azure File Service when browsing the storage account.

    As I understand it this is by design as the reader role works on Azure objects and not on files and folders.

    What I need is a method that prevents users from accidentally deleting files or folders by giving them read-only access to the actual files and folders. I don't need fine grained control over access to files and folders, just a straightforward…

    78 votes  ·  2 comments  ·  Files

    Thank you for this feedback!

    We’re working on or have shipped several features that we think will satisfy this request.

    First, we have shipped the share snapshot feature, which enables you to protect a point in time for a file share. If a user deletes a file, you can restore it from a previous snapshot. To make this easier, Azure Backup will soon support scheduling share snapshots.

    Second, and more to the specific ask in the initial post, we are working on AAD authentication and authorization for Azure file shares. When we ship this feature, you will have the ability to set share ACLs that prevent deletes or modifications.

    Thanks,

    Will Gries
    Program Manager, Azure Files

  10. Azure Files - Access/Size Limits


    Please provide access to the Azure Files service for on-premises users/devices over an S2S VPN/ExpressRoute connection.

    Also, the 5 TB size limit is a deal breaker for many organizations; they need to scale higher. A workaround would be to provision a different storage account when the size limit is hit.

    223 votes  ·  started  ·  12 comments  ·  Files
  11. Storage to have read-only keys for blobs

    Allow Storage to have read-only keys that can be used in client tools like Azure Storage Explorer or Cloudberry for read-only access.
    Often, a client tool needs to be given to a customer to access files, and it is not feasible to give out the shared key, which has full access.

    Even better would be to allow for a "user" key to be generated with granular permissions on containers that can be set from Azure portal.

    17 votes  ·  2 comments  ·  Storage Explorer

    Thank you for your feedback. Currently we are in public preview of Azure Active Directory authentication for storage. This feature set allows you to use Azure’s role-based access control framework to grant specific permissions to users, groups, and applications, down to the scope of an individual blob container or queue. You can see the public preview announcement here: https://azure.microsoft.com/en-us/blog/announcing-the-preview-of-aad-authentication-for-storage/

    For any further questions, or to discuss your specific scenario, send us an email at azurestoragefeedback@microsoft.com.

  12. AzCopy should support /MIR option like robocopy

    I would like the performance of AzCopy with the functionality of robocopy /MIR: incrementally copy a local folder to Azure block blobs, including deleting blobs that no longer exist in the local folder. The scenario is wanting to one-way sync a local directory with a blob container. The reverse, one-way synchronizing block blobs to a local directory, would also be helpful. Currently I don't believe AzCopy can delete files from the destination.

    360 votes  ·  18 comments  ·  AzCopy
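    The /MIR behavior being requested is essentially a set difference between the local tree and the blob listing. A rough sketch of that planning step (names are illustrative; a real sync would also compare timestamps or hashes to detect changed files):

```python
def mirror_plan(local_files: set, remote_blobs: set):
    """Compute what a robocopy-/MIR-style one-way sync would do: upload
    anything present only locally, and delete blobs that no longer have a
    local counterpart. Change detection is omitted for brevity."""
    to_upload = sorted(local_files - remote_blobs)
    to_delete = sorted(remote_blobs - local_files)
    return to_upload, to_delete

# Example: "a" is new locally, "c" was deleted locally
uploads, deletes = mirror_plan({"a.txt", "b.txt"}, {"b.txt", "c.txt"})
```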
  13. Static website hosting in Azure blob storage

    This would require supporting a default file to render and some kind of redirect from the root to a particular container.

    2,239 votes  ·  75 comments  ·  Blobs

    Thank you for your feedback. We are currently in public preview of static website hosting for Azure Storage to enable this scenario. Check out the blog post here for more details: https://azure.microsoft.com/en-us/blog/azure-storage-static-web-hosting-public-preview. The feature set includes support for default documents and custom error documents for HTTP status code 404.

    For any further questions, or to discuss your specific scenario, send us an email at azurestoragefeedback@microsoft.com.

  14. ACLs for Azure Files

    I've started experimenting with Azure Files. One of the features I'm missing is that you cannot give access to folders/files on Azure Files based on Active Directory credentials. If you set up a typical file share, one would like to be able to grant/revoke access to folders and files based on information about users in AD.

    3,010 votes  ·  125 comments  ·  Files

    Hi everyone,

    We recently announced the General Availability of Azure Active Directory Domain Services (Azure AD DS) authentication for Azure Files! By enabling integration with Azure AD DS, you can mount your Azure file share over SMB using Azure AD credentials from Azure AD DS domain-joined Windows VMs, with NTFS ACLs enforced. For more details, please refer to our blog post: http://aka.ms/azure-file-aadds-authentication-ga-blog.

    As part of the GA announcement, we shared our upcoming plan to extend authentication support to Active Directory (AD), whether hosted on-premises or in the cloud. If you need an Azure Files solution with AD authentication today, you can consider installing Azure File Sync (AFS) on your Windows File Servers, where AD integration is fully supported.

    If you are interested in hearing future updates on Azure Files Active Directory authentication, please complete this sign-up survey: https://aka.ms/AzureFilesADAuthPreviewSurvey.

    Thanks,
    Azure Files Team

  15. Azure storage - WORM capable blobs

    To better use Azure as compliant storage for archiving, it'd be nice to have Azure support a WORM feature. More info is in the thread below:
    http://social.technet.microsoft.com/Forums/windowsazure/en-US/79258196-b8e1-4e10-90f2-feed54d69d0e/azure-and-worm-for-compliance?forum=windowsazuredata

    78 votes  ·  4 comments

    Thank you for your feedback. We are currently in public preview of immutable storage for Azure Blobs which will enable you to keep your data in a non-erasable and non-modifiable state for a configurable retention interval. Check out the blog post here for more details: https://azure.microsoft.com/en-us/blog/azure-immutable-blob-storage-now-in-public-preview/

    For any further questions, or to discuss your specific scenario, send us an email at azurestoragefeedback@microsoft.com.

  16. highlight how to disable Nagle's algorithm in your performance optimization MSDN articles

    The MSDN article that describes Nagle's algorithm (http://msdn.microsoft.com/en-us/library/windowsazure/hh697709.aspx) should highlight the performance gains from disabling Nagle's algorithm.

    In our case, disabling that algorithm yielded an order of magnitude improvement in latency (from 300 to 30 ms). This is a vital piece of information that is not highlighted in performance optimization documentation for Azure.

    It may even be worthwhile to consider disabling Nagle's algorithm by default, or provide some default configuration examples that remark on this algorithm.

    1 vote  ·  0 comments  ·  Tables
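    In the .NET storage client, the switch the linked article describes is ServicePointManager.UseNagleAlgorithm; at the socket level, the equivalent knob is the TCP_NODELAY option. A minimal illustration (shown in Python for brevity):

```python
import socket

def disable_nagle(sock: socket.socket) -> None:
    """Disable Nagle's algorithm on a TCP socket via TCP_NODELAY.
    This is the socket-level mechanism behind higher-level switches
    such as .NET's ServicePointManager.UseNagleAlgorithm = false."""
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
disable_nagle(s)
nodelay = s.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY)
s.close()
```

    With Nagle disabled, small requests (such as individual table entity operations) are sent immediately instead of being buffered, which is the source of the latency improvement described above.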
  17. Add a way to copy the contents of an entire storage account to another storage account

    Say your application stores important data in an Azure Storage Account, perhaps spread across both Table and Blob containers.

    It would be very handy to be able to copy the entire contents of a storage account to another storage account, so that you could easily duplicate your Azure Storage environment between your staging and production environments, allowing you to test updates to your application on a copy of your live production data before actually deploying them to production.

    I know that bits and pieces of this functionality exist already, via things like Copy Blobs and Blob Snapshots, but nothing built in…

    170 votes  ·  16 comments  ·  AzCopy

    Thank you for your feedback. We would like to get more feedback about this item before we prioritize it. Is the ask here to replicate a storage account on demand, or to have a mechanism that always keeps two or more storage accounts in sync?

    Meanwhile, to unblock some of our users, we have published a sample that copies blobs from one account to another: https://docs.microsoft.com/en-us/azure/storage/scripts/storage-common-transfer-between-storage-accounts?toc=%2fpowershell%2fmodule%2ftoc.json

  18. Increase the size of a BLOB Storage Account to an indefinite amount

    I'm currently working on a solution that will use multiple storage accounts and will continue to grow indefinitely.
    The 100 TB limit means that I have to continuously list the blobs in my container to work out the total size used, and then determine whether I need to create a new storage account on the fly. This is cumbersome and time-consuming for the type of performance the business requires.

    It would be better to remove the account size limitation, which would simplify the solution considerably and remove all the blob-listing look-ups, which seem counterintuitive. The only…

    9 votes  ·  3 comments  ·  Blobs

    Thank you for your feedback. We are currently working on increasing the scale limits of storage accounts. See the following post for updated capacity limits (currently at 5 PB for a single storage account): https://azure.microsoft.com/en-in/blog/announcing-larger-higher-scale-storage-accounts/. For any further questions, or to discuss your specific scenario, send us an email at azurestoragefeedback@microsoft.com.
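    Until the larger limits land everywhere, the roll-over bookkeeping the poster describes looks roughly like this sketch (names are hypothetical; real code would track usage via account metrics rather than by listing blobs):

```python
def pick_account(blob_size: int, usage: dict, capacity: int) -> str:
    """Return the first account with room for blob_size bytes, creating a
    new account name when all existing ones are full -- the bookkeeping
    the poster wants to avoid. `usage` maps account name to bytes used
    and is updated in place."""
    for name, used in usage.items():
        if used + blob_size <= capacity:
            usage[name] += blob_size
            return name
    new_name = f"account{len(usage) + 1}"
    usage[new_name] = blob_size
    return new_name

# Example with a tiny 100-byte "capacity": account1 is nearly full,
# so a 20-byte blob forces a new account.
usage = {"account1": 90}
chosen = pick_account(20, usage, 100)
```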

  19. Provide Time to live feature for Blobs

    If I need to provide a user (or external system) some data (a blob) that might be the outcome of some processing, and it has an expiration time, I'd like to just put a new blob and set a TTL property with a TimeSpan (or set an absolute DateTime). When the period is over, my blob is deleted, so I don't have to pay for it and don't need to spin up a service to do it myself.

    1,287 votes  ·  38 comments  ·  Blobs

    Thank you for your feedback. We are currently in public preview of blob storage lifecycle management. The feature set offers a rich, rule-based policy engine which you can use to transition your data to the best access tier and to expire data at the end of its lifecycle. See our post on the Azure Blog to get started: https://azure.microsoft.com/en-us/blog/azure-blob-storage-lifecycle-management-public-preview/.

    For any further questions, or to discuss your specific scenario, send us an email at DLMFeedback@microsoft.com.

  20. Email Notifications of Storage/Capacity

    Have email notifications of storage limits or service capacities. Recently ran out of space with no warning…

    7 votes  ·  0 comments  ·  General

Feedback and Knowledge Base