Storage
-
Enable immediate sync after changes on the Azure file share for Azure File Sync
When I make a change on my server endpoint (Windows File Server), Azure File Sync initiates a sync session very quickly after the file is saved. For changes on the cloud endpoint (Azure file share), however, I have to wait at least 24 hours for the changes to sync down to my server endpoints.
Please invest in features that initiate a sync session immediately after changes are made cloud-side, or at least increase the frequency of sync from the Azure file share.
2,212 votes
As you mentioned, we initiate a change detection job once every 24 hours to enumerate the Azure file share and scan for changes. This is required for the Azure file share because Azure Files currently lacks a change notification mechanism like Windows Server has. Long term we will add this capability and have automatic immediate sync.
There is now a way to trigger sync to happen on files that are placed directly in the Azure file share. With this new cmdlet you can point sync at particular files or directories and have it look for changes right then. This is intended for scenarios where some automated process in Azure is doing the file edits, or where an admin is performing a migration (like moving a new directory of files into the share). For end-user changes, the best thing to do is install Azure File Sync in an IaaS VM…
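The reply above refers to a PowerShell cmdlet; a rough Python equivalent through the storage sync management SDK might look like the sketch below, where the client class, method, and parameter names are assumptions about the azure-mgmt-storagesync package rather than anything confirmed in this thread.

```python
# Illustrative sketch only: the azure-mgmt-storagesync names below are assumptions,
# not confirmed by this thread. Verify against the current SDK before relying on it.
from azure.identity import DefaultAzureCredential
from azure.mgmt.storagesync import MicrosoftStorageSync

client = MicrosoftStorageSync(DefaultAzureCredential(), "<subscription-id>")

# Ask sync to scan one directory on the cloud endpoint now, instead of waiting
# for the periodic 24-hour change detection job.
poller = client.cloud_endpoints.begin_trigger_change_detection(
    resource_group_name="<resource-group>",
    storage_sync_service_name="<sync-service>",
    sync_group_name="<sync-group>",
    cloud_endpoint_name="<cloud-endpoint>",
    parameters={"directoryPath": "incoming/new-files", "changeDetectionMode": "Recursive"},
)
poller.wait()  # long-running operation; returns once change detection has been triggered
```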
-
Schedule snapshots of Azure file shares
I want to schedule snapshots of Azure Files: every morning, every Monday, etc.
If a user runs into trouble, they can then restore their files from that morning's or that Monday's snapshot, and the IT pro can tell anyone with a problem, "You can go back to this morning's copy."
355 votes
Thank you for your feedback! We are working to provide this functionality through the Azure Backup offering. They are currently testing this functionality in a private preview and will be entering a public preview soon. I will update this feedback item as soon as this is released.
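In the meantime, a snapshot can already be taken on demand, so a timer job can approximate a schedule today. A minimal sketch with the azure-storage-file-share Python package, assuming a connection string; whatever runs this each morning (cron, Task Scheduler, an Azure Functions timer) provides the schedule.

```python
# Take a point-in-time snapshot of an Azure file share.
# Run this from any scheduler (cron, Task Scheduler, an Azure Functions timer, ...).
from azure.storage.fileshare import ShareClient

conn_str = "<storage-account-connection-string>"  # placeholder
share = ShareClient.from_connection_string(conn_str, share_name="userdata")

snapshot = share.create_snapshot()
print("Created share snapshot:", snapshot["snapshot"])  # timestamp identifying the snapshot
```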
Thanks,
Will Gries
Program Manager, Azure Files
-
Enable multiple custom domains for storage accounts
I would like to register 2 or more custom domains to point to 1 storage account.
For example, foo.com and bar.com would both point to blob.etc.windows.net
272 votes
Thank you for your feedback. Enabling multiple custom domains for a single storage account is planned for the coming year. We will provide updates when they become available. For any further questions, or to discuss your specific scenario, send us an email at azurestoragefeedback@microsoft.com.
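For context, a storage account today accepts a single custom domain; setting it through the Python management SDK looks roughly like the sketch below (resource names are placeholders, and the model and property names reflect my understanding of the azure-mgmt-storage package).

```python
# Map one custom domain to a storage account (the current single-domain limit).
# foo.com must already have a CNAME record pointing at <account>.blob.core.windows.net.
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient
from azure.mgmt.storage.models import CustomDomain, StorageAccountUpdateParameters

client = StorageManagementClient(DefaultAzureCredential(), "<subscription-id>")
client.storage_accounts.update(
    "<resource-group>",
    "<account-name>",
    StorageAccountUpdateParameters(
        custom_domain=CustomDomain(name="foo.com", use_sub_domain_name=False)
    ),
)
```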
-
Azure Files and Azure File Sync must support all Unicode characters
Azure Files and Azure File Sync do not support all of the Unicode characters that NTFS supports. This is particularly challenging when working with languages that use non-Latin character sets (see this post on issues supporting Japanese character sets: https://social.msdn.microsoft.com/Forums/ja-JP/1ed730a9-9a4d-40dd-b84e-6f184e35e633/).
This is also a significant blocker when trying to adopt Azure File Sync, as file names that are valid on NTFS are rejected by Azure Files.
142 votes
Thank you for the feedback! This is an issue we are aware of, and we are actively working to close this gap this year. Please feel free to continue to vote for this item to help us prioritize this effort.
Thanks,
Will Gries
Program Manager, Azure Files
-
Changeable "last-modified" file property on Azure File Service
At the moment, our application stores its documents in a normal folder on a network share. The “Last Modification Time” of the files is important for internal version handling, and our application relies heavily on this file property.
Now we have decided to move our documents into the cloud, and we started using the Azure File Service preview. When we upload the documents via the REST API, the “Last-Modified” property of the uploaded files gets the current time – that’s reasonable.
But unfortunately, the subsequent “Set File Properties” REST API call (https://msdn.microsoft.com/en-us/library/azure/dn166975.aspx) does not support changing the “Last-Modified” property. (I tried both with…
113 votes
Thanks for this feedback. Long term, we would like the File REST API for Azure Files to be a superset of the functionality available in SMB. In the short term, we are looking to close some of the most painful gaps in the API, like the file attributes that you can set via SMB but not via File REST.
We would appreciate further feedback on this item, in particular on the things that you can do today over SMB but not over REST, to help us prioritize the order in which we close the gap!
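For reference, the SMB timestamps on a file can be pinned at upload time through the current Python SDK, which may or may not cover the “Last-Modified” the poster needs; a sketch assuming a recent azure-storage-file-share version and its SMB-timestamp keyword arguments (the parameter names are my best recollection, not confirmed here).

```python
# Upload a file and pin its SMB "last write time" (this is the SMB property;
# the REST Last-Modified header is still controlled by the service).
from datetime import datetime, timezone
from azure.storage.fileshare import ShareFileClient

conn_str = "<storage-account-connection-string>"  # placeholder
file_client = ShareFileClient.from_connection_string(
    conn_str, share_name="documents", file_path="reports/q1.docx"
)

original_mtime = datetime(2015, 3, 31, 17, 45, tzinfo=timezone.utc)  # value to preserve
with open("q1.docx", "rb") as data:
    file_client.upload_file(
        data,
        file_attributes="none",
        file_creation_time="now",
        file_last_write_time=original_mtime,  # preserved timestamp instead of "now"
    )
```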
Thanks,
Will Gries
Program Manager, Azure Files
-
Scale to Premium Storage from standard Storage
Please allow us to switch from Standard Storage to Premium Storage and the other way around. Or at least provide us with the option to scale up from Standard Storage to Premium Storage (one way).
The current process involves stopping the affected VM, copying the disk with AzCopy, and recreating the VM in the new storage account.
I'm pretty sure Microsoft's team can take care of this process behind the scenes and just provide us with a user-interface option and/or an API method. Currently we have multiple VMs (about 20+) running on standard storage that need to be upgraded…
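The copy step of that manual process, sketched with the azure-storage-blob Python package; the account URLs, container/blob names, and SAS token are placeholders, and the VM is assumed to be deallocated before the VHD is copied.

```python
# Server-side copy of an unmanaged VHD from a standard account to a premium account.
# Deallocate the source VM first; the SAS token below is a placeholder.
from azure.storage.blob import BlobClient

source_url = (
    "https://standardaccount.blob.core.windows.net/vhds/app-server-os.vhd"
    "?<sas-token-with-read-access>"
)

dest = BlobClient.from_connection_string(
    "<premium-account-connection-string>",
    container_name="vhds",
    blob_name="app-server-os.vhd",
)

copy = dest.start_copy_from_url(source_url)   # asynchronous, server-side copy
print("Copy status:", copy["copy_status"])    # poll get_blob_properties() until 'success'
```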
92 votes
Azure Storage is actively working on features that could improve the migration story from standard to premium disks.
-
Consider Azure premium file shares for desktop applications that need heavy workloads, latency < 1 ms, and high I/O, but only need 100 GB of capacity
We have a .NET desktop application for exporters, and it runs well on our PCs and local area network. We built a 500 GB premium file share and mapped it as a share path on our local machines, but it has become very slow. If we only open one database file and edit or update a single item, it is only a little slow, but when we open, create, or update many items and many files, it becomes very, very slow. We also tried it from a VM: when we run our desktop application inside the VM it is fine, but when we connect to the file share…
83 votes
Hi folks,
While we have made premium file shares generally available, we agree there is ongoing performance work associated with them.
One of the big performance-related items is one you touched on in the description: performance when doing activity on many files at once. Azure Files, on both premium and standard, performs best for read/write operations with few handle/metadata operations (e.g. databases) and less well for scenarios that require lots of handle and metadata operations. We are working to improve this performance category and hope to have more to share soon.
Thanks,
Will Gries
Program Manager, Azure Files
-
Auto defragment append blobs
Append blobs are great; however, when such a blob is generated through a long series of small appends, the read performance of the resulting blob is very poor: from 10x to 20x slower than reading a regular page blob.
The performance problem goes away if the app rewrites the append blob in large 4 MB chunks. However, this process is complicated to set up and collides with any 'always-on' property of the app.
As append blobs are append-only, it would be much better if Azure took care of defragmenting append blobs on its own, possibly through a dedicated API operation…
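That rewrite workaround, sketched with the azure-storage-blob Python package: stream the fragmented blob and re-append it into a fresh append blob in 4 MiB blocks (the maximum size of a single append), assuming the writer is paused while this runs.

```python
# Compact a fragmented append blob by rewriting it in 4 MiB appends.
# Assumes nothing is appending to the source while this runs.
from azure.storage.blob import ContainerClient

FOUR_MIB = 4 * 1024 * 1024
container = ContainerClient.from_connection_string(
    "<storage-account-connection-string>", container_name="logs"
)

src = container.get_blob_client("events.log")
dst = container.get_blob_client("events.log.compacted")
dst.create_append_blob()

buffer = b""
for chunk in src.download_blob().chunks():     # stream the fragmented source
    buffer += chunk
    while len(buffer) >= FOUR_MIB:
        dst.append_block(buffer[:FOUR_MIB])    # one large append per 4 MiB
        buffer = buffer[FOUR_MIB:]
if buffer:
    dst.append_block(buffer)                   # final partial block
# Swap names afterwards (copy + delete) so readers pick up the compacted blob.
```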
75 votes
Thank you for your feedback. This work is planned for the coming year. For any further questions, or to discuss your specific scenario, send us an email at azurestoragefeedback@microsoft.com.
-
[Feature Request] Please enable customer to use alert rule and capacity graph for Premium Storage Accounts.
According to the document below, Premium Storage Accounts have the following limits.
- Azure subscription and service limits, quotas, and constraints: https://azure.microsoft.com/en-us/documentation/articles/azure-subscription-service-limits/
- Total disk capacity per account: 35 TB
- Total snapshot capacity per account: 10 TB
- Max bandwidth per account (ingress + egress): <= 50 Gbps
We need to manage these capacities so that we can run operations against Premium Storage Accounts without hitting errors.
Please enable customers to use alert rules and the capacity graph for Premium Storage Accounts.
31 votes
Azure Storage is currently working on features that would address this problem.
-
Allow smaller premium drive sizes for VM
Currently the smallest premium disk size when creating a VM is 128 GB. We have many VMs that are used solely for computation-intensive workloads, and large amounts of data are not stored on the operating system drive. A 64 GB option would be sufficient for the space we are using and would help reduce some of the premium storage costs.
26 votes
Azure Storage is actively working on features that would address this concern.
-
Increase the number of attached Disks per Core
Please allow attaching more disks per core, or increase the maximum size of a disk (1 TB by default), without forcing an upgrade of the VM instance.
For my project I use an A1 Standard VM instance, and its CPU/memory are sufficient, but the project needs more storage (up to 10 TB).
I can't attach more disks without spending more on CPU/memory that my project doesn't require.
I think we should be charged for the capacity/storage we really use, not for what we're forced to use!
Please support it!
10 votes
Azure Storage is actively working on features that could address this request.
-
Implement viewer for json / csv blob files
At the moment the only way to view a file stored as a blob is to download it and open it in an editor. This is fine for larger files, but for smaller ones it would be a bit easier to get a quick (pre)view of the file in the browser just by clicking it. It should not be that hard to implement an additional pane with a view of the raw file contents.
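As a stopgap, a quick text preview takes only a few lines with the azure-storage-blob Python package (the connection string, container, and blob name are placeholders).

```python
# Print the first part of a small JSON/CSV blob as a quick preview,
# instead of downloading it and opening an editor.
from azure.storage.blob import BlobClient

blob = BlobClient.from_connection_string(
    "<storage-account-connection-string>", container_name="data", blob_name="report.csv"
)

text = blob.download_blob(encoding="utf-8").readall()
print(text[:2000])  # preview the first ~2000 characters
```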
9 votes
Thank you for your feedback. This work is planned for the coming year. We will provide updates when they become available. For any further questions, or to discuss your specific scenario, send us an email at azurestoragefeedback@microsoft.com.
-
Display OS Disks and VM Images in Delete storage account blade
There are many "cannot delete storage resource" questions caused by OS disk or VM image references, such as the ones below:
- https://feedback.azure.com/forums/217298-storage/suggestions/11525553-cannot-delete-storage-account
- https://feedback.azure.com/forums/217298-storage/suggestions/10976463-failed-to-delete-storage-account-10-51-am-failed-t
- http://stackoverflow.com/q/34362904/361100
It would be good to show VM disk and OS image information in the "Delete storage account" blade summary page.
7 votes
Azure Storage is actively working on features that could address this problem.