Allow setting Archive tier at the account and container levels
Currently the Archive tier can be set only at the blob level. There are plenty of use cases for dedicating entire storage accounts or containers to archival, where setting the tier for each blob individually is tedious and adds no value.
Thank you for your feedback. We are currently in public preview of blob storage lifecycle management. The feature set offers a rich, rule-based policy engine which you can use to transition your data to the best access tier and to expire data at the end of its lifecycle. This framework can match and tier blobs based on prefix to enable batch archiving of an account, containers, or even virtual directories. Having talked to a number of customers, we concluded that this solution addresses many scenarios where the need for account- and container-level archiving exists. See our post on the Azure Blog to get started: https://azure.microsoft.com/en-us/blog/azure-blob-storage-lifecycle-management-public-preview/.
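As a rough illustration of the prefix-based tiering described above, a lifecycle management policy that moves every block blob under a given container prefix to the Archive tier might look something like the following sketch. The rule name, the container name `archive-container`, and the zero-day threshold are assumptions for this example, not values from the thread:

```json
{
  "rules": [
    {
      "name": "archiveEverythingUnderPrefix",
      "enabled": true,
      "type": "Lifecycle",
      "definition": {
        "filters": {
          "blobTypes": [ "blockBlob" ],
          "prefixMatch": [ "archive-container/" ]
        },
        "actions": {
          "baseBlob": {
            "tierToArchive": { "daysAfterModificationGreaterThan": 0 }
          }
        }
      }
    }
  ]
}
```

With a policy along these lines attached to the storage account, new blobs landing under the prefix would be transitioned to Archive on the service's regular policy evaluation schedule rather than per-blob by the application.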
For any further questions, or to discuss your specific scenario, send us an email at DLMFeedback@microsoft.com.
Craig Humphrey commented
+1 to also being able to set the hot/cool/archive tier at the Share level.
James Boyce commented
Something to look at in the interim, before the Azure team gives us this option, is a script that Marc Kean wrote. It can parse through your blobs and set the storage tier based on the age of the blob. He also uploaded it to the automation template directory in Azure, so you could add it to your automation account and have it run on a schedule from there. Here's his post explaining how it works:
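The core of a script like the one described above is just "select blobs older than N days, then set their tier". Here is a minimal Python sketch of that selection step; the function name `blobs_to_archive` and the sample blob names are hypothetical, and in a real script each selected blob would then have its tier changed via the storage SDK or REST API (not shown):

```python
from datetime import datetime, timedelta, timezone

def blobs_to_archive(blobs, min_age_days, now=None):
    """Given (blob_name, last_modified) pairs, return the names of
    blobs whose last modification is at least min_age_days old.
    These are the candidates to move to the Archive tier."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=min_age_days)
    return [name for name, last_modified in blobs if last_modified <= cutoff]

if __name__ == "__main__":
    # Hypothetical listing, as it might come back from a blob enumeration.
    now = datetime(2018, 9, 1, tzinfo=timezone.utc)
    blobs = [
        ("logs/2018-01.log", datetime(2018, 1, 31, tzinfo=timezone.utc)),
        ("logs/2018-08.log", datetime(2018, 8, 30, tzinfo=timezone.utc)),
    ]
    # Only the January blob is more than 30 days old at this "now".
    print(blobs_to_archive(blobs, min_age_days=30, now=now))
```

Running this on a schedule (for example from an Azure Automation account, as the comment suggests) and feeding the result into a set-tier call per blob reproduces the age-based tiering behavior.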
Another option, if you aren't adding a bunch of files every month, is to set up a flow in Microsoft Flow to watch your Azure container (or a folder in an Azure container) and set the storage tier of each blob when it gets created. Basic Flow plans allow 750 runs of your flows per month for free. I'm doing this to offload data from a OneDrive folder to Azure for long-term storage: the flow watches for the creation of a file in OneDrive, then creates the blob in Azure and sets its tier to archive.
Rijnhard Hessel commented
Just signed up to Azure. I was looking at storing 50TB immediately in archive, with more to come, for extended periods.
Then I discovered this. Having to transfer to cool storage first and then to archive makes it less economical (which is possibly why they haven't done it).
The amount of effort it is going to take to archive files individually, with the amount of data we have, is **** ridiculous.
Congrats guys, customer lost.
Any updates on this?
Yes, you really need to make it automated so everything can be set to archive.
If you could program it to choose hot or cool, why can you not program it to choose archive as well??
Also, why can we not use Capital Letters??
Lester Waters commented
The Azure Storage team really messed this one up. I asked for: (a) allowing the "archive" tier to be a global default, just like "cool" is; and (b) allowing inheritance of the tier by setting it at the container level. The storage team insisted that the "apps" would be able to handle it... yeah maybe one day. We abandoned using the archive tier as a result.
I found a PowerShell script that puts everything in the container into archive, but I have not tested it yet.
I am still missing the option to store everything directly in the archive access tier.
If I remember correctly, the competing company Amazon has something called Glacier, which gives you the ability to back up from S3 directly to Glacier for archival purposes.
Derek Gabriel commented
This missing feature seems foolish. How hard could it be to cycle through the blobs when I enable this on the container? I assume I can do it through the API, so just add a check box that saves me the time...