How can we improve Azure Storage?

Provide Time to live feature for Blobs

If I need to provide a user (or external system) some data (blob) which might be outcome of some processing (or other) and it has some expiration time I'd like to just put a new blob and set TTL property with TimeSpan (or set absolute DateTime). When the period is over my blob is deleted. So I don't have to pay for it and don't need to spin up some service for doing it myself.

1,280 votes
AlexLF shared this idea

Thank you for your feedback. We are currently in public preview of blob storage lifecycle management. The feature set offers a rich, rule-based policy engine which you can use to transition your data to the best access tier and to expire data at the end of its lifecycle. See our post on the Azure Blog to get started:

For any further questions, or to discuss your specific scenario, send us an email at


  • William Wong commented:

    I think TTL and rule-based are different approaches. The implemented rule-based approach should not be confused with the original ask; they serve different purposes.

    TTL is on a per blob basis, based on creation time, and should be as granular as 15 minutes.

    The suggested rule-based approach is per account (not per container), limited to 100 rules, based on last modification time, based on file path patterns, and has a granularity of 1 day.

    And one of the main concerns about the rule-based approach is that if developers have already set up the structure of their storage, it is not trivial to move over, because of the file path pattern requirement and the global rules. They would need to rename and touch all of their blobs to use the new feature.

    So I believe this response is not what the community asked for.
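    The per-blob semantics being asked for could be sketched like this (an illustration only; Azure Storage has no such TTL property today, so `is_expired` and the metadata-style `ttl` are hypothetical):

```python
from datetime import datetime, timedelta, timezone

def is_expired(created_at: datetime, ttl: timedelta, now: datetime) -> bool:
    """Per-blob TTL as requested: expiry is computed from *creation*
    time (not last modification) and can be minute-granular."""
    return now >= created_at + ttl

# Example: a blob created at 12:00 UTC with a 15-minute TTL.
created = datetime(2018, 9, 1, 12, 0, tzinfo=timezone.utc)
print(is_expired(created, timedelta(minutes=15), created + timedelta(minutes=10)))  # False
print(is_expired(created, timedelta(minutes=15), created + timedelta(minutes=16)))  # True
```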

  • Sean Feldman commented:

    Blob storage lifecycle management does NOT cover this specific feature.
    The status from the Storage team is misleading! Currently, the feature can operate at a container/pattern level, but not on individual blobs, and the team is pushing back on a per-blob policy.
    I've raised this with the Storage team, asking for a policy-like approach using well-defined metadata, but they have no immediate timeline for it.

    Suggested approach:

    "rules": [
    "name": "expirationRuleBasedOnMetadata",
    "enabled": true,
    "type": "Lifecycle",
    "definition": {
    "filters": {
    "blobTypes": [ "blockBlob" ]
    "actions": {
    "baseBlob": {
    "delete": { "daysAfterModificationGreaterThan": "attribute:ttl" }

  • Anonymous commented:

    This is an important feature for companies like us who need to archive or delete automatically after a certain period of time.

  • Christopher Warrington commented:

    The storage lifecycle management preview is fine, but doesn't address the issue I have. I want to be able to create a blob and set it to expire in, say, 3 hours. When 3 hours have elapsed, the blob should be deleted (perhaps subject to the container's soft delete policy). If I create a different blob, I want to set the expiration to, say, 15 minutes. I also want to be able to extend and shrink a blob's TTL after it has been created. The once-a-day, policy-based approach doesn't let me do any of this.

  • David Gard commented:

    I find it worrying that currently the only way to delete 'expired' objects in a blob store is to iterate through them all within a Logic App, checking each against an arbitrary date to determine whether it should be deleted.

    I've tried the "Delete Old Blobs" Logic App template, and somehow a single run against a blob store with ~1100 objects generated just short of 4500 billable events, and that's without actually deleting any of the blobs. For blob stores with a large number of objects, using this Logic App template would likely get very expensive very quickly.

    One would hope that a proper mechanism for setting a TTL on an object in a blob store will arrive soon, particularly given the arrival of GDPR in the EU.

  • Aniruddha Diwakar commented:

    Sorry for the immediate second post. I forgot to mention that this will actually also improve security, as we are making sure that the blob is no longer available once it has already been used by my consumer. Of course, this setting should be something like On/Off per blob/container, so that I can choose whether or not to use this feature.

    Basically, this is more like a "use once and throw away" kind of feature.

  • Aniruddha Diwakar commented:

    Is there a way similar logic can be applied when the blob is accessed for the first time? Basically, as the owner of the data, I want my consumer to be able to access the data for the next, say, X hours. But the moment they access the data and the download of the blob succeeds for the first time, I don't want to wait for the remaining hours to expire/delete the blob. I want to delete the blob immediately after the download is completed.
    Any ideas on how this can be done? Or is there a plan to have this feature in Azure itself, so that I don't have to worry about building and maintaining another system or component?
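    One way to approximate this "delete on first successful download" flow today is to wrap the two storage calls yourself. A sketch with injected callables (`download` and `delete` are placeholders for real storage-client calls, not an Azure API):

```python
def one_time_download(download, delete):
    """Fetch a blob once, then remove it so it cannot be fetched again.

    `download` and `delete` are caller-supplied callables wrapping the
    real storage client (hypothetical; Azure has no built-in
    delete-on-download). If the download raises, the blob is kept so
    the consumer can retry.
    """
    data = download()  # raises on failure; blob stays for a retry
    delete()           # success: remove the blob immediately
    return data
```

    Note that this is not atomic: a crash between the two calls leaves the blob behind, which is exactly why a server-side TTL would still be wanted as a backstop.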

  • Ramon de Klein commented:

    We implemented this on our blob storage, and in two days the logic app resulted in a bill of $233 (logic app only). I cannot even consider this logic app a solution. It's far more efficient to create a PowerShell/C# program that lists the blobs and removes the old ones instead.

    The only proper solution is to provide a TTL like AWS S3 does, so that the deletion is actually transparent.
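    The small program suggested above might look roughly like this (a sketch assuming the `azure-storage-blob` v12 Python SDK; the retention period and client wiring are placeholders):

```python
from datetime import datetime, timedelta, timezone

MAX_AGE = timedelta(days=7)  # illustrative retention period

def too_old(last_modified: datetime, now: datetime,
            max_age: timedelta = MAX_AGE) -> bool:
    """The same check a lifecycle rule applies: a blob is stale once
    its last modification is older than the retention period."""
    return now - last_modified > max_age

def purge_container(container_client) -> int:
    """List every blob and delete the stale ones.

    `container_client` is assumed to be an azure.storage.blob
    ContainerClient (v12 SDK), whose list_blobs() yields items
    carrying `name` and `last_modified`. Returns the delete count.
    """
    now = datetime.now(timezone.utc)
    deleted = 0
    for blob in container_client.list_blobs():
        if too_old(blob.last_modified, now):
            container_client.delete_blob(blob.name)
            deleted += 1
    return deleted
```

    Unlike the Logic App, each run costs only one listing pass plus one delete call per stale blob.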

  • Ramon de Klein commented:

    I have implemented a logic app, but I don't like this solution for a variety of reasons:
    1. I need to maintain a logic app.
    2. I need to pay for the logic app when it triggers.
    3. It doesn't seem to scale well. Listing all blobs into an array is madness with a container with >1M objects.
    4. I get "Rate limit is exceeded. Try again in 4 seconds." when deleting the blobs.

    The AWS S3 lifecycle method is much more convenient and user-friendly.

    It was confirmed to be implemented in 2017, but the only thing we got was a lousy Logic App that doesn't work well. Please come up with something better.

  • Oliver Tomlinson commented:

    Just received this from MS after chasing:

    "We actually will have something published shortly that will enable this with Logic Apps (some final testing underway on this), while the full API/policy based version of it will be shipped in CY18 per the current plan."

  • Oliver Tomlinson commented:

    Please allow us to define a TTL/expiry on a container that applies to all blobs in the container, not just individual blobs! Thank you.

    P.S. Where is the update from the Azure Storage Team on this?

    Your last response was over 7 months ago, not the "at least once per quarter" as promised.

    Why doesn't the Azure Storage Team expose a public road map like other Azure Teams?!?

