Thanks for your feedback. We are now planning to add support for /MIR- and rsync-like functionality in AzCopy. Unlike those tools, the sync mechanism will compare files based only on timestamp and file size. Deleting extra files in the destination will be controlled by an optional flag.
The first preview of this feature in AzCopy is expected to be released around May 2018.
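For reference, in the AzCopy v10 release that eventually shipped this feature, the behavior described above surfaces roughly as follows. The command and flag names are taken from the released tool and may differ from the preview build; the account URL and SAS token are placeholders:

```shell
# Mirror a local folder to a blob container, comparing last-modified time
# and file size (AzCopy v10 syntax; placeholder URL and SAS token).
azcopy sync "/data/photos" \
  "https://myaccount.blob.core.windows.net/photos?<SAS-token>" \
  --recursive

# Opt in to deleting destination blobs that no longer exist at the source,
# giving /MIR-like mirroring behavior.
azcopy sync "/data/photos" \
  "https://myaccount.blob.core.windows.net/photos?<SAS-token>" \
  --recursive --delete-destination=true
```

Without `--delete-destination=true`, a sync only copies new and changed files, so the delete behavior stays opt-in as described above.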
Agreed -- it would make backing up one storage account to another painless. I am relatively new to Azure Storage, have just moved over 300,000 files to blob storage, and cannot find ANY MS utility that can keep my source account in sync with a backup account.
Background: While doing some testing, it became apparent how easy it is to delete a container and all blobs within it. Simply click the Delete icon for any container, click "Yes" to confirm, and all your production blobs may soon be magnetic dust. And because most production code will automatically (re)create a container if it does not exist, you will NOT be able to recover your data (see point 2 below):
This from Microsoft:
If you delete a Blob Storage container, we cannot guarantee that the data can be recovered. We can initiate a best-effort option, though: if the data hasn't been garbage collected yet, we would open a critical-response incident ticket as soon as possible. For more information, feel free to visit the Azure Storage webpage. It is also good practice to keep a backup of your data, but if this were to happen, the hypothetical procedure (behind the scenes) would be:
1. Storage doesn’t provide in-place recovery, which means the data won’t be restored to the original storage account. Instead, it will be put in the secondary region.
2. We would ask you NOT to recreate the containers. It is crucial that a container with the same name is not created, because doing so will overwrite the path to the pre-existing container and eliminate any chance of recovery.
3. If the storage account in question is an LRS account, recovery is not possible. If it is GRS or RA-GRS, we would ask you to execute the following steps:
a. You would need to generate a SAS token with read and list permissions for the container you want to recover, setting the expiry time at least 7 days in the future so the recovery has time to complete.
b. You must make your account RA-GRS via the portal. I would recommend doing this anyway, as the data is safer with this kind of redundancy.
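Step (a) above can be sketched with the Azure CLI. The account name, container name, and the GNU `date` invocation are illustrative assumptions, not part of Microsoft's stated procedure:

```shell
# Generate a SAS token with read (r) and list (l) permissions for the
# container to be recovered, valid for 7 days from now.
# "myaccount" and "mycontainer" are placeholder names.
az storage container generate-sas \
  --account-name myaccount \
  --name mycontainer \
  --permissions rl \
  --expiry "$(date -u -d '+7 days' '+%Y-%m-%dT%H:%MZ')" \
  --output tsv
```

The command prints the SAS token string, which you would then hand to the support engineer handling the recovery.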
Bottom line -- if you have a production system and some intern or newbie decides to delete a container that holds production data, you are basically doomed.
You can now use the Azure CDN to access blobs with custom domains over HTTPS. See the following article for instructions on how to do so: https://docs.microsoft.com/en-us/azure/storage/storage-https-custom-domain-cdn. Having talked to a number of customers, we concluded that this solution addresses many of the scenarios that require HTTPS access to blobs with custom domains.
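As a rough sketch of that setup with the Azure CLI (all resource names and the hostname are placeholders; the linked article is the authoritative walkthrough):

```shell
# Map a custom domain onto a CDN endpoint that fronts the blob origin
# (placeholder resource names throughout).
az cdn custom-domain create \
  --resource-group mygroup \
  --profile-name mycdnprofile \
  --endpoint-name myendpoint \
  --name mydomain \
  --hostname cdn.contoso.com

# Enable HTTPS on that custom domain with a CDN-managed certificate.
az cdn custom-domain enable-https \
  --resource-group mygroup \
  --profile-name mycdnprofile \
  --endpoint-name myendpoint \
  --name mydomain
```

A CNAME record pointing the custom hostname at the CDN endpoint must exist before the `create` call succeeds.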
Native Azure Storage support for using SSL to access blobs at custom domains is still on our backlog. We would love to hear about your scenarios where using the Azure CDN is not an acceptable solution, either by posting on this thread or sending us an email at firstname.lastname@example.org.
As the multitude of posts below shows, can someone (anyone?) from inside Microsoft please post a reply regarding when SSL (HTTPS) will be supported with custom domains?
As most of the comments below indicate, we WANT to use MS services, but unless SSL functionality arrives before the end of this year (2016), many of us will simply be forced into using a different service, as we cannot make any kind of argument to our superiors for why our competitors can host their content via HTTPS while we (who are using MS services) cannot.
Tomorrow, I have to try to explain to a panel of execs why we cannot use HTTPS with our custom domain like our competitors do. I fear they may simply say, "Well then, let's switch services!"
Again, an MS reply / announcement / specific date / any help would be in order.