Work on Bytecode support has started. We will update here when it becomes generally available.
Thank you for your suggestion and votes.
We are currently working on this and expect to release it before the end of 2018.
We have just released a major update to our pricing for Cosmos DB.
Starting today, customers can provision a database with as little as 400 RU/s (about $0.77/day USD). Combined with the ability to share throughput across collections, this should make Cosmos DB much more affordable for users with many small collections.
We invite you to check out the new pricing in the Azure Portal and give us feedback.
Thank you for your patience and feedback.
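The "$0.77/day" figure can be sanity-checked with a quick sketch, assuming the commonly cited rate of $0.008 per 100 RU/s per hour (this rate is an assumption here; check the Azure pricing page for current numbers):

```python
# Assumed rate: $0.008 per 100 RU/s per hour (verify against current pricing).
RATE_PER_100_RUS_PER_HOUR = 0.008  # USD

def daily_cost(provisioned_rus: int) -> float:
    """Approximate daily cost in USD for a given provisioned throughput."""
    return provisioned_rus / 100 * RATE_PER_100_RUS_PER_HOUR * 24

print(f"${daily_cost(400):.2f}/day")  # 400 RU/s minimum for a shared database
```

At 400 RU/s this works out to $0.768/day, which rounds to the $0.77 quoted above.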
I finally found an article about this: https://azure.microsoft.com/en-us/blog/azure-cosmos-developer-experience-updates-december-2018/#db-level-throughput
So in a shared throughput database, you basically need 100 RUs per collection, with an overall database minimum of 400 RUs.
I'd still like to see this be more "pay as you go", but this is a *huge* improvement.
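The minimums described in the comment above (100 RU/s per collection, 400 RU/s database floor) can be sketched as a small helper; the values come from the comment, so verify them against current Cosmos DB documentation:

```python
def minimum_database_rus(collection_count: int) -> int:
    """Minimum throughput for a shared-throughput database, per the
    figures quoted above (assumed, not authoritative)."""
    PER_COLLECTION_MIN = 100
    DATABASE_FLOOR = 400
    return max(DATABASE_FLOOR, PER_COLLECTION_MIN * collection_count)

print(minimum_database_rus(3))   # 400 -- the database floor applies
print(minimum_database_rus(10))  # 1000 -- per-collection minimum dominates
```

The floor is what makes this less than fully "pay as you go": below four collections you still pay for 400 RU/s.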
While we do not have this planned in the short term, you can use community tools such as this one: https://github.com/devkimchi/ARM-Templates-in-YAML
Would be so much easier to read and write!
Our long-term goal is to have every service within Azure publish events, however, we have yet to begin work on this one.
As always, we’re passing the feedback along, but make sure you reach out to Storage as well so they hear your voice directly!
We’ve added some custom configurable tiles, such as ‘markdown’ (allows custom markdown), ‘armData’ (displays data from a defined ARM GET call), and ‘armActions’ (creates a button that issues a user-defined ARM POST call).
You can now use the Azure CDN to access blobs with custom domains over HTTPS. See the following article for instructions on how to do so: https://docs.microsoft.com/en-us/azure/storage/storage-https-custom-domain-cdn. Having talked to a number of customers, we concluded that this solution addresses many scenarios where the need for HTTPS access to blobs with custom domains exists.
Native Azure Storage support for using SSL to access blobs at custom domains is still on our backlog. We would love to hear about your scenarios where using the Azure CDN is not an acceptable solution, either by posting on this thread or sending us an email at email@example.com.
With static web hosting in preview, https://azure.microsoft.com/en-us/blog/azure-storage-static-web-hosting-public-preview/, this is needed even more.
Thanks for your suggestion. Currently, we don’t have anything planned to change our billing meter from hourly to per-minute, but we will leave this request open and include it in future planning discussions for our roadmap.
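To illustrate why per-minute metering matters, here is a hypothetical comparison (the rate is a placeholder, not an actual Azure price): under an hourly meter, a resource used for ten minutes is billed for a full hour.

```python
import math

HOURLY_RATE = 1.00  # USD per hour -- assumed placeholder rate

def hourly_billed(minutes_used: float) -> float:
    """Cost under hourly metering: partial hours round up."""
    return math.ceil(minutes_used / 60) * HOURLY_RATE

def minute_billed(minutes_used: float) -> float:
    """Cost under per-minute metering: pay only for minutes used."""
    return minutes_used / 60 * HOURLY_RATE

print(hourly_billed(10))          # 1.0 -- rounded up to a full hour
print(round(minute_billed(10), 2))  # 0.17
```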
Thank you for your suggestion.
Thank you for all the feedback for the preview of RU/m. We are currently reviewing feedback to drive further improvements. Please stay tuned. For any questions, please reach out to us at AskCosmosDB@microsoft.com
Doesn't "unlimited containers" already fit this request? The problem, as I see it, is that the minimum for that is 1,000 RUs where it should be 100 RUs so we can truly pay for what we use.
Thank you for your feedback. We are currently in public preview of static website hosting for Azure Storage to enable this scenario. Check out the blog post here for more details: https://azure.microsoft.com/en-us/blog/azure-storage-static-web-hosting-public-preview. The feature set includes support for default documents and custom error documents for HTTP status code 404.
For any further questions, or to discuss your specific scenario, send us an email at firstname.lastname@example.org.
I don't know why they haven't updated everyone, but I already have it running. :) See https://www.youtube.com/watch?v=LFFCokEONvo
Thanks for your feedback! As you mentioned, we initiate a change detection job once every 24 hours to enumerate the Azure file share and scan for changes. This is required for the Azure file share because Azure Files currently lacks a change notification mechanism like Windows Server has (we watch the USN journal on Windows Server to automatically initiate sync sessions on the server after changes are made).
Long term, we would like to build a change notification mechanism directly into Azure Files. Shorter term, we could use your feedback to understand how painful the once-every-24-hours change detection is for you. Please vote and/or leave comments on this item to let us know whether we should invest in making the change detection job run more frequently and faster.
Program Manager, Azure Files
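Conceptually, the 24-hour job is snapshot-based change detection: enumerate the share, record each file's metadata, and diff against the previous snapshot. Here is a minimal local sketch of that idea (hypothetical, not the actual Azure File Sync implementation):

```python
import os

def snapshot(root: str) -> dict:
    """Walk a directory tree and record (size, mtime) per file path."""
    state = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            st = os.stat(path)
            state[os.path.relpath(path, root)] = (st.st_size, st.st_mtime)
    return state

def diff(old: dict, new: dict) -> dict:
    """Compare two snapshots and classify added/removed/changed files."""
    return {
        "added": sorted(set(new) - set(old)),
        "removed": sorted(set(old) - set(new)),
        "changed": sorted(k for k in old.keys() & new.keys() if old[k] != new[k]),
    }
```

The cost of this approach scales with the total number of files, which is why a journal-based notification mechanism (like the USN journal mentioned above) is so much cheaper: it reports only what changed, with no full enumeration.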
Could you imagine if a change on OneDrive's web interface didn't sync back to any clients for 24 hours? No one would use it. It's pretty obvious why a feature like this is needed. Due to the flow of information, if it isn't syncing nearly instantaneously, it's not a viable solution for us.
We have started work to enable this. We discussed the approach and implementation during our webcast last month, at about minute 25, here: http://aka.ms/azurefunctionslive