Autoscale Throughput (RU/s)
Depending on the average number of incoming requests / required RUs (or other parameters), I would like to autoscale the throughput (RU/s) of a collection.
We have now released autoscale for general availability.
Thank you everyone for your comments and votes. We are very excited to release this feature for all of you.
As others have pointed out, it doesn't make sense to charge for the max RU/s per hour when that usage lasted only a few seconds. It's obvious that this is to maximize returns from the autoscaling service, which I and others are otherwise very thankful for. Ideally the charge would be per second; it would be understandable if it had to be the max per minute, or even the max per 5-10 minutes, to cover overhead on your end. But max per hour is ridiculous and conspicuously bad faith. Please address this, thanks.
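The complaint above can be made concrete with a little arithmetic. This sketch compares one hour billed at the highest RU/s observed in that hour against a hypothetical per-second billing model, for a workload that bursts for one second and idles the rest of the hour. The rate used is a placeholder assumption, not an official Azure price:

```python
# Illustration of why per-hour max billing hurts bursty workloads.
# RATE is an assumed placeholder, not an official Azure price.
RATE_PER_100RU_HOUR = 0.012  # assumed $/hour per 100 RU/s

def hourly_max_cost(ru_per_second_samples, rate=RATE_PER_100RU_HOUR):
    """Cost of one hour billed at the highest RU/s observed that hour."""
    return max(ru_per_second_samples) / 100 * rate

def per_second_cost(ru_per_second_samples, rate=RATE_PER_100RU_HOUR):
    """Hypothetical cost if every second were billed at its own RU/s."""
    return sum(ru / 100 * rate / 3600 for ru in ru_per_second_samples)

# One hour: a 1-second burst to 20,000 RU/s, idle at 400 RU/s otherwise.
hour = [400] * 3599 + [20_000]
print(round(hourly_max_cost(hour), 4))  # billed on the one-second spike
print(round(per_second_cost(hour), 4))  # billed on actual usage
```

Under these assumptions, the hourly-max model charges roughly 50 times more for the same hour of work, which is exactly the gap the comment is pointing at.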
Thank you very much (-:
I highly appreciate it.
Nathan Becker commented
We were able to enable it on our Cosmos databases today, and so far it's working great! Costs have dropped by a significant amount, making it quite a bit more viable for us. Thanks!
Kristoffer Axelsson commented
The autopilot feature is great; it's exactly what I need for my current client's use case. I've upvoted, and I really hope this becomes GA soon!
When Autopilot becomes GA there are some things I really hope to see:
In the case I described, we have some fluctuations in RU/s and have several collections and databases set up with indexes. Of course, it's possible to re-create containers and indexes with e.g. the Cosmos DB REST API, but the fact remains that it would be better not to have to re-create databases and/or containers just to enable the awesome Autopilot feature, especially in production use.
Also, as I understand it, the Autopilot feature can currently only be enabled in the Azure Portal. I think it would be great if it could be enabled with e.g. PowerShell, the Cosmos DB REST API, or the .NET SDK. In real-world usage this is basically a necessity, since we deploy resources with CI/CD as far as possible.
Finally, I hope it will be possible to change a database/container to Manual and then back to Autopilot again; this currently does not seem to be the case (though being able to turn it on for existing databases/containers and enabling it with PowerShell or the .NET SDK is more important, of course).
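For anyone landing here later: a recent Azure CLI does let you set autoscale outside the portal, both at creation time and by migrating an existing container. This is a sketch; the resource group, account, database, and container names below are placeholders:

```shell
# Create a new container with autoscale throughput (max 4000 RU/s).
# All names are placeholders for your own resources.
az cosmosdb sql container create \
  --resource-group my-rg \
  --account-name my-cosmos-account \
  --database-name my-database \
  --name my-container \
  --partition-key-path /pk \
  --max-throughput 4000

# Migrate an existing container from manual to autoscale throughput.
az cosmosdb sql container throughput migrate \
  --resource-group my-rg \
  --account-name my-cosmos-account \
  --database-name my-database \
  --name my-container \
  --throughput-type autoscale
```

The `throughput migrate` command also accepts `--throughput-type manual` for the reverse direction, which addresses the Manual/Autopilot round-trip mentioned above.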
Yohan S. commented
Currently Cosmos DB is billed per hour. That means if you have a burst workload for 1 second each hour, you need your RU/s configured to its max all the time.
In order to add more value to the Autopilot feature, could you bill per second instead of per hour?
We are currently facing this scenario, as we sync our data with another system/database every hour.
Pål Berg commented
We are currently testing autopilot, as we wanted to lower our costs. The data size is approx. 20 GB, and we moved it from a 10k RU/s database to autopilot with 20k RU/s. Our conclusion so far is that the new database has a longer delay when it has scaled down and needs to scale up (I assume this is the cause); subsequent calls give good response times. We still get some 429 responses, but since the next autopilot tier is 100k with a minimum of 10k, that is not an option. A better option for us would be to set a minimum and a maximum ourselves. Cost is approx. half of what it was.
Nathan Becker commented
Very promising, but like others have mentioned, it's not going to work for us unless we can add more containers for each tier - we currently have a situation where we have 25 containers that are rarely used (they're for developer Azure environments used for development/integration testing), and would most likely need 400 RU/s for 99.99% of the time. Forcing us to pay for 10K RU/s for that is excessive, especially since we currently pay for 2200 RU/s using the shared database throughput pool, and that is more than enough for our needs.
David Wilson commented
Autopilot has been a big improvement but as others have pointed out, these issues are critical for us:
- need support for more containers without forcing upgrade to minimum RU tier
- the tiers are spaced too far apart
- the minimum RU/s would ideally be fixed at 400 for every tier. Our traffic is very bursty and I'm happy to commit to a realistic maximum for our usage. But don't force my expenses to go through the roof during our slow periods in expectation of a traffic swarm that could be days away.
Burdyugov, Anton commented
please vote for this:
Allow to turn on Autoscale Throughput feature for already created containers
Maxim Rybkov commented
The direction is great, but the limit on the number of containers is an even bigger problem than the minimum throughput. How will it work for storage table migrations? We have thousands of containers; does that mean we need millions of RU/s?
Simon Vane (svane) commented
This looks very promising but I have a few comments (some of which have already been covered):
- We need to be able to set this on existing databases / containers.
- We need more granularity for setting max RU/s. The jumps are massive. 4k, 20k, 100k, 500k are not realistic steps.
- We need the minimum RU/s to be less than 10% of maximum. We have high (unpredictable) peak usage (which we're happy to pay for when we're using it) and very low off peak (and we only want to pay for the low usage during these times).
- We really want the minimum to be 400 as is now, irrespective of the maximum.
- 1.5 times the standard RU/s cost is a bit hard to swallow.
I really want to continue using Cosmos but it's proving a hard sell because of the cost / scaling models.
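The minimum-throughput complaint in the list above comes from the 10%-of-max rule these comments describe: the billed floor scales with the configured maximum rather than staying at 400. A minimal sketch of that rule, assuming a 10% floor with a 400 RU/s absolute minimum (the `autoscale_billed_floor` helper is illustrative, not an SDK function):

```python
# Sketch of the autoscale floor rule discussed above: the billed
# throughput never drops below 10% of the configured maximum,
# with 400 RU/s as the absolute minimum (an assumption for illustration).

def autoscale_billed_floor(max_ru: int) -> int:
    """Lowest RU/s you are billed for, given a configured autoscale max."""
    return max(400, max_ru // 10)

for max_ru in (4000, 20_000, 100_000, 500_000):
    print(max_ru, "->", autoscale_billed_floor(max_ru))
```

This makes the objection visible: only the smallest tier bottoms out at 400, while a 100k max commits you to paying for at least 10k RU/s around the clock, however quiet your off-peak hours are.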
Takekazu Omi commented
This is a great feature; I have waited a long time for it.
I have only one request: Autopilot RU/s is 1.5 times more expensive. Let's improve this.
Kazuyuki Miyake commented
Great! I've been waiting. I have already activated it in my account so I will try it!
Leo Tabakov commented
Please add the ability to enable Autopilot for existing collections.
It would also be nice to have more granular throughput options, or the ability to specify a custom range.
Is there a way to enable Autopilot for existing collections? I enabled the preview feature, but it seems to be available only when creating new collections.
Lajos Marton commented
The direction is good, but I would need lower throughput numbers. Right now the minimum is 10% of the maximum, and the lowest maximum is 2000. But what if I need a minimum of 10 and a maximum of 100? My main issue with Cosmos DB is that it is too expensive for low loads.
Any update? It's almost been 2 years since this started.
Peter Carr commented
This is essential functionality that is directly preventing people from adopting Cosmos. There should 100% be an elegant solution, or at the very least a way to accomplish this that isn't ridiculously convoluted, messy, and difficult. And if it has to be convoluted and difficult, there should at least be a coherent tutorial in the MS docs. This is seriously frustrating.
Come on Microsoft, pull your finger out
Any updates, Azure team? Can you give us any hints? Is this coming in 2019?