Autoscale Throughput (RU/s)
Depending on the average number of incoming requests / required RUs (or other parameters),
I would like to autoscale the throughput (RU/s) of a collection.
We’re excited to announce that we are making this a lot easier with our preview of Autopilot. With Autopilot, Azure Cosmos DB will automatically manage and scale the RU/s of your containers based on usage. This eliminates the need for custom scripting to change RU/s and makes it easier to handle bursty, unpredictable workloads.
You can try out Autopilot in your Cosmos accounts by going to the Azure Portal and enabling the feature in the “Preview Features” blade.
Pål Berg commented
We are currently testing Autopilot because we wanted to lower our cost. The data size is approximately 20 GB, and we moved it from a 10k RU/s database to Autopilot with a 20k RU/s maximum. Our conclusion so far is that the new database has a longer delay on the first request after it has scaled down and needs to scale back up (I assume this is the cause). Subsequent calls get good response times. We still get some 429 responses, but since the next Autopilot tier is 100k with a minimum of 10k, moving up a tier is not an option. A better option for us would be to set our own minimum and maximum. Cost is approximately half of what it was before.
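For readers weighing the same trade-off, the rough arithmetic behind "cost is approx half" can be sketched in plain Python. This is a simplified model, not real pricing: costs are in abstract unit-hours rather than dollars, and the 1.5x autoscale rate and 10%-of-max floor are taken from other comments in this thread.

```python
AUTOPILOT_RATE_MULTIPLIER = 1.5  # autoscale RU/s bill at 1.5x the standard rate (per this thread)

def standard_cost_units(provisioned_rus: int, hours: int) -> float:
    """Standard throughput bills the full provisioned RU/s every hour."""
    return provisioned_rus * hours

def autopilot_cost_units(max_rus: int, hourly_peaks: list[int]) -> float:
    """Autopilot bills each hour's highest consumed RU/s, never below
    10% of the configured maximum, at 1.5x the standard rate."""
    floor = max_rus // 10
    return sum(max(peak, floor) for peak in hourly_peaks) * AUTOPILOT_RATE_MULTIPLIER

# One day with 2 busy hours near the 20k max and 22 idle hours at the 2k floor:
peaks = [20_000] * 2 + [0] * 22
standard = standard_cost_units(10_000, 24)       # 240,000 unit-hours
autopilot = autopilot_cost_units(20_000, peaks)  # (2*20k + 22*2k) * 1.5 = 126,000
print(autopilot / standard)  # 0.525 -- roughly half, matching the comment above
```

With more busy hours per day the ratio climbs, which is why workloads that are busy most of the time see little or no saving from autoscale.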
Nathan Becker commented
Very promising, but like others have mentioned, it's not going to work for us unless we can add more containers for each tier - we currently have a situation where we have 25 containers that are rarely used (they're for developer Azure environments used for development/integration testing), and would most likely need 400 RU/s for 99.99% of the time. Forcing us to pay for 10K RU/s for that is excessive, especially since we currently pay for 2200 RU/s using the shared database throughput pool, and that is more than enough for our needs.
David Wilson commented
Autopilot has been a big improvement but as others have pointed out, these issues are critical for us:
- need support for more containers without forcing upgrade to minimum RU tier
- the tiers are spaced too far apart
- the minimum RU/s would ideally be fixed at 400 for every tier. Our traffic is very bursty and I'm happy to commit to a realistic maximum for our usage. But don't force my expenses to go through the roof during our slow periods in expectation of a traffic surge that could be days away.
Burdyugov, Anton commented
Please vote for this:
Allow turning on the Autoscale Throughput feature for already-created containers
Maxim Rybkov commented
The direction is great, but the limitation on the number of containers is an even bigger problem than the minimum throughput. How will this work for storage table migrations? We have thousands of containers, which would mean we need millions of RU/s?
Simon Vane (svane) commented
This looks very promising but I have a few comments (some of which have already been covered):
- We need to be able to set this on existing databases / containers.
- We need more granularity for setting max RU/s. The jumps are massive. 4k, 20k, 100k, 500k are not realistic steps.
- We need the minimum RU/s to be less than 10% of maximum. We have high (unpredictable) peak usage (which we're happy to pay for when we're using it) and very low off peak (and we only want to pay for the low usage during these times).
- We really want the minimum to be 400 as is now, irrespective of the maximum.
- 1.5 times the standard RU/s cost is a bit hard to swallow.
I really want to continue using Cosmos but it's proving a hard sell because of the cost / scaling models.
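To make the tier complaint above concrete, here is a minimal plain-Python sketch. The tier values (4k, 20k, 100k, 500k) and the 10%-of-max floor are taken from the comments in this thread and describe the preview behavior, not necessarily the current service.

```python
# Preview Autopilot tiers mentioned in this thread (max RU/s per tier).
PREVIEW_TIERS = [4_000, 20_000, 100_000, 500_000]

def smallest_tier(desired_peak_rus: int) -> int:
    """Return the smallest preview tier whose max covers the desired peak."""
    for tier in PREVIEW_TIERS:
        if tier >= desired_peak_rus:
            return tier
    raise ValueError(f"peak of {desired_peak_rus} RU/s exceeds the largest tier")

def implied_minimum(tier_max_rus: int) -> int:
    """Autopilot never scales below 10% of the tier's maximum."""
    return tier_max_rus // 10

# Example: a bursty workload peaking at 30k RU/s is forced onto the 100k
# tier, with a 10k RU/s floor billed even when the workload is idle.
tier = smallest_tier(30_000)
print(tier, implied_minimum(tier))  # 100000 10000
```

This is exactly the complaint above: a peak just past one tier's maximum pushes the billed floor up by an order of magnitude, instead of staying near the 400 RU/s minimum commenters are asking for.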
This is a great feature. I have waited a long time for it.
I have only one request: Autopilot RU/s is 1.5 times more expensive. Please improve this.
Kazuyuki Miyake commented
Great! I've been waiting. I have already activated it in my account so I will try it!
Leo Tabakov commented
Please add ability to enable Autopilot for existing collection.
Also would be nice to have more granular throughput options or ability to specify a custom range.
Is there a way to enable Autopilot for existing collections? I enabled the preview feature but it seems it's only available when creating new collections.
Lajos Marton commented
The direction is good, but I would need lower throughput options. Right now the minimum is 10% of the maximum, and the lowest available maximum is 2000. But what if I needed a minimum of 10 and a maximum of 100? My main issue with Cosmos DB is that it is too expensive for low-load databases.
Any update? It's been almost 2 years since this started.
Peter Carr commented
This is essential functionality that is directly preventing people from adopting cosmos. There should 100% be an elegant solution or at the very least a way to accomplish this that isn't ridiculously convoluted, messy, and difficult. And if it has to be convoluted and difficult there should at least be a coherent tutorial on MS docs. This is seriously frustrating.
Come on Microsoft, pull your finger out
Any updates, Azure team? Can you give us any hints? Is this coming in 2019?
Shubham Priya commented
Right now there is no mechanism to scale out Cosmos DB based on increase in throughput for a collection. There should be a provision to scale out and back based on rules derived from Metrics
How can we change the Cosmos DB RU/s value from 400 to 20000 in the morning and then step it back down to 400 at night automatically?
Is there any way to fetch the Max RUPM Consumed Per Minute through the REST API?
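Until autoscale covers this, the morning/evening schedule described above can be scripted. A minimal sketch: the schedule logic below is plain Python, and the SDK call that would apply it is shown only as a comment, since the exact client setup depends on your SDK version. The hours, RU/s values, and `DB`/`COLL` names are placeholders.

```python
from datetime import datetime, timezone

DAY_RUS = 20_000            # provisioned during business hours
NIGHT_RUS = 400             # provisioned overnight
DAY_START, DAY_END = 8, 20  # hours (UTC) considered "daytime"

def target_rus(hour: int) -> int:
    """Return the RU/s we want provisioned for a given hour of day."""
    return DAY_RUS if DAY_START <= hour < DAY_END else NIGHT_RUS

def current_target(now=None) -> int:
    now = now or datetime.now(timezone.utc)
    return target_rus(now.hour)

# Run this on a timer (e.g. an Azure Function with a timer trigger). With
# the azure-cosmos Python SDK, applying the change would look roughly like
# (untested sketch):
#   container = client.get_database_client(DB).get_container_client(COLL)
#   container.replace_throughput(current_target())
print(target_rus(9), target_rus(23))  # 20000 400
```

Note this only works for manually provisioned throughput; it is the "custom scripting" the Autopilot announcement above is meant to replace.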
Colin Webber commented
Is there any update or ETA on this?
It's been almost 2 years at this point. My team is implementing CosmosDB for SessionState and Distributed caching across numerous projects and this feature will be crucial for controlling costs while still responding to peaks in traffic throughout the day effectively.
This really is mandatory: we are getting hit by peak loads that force me to provision RUs to match, which is a very expensive model.