How can we improve Azure Cosmos DB?

Autoscale Throughput (RU/s)

Depending on the average number of incoming requests / required RUs (or other parameters),
I would like to autoscale the throughput (RU/s) of a collection.

1,144 votes
    tjgalama shared this idea

    45 comments

      • Morrolan commented

        Wow, this is still not a feature? This was one of the bigger reasons I decided not to use Cosmos. It's such a pain to run jobs to raise the RUs and then bring them down again, and to handle the case where a task fails so the collection isn't left set high. It's a pain to deal with all of this when it should be taken care of in a managed service.
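
      For what it's worth, the scale-up/scale-down job described above is usually
      scripted against the SDK's throughput "offer" resource. Below is a minimal
      sketch using the @azure/cosmos Node.js SDK; the endpoint, key, database and
      container names, and RU values are placeholders, and the offer-lookup pattern
      is an assumption based on the v3 SDK's offers API rather than an official
      recipe.

        // scale-job.ts, hypothetical example: raise RU/s for a heavy job, then restore it.
        import { CosmosClient } from "@azure/cosmos";

        const client = new CosmosClient({
          endpoint: process.env.COSMOS_ENDPOINT!, // placeholder account endpoint
          key: process.env.COSMOS_KEY!,           // placeholder account key
        });

        // Find the throughput offer belonging to a container (matched via the container's _rid).
        async function findOffer(dbId: string, containerId: string) {
          const { resource: containerDef } = await client
            .database(dbId)
            .container(containerId)
            .read();
          const { resources: offers } = await client.offers.readAll().fetchAll();
          return offers.find((o) => o.offerResourceId === containerDef?._rid);
        }

        // Replace the offer with a new RU/s value.
        async function setThroughput(dbId: string, containerId: string, ru: number) {
          const offer = await findOffer(dbId, containerId);
          if (!offer?.content) throw new Error("throughput offer not found");
          offer.content.offerThroughput = ru;
          await client.offer(offer.id).replace(offer);
        }

        // Typical job wrapper: scale up, run the work, and always scale back down,
        // even when the task fails (the failure case mentioned in the comment above).
        async function runWithBoostedThroughput(work: () => Promise<void>) {
          await setThroughput("mydb", "mycoll", 10000); // boosted RU/s (placeholder)
          try {
            await work();
          } finally {
            await setThroughput("mydb", "mycoll", 1000); // baseline RU/s (placeholder)
          }
        }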

      • tjgalama commented

        We have been waiting more than two years on an issue that has over 800 votes, and its status hasn't changed in more than a year.
        Is it really so hard to auto-scale throughput based on 429 responses? Or do I need to hack my own solution?
        Don't get me wrong, I ❤ Cosmos DB. But this I just do not understand.
        Please respond.
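
      The "hack my own solution" route usually ends up as a wrapper that watches
      for 429 (request rate too large) responses and nudges the provisioned RU/s
      upward. Below is a rough sketch reusing the setThroughput helper from the
      earlier example; the step size, the cap, and the assumption that the SDK
      surfaces throttling as an error with code 429 are illustrative choices, not
      an official pattern.

        // Hypothetical DIY autoscale: bump RU/s whenever an operation is throttled.
        const RU_STEP = 500;   // RU/s added per adjustment (placeholder)
        const RU_MAX = 20000;  // upper bound so the loop cannot scale forever (placeholder)
        let currentRu = 1000;  // assumed currently provisioned RU/s

        async function withAutoscale<T>(op: () => Promise<T>): Promise<T> {
          try {
            return await op();
          } catch (err: any) {
            // Treat status code 429 as "request rate too large" and scale up.
            if (err?.code === 429 && currentRu + RU_STEP <= RU_MAX) {
              currentRu += RU_STEP;
              await setThroughput("mydb", "mycoll", currentRu);
              return await op(); // retry once after scaling up
            }
            throw err;
          }
        }

      Scaling back down again (for example on a timer once no 429s have been seen
      for a while) is left out for brevity, and is exactly the bookkeeping the
      commenters are asking the service to handle for them.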

      • Danny van der Kraan commented

        @Dan Friedman: No, because you still need to provision RUs via the portal or programmatically. I want to be able not to provision RUs at all and have Azure do it via an algorithm. Then you have a true serverless solution.

      • Dan Friedman commented

        Doesn't "unlimited containers" already fit this request? The problem, as I see it, is that the minimum for that is 1,000 RUs where it should be 100 RUs so we can truly pay for what we use.

      • Ian Bennett commented

        For partitioned collections it would be good if autoscale worked on a per-partition basis to keep costs down. Allocating RU across every partition can be inefficient, as in the following examples.

        1. RU allocation applies to all replicas. Using the Node.js SDK, it is not possible to direct queries to secondary replicas, so that RU goes largely unused.

        2. In some use cases it is better for performance to use only one or two partitions at a time for reads/writes, while over time the data is spread across all partitions. In that case the RUs for the partitions not being actively used at the time go largely unused.

      • Anonymous commented

        What is the status of this request? I agree with the other user: paying for a set RU amount rather than for actual usage is bad. We are considering moving away.
