How can we improve Azure Cosmos DB?

Better handling of "request rate too large"

The performance tips page (https://docs.microsoft.com/en-us/azure/cosmos-db/performance-tips) explains that when you exceed your provisioned RU/s, Cosmos DB responds with HTTP status 429 and an x-ms-retry-after-ms HTTP header telling you how long to wait before making the next request. It also says that, if you use the SQL API, the request is retried automatically.

With the MongoDB API, requests are not retried automatically, and nothing tells the client how long to wait before retrying. Instead, you have to catch a MongoCommandException, check its Code property for 16500, and guess when to retry.
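The workaround described above can be sketched as a retry loop with exponential backoff. This is a minimal illustration, not the driver's API: the MongoCommandException class below is a stand-in for the real exception your MongoDB client library throws, and the backoff parameters are arbitrary, since the MongoDB API gives no retry-after hint.

```python
import random
import time

# Stand-in for the driver's MongoCommandException; in a real application,
# catch the actual exception type from your MongoDB client library instead.
class MongoCommandException(Exception):
    def __init__(self, code, message):
        super().__init__(message)
        self.code = code

# Error code Cosmos DB's MongoDB API returns when the request rate is too large.
THROTTLE_CODE = 16500

def with_retries(operation, max_attempts=5, base_delay=0.1):
    """Retry `operation` with exponential backoff when throttled.

    Because the MongoDB API returns no retry-after hint, the delay is a
    guess that doubles on each attempt, plus a little jitter.
    """
    for attempt in range(max_attempts):
        try:
            return operation()
        except MongoCommandException as exc:
            # Re-raise anything that is not throttling, or the final failure.
            if exc.code != THROTTLE_CODE or attempt == max_attempts - 1:
                raise
            delay = base_delay * (2 ** attempt) + random.uniform(0, base_delay)
            time.sleep(delay)
```

With SQL API-style support, none of this guessing would be needed: the service would either retry internally or surface the retry-after interval so the delay could be exact instead of estimated.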

I'd like the MongoDB API to have the same support that exists in the SQL API.

152 votes
Kristoffer Persson shared this idea

3 comments

  • Shunsuke Futakuchi commented:

    This simple query alone uses over 15,000 RU (other simple queries use around 2.8 RU):
    > db._collection_name_.find({"_id": {"$in": [null]}});
    This is hard to handle with the MongoDB API.

  • adam commented:

    I'm getting the same error just calling DeleteManyAsync on a collection. This is terrible behaviour: resource throttling should be handled inside the service, not thrown back to the client as an error.
