How can we improve Azure Cosmos DB?

Azure Cosmos DB is too expensive for multiple small collections

We are currently using on-premises MongoDB (on Linux) and want to move to Azure, but I find Cosmos DB too expensive for multiple small (MongoDB) collections, because it seems that a minimum of 400 RU/s (request units per second) is charged for each collection.

The terminology used on the pricing web pages is somewhat unclear, though, and I am not sure whether the 400 RU/s minimum applies to partitions or to collections (or whether these terms are in fact semantically identical).

1,561 votes

AzureEager shared this idea  ·  Admin →

We have just released a major update to our pricing for Cosmos DB.

Starting today, customers can provision a database with just 400 RU/s (about $0.77 USD per day). Combined with the ability to share throughput across collections, this should make Cosmos DB much more affordable for users with many small collections.
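For illustration, database-level shared throughput can also be provisioned programmatically. A minimal sketch using the azure-cosmos Python SDK (SQL API; the endpoint, key, and names below are placeholders):

    from azure.cosmos import CosmosClient, PartitionKey

    # Placeholder endpoint and key -- substitute your own account values.
    client = CosmosClient("https://<account>.documents.azure.com:443/",
                          credential="<key>")

    # Provision throughput once, at the database level (the new 400 RU/s minimum).
    database = client.create_database("shared-db", offer_throughput=400)

    # Containers created without their own offer_throughput share the database's
    # 400 RU/s instead of each reserving dedicated throughput.
    for name in ("users", "orders", "audit"):
        database.create_container(id=name, partition_key=PartitionKey(path="/id"))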

We invite you to check out our new pricing in the Azure Portal and give us feedback.

Thank you for your patience and feedback.

129 comments

  • Will Lee commented

    Very expensive compared to Azure Tables. I use that for everything because Cosmos is way too expensive.

  • Jake commented

    WHY, why is Cosmos DB so expensive? I migrated my schema (15 collections so far) with 100 RU/s each (the smallest possible, I believe), and I have very little data in these collections, yet I'm being billed extra. Why is creating a simple collection that is NOT being utilized so expensive? What happened to "pay for what you use"? We still need the team to improve in this area.

  • Anonymous commented

    So my first full Cosmos bill has arrived ... Sorry, MS, but paying £65 a month for a TINY dev/test database is ridiculous. In fact, it's so ridiculous that I'm not going to carry on using it.

  • Yohan S. commented

    Using the shared pricing model is really helpful, but all collections in a dev/test environment will also have only a small number of documents stored.

    On our BizSpark subscription we have €130 available per month, and with the current 100 RU/s minimum per collection that limits us to 22 collections.

    Until we get a serverless pricing model, can you lower the per-collection entry price once more, please? Something like 10 RU/s per collection would let us set up our dev/test environments correctly, with 1,000 or 1,500 RU/s shared across all the collections.

  • Stefan Reichel commented

    The new database-shared throughput definitely helps, although the bill can still get quite high because of spiky workloads. We have a data source that sends large amounts of data at irregular intervals, while in between not much is going on, so we still have to keep the RU slider quite high to handle the spikes. A true serverless pricing model would be a real help here, because Cosmos DB still costs far more than MongoDB Atlas, especially for dev/test environments, where you pay a relatively exorbitant amount if you want to keep your environments identical in structure and setup.

  • Daniel commented

    Ugh. I just saw my bill and it looks huge!! I didn't even store anything. Now I'm being charged because the default collection setting reserves enough RUs to cost you a lot of $$$. This wasn't clear to me. I definitely want Cosmos DB to cost less.

  • John commented

    I totally agree with the original comment. We too were burned by a tiny Cosmos system we were using to prototype before scaling up. We have completely turned it off.

    The pricing is very unclear. You can set up a shared database, but then you have to write code to use shards across different collections. The RU/s measure is really confusing. You are given a free 100 RU/s tier when you sign up, but the minimum is 400 RU/s, so the "free" allocation is meaningless.

    We originally had 8 collections, with the intent to grow to about 20. By default, it put 1,000 RU/s into each one (we didn't understand the complex pricing). The cost per day for 1,000 RU/s is $1.92, so 20 collections × 30 days × $1.92 = $1,152 per month for a database that was about 100 MB. The equivalent on MongoDB Atlas is zero.

    I raised the issue with Microsoft support, and the initial answer was that they didn't know about Cosmos.

    I cannot understand why anyone would use this. My recommendation is go elsewhere.

  • Alex commented

    We now pay more than $23 per month for a small database that was totally free on AWS DynamoDB.

  • Frederik Østerbye commented

    This update seemed so promising, but you totally forgot to mention that you need at least 100 RU/s per collection.

    For our scenario, we have a DB with 25 collections but an average throughput of 0.67 RU/s.
    We're paying 193 USD per month for a development database.

    Come on, MS! This is simply not cost-effective if you need replicated instances to support DEV, UAT, and PROD.

  • Jim Brown commented

    The new database-level offer is great! But it is very confusing how to move between "tiers" of RU throughput at the database level. For example, if you provision a database with 20,000 RU/s, you can't scale it down to 400; you have to start at 400. And if you go above 10,000 you get moved to a higher tier and can't come back down again! We had a process planned for daily imports that scales the database way up just during the import, but we got locked in and had to delete the entire database just to scale it back down! The process we had planned looked roughly like the sketch below.
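
    A sketch of that scale-up/scale-down pattern with the azure-cosmos Python SDK (the endpoint, key, database name, and import job are placeholders; as described above, the scale-down step can be rejected once the database crosses a tier boundary):

        from azure.cosmos import CosmosClient

        # Placeholder endpoint and key -- substitute your own account values.
        client = CosmosClient("https://<account>.documents.azure.com:443/",
                              credential="<key>")
        database = client.get_database_client("shared-db")

        # Scale up just for the daily import...
        database.replace_throughput(20000)
        run_daily_import()  # hypothetical import job

        # ...then scale back down. Per the comment above, this call can fail
        # once the database has moved into a higher throughput tier.
        database.replace_throughput(400)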

  • Jake commented

    Please make Cosmos DB more cost-effective. As others have mentioned, the current pricing model is really bad for indie developers and small businesses.

  • Anonymous commented

    A $5/mo. entry point would help get us hobbyists in the door. The new pricing is a start, but $23/mo. is still out of reach.

  • Tom commented

    I finally spent some time playing with this update... The Azure Portal won't let me set a database throughput below 100 RU/s per collection. I have 6 collections in a database and get an error when trying to lower my database throughput to 400 RU/s: "The replace operation is invalid, OriginalOffer: 600 RUs, NewOffer: 400 RUs".
    Is this part of the design? I couldn't find it mentioned anywhere on the pricing details page.

  • josie commented

    You could add a subcollection field to each distinct document set; then you can keep everything in one collection and just query by subcollection, as in the sketch below.
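
    A minimal pymongo sketch of that pattern (the connection string, database, collection, and field names here are illustrative):

        from pymongo import MongoClient

        # Placeholder Cosmos DB Mongo API connection string.
        client = MongoClient("mongodb://<account>:<key>@<account>"
                             ".documents.azure.com:10255/?ssl=true")
        items = client["app"]["items"]  # one physical collection instead of many

        # Tag each document with the logical "subcollection" it belongs to.
        items.insert_one({"subcollection": "users", "name": "Ada"})
        items.insert_one({"subcollection": "orders", "total": 42})

        # Query one logical collection by filtering on the tag.
        for doc in items.find({"subcollection": "orders"}):
            print(doc)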

  • Erik Skaarup commented

    Is it possible to provision a database with shared RU/s for all collections, where each collection is a fixed size (10 GB) so it stays in one partition?

    It seems that if you, for example, create a database with provisioned throughput of 1,000 RU/s, any collection created within it automatically becomes an unlimited collection.

  • Eric Barch commented

    Just contacted Azure support. Looks like it was a regression in the portal. Expect a fix soon :)

  • Eric Barch commented

    It appears as though this is no longer working. The minimum RUs per database has been raised back to 10K.
