Azure CosmosDB is too expensive for small but multiple collections
Currently using on-prem MongoDB (on Linux) and wish to move to Azure, but I find Cosmos DB too expensive for multiple small (MongoDB) collections, because it seems that a minimum of 400 RU/s will be charged for each collection.
The terminology used on the pricing web pages is somewhat unclear, though, and I am not sure if the pricing for the minimum of 400 RU/s applies to partitions or to collections (or whether these terms are in fact semantically identical).

Cosmos DB supports sharing throughput across multiple collections with database-level throughput. This costs approximately $24/month.
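Several comments below ask how to actually enable this. A minimal sketch, assuming the Cosmos DB API for MongoDB's documented extension commands; the database name and connection string here are placeholders, not values from this thread:

```python
# Sketch: the payload for provisioning database-level (shared) throughput
# via the Cosmos DB API for MongoDB's "CreateDatabase" custom action.
# All names below are illustrative placeholders.

def build_create_database_command(shared_rus: int = 400) -> dict:
    # offerThroughput provisions RU/s once at the database level;
    # collections created without dedicated throughput then draw
    # from this shared pool instead of each getting their own minimum.
    return {"customAction": "CreateDatabase", "offerThroughput": shared_rus}

cmd = build_create_database_command(400)
print(cmd["offerThroughput"])  # 400

# Against a live account (via pymongo) this would be run roughly as:
#   MongoClient("<cosmos-connection-string>")["mydb"].command(cmd)
```

Collections created afterwards without their own `offerThroughput` share the database-level 400 RU/s rather than being billed individually.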
Thank you for your feedback on this item. We will be closing this item at this time.
Thank you.
159 comments
-
Rob Kraft commented
I just noticed a new option in CosmosDB that drastically reduces costs for small data requirements. It is a new capacity mode called "Serverless". It can cost as little as 50 cents per month. I am giving it a try! Yay!
-
Ramesh Jangir commented
Hi Azure Cosmos DB Team,
Let’s assume that I am creating a database in region “X”, where Azure says the price is 26.57 USD per month (with manual provisioning of 400 RU/s).
If I create one database named “ABC” with two collections named “Pro” and “Sub”, how will billing work in this scenario?
1. Will it charge 26.57 USD for this database, with the 400 RU/s shared between these two collections, and no other charges for the collections?
2. Or will it charge me 26.57 USD for collection “Pro” and the same amount for collection “Sub”, which would come to 53.14 USD (26.57 USD * 2) per month?
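An arithmetic sketch of the two scenarios described above, using the 26.57 USD figure from the comment (which is region-dependent). Which scenario applies depends on whether the 400 RU/s is provisioned once at the database level (shared throughput, as in the official answer at the top of the thread) or separately on each collection:

```python
# Billing arithmetic for the two scenarios; 26.57 USD/month per 400 RU/s
# is the commenter's regional figure, used here purely for illustration.
PRICE_PER_400_RU_MONTH = 26.57

# Scenario 1: 400 RU/s provisioned once at the database level,
# shared by the collections "Pro" and "Sub".
shared_cost = PRICE_PER_400_RU_MONTH

# Scenario 2: 400 RU/s provisioned separately on each collection.
dedicated_cost = PRICE_PER_400_RU_MONTH * 2

print(shared_cost, dedicated_cost)
```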
-
Anonymous commented
The default pricing is absolutely insane and just a rip-off! The default should be shared throughput.
The pricing makes prototyping impossible with Cosmos DB. Not sure what Microsoft is thinking here.
-
Mike commented
I got slapped with a huge bill that I did not expect. At this point, I wonder how many clients are using Cosmos DB. If the pricing does not make sense, there is no point in staying.
-
Federico Repond commented
It allows sharing across up to 25 collections, and it's not clear, but you also have a size cap. If you want to do something serious with Cosmos DB, you end up paying for throughput per collection, which is excessively expensive.
-
Imran commented
Why close this request? You're still charging fascistic prices. RUs are still too expensive. A real world application needs to pull from multiple containers, many of which have indexes. A lot of RUs are utilized for simple scenarios. We shouldn't be forced to pay a high price for using a service that's supposed to be cheaper because cloud... Please lower your RU pricing.
Also, there is no staggered pricing on storage. You're charging a flat $0.25 per gig regardless of volume. I'm considering moving to AWS because it doesn't make financial sense for me to be on Azure.
-
Allan commented
Try the Gremlin API, you need to be a millionaire to use it!
-
Wick commented
Merging all containers into one to avoid high consumption of RU/s sounds like a bad workaround - not to mention unique key constraints need to be dropped. As of now, I find your RU/s billing ridiculously expensive. I just looked at my metrics. A single user needs about 30 RU/s during the time they are logged in. How am I supposed to scale to millions of users? CosmosDB has become a showstopper, so I will be moving away from Azure soon.
-
E commented
The official answer should be:
> Cosmos DB supports sharing throughput across multiple (max. 25) collections with database-level throughput.
In my case, I have 3 large and 40 small collections (43 in total). The pricing would be at least 400 RU/s + 18 * 100 RU/s (the 18 containers beyond the 25-container shared limit each need dedicated throughput), which is 2,200 RU/s and equals roughly 110 EUR per month.
https://docs.microsoft.com/en-us/azure/cosmos-db/set-throughput:
> In February 2020, we introduced a change that allows you to have a maximum of 25 containers in a shared throughput database, which better enables throughput sharing across the containers. After the first 25 containers, you can add more containers to the database only if they are provisioned with dedicated throughput, which is separate from the shared throughput of the database.
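The cost floor in E's scenario can be checked mechanically: the first 25 of the 43 collections share the 400 RU/s database-level minimum, and each of the remaining 18 needs at least 100 RU/s of dedicated throughput. A small sketch (the limits are those quoted in the docs excerpt above):

```python
# Throughput floor for 43 collections under the 25-container
# shared-throughput limit described in the Microsoft docs quote above.
total_collections = 43
shared_cap = 25          # max containers in a shared-throughput database
shared_min_rus = 400     # database-level minimum RU/s
dedicated_min_rus = 100  # per-container minimum for the overflow

overflow = max(0, total_collections - shared_cap)  # 18 containers
total_rus = shared_min_rus + overflow * dedicated_min_rus
print(total_rus)  # 2200
```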
-
Rob Kraft commented
Adding my voice to the chorus. RU pricing is too confusing. I understand that I can do 2 Selects and 3 Inserts per day to use up my 400 RU per month. That is ridiculous. I thought I misunderstood the pricing, until I got hit with the bill. Firebase and AWS DynamoDB are thousands of times cheaper.
-
Matt commented
Microsoft, please reduce the ******** price on this trash. I'm paying $250 a month even though my consumption is probably a quarter of that. This is insane. Your pricing model is ripping off people.
-
Jan B commented
"Cosmos DB supports sharing throughput across multiple collections with database-level throughput."
Can you please point us to the relevant documentation for this feature as apparently it's not obvious how to enable this (going by recent comments)?
-
Eric Tijerina commented
Just got slapped with a $175 bill for 3 collections with less than 1MB !!! What the actual **** is this?!
How are developers able to run Mongo on Azure without going out of business??
-
W commented
Everyone here is lucky there's now MongoDB Atlas :)
-
Tommy Tucker commented
@Roman why are you not combining those containers and adding a _type property to each document? With that few documents it shouldn't affect query speed. Query on _type when you need only those documents. Your database will shrink to 1 to 3 containers.
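The container-merging workaround Tommy describes is the discriminator-field pattern. A minimal sketch, using plain Python dicts and an in-memory list in place of a real container; against a live deployment the equivalent would be a pymongo find({"_type": ...}) query. All document contents here are made up for illustration:

```python
# Discriminator-field pattern: documents from several former collections
# live in one container, each tagged with a _type field.
merged_container = [
    {"_id": 1, "_type": "user",  "name": "ada"},
    {"_id": 2, "_type": "order", "total": 9.50},
    {"_id": 3, "_type": "user",  "name": "grace"},
]

def find_by_type(container, doc_type):
    # Stand-in for: collection.find({"_type": doc_type})
    return [doc for doc in container if doc["_type"] == doc_type]

users = find_by_type(merged_container, "user")
print([u["name"] for u in users])  # ['ada', 'grace']
```

The trade-off mentioned elsewhere in the thread still applies: per-container unique key constraints no longer hold across the merged document types.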
-
Roman commented
CosmosDB is becoming prohibitively expensive. My application needs about 15 containers, of which 12 have fewer than 50 rows each. Yet, I am forced to pay for 100 RU/s for each of those containers, which makes this project look really stupid... I am paying from my own pocket to run this for a charity volunteer center. Now guess what - I had to merge some of those containers together, mess up the code in the process - just to work around the greedy MS.
-
Dan Jarvis commented
I agree that converting an application to use sharded collections is a bit much to take advantage of the per-database throughput plan, though I can understand why technically it needs to be that way. For our application it'd be quite an effort, taking into consideration all the QA regression and performance testing and so forth.
-
Anonymous commented
I had the same exact question and I don't feel the answer could have been clearer. I would also like to mention that there are lots of applications with small datasets that could take advantage of a more generous pricing scheme. How about free until the database gets to be a certain size?
-
Szymon Sasin commented
Any tutorial on how to enable it?
-
Anonymous commented
I vaguely remembered coming across this issue before - and then I ran into it again.
The default pricing is plain horrible - I just spun up a trial instance and ran my app against it (instead of Mongo DB Atlas, which is free). Starting my app created a few collections (1-2 records in each) - those few kilobytes already would have put me in the 3-digit range when it comes to pricing. Seriously, guys...
-> Provide a free tier for DEV / eval purposes
-> Make it easier to set up per-database pricing with a much lower minimum RU/s. I don't know why that would be a problem - don't worry, if we need additional RU/s, we will buy them, obviously...