Azure Cosmos DB is too expensive for many small collections
Currently using on-prem MongoDB (on Linux) and wishing to move to Azure, but I find Cosmos DB too expensive for multiple small MongoDB collections, because it seems a minimum of 400 RU/s is charged for each collection.
The terminology on the pricing pages is somewhat unclear, though, and I am not sure whether the 400 RU/s minimum applies to partitions or to collections (or whether these terms are in fact semantically identical).
Cosmos DB supports sharing throughput across multiple collections with database-level throughput. This costs approximately $24/month.
Thank you for your feedback on this item. We will be closing this item at this time.
The pricing is quite insane. We are not going to migrate to Azure, as it's too expensive to rewrite the code.
Adam M. commented
Just so you guys know, you can actually set database-level RUs as low as 10,000. The UI doesn't allow it, but you can use the API.
The current multiple-collection pricing model is absurd. It is forcing us to compromise a good architecture to save cost. Our team has little confidence in moving forward with such limitations. Please do something about it.
Adam Leffert commented
Couldn't agree more. The minimum price for per-database (rather than per-collection) throughput is $2,900.
I'm trying to use Azure Table Storage with the Table API, but it's very slow. My code calls Azure Table Storage for the first request, which takes about 1 to 1.5 seconds for very simple queries on non-primary-key properties. I then cache results in Azure blobs, keyed on a string composed of the method name and query parameters, but it takes over 500 ms just to call Exists on a blob reference, by primary key. If you call Download on a blob without calling Exists first and the blob doesn't exist, the code throws an exception, which is a bad idea for working code and takes about the same time. It could return null instead, or offer a TryDownload variant.
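A minimal sketch of the "return null instead of throwing" pattern this comment asks for. The client interface here (`download`, `NotFoundError`) is a hypothetical stand-in, not a real storage SDK; the point is the shape of the helper, which avoids both the extra Exists round trip and a try/except at every call site.

```python
# Sketch of a TryDownload-style helper: return None when the blob is
# missing instead of requiring a separate Exists call or forcing every
# caller to catch a not-found exception.
# `client` is any object with a download(name) method that raises
# NotFoundError when the blob doesn't exist (hypothetical stand-in for
# a real storage SDK client).

class NotFoundError(Exception):
    """Raised by the (hypothetical) client when a blob is absent."""

def try_download(client, name):
    """Return the blob's bytes, or None if it does not exist."""
    try:
        return client.download(name)
    except NotFoundError:
        return None

# Tiny in-memory stand-in used to demonstrate the helper.
class FakeBlobClient:
    def __init__(self, blobs):
        self._blobs = blobs

    def download(self, name):
        if name not in self._blobs:
            raise NotFoundError(name)
        return self._blobs[name]

client = FakeBlobClient({"cache/getUsers(active=1)": b"[...]"})
print(try_download(client, "cache/getUsers(active=1)"))  # cached bytes
print(try_download(client, "cache/missing"))             # None
```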
I tried using Premium rather than Standard Azure Blob Storage, but that doesn't work with the Table API; it only supports VM disks.
There should be a dev/prototype pricing tier for Cosmos DB which does NOT include geo-replication but does have the same indexing features and response times. Throughput should be per-database (per-account), not per-collection.
That way, a dev who wants to use Cosmos DB on a project but can't afford a few hundred dollars a month for 20-30 collections holding very little data can see how it works. Then, when big projects come along, they'd be ready to scale up, which is one of the major benefits of cloud computing.
With the current pricing scheme, there is no reasonably-priced on-ramp to Cosmos DB.
Deva Kaladipet commented
The cost model of Cosmos DB is counter-intuitive for a cloud-based solution. Why are we forced to provision a minimum of 1,000 RU/s even when we don't need it? This alone adds a cost of about $700 per year. A cloud-based offering should have a pay-as-you-go model; that's what I see on all other cloud properties, but not on Cosmos DB. Is anyone from the product team looking at offering a flexible RU-based option?
Eduardo Queiroz Peres commented
Cosmos DB would be the perfect solution for my company, but this 400 RU/s minimum is prohibitive. I check this thread every now and then, looking for possible changes or workarounds for this limit. It would be very good news to hear that the 400 RU/s minimum was removed or that per-database pricing became more reasonable.
Aaron Charcoal Styles commented
Adding my voice to the choir: provisioning 400 RU/s is a price killer for small apps.
Dale Michalk commented
I was very happy when I read about the per-database pricing... until I set it up and saw the 50,000 RU/s minimum. We are right back where we started with an overpriced database.
The reality is this: it's not a good replacement or port target for MongoDB if you don't have money to burn. I had a small project that cost $150 a month in Atlas; it was costing us $1,000 a month in Cosmos DB.
Putting everything in a single collection is NOT a MongoDB design pattern; it's the preference of the original DocumentDB team, which focused on replication/latency first and programmer needs second. Didn't they ever review what the most popular document DB was doing and how its community uses it?
The per-database pricing would be great if we could set a more reasonable minimum. 50,000 RU/s is ridiculous, and putting all our types into a single collection is even more absurd.
I ran into the same problem after migrating from Azure Table Storage. I used one collection per entity, but that was too expensive, so I wrote an article on how I solved it by creating a repository that handles the separation of entities while storing them all in one collection. This way, I could keep my business logic intact and just change the data layer. Check it out here: https://tareksharbak.com/multiple-entities-in-one-collection-azure-cosmos-db/
Cody Schnacker commented
We have several collections set to the minimum 400 RU/s, yet they don't use even a tenth of that. We are paying for a massive amount of throughput that is never used.
The solution is to merge multiple collections into a single one and query using a 'datatype' field? Really?
A collection should simply be a 'label' with no minimum RU/s requirement! I'm fine with a minimum RU/s on the database level, but setting it on the per-collection level makes me concerned with CosmosDB's fundamental design, or at least how the data abstractions are surfaced to the users.
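A minimal sketch of the workaround these comments describe: several logical entity types stored in one physical collection, discriminated by a "datatype" field. A plain in-memory list stands in for the real Cosmos DB container, and all names here are illustrative assumptions, not an SDK API.

```python
# One logical "collection" per entity type, physically stored in a
# single shared container; every document carries a 'datatype'
# discriminator so queries can filter by type. The backing store is a
# plain list here, standing in for a real Cosmos DB container.

class SingleCollectionRepository:
    def __init__(self, datatype, store):
        self._datatype = datatype
        self._store = store  # shared across all repositories

    def insert(self, doc):
        # Tag the document with its type, mirroring what a
        # 'WHERE c.datatype = ...' clause would filter on.
        self._store.append({**doc, "datatype": self._datatype})

    def find_all(self):
        return [d for d in self._store if d["datatype"] == self._datatype]

store = []  # the single shared "collection"
users = SingleCollectionRepository("user", store)
orders = SingleCollectionRepository("order", store)

users.insert({"id": "u1", "name": "Ada"})
orders.insert({"id": "o1", "total": 42})

print([d["id"] for d in users.find_all()])   # ['u1']
print([d["id"] for d in orders.find_all()])  # ['o1']
```

Business logic keeps talking to a per-type repository, so only the data layer changes; both types share the one container's (and thus one minimum's) throughput.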
Cosmos DB's Mongo API offers an RU pool, but it doesn't help, because it requires 50k RU/s as the minimum, which is too large for a small project.
I have a small website that was previously backed by SQL Server (it cost us around $8/month). I moved it to Cosmos DB and it's now costing us $58/month... This website is used for internal operations and gets hit 2-3 times a day, that's it. No reporting or large queries against that DB, nothing.
Russell de Pina commented
Where Microsoft screws you over is when you're creating collections. It sets the default at 20,000 RU/s and shows you that it's costing you $38.40 per day, but subconsciously you're thinking per month. Next thing you know, you're looking at a $14,000 expense for Cosmos DB.
NOT COOL AT ALL!!!!
Tom Tucker commented
The problem boils down to the extra cost of separating data without needing extra RU/s.
At $0.008 per 100 RU/s and a 400 RU/s minimum, the price isn't too bad: $24.00/mo.
Now I need the same RU/s but want to separate a dev collection from a production collection: $48.00/mo (unless you delete and recreate the dev collection every time you need it).
What if I want to separate my data by customer, with each customer in a separate collection? 10 customers × $24.00/mo is $240/mo.
OK, so now let's move to RU/s per database, the new feature that will save us money. It has a 50,000 RU/s minimum, costing $2,880/mo. That's saving me loads of money.
I do hear they are listening and are working out a pricing model for separating data with low RU/s at an affordable price. We shall see.
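The arithmetic in the comment above can be sketched as follows. The $0.008 per 100 RU/s per hour rate comes from the comment; the 730-hour billing month is an assumed convention, which is why the results land slightly off the comment's round figures (e.g. $2,880/mo corresponds to 720 hours).

```python
# Provisioned-throughput cost sketch using the rate quoted above:
# $0.008 per 100 RU/s per hour. A billing month is taken as 730 hours
# here (an assumption), so results are approximate.

RATE_PER_100RU_HOUR = 0.008

def monthly_cost(ru_per_s, hours=730):
    """Approximate monthly cost of provisioning `ru_per_s` of throughput."""
    return ru_per_s / 100 * RATE_PER_100RU_HOUR * hours

print(round(monthly_cost(400), 2))       # one collection at the minimum (~$24/mo)
print(round(monthly_cost(400) * 10, 2))  # ten per-customer collections
print(round(monthly_cost(50_000), 2))    # per-database minimum ($2,880/mo at 720 h)
```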
Apparently, they just changed their pricing model: we now pay for the total GB of data in the databases, plus the provisioned request units per second.
Great news! (Until it changes again...)
David Betz commented
This is extremely expensive compared to Azure Table Storage (which is the baseline for price comparison; we can't use our own finances as a comparison, since we all have different financial situations). It is entirely unusable for individual use. I generally rethink my document models and coerce them into tabular storage; ATS is affordable.
lacmta reg commented
I talked to an MS engineer who helped create Cosmos DB about the cost problem, and the response I received was that he had no idea what I was talking about, since he isn't paying anything for it himself.
Please change the shared database RU/s minimum for the Mongo API from 50,000 to around 1,000, or even 5,000 as a start, to allow customers to use this database platform for applications with multiple collections but modest performance needs.
I cannot see why MS has to keep this minimum so high; it will only drive customers to other providers.
This will definitely limit the "migrate applications to the cloud" approach for MS as a choice, unfortunately...
Jojo Diawuo-Appiah commented
The way I see Cosmos DB being less expensive is to separate out the table in your database that needs the most read/write transactions and trigger processing, and outsource just that table to a Cosmos DB collection. Ideal for online ledger/transaction tables.