More granular sizing for Elastic Database Pools
Currently the sizing of Elastic Database Pools is not very granular. I'd like to be able to adjust the pool eDTU in much smaller increments, e.g. 5 or 10 DTUs at a time.
Let's say I have a Premium elastic pool with 125 eDTU serving 20 databases, and I'm happy with the performance. I gain a few more customers and need to add, say, 3 databases ... but there's no way to grow my pool to support the additional load. Either I pay twice as much for a 250 eDTU pool, or I put up with fewer average resources per database, or I put the new databases in a smaller Standard pool or keep them out of the pool entirely. None of these are good solutions. Whichever size of elastic pool you're at, the only options are to go up by 100% or down by 50%, meaning even customers with big pools (lakes?) will have a hard time getting the sizing right.
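To make the complaint concrete, here is a small sketch of the doubling tier ladder described above. The tier values are illustrative of the Premium elastic pool options at the time, not official pricing:

```python
# Illustrative Premium elastic pool eDTU tiers (values as described in
# this thread; each step roughly doubles the previous one).
premium_tiers = [125, 250, 500, 1000, 1500]

def next_tier_up(current):
    """Return the smallest tier strictly larger than `current`, or None."""
    for tier in premium_tiers:
        if tier > current:
            return tier
    return None  # already at the top tier

current = 125
step_up = next_tier_up(current)
increase_pct = (step_up - current) / current * 100
print(f"Growing a {current} eDTU pool means jumping to {step_up} eDTU "
      f"(a {increase_pct:.0f}% increase in capacity and cost)")
```

With a ladder like this, every upgrade is a 100% jump; there is no way to buy, say, 20% more headroom.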
Similarly the per-database eDTU min/max needs to be more granular to be useful. Actually I find it hard to imagine a case right now where anyone could be using this feature ...
If my pool is 125 eDTU then I can't use the min/max feature at all, because there's no setting available between 0 and 125 - any database given a minimum would claim all my resources. If I upsize to a 250 eDTU pool then I can set just one or two databases to a min and/or max of 125 ... but that's half the pool's resources going to a single database! If you're going to spend that many eDTUs on one database, you might as well not use an elastic pool. I love the idea of this feature, but it only makes sense if the options are more granular than 125 eDTUs at a time.
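The arithmetic behind this objection can be sketched in a few lines. The numbers below restate the scenario in the post (a 250 eDTU pool whose only nonzero per-database minimum choice is 125 eDTU); they are illustrative, not a statement of current service limits:

```python
# Sketch: coarse per-database minimums exhaust a small pool immediately.
pool_edtu = 250       # total pool capacity (illustrative)
per_db_min = 125      # only nonzero per-DB minimum available (illustrative)

# Setting a minimum on just two databases reserves the entire pool,
# leaving zero guaranteed capacity for the other databases.
databases_with_min = 2
guaranteed = per_db_min * databases_with_min
remaining = pool_edtu - guaranteed
print(f"{databases_with_min} databases with a {per_db_min} eDTU minimum "
      f"reserve {guaranteed}/{pool_edtu} eDTU; {remaining} eDTU left to guarantee")
```

With finer steps (say 10 or 25 eDTU), the same pool could guarantee a floor to many databases instead of just two.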
Several improvements in this regard have since come online - please see Morgan's comment below (posted 6/11/2018). Thanks
Kevin Cloet commented
Is there any way we could set a min/max per specific database?
We have multiple customers paying different amounts for their databases. Some don't care and get the smallest one, others pay more so they can have more resources.
The way the min/max is set up right now doesn't work for this scenario. We can raise the maximum, but it applies to all the databases, so the lower-paying customers can take advantage of it and consume the maximum resources too.
Robert Downey commented
I'd like this same granularity with vCores. We have EDPs that are maxed out at 2TB, but definitely don't need the 6 vCores we're required to have configured. We almost never go above 20% vCore utilization.
Martin Szabo commented
I agree with Rory and other commenters that we need more granular click-stops. We recently had to increase our 500 eDTU Premium pool, and the next level up was 1000 eDTU of which we typically only use 20% (except for during index rebuild time).
Yes, several improvements have been made, but not the ones requested in this feedback post.
We still cannot increase the eDTU click stops in a granular way. It goes from 400 straight to 800!
Why can this not at least follow the previous click stops - 100, 200, 300, 400, in 100 eDTU increments? (It would be even better if it were more granular, with 10/15/20/25 eDTU steps, but I would settle for consistent 100s at this stage.)
Morgan Oslake commented
Appreciate the feedback on this topic. Note that several improvements in this regard have since come online. For example:
- More choices in eDTUs per pool and eDTUs per DB, plus higher storage limits and higher per-pool database limits
- Additional eDTU choices and higher eDTU limits per DB
- Storage add-ons (to get more storage without having to purchase more compute)
Other feedback mentioned will be taken into consideration for future improvements.
Brian Hall commented
I agree with Rory and the commenters here and wanted to refresh this conversation. We're currently a startup running a Standard pool with 200 eDTUs. We tried running at 100 eDTUs, but during our highest load we were right up around 95%-100% usage. We scaled up to 200 and now hover around 40%-50%.
That's double the cost while using less than half the resources. One of the promises of cloud computing is supposed to be more efficient resource usage; the way elastic pools are currently set up leaves much to be desired.
I think it's fine to have click-stops, but they shouldn't get bigger and bigger - e.g. 25 eDTU increments would be fine, or even 50 eDTU. I just don't understand why the control can't be much more granular than it is now.
If I have a pool at 125 eDTU, the only way to make it slightly bigger is to pay DOUBLE my current amount. Same at 250 eDTU: I have to go up to 500 instead of being able to go to just 300 or 350. The impact, which we experience directly, is that we're stuck at a lower level than we'd like. I'd happily pay for 300 eDTU rather than 250, but I can't afford 500, so I'm stuck at 250.
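The cost-efficiency effect described in this comment can be made explicit with back-of-the-envelope numbers. The monthly prices below are hypothetical placeholders (assuming cost scales linearly with pool eDTU); the utilization figures come from the comment above:

```python
# Sketch: doubling the pool halves utilization, so the cost per *used*
# eDTU roughly doubles. Prices are hypothetical placeholders.
def cost_per_used_edtu(pool_edtu, monthly_cost, avg_utilization):
    """Monthly cost divided by the eDTUs actually consumed on average."""
    used = pool_edtu * avg_utilization
    return monthly_cost / used

before = cost_per_used_edtu(100, 150.0, 0.95)  # near saturation at 100 eDTU
after = cost_per_used_edtu(200, 300.0, 0.45)   # doubled pool, ~45% utilization
print(f"cost per used eDTU rose from {before:.2f} to {after:.2f}")
```

A 300 eDTU option at proportional cost would let the commenter buy 50% more headroom instead of 100%, keeping utilization (and cost per used eDTU) in a healthier range.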
Thomas Mueller commented
The per-database eDTU max is the most important part for us. These settings are not nearly granular enough.
If I have a pool with relatively small eDTUs, then I only have one or two options for the per-database max. I need smaller maximums so multiple databases can live together in peace.
If I have a large pool, say 1,000 eDTUs, then the largest max available is 500 eDTUs. I need to be able to set the max to 800 or 900 so I can run a large job on one database without clobbering all the others.
Flexibility in the ratio between eDTU and GB would be useful too - a bit like the concept the compute team has with differently specced VMs for different scenarios (e.g. F series and GS series). Our databases are content-rich, so we'd have to pay for an 800 eDTU pool because of the storage needs even though we only really need 200 eDTU.