Pricing based on transactional throughput, similar to SQL Azure
The current model is a lot like the old SQL Azure model, where the volume of data largely drove the cost of the service. SQL Azure became an affordable breakthrough when it shifted toward pricing based on overall transactional throughput (i.e., large databases that did not need to handle huge query volumes were now affordable).
Search should work the same way. If I operate a marginally popular web forum, for example, I may have several gigs of data to index, but storing that is the cheap part. It doesn't get a lot of queries, and new data is added infrequently. Paying for high computational cost doesn't make sense because there is no high computational cost. Again, if this were consistent with the "DTU" model of SQL Azure, one would only have to pay for that computational overhead if one needed it.
Thank you for your feedback. While it is unlikely we’ll address this suggestion in the near future, we’ll reassess based on the number of votes it receives.
Azure Search Product Team
I completely agree. In my scenario I have an index that is just under 2 GB but growing slowly. The traffic to it is fairly low. It's very hard to justify the >3x price tag of Standard1 when the performance of Basic is more than enough, except for the capacity.