Add ability to view Azure Table size/entity count (rows)
I've created this idea as suggested on the forums: http://social.msdn.microsoft.com/Forums/en/windowsazure/thread/ea18ae29-36a3-42c6-8420-877216efbd42
One of the big challenges in adopting Azure Table storage over traditional SQL storage is the lack of a way to know how much data is stored and how it is being used.
Being able to break size/rows down by partition would be invaluable when trying to modify or optimize Partition & Row keys. (Data doesn't always grow as we would expect, and new bottlenecks can & will emerge.)
In addition, the ability to view usage data per table/partition would be fantastic.
Obviously there are ways of doing this at a higher layer, but having a consistent and reliable API for exploring the data would go a long way toward making Table storage a viable NoSQL alternative.
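For context, the only option today is a full table scan. Here's a minimal sketch of that client-side workaround; the entity dicts and the size heuristic are illustrative assumptions on my part, not the service's actual billing formula:

```python
from collections import defaultdict

def partition_stats(entities):
    """Aggregate entity count and rough size per partition.

    `entities` is any iterable of dicts carrying a 'PartitionKey',
    e.g. the pages of a full table scan. This is exactly the kind of
    client-side pass a built-in stats API would make unnecessary.
    """
    stats = defaultdict(lambda: {"count": 0, "bytes": 0})
    for entity in entities:
        pk = entity["PartitionKey"]
        stats[pk]["count"] += 1
        # Rough estimate only: sum of the string lengths of property
        # names and values. Real billable size is more involved.
        stats[pk]["bytes"] += sum(
            len(str(k)) + len(str(v)) for k, v in entity.items()
        )
    return dict(stats)

sample = [
    {"PartitionKey": "cust-A", "RowKey": "1", "payload": "x" * 10},
    {"PartitionKey": "cust-A", "RowKey": "2", "payload": "y" * 20},
    {"PartitionKey": "cust-B", "RowKey": "1", "payload": "z" * 5},
]
stats = partition_stats(sample)
print(stats["cust-A"]["count"])  # 2
```

The pain point is that this costs a read of every entity in the table; a service-side counter would be O(1).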
We understand this is a top customer ask and as such it is currently on our backlog to be prioritized. We will update when the status changes.
Waiting to exhale...
What is the status now? How can it be so difficult to implement a count? Such basic functionality!
I'm trying to understand the complexity involved in adding this feature, and the reason for the holdup.
The value would be extremely high, and it seems to me that the difficulty in implementing this would be extremely low.. so, seriously.. what gives?
7 years and counting. Thank you Microsoft
While I can get the capacity manually from the billing interface, it would really help us to be able to get this information through an API. We are storing large amounts of IoT data in Table storage, and we need more visibility into this data to be able to make good decisions.
Richard Hubert commented
Is there already a way to see current usage in terms of total data sizes? This is needed if only for accounting reasons...
5 years and counting... :(
Fernando Silva commented
Added in 2010. Still under review. What a joke
I need something like this for blobs. I have a multi-tenant product that stores user data in blobs. Each client may be storing millions of blobs with me (which I then store on Azure). I need a way to quickly determine how much data each user is storing. Ideally I could make this query using a path prefix. If not then I could store each customer's data in their own container and make the query by container. Looping through blob lists and summing up the sizes just doesn't scale.
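To make the scaling problem concrete, this is the loop described above in sketch form; the `(name, size)` pairs and the path-prefix convention (`tenant-id/...`) are my own assumptions for illustration:

```python
def usage_by_tenant(blobs, prefix_depth=1):
    """Sum blob sizes grouped by leading path segment(s).

    `blobs` is an iterable of (name, size_in_bytes) pairs, e.g. from
    paging through a container listing. Grouping by the first path
    segment approximates per-tenant usage when blob names look like
    'tenant-id/...'.
    """
    totals = {}
    for name, size in blobs:
        key = "/".join(name.split("/")[:prefix_depth])
        totals[key] = totals.get(key, 0) + size
    return totals

listing = [
    ("tenant-1/a.bin", 100),
    ("tenant-1/b.bin", 50),
    ("tenant-2/a.bin", 7),
]
print(usage_by_tenant(listing))  # {'tenant-1': 150, 'tenant-2': 7}
```

With millions of blobs per tenant, every refresh of these totals is a full listing pass, which is exactly why a per-prefix or per-container size query on the service side would matter.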
Also, it would be good to know both the raw size as well as the billable size.
James Andrews commented
Thank you - the level of engagement & transparency of the Azure team is one of the most appealing features of the platform.
Below are some additional points for the storage team.
This feature would not need to function as a SELECT COUNT(*) equivalent, i.e. the results don't need to be real-time or 100% accurate - similar to how SQL Server exposes approximate row counts via the sysobjects table and sizing via sp_spaceused.
I would think this provides a solution to several problems:
1. Sub-billing of the platform (as opposed to creating an entire billing solution, as suggested elsewhere in the forum; if a PK used a customer ID, this would allow for end-user usage breakdowns)
2. Optimization of partition / row key schemes (& hence query performance)
3. An understanding of where resources are being consumed (when I look at my bill I might get a shock; it would be good to know exactly what was taking up space)
I thought it might be interesting to provide this data through a read only table, similar to SQL Server. (Although I don't really mind, as the first thing I would do is create a table to store a history of table / partition size & count so I could monitor growth over time... but others might find this a nice way of accessing the data)
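To make that last idea concrete, here is a toy model of the history table I'd build on top of such a read-only stats feed. Everything here is hypothetical: `StatsHistory`, its schema, and the snapshot values are my own invention, not anything the service provides.

```python
import datetime

class StatsHistory:
    """Toy model of growth monitoring over a (hypothetical) read-only
    per-partition stats feed: snapshot (count, bytes) over time, then
    compute deltas."""

    def __init__(self):
        # Each row: (timestamp, table, partition, count, bytes)
        self.rows = []

    def snapshot(self, table, partition, count, nbytes, when=None):
        when = when or datetime.datetime.now(datetime.timezone.utc)
        self.rows.append((when, table, partition, count, nbytes))

    def growth(self, table, partition):
        """Entity-count delta between first and last snapshot."""
        pts = [r for r in self.rows if r[1] == table and r[2] == partition]
        if len(pts) < 2:
            return 0
        pts.sort(key=lambda r: r[0])
        return pts[-1][3] - pts[0][3]

h = StatsHistory()
h.snapshot("orders", "cust-A", 100, 10_000, datetime.datetime(2010, 1, 1))
h.snapshot("orders", "cust-A", 150, 16_000, datetime.datetime(2011, 1, 1))
print(h.growth("orders", "cust-A"))  # 50
```

Whether the feed surfaces as a read-only system table or a REST endpoint doesn't matter much to me; what matters is that the counts come from the service instead of a client-side scan.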