Support secondary indexes
Need to be able to sort on something other than the RowKey.
We understand this is a top customer ask and as such it is currently on our backlog to be prioritized. We will update when the status changes.
Mani Gandham commented
Azure Table Storage is a key/value store (sometimes referred to as wide-column stores), similar to HBase, Cassandra, DynamoDB, BigTable, and others. The whole architecture is based around a sorted hashmap with partition and row keys, and that's where the scale and performance comes from. Secondary indexes do not really fit in this model.
If you need secondary indexes then you'll likely have to do it yourself. Some providers automate this (like AWS DynamoDB) and others build a new offering on top (like Google Cloud Datastore). It seems Azure went with the extreme option via CosmosDB, which is a full database service that can be accessed over the table API, although at that point it would probably be better to just use the native document store interface and get all the richness. Either way, that's probably the only path forward if you don't want to manage indexes yourself.
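The do-it-yourself approach described above usually means an index table: for each field you want to look up by, write a duplicate "index" entity whose keys encode that field and which points back at the primary row. A minimal sketch of the write path, with a hypothetical customer/email entity shape (nothing here is an official API, just plain dicts shaped like table entities):

```python
# Sketch of a manual secondary index for a key/value table store.
# The entity shape (customer_id, email) is hypothetical.

def primary_entity(customer_id: str, email: str) -> dict:
    """Primary row, keyed for point lookups by customer id."""
    return {
        "PartitionKey": "customer",
        "RowKey": customer_id,
        "Email": email,
    }

def email_index_entity(customer_id: str, email: str) -> dict:
    """Index row: the keys encode the email so it can be looked up
    directly; a property points back at the primary RowKey."""
    return {
        "PartitionKey": "customer-by-email",
        "RowKey": email,
        "CustomerId": customer_id,
    }

def rows_to_write(customer_id: str, email: str) -> list:
    """Both rows must be written together. Because they live in
    different partitions, the application owns keeping them consistent
    (e.g. write the index row first, the primary row second)."""
    return [
        primary_entity(customer_id, email),
        email_index_entity(customer_id, email),
    ]
```

This is exactly the consistency burden the comment alludes to: the store gives you no cross-partition transaction, so every writer must remember to update both rows.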
Scott Helme commented
I guess that the response to this will now be to use Azure Cosmos: https://docs.microsoft.com/en-us/azure/cosmos-db/introduction
This would be awesome and would bring this service into the premier league. Having the secondary indexes managed by the service makes it incredibly useful for many different use cases, and for developers with different skill sets.
Frank Szendzielarz commented
Agree with @Anonymous below. Secondary indexes would really help, but in my experience, really committing to ATS and thinking hard about the problem usually leads to designs that make sense and do not need the additional indexes. The main benefit of built-in secondary indexes would be making ATS much more adoptable, requiring much less of a leap of faith.
ATS is still a great solution that works well. It is quite unique compared to what AWS and Google offer.
Hemant C Sharma commented
How is this coming along? Will be very useful indeed....
Steffen Gammelgård commented
Hope the secondary index is still coming.
Azure Table Storage is an immensely powerful platform, and is already an exceptionally cost effective NoSQL database for certain kinds of applications.
Being able to add a secondary index would drastically reduce the complexity and the amount of redundant data in certain scenarios and would make an already awesome platform relevant for even more projects.
I too have used index tables to get secondary indices, and sometimes that would still be necessary to allow for efficient partitioning, but having a built-in second index still seems like it would be very useful!
While I agree that built-in secondary indexes would be nice, in most cases it's better to consider indexes on a case-by-case basis and implement each index differently depending on your queries and their frequency. Information on creating your own secondary indexes can be found here: https://msdn.microsoft.com/en-us/library/dn589791.aspx
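The index-table pattern linked above also has a read path worth spelling out: two point reads instead of one. A sketch, simulating the two tables with in-memory dicts keyed by (PartitionKey, RowKey); all names and data are illustrative:

```python
# Read path for the index-table pattern, simulated with in-memory
# dicts standing in for two tables. Keys are (PartitionKey, RowKey).

primary = {
    ("customer", "c1"): {"Email": "a@example.com", "Name": "Ada"},
}
by_email = {
    ("customer-by-email", "a@example.com"): {"CustomerId": "c1"},
}

def find_by_email(email):
    """Two point reads: the index table resolves the email to a
    primary key, then the primary table yields the full entity."""
    idx = by_email.get(("customer-by-email", email))
    if idx is None:
        return None
    return primary.get(("customer", idx["CustomerId"]))
```

The cost is one extra round trip per lookup; the benefit is that both reads are exact-key point queries, which is the access pattern the store is fast at.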
+1 on Inattention and Neglect.
And with the evolution of the discussion on this, it has become clear that basing critical software on Azure Table Storage, if you have any needs beyond a simple distributed, partially sorted hashtable, is pointless; indeed, I can no longer trust UserVoice as a channel where the roadmap for these features is accurately communicated.
No, AIS is not a solution to this. I do not want an index that may or may not be in an updated state. Heck, I am even willing to build my own index! Just let me grab multiple non-contiguous rows (by RowKey) in one request!
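The closest you can get to that today is combining several RowKey predicates into a single OData `$filter` string, so one query request returns several non-contiguous rows from a partition. A sketch of building such a filter (the helper name is mine):

```python
def rowkey_filter(partition_key, row_keys):
    """Build an OData $filter that asks for several non-contiguous
    rows from one partition in a single query request. Single quotes
    in values are doubled per OData string-literal rules."""
    def quote(value):
        return "'" + value.replace("'", "''") + "'"
    ors = " or ".join("RowKey eq " + quote(k) for k in row_keys)
    return "PartitionKey eq {} and ({})".format(quote(partition_key), ors)
```

The caveat, and probably why commenters still want a real batch-get: the service executes an OR'd filter as a range/partition scan rather than a set of point reads, so it is one request but not necessarily point-read performance.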
But more importantly than even that, decide whether UserVoice is actually a channel that you want to use to engage with your customers. If it is, announcing at your conference that a feature is coming, then ignoring it for 2 years, then waiting another 3 years before saying it actually is not coming, is incredibly insulting.
Inattention and neglect is the perfect summary of ATS. Microsoft, up the price or do whatever, just do something.
SSD-backed storage + secondary indexes is a no-brainer.
Appreciate the workaround. But the consensus that ATS has become - through inattention and neglect - an also-ran is hard to shake. Not sure how I could recommend that anyone build a solution on it at this point. At most, I would recommend ATS as a read-only data store, where you put data that you hope to never see again but aren't quite willing to say goodbye to.
Aaron, DocumentDB is not a replacement, it is a totally different beast and not a key-value store such as Table Storage. Also DocumentDB is not portable (yet at least) and gives you a lock-in to the public Azure cloud. Table Storage is portable through Azure Stack.
I dream and hope MS is picking up Table Storage again and gives us Premium (SSD-backed) and with secondary indexes. There's really no reason on this good earth they shouldn't do this!
Aaron Lawrence commented
I would agree with Mike Olson that it is a BAD IDEA to build a new application on Azure Table Storage, as Microsoft appears to be deprecating it (although it's not completely clear what they regard as a replacement; DocumentDB seems most likely).
Regarding table storage, I've also noticed that in the new portal when you look at diagnostics config in WebApps, there's only File Storage and Blob Storage as options, while in the classic portal you also have Table Storage as an option.
Even more, I've noticed that the internal logging in WebJobs via WebJobs Dashboard seems to have moved from using Table Storage to using Blob Storage - but that may be for any number of reasons, of course.
Anyways, I wouldn't recommend table storage as a long-term option for anything new for these reasons. And of course it's hard to use without any secondary indexing.
Mike Olson commented
The writing's on the wall, folks. It's been what, 3 years since this was supposedly added to their roadmap? When was the last time Microsoft made *any* real improvements to Table Storage, besides putting a ton of breaking changes in the SDK that required hundreds of hours of dev time to support?
Azure Table Storage is old news and is being ushered unceremoniously out the door. It's time for us to give in and move to DocumentDB, Azure SQL, or one of the dozen new storage services that Microsoft is actually putting effort into. Or just do what I've been doing and just start storing your data in flat files on Blob Storage, since for a lot of cases that's easier to code, more performant and more flexible.
At least Microsoft isn't slowly moving us away from Cloud Services... oh wait.
Any comments from Microsoft admins?
Aaron Lawrence commented
It's difficult to use ATS as it stands for anything real. We built our own indices in SQL, which of course introduces exciting new consistency issues. One index is just too little - basically that gives you the ability to move data in and out by an identifier, but not do anything else with it.
We absolutely need this! Due to the high cost and throughput limitations of DocumentDB it is not a viable alternative to having secondary indexes in table storage.
Let me give you an example of why this is critical. I have about 30 GB of event logs. I want to store these in ATS. The problem is that I need to be able to query them along multiple different axes - the company they belong to, the individual user they belong to, the date/time they came in, the event name, and so forth. And eventually, probably lots more. My current solution is to store the data "n" times, each one with a different partition-key/row-key schema, to enable querying along that particular dimension. So far so good - it's not a problem to write the data "n" times, given how well ATS performs, and how cheap it is.
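The "store it n times" scheme described above can be made concrete: each event fans out to one (table, PartitionKey, RowKey) triple per query dimension. A sketch, using the common ATS trick of a fixed-width reversed-timestamp RowKey so newest events sort first (table names, the event shape, and the Unix-epoch tick base are all my assumptions, not the commenter's actual schema):

```python
from datetime import datetime, timezone

MAX_TICKS = 10 ** 19  # upper bound for the reversed-tick trick

def reversed_ticks(ts):
    """Fixed-width reversed timestamp: lexicographic order of the
    string matches reverse-chronological order of the events.
    Uses Unix-epoch 100ns 'ticks' here for simplicity."""
    ticks = int(ts.timestamp()) * 10 ** 7
    return "{:020d}".format(MAX_TICKS - ticks)

def key_schemas(event):
    """One (table, PartitionKey, RowKey) per query dimension; the
    same event payload is written under every triple."""
    row_key = reversed_ticks(event["time"]) + "_" + event["id"]
    return [
        ("EventsByCompany", event["company"], row_key),
        ("EventsByUser", event["user"], row_key),
        ("EventsByName", event["name"], row_key),
    ]
```

Adding a dimension means adding one tuple to this list, which is cheap for new writes; the maintenance pain the next paragraph describes is back-filling that new table from the existing data.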
But the problem is maintenance. Right now, with about 30 GB of data, if I come up with a new dimension that I need to support, it takes me at least a day to write the scripts to export and then re-import the data in the new format (because it all has to be parallelized, and needs to track state for each portion I'm running in parallel, or it would take weeks); and even if I don't mess up the (very complex) import/export scripts somehow, it then takes at least 1-2 days to actually get all the data over to the new table.
And that's simply not a scalable model. What happens when I don't have 30 GB of data I need to pivot, but 300 GB? Or 30 TB? The pivot scripts will take weeks to run, and maintaining any shred of consistency through the process gets very, very complicated.
I get that this is a complex problem to solve. But telling every ATS user to come up with their own (almost certainly suboptimal and buggy) solution doesn't make it any less complicated.