Modelling Complex Types in Indexes
When modelling an index, the available data types are restrictive: there are simple types and collections of simple types, but nothing that allows us to model complex (nested) types. The OData spec allows for complex types.
This feature is now Generally Available: https://docs.microsoft.com/en-us/azure/search/search-howto-complex-data-types
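Following the linked docs, a complex field is declared with `Edm.ComplexType` (or `Collection(Edm.ComplexType)`) and its own nested `fields` list. A minimal sketch of such an index definition; the field names here are illustrative, not prescribed:

```json
{
  "name": "hotels",
  "fields": [
    { "name": "HotelId", "type": "Edm.String", "key": true },
    { "name": "HotelName", "type": "Edm.String", "searchable": true },
    { "name": "Address", "type": "Edm.ComplexType",
      "fields": [
        { "name": "StreetAddress", "type": "Edm.String", "searchable": true },
        { "name": "City", "type": "Edm.String", "filterable": true, "facetable": true }
      ]
    },
    { "name": "Rooms", "type": "Collection(Edm.ComplexType)",
      "fields": [
        { "name": "Type", "type": "Edm.String", "filterable": true },
        { "name": "BaseRate", "type": "Edm.Double", "filterable": true }
      ]
    }
  ]
}
```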
Radoslaw Maziarka (PGS Software) commented
Missing this feature is the reason why people adopt custom served ElasticSearch or Solr. Please implement it as soon as possible.
It's been a while since we last heard anything about this. Can you share any more information with us, please?
Yohan S. commented
Any update, please?
Zach Bergman commented
Awesome that this has been started. Is there any information you can provide regarding an ETA of the feature? It seems to be 11 months into development - Can't wait to use it :)
From the docs it seems only string collections are supported, and this feature has been in development since Jan 18 2018. Any update would be appreciated; otherwise, please add me to the private preview. Thx!
Saurabh Abhyankar commented
I have sent an email. Please consider it and reply.
Dean Peters commented
Hey Azure Team, one of the primary reasons we would have to adopt Elastic or Solr is the inability of Azure Search to support complex objects.
Where are we with this initiative? How long does one have to wait to request addition to the private preview?
We have received your requests, even though you're getting an error message back. It looks like our public distribution list includes a private list as a member, which is why you're seeing the error. I'll see if we can get it fixed. Please let us know by posting here if you do not receive a reply from us. Thanks!
Yeah, my reply got rejected too
Gutemberg Ribeiro commented
We are unable to get in touch via that distribution list, as it is configured to allow only messages from inside MSFT, per this message returned:
> The group azsearch only accepts messages from people in its organization or on its allowed senders list, and your email address isn't on the list.
Goran Jovanovic commented
We would like to sign up for the private preview, as well. Thanks.
Freek Van Mensel commented
How can we sign-up for a private preview? We're very interested in the progress regarding this feature!
Yohan S. commented
Do you have any update on this, please?
Martin van der Burght commented
How to sign up for the private preview? We are definitely interested in this feature!
Grahame Horner commented
Any update would be good, please. Q3 2017 is here.
Graham Bunce commented
Usual bump that will get ignored...
This really doesn't help much when you want to facet by a hierarchical type, but only on some of those facets, i.e. a customer can add their own facets to an item in the index. When they search, they want to see their own facets, but not those of the 1,000 other customers in the same index. The workaround you force on us, as I understand it, is to implement something like you state above (e.g. customer key|facet name) and then, when searching, pull back *all* the facets of this type, regardless of the customer we're actually searching under. I then need to perform client-side parsing to discard the 99% of the facet results I don't need.
How is this even remotely scalable? What if I have 1,000 customers with 100 facets each? I'd need to request a facet count of 100,000... I mean, come on... seriously?
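The prefix workaround described in that comment can be sketched as follows. The facet values and `facets_for_customer` helper are hypothetical illustrations, not part of any Azure Search API; the point is only that every bucket comes back and most must be discarded on the client:

```python
# Hypothetical facet buckets returned for a shared facet field whose
# values are encoded as "customerKey|facetName:value".
facet_results = [
    {"value": "cust-001|color:red", "count": 12},
    {"value": "cust-002|size:large", "count": 7},
    {"value": "cust-001|size:small", "count": 3},
]

def facets_for_customer(results, customer_key):
    """Keep only the buckets belonging to one customer, stripping the prefix."""
    prefix = customer_key + "|"
    return [
        {"facet": r["value"][len(prefix):], "count": r["count"]}
        for r in results
        if r["value"].startswith(prefix)
    ]

print(facets_for_customer(facet_results, "cust-001"))
# → [{'facet': 'color:red', 'count': 12}, {'facet': 'size:small', 'count': 3}]
```

Every other customer's buckets still cross the wire and still count against the facet limit, which is the scalability complaint above.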
Make JSON another data type within the same index.
For example, if we have an Asset index with the following fields:
Id : string, key
Location : geography point
CurrentState : string
ReadyForFieldWork : boolean
Tags : string collection
FieldData : JSON
Provide some way to search and filter on the JSON data type.
The same concept could be applied to other document types as well: PDF, Word, text files, and so on.
This would eliminate the need to maintain two different indexes (one for properties and one for blobs).
Any updates on this?
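With complex types now generally available, the Asset example above could be modelled without a raw JSON field, provided `FieldData` has a known set of sub-fields. A hedged sketch; the `Technician` and `HoursLogged` sub-fields are invented for illustration:

```json
{
  "name": "assets",
  "fields": [
    { "name": "Id", "type": "Edm.String", "key": true },
    { "name": "Location", "type": "Edm.GeographyPoint", "filterable": true },
    { "name": "CurrentState", "type": "Edm.String", "filterable": true },
    { "name": "ReadyForFieldWork", "type": "Edm.Boolean", "filterable": true },
    { "name": "Tags", "type": "Collection(Edm.String)", "facetable": true },
    { "name": "FieldData", "type": "Edm.ComplexType",
      "fields": [
        { "name": "Technician", "type": "Edm.String", "searchable": true },
        { "name": "HoursLogged", "type": "Edm.Double", "filterable": true }
      ]
    }
  ]
}
```

Note that complex types still require a declared schema, so this does not cover the fully schemaless JSON field the comment asks for.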
Adam Łepkowski commented
Using multiple indexes will not work if you have a dynamic model structure (fully configurable by the user).
Azure Search is supposedly built on top of Elasticsearch, but is crippled with lots of seemingly artificial limitations - flat document, 512K max document size, rigid schema requirements, relatively small maximum document counts.
The flat document structure seems to indicate that the Lucene Document object is being used directly, rather than the very flexible NoSQL, document-oriented approach of Elasticsearch. Which is fine, but extremely limited compared to what Lucene-based search engines like Elasticsearch and Solr have been offering for years now. Being able to define an index with a schema containing nested levels of sub-documents and collections is a necessity for giving developers a search tool with which to rapidly build efficient search solutions.
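For comparison, the kind of nested sub-document schema this comment refers to looks like the following in an Elasticsearch mapping (index and field names are illustrative); the `nested` type keeps each sub-document's fields correlated at query time:

```json
PUT /assets
{
  "mappings": {
    "properties": {
      "name": { "type": "text" },
      "readings": {
        "type": "nested",
        "properties": {
          "sensor": { "type": "keyword" },
          "value":  { "type": "double" }
        }
      }
    }
  }
}
```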