It would be useful to have more control over the cache. In particular, it is sometimes useful to "replay" a certain time window and re-run queries on it. Being able to set the caching policy to (startTime, endTime), rather than always to (startTime, now), would help. It could even be possible to set up a temporary additional follower cluster pointing to a historical time window that can be advanced as necessary (faster than actual time progress, once done querying), so that regular operation of the cluster would not be affected.
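For reference, today's hot-cache policy takes only a lookback period; a minimal sketch contrasting it with the requested windowed form (the explicit-window syntax is the hypothetical part, and MyDatabase is a placeholder):
// Today: the hot cache always covers (now - 30d, now)
.alter database MyDatabase policy caching hot = 30d
// Requested (hypothetical): pin the hot cache to an explicit historical window
.alter database MyDatabase policy caching hot_window = datetime(2019-01-01) .. datetime(2019-01-08)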
Currently the anomaly chart works only in Kusto.Explorer. I would like to add anomaly detection and forecasting in Azure Notebooks.
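For context, a sketch of the kind of query this would enable outside Kusto.Explorer, assuming the demo_make_series1 sample table from the help cluster:
// Detect anomalies in an hourly time series and render them
let min_t = toscalar(demo_make_series1 | summarize min(TimeStamp));
let max_t = toscalar(demo_make_series1 | summarize max(TimeStamp));
demo_make_series1
| make-series Count = count() default = 0 on TimeStamp from min_t to max_t step 1h
| extend Anomalies = series_decompose_anomalies(Count, 1.5)
| render anomalychart with (anomalycolumns = Anomalies)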
The search function (Ctrl+F) in the Kusto.Explorer desktop app should support queries like "10:27:22" to ease searching in log-file-like contents.
- Make the web UI work on Android Chrome as a first priority
- When accessed from a phone, adjust the UI to suit small screens
Currently, if we run ingest-from-query operations (.set-or-append, .append, and similar commands) with the async keyword, we get an operation ID to track the progress of the operation later on. We can get the status of multiple operations using the .show operations command, but .show operation details returns the details of only one operation at a time. This forces us to fire a lot of .show operation details commands to retrieve the ingestion time and row count of the ingested tables. It would be great if Kusto could return the operation details of multiple operations with a single command.
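A minimal sketch of today's one-at-a-time pattern versus the requested batched form (the operation IDs are placeholders, and the multi-ID details syntax is the hypothetical part):
// Today: one command per operation ID
.show operation 00001111-aaaa-2222-bbbb-3333cccc4444 details
// Requested (hypothetical): details for several operations at once
.show operations (00001111-aaaa-2222-bbbb-3333cccc4444, 55556666-dddd-7777-eeee-8888ffff9999) details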
I would like the capability to create a clone of a database, where the newly created database is exactly the same as the one it is cloned from (including permissions, folder structures of tables/functions, etc.).
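As a partial workaround today, the schema (though not the data or the permissions) can be scripted and replayed against a new database; a sketch, assuming a source database named SourceDb:
// Generates the control commands that recreate the database schema
// (tables, functions, folders, docstrings); data and permissions are not included
.show database SourceDb schema as csl script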
In the .show commands result, show the number of rows affected and the data size affected, at least for ingestion commands (like CommandType == DataIngestPull).
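For context, a sketch of where this would surface, based on the existing .show commands output:
// Today's result includes State and Duration, but no affected-rows or
// affected-data-size columns for ingestion commands
.show commands
| where CommandType == "DataIngestPull"
| project StartedOn, CommandType, State, Duration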
In Kusto.Explorer, after a query runs, it always shows the "Query Summary" window, but I only care about the "Results" tab. If I close the query summary, it just comes back on the next query. And if I try to dual-pane the results and query summary, the layout is reset on the next query.
When do you recommend Azure Data Explorer time-series capabilities vs. Azure Time Series Insights?
print "ÉCOLE" =~ "école" // FALSE
print "COLE" =~ "cole" // TRUE
print tolower("École") == "école" // TRUE
print "SØD" =~"sød" // FALSE
print tolower("SØD") == "sød" //TRUE
The tolower() function supports the Scandinavian letters 'ÆØÅæøå' and the French accented letters, but the case-insensitive string operator =~ does not.
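A workaround sketch in the meantime: normalize both sides explicitly with tolower() instead of relying on =~:
// tolower() folds the accented characters correctly, so an explicit
// lowercase comparison works where =~ does not
print tolower("ÉCOLE") == tolower("école"), tolower("SØD") == tolower("sød") // TRUE, TRUE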
Allow non-admin/non-author users to annotate objects in the explorer pane.
The use case is a consumer of data prepared by somebody else. The consumer has only read/query access. In reality, schemas get convoluted quickly, and the names of tables, functions, and columns quite often aren't self-explanatory. Worse (real example): table1.RsrcId and table2.BladeId might mean the same thing, though only the consumer of both tables (possibly from different clusters) would care.
Therefore, allowing users with read-only access to annotate would be instrumental. These annotations should be shareable.
You can change the 'docstring' of a function/table/column (look for control commands of the form:
.alter table|function|column docstring …)
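For example, a sketch using those commands (MyTable, MyFunc, and the column name are placeholders):
// Table-level description
.alter table MyTable docstring "Raw telemetry, one row per event"
// Function-level description
.alter function MyFunc docstring "Joins telemetry with the device catalog"
// Column-level description
.alter table MyTable column-docstrings (RsrcId: "Same meaning as table2.BladeId")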
When resuming an existing cluster via the Azure portal, I would like to be able to update configuration properties, such as setting a new number of nodes or a new SKU.
Currently, it is required to wait for the cluster resume operation to complete, and only then is it possible to alter such configuration properties.
When creating a cluster via the Azure portal, I would like to be able to set the number of nodes, not only the SKU.
Currently, it is required to wait for the cluster creation to complete, and only then is it possible to scale it out.
Several teams share clusters, which makes assigning the streams challenging; action items go to the cluster owner/creator instead of the actual DB or table owner. We need a way to assign DBs to individual services so we can allow more granular service attribution. I know metadata can be set at the table level, but that's too granular; we need it at the DB level as well. This scenario is important for GDPR and data governance, as some teams are shying away from registering their shared clusters so they do not get all the action items assigned to them.
Let's say your table has a column called SomeColumn that has only 10 distinct string values. Then it would be useful to have autocomplete when trying to do a string match:
| where SomeColumn == '[show autocomplete options here]
This would avoid having to run a "distinct" query to figure out the values and then copy a value into another query.
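For reference, the current two-step workaround this would eliminate (MyTable and the pasted value are placeholders):
// Step 1: discover the possible values
MyTable | distinct SomeColumn
// Step 2: paste one of them into the real query
MyTable | where SomeColumn == "ValueCopiedFromStep1"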
The .create external table command causes an error if the table already exists. This is inconvenient for automated deployment. It would be nice if .create-or-alter were supported!
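A sketch of the requested idempotent form (the .create-or-alter variant is the hypothetical part; the table definition and storage URI are placeholders):
// Today: fails with an "already exists" error on redeployment
.create external table MyExternalTable (Timestamp: datetime, Message: string)
kind = blob
dataformat = csv
(
   h@"https://mystorage.blob.core.windows.net/logs;impersonate"
)
// Requested: .create-or-alter external table ... with the same definition,
// succeeding whether or not the table already exists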
Can we get syntax to use the [Service-specific schemas](https://docs.microsoft.com/en-us/azure/azure-monitor/platform/diagnostic-logs-schema#service-specific-schemas-for-resource-diagnostic-logs) template for specific Azure services, rather than the general [Top-level diagnostic logs schema](https://docs.microsoft.com/en-us/azure/azure-monitor/platform/diagnostic-logs-schema#top-level-diagnostic-logs-schema)? I don't want to type out each column with the project operator all the time. Is it possible?
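For context, the repetitive pattern in question; a sketch assuming the AzureDiagnostics table and a few of its top-level columns:
// Manually projecting top-level schema columns in every query
AzureDiagnostics
| project TimeGenerated, ResourceId, Category, OperationName, ResultType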
Currently there is a function to convert a base64 string to a UTF-8 string, i.e. base64_decodestring(). However, if there is no valid string output, that function fails and returns nothing.
e.g. print base64_decodestring("igAAAAAAAACDAAAAACAAAA==")
There are cases where we need a byte representation of a base64 string. Having a function like base64_decodearray() that returns a dynamic array of 0-255 values would let us get that.
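A sketch of the requested behavior (base64_decodearray is the hypothetical function being proposed; the expected output is decoded by hand from the input above):
// Today: fails because the decoded bytes are not valid UTF-8
print base64_decodestring("igAAAAAAAACDAAAAACAAAA==")
// Requested (hypothetical): return the raw bytes instead
print base64_decodearray("igAAAAAAAACDAAAAACAAAA==")
// -> [138, 0, 0, 0, 0, 0, 0, 0, 131, 0, 0, 0, 0, 32, 0, 0]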