According to this article https://azure.microsoft.com/en-us/support/legal/sla/log-analytics/v1_1/ the SLA allows indexing of log data to take up to 6 hours. OMS has built-in alerting that lets you trigger actions within 5 minutes of data arrival. But if indexing takes more than 5 minutes, what is the point of creating an alert that might fire on something that is no longer a problem, or not fire at all when there is a real problem? What is the average data indexing time? Log Analytics would be much more useful and have many more real-world applications if that indexing time were much lower. A 6-hour worst case seems like a joke for any system meant to respond to problems in real time.
(Usually I see a 5-second delay on indexing of custom logs, but a couple of days ago it spiked to 20 minutes for a couple of hours. As a result Log Analytics, which is otherwise very nice, might not be used at all, especially after reading that it could actually take 6 hours.) 353 votes
We have recently published an article – https://docs.microsoft.com/en-us/azure/log-analytics/log-analytics-data-ingestion-time – that details various aspects of data ingestion time for Log Analytics and clarifies the distinction between the financially-backed SLA and our Service-Level Objectives. In fact, the typical latency to ingest data into Log Analytics is between 3 and 10 minutes, with 95% of data ingested in less than 7 minutes.
We are also actively working to bring this latency down even further. Many customers already report a significant improvement, and more is coming.
Wouldn't it be cool if you could configure Windows Server WEF (Windows Event Forwarding - http://technet.microsoft.com/en-us/library/cc748890.aspx ) to send to Advisor for the Log Management scenario, without using the SCOM agent?
Alternatively, if one already has a forwarder/collector (WEF/WEC) architecture in place, it could be possible to use just one SCOM agent/gateway on that single collector box to pull the 'forwarded' logs stored there up to the cloud. 313 votes
This is currently under development and is scheduled to be in preview later in 2018.
Allow the collection and addition of custom fields using advanced logging or custom IIS modules. An example is adding x-forwarded-for to IIS logs in W3C format. 186 votes
Let’s see how many come here and vote for this, but we probably won’t special-case this one log type ourselves.
In any case, we are doing work to enable a per-tenant schema (since your fields would be different from mine) – tracked as part of the ‘custom fields’ work http://feedback.azure.com/forums/267889-azure-operational-insights/suggestions/6519270-allow-to-perform-parsing-and-custom-fields-extract
to be followed eventually by ‘custom logs’ http://feedback.azure.com/forums/267889-azure-operational-insights/suggestions/7113030-collect-text-log-files
which will enable this scenario – and many more!
Azure WebSites write to WAD with a different folder structure. The work in this other idea http://feedback.azure.com/forums/267889-azure-operations-insights/suggestions/6519377-collect-iis-logs-from-windows-azure-diagnostics-st enables reading those IIS logs for Azure Cloud Services (i.e. web role instances), but not for Azure WebSites.
This new idea is for the latter scope. 167 votes
Cloud Services / Virtual Machines write with a different container/folder structure in Azure blob than Azure WebSites. Our current ingestion processes the former, not the latter.
Anyhow, also consider the ‘generic’ idea of a platform feature to ingest your own logs http://feedback.azure.com/forums/267889-azure-operational-insights/suggestions/7928931-collect-data-from-custom-containers-in-storage-acc
OMS can collect IIS logs for web roles. Extend this capability to Azure Web Apps IIS logs as well as Azure Web Apps application logs. 117 votes
ComputerIP is populated with the IP address from which Azure Log Analytics receives data. For nodes behind a firewall/proxy or OMS Gateway, this means it holds the external IP address of the proxy.
ComputerIP should instead contain the IP address(es) collected by the agent on the computer hosting it, to enable compliance and security scenarios in the console.
RemoteIPAddress could be added to hold the external IP address for proxy-based agents, or contain the same address as ComputerIP for agents not behind a proxy/firewall/gateway.
This has a serious impact on compliance in the current implementation. 100 votes
Thanks. This is a good idea; we will consider it.
Would like to request a data retention interval by data type (similar to what is done in SCOM). Specifically, the ability to set retention timeframes on "Performance Data", "Event Data", and "Analytic Data". 88 votes
On Microsoft Azure you can enable Azure Storage logging. The logging information is saved in a $logs container in your StorageAccount. It would be great if we could add this log information to OpInsights. More information about how you can enable this type of logging: https://msdn.microsoft.com/en-us/library/azure/dn782840.aspx 72 votes
Thanks for the suggestion. We’re looking to add more support for Azure services and this will help us prioritize.
Need the ability to modify the extraction criteria for existing Custom Fields. I have added a handful based on SharePoint ULS, but they aren't always matching properly. The only way I have found to "improve" them is to remove the column and re-add it. 69 votes
Possibility to delete logs by type/date. For example, if an enormous amount of logs/custom logs is generated by accident, it can increase the bill by thousands of dollars. And if the plan is Premium with a 1-year retention policy, the billing amount can become huge. 64 votes
Log Analytics cost is incurred when data is sent into the service.
Once the data has been made available for searching, there is no cost saving in deleting it (except in the case of retention).
That said, there are cases where it might be useful to delete logs.
Specifically, to support GDPR, we have introduced the Purge API – https://docs.microsoft.com/en-us/azure/log-analytics/log-analytics-personal-data-mgmt#how-to-export-and-delete-private-data
Please note that this API must not be treated as a general-purpose data deletion API; it should be used for GDPR purposes only.
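For readers curious what a purge call involves, a rough sketch follows. It only builds the management-plane URL and request body; the subscription, resource group, and workspace names are placeholders, and the api-version shown is an assumption that may have changed since this was written. Actually sending the request requires an Azure AD bearer token.

```python
import json

# Hypothetical identifiers for illustration only -- substitute your own.
SUBSCRIPTION = "00000000-0000-0000-0000-000000000000"
RESOURCE_GROUP = "my-rg"
WORKSPACE = "my-workspace"

def purge_url(subscription, resource_group, workspace):
    """Management-plane endpoint for the Log Analytics purge operation."""
    return (
        "https://management.azure.com"
        f"/subscriptions/{subscription}"
        f"/resourceGroups/{resource_group}"
        "/providers/Microsoft.OperationalInsights"
        f"/workspaces/{workspace}/purge?api-version=2015-03-20"
    )

def purge_body(table, column, value):
    """Request body asking to purge rows of `table` where `column` == `value`."""
    return {
        "table": table,
        "filters": [{"column": column, "operator": "==", "value": value}],
    }

# POST purge_url(...) with an Azure AD bearer token and
# json.dumps(purge_body("Heartbeat", "Computer", "web01")) as the payload;
# the response carries an operation id you can poll for purge status.
```

The asynchronous operation id matters in practice: purges are not instantaneous, so any GDPR workflow has to poll for completion rather than assume the data is gone on return.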
I want to be able to filter out data I don't want to collect in logs. For example, with ACS (in SCOM) I could apply filters that didn't collect system logins. I would like this functionality in all logs; for example, I would want to filter IIS logs to remove data from certain IP addresses.
I can see customers wanting this type of functionality when the costs of data start to pile up. 61 votes
This feature is already in progress; a limited preview is expected later in 2018.
Windows events collected today come only from the 'classic' NT-style event logs (Application/System) as well as from the Crimson logs (Vista and above) that are saved in EVTX format.
It would be nice to enable collection of ETW trace logs (.ETL) too, like the /Analytics and /Debug logs. 57 votes
Feedback received in email and posted on behalf of the user.
We see ETW support as more suited for ‘diagnostics’ rather than ‘operational’ scenarios, and our focus is more on the latter, at least right now.
But we wonder how many people would like to see this?
Here the requirement is clear/obvious. We just have not prioritized this work yet.
The overall ‘performance’ data collection needs to be refined – not just for Linux.
Right now we only collect/provide hourly aggregates of some specific performance counters related to Hyper-V for the ‘Capacity Intelligence Pack’ scenario.
A real-time monitoring scenario might need a different shape of performance data to start with, before we enable this for Linux or Windows alike, i.e. http://feedback.azure.com/forums/267889-azure-operational-insights/suggestions/6519061-collect-custom-windows-performance-counters
Since there are devices like the Raspberry Pi with ARM architecture, it would be great if you provided binaries for ARM-based Linux systems as well. Currently, I am unable to run the agent on a Raspberry Pi with Raspbian despite the tutorials available on various sites. 43 votes
We are looking at making the experience for building the OMS Agent for Linux from source easier for different architectures
One amazing idea is to create custom fields during the custom log sample process. Another good idea is to add more timestamp samples (like the ISO 8601 format YYYYMMDDThhmmss.fffK, where YYYY: year, MM: month, DD: day of month, T: delimiter, hh: hour, mm: minutes, ss: seconds, fff: milliseconds, K: time zone offset) or to add the possibility to create a custom timestamp.
Will it be possible to delete some imported custom logs in order to run tests? 42 votes
We’re planning on allowing you to import/export Custom Logs & Fields via the UI and ARM templates. We’re currently implementing ARM support for most of the settings in OMS.
Thanks for sharing some of the timestamps you need. Feel free to e-mail them to me here: evanhi(at)microsoft.com
We’re actively planning a way for you to specify timestamps yourselves.
Please implement an NLog target for the OMS Data Collector API. 40 votes
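Until an official NLog target exists, any custom target would ultimately POST to the HTTP Data Collector endpoint with a SharedKey authorization header. A minimal sketch of building that signature (shown in Python rather than .NET for brevity; the workspace id and key below are placeholders) looks roughly like this:

```python
import base64
import hashlib
import hmac

# Placeholder workspace credentials -- substitute your own.
WORKSPACE_ID = "00000000-0000-0000-0000-000000000000"
SHARED_KEY = base64.b64encode(b"not-a-real-key").decode()

def build_signature(workspace_id, shared_key, date_rfc1123, content_length):
    """SharedKey authorization header value for the HTTP Data Collector API.

    The string-to-sign covers the method, body length, content type,
    x-ms-date header, and the fixed /api/logs resource path.
    """
    string_to_sign = (
        f"POST\n{content_length}\napplication/json\n"
        f"x-ms-date:{date_rfc1123}\n/api/logs"
    )
    digest = hmac.new(
        base64.b64decode(shared_key),
        string_to_sign.encode("utf-8"),
        hashlib.sha256,
    ).digest()
    return f"SharedKey {workspace_id}:{base64.b64encode(digest).decode()}"

# The signed request is then POSTed to
# https://{WORKSPACE_ID}.ods.opinsights.azure.com/api/logs?api-version=2016-04-01
# with Log-Type, x-ms-date, and the Authorization value above as headers.
```

A .NET NLog target would wrap exactly this handshake inside a `TargetWithLayout` subclass, batching events per flush to keep the request count down.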
Recursive log collection paths for Custom Logs.
This will help users like me with folders that contain logs plus subfolders with more logs. 39 votes
Request to introduce a user-defined delimiter for Custom Logs.
We run into issues where we're unable to parse the RabbitMQ log timestamp format.
Unfortunately, there is no configuration option in RabbitMQ to change that timestamp format, so we have to implement a heavy workaround to convert it to a date-time format supported by Microsoft before forwarding it to OMS. 38 votes
Thanks for the feedback. We are considering adding more timestamp formats.
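In the meantime, the conversion shim the poster describes can be quite small. This sketch assumes the classic pre-3.7 RabbitMQ report stamp (e.g. `8-Mar-2018::14:09:27`); newer RabbitMQ versions already emit a different, more standard format, so check your own log files first.

```python
from datetime import datetime

def convert_rabbitmq_ts(raw):
    """Convert a classic RabbitMQ timestamp like '8-Mar-2018::14:09:27'
    into ISO 8601, which custom log ingestion can recognize."""
    return datetime.strptime(raw, "%d-%b-%Y::%H:%M:%S").isoformat()

print(convert_rabbitmq_ts("8-Mar-2018::14:09:27"))  # 2018-03-08T14:09:27
```

A log-forwarding script would apply this per line before handing the record to the agent or the Data Collector API.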
A connection for SCCM 2012 R2 so that we can see all hardware inventory data on all managed servers. 32 votes
Have you checked out the new ‘change tracking’ feature? http://blogs.technet.com/b/momteam/archive/2014/09/24/wish-you-knew-which-configuration-change-caused-the-issue-or-what-changed-on-a-server.aspx
Our differentiating angle at this point is to provide supporting information and context for troubleshooting. We are more interested in ‘what changed’ and ‘what happened’ (time-based information) than in ‘state’ and ‘snapshot’ (= ‘current’ state) of objects at a given moment.
ConfigMgr is generally more focused on ‘pure’ inventory (= the current state of things).
Is it possible to delete Custom Logs sent via the HTTP Data Collector API?
Thanks in advance :)
Hugo Faria 28 votes