Log Analytics

Welcome to the Azure Log Analytics (https://azure.microsoft.com/en-us/services/log-analytics/) feedback page. We appreciate your feedback and look forward to hearing from you. Use this site for new ideas and bug reports, or to request help.
NOTE – Log Analytics is now a part of Operations Management Suite. Learn more at http://microsoft.com/OMS

How can we improve Azure Log Analytics?


Enter your idea and we'll search to see if someone has already suggested it.

If a similar idea already exists, you can support and comment on it.

If it doesn't exist, you can post your idea so others can support it.


  1. 6 hours SLA on indexing custom log data is a very long time to alert on

    According to this article (https://azure.microsoft.com/en-us/support/legal/sla/log-analytics/v1_1/), the SLA allows indexing of log data to take up to 6 hours. OMS has built-in alerting that lets you trigger actions within 5 minutes of data arrival. But if indexing takes more than 5 minutes, what's the point of creating an alert that might trigger on something that is no longer a problem, or not trigger at all when there is a real problem? What is the average data indexing time? Log Analytics would be much more useful, and have many more real-world applications, if that indexing time were much lower. 6…

    359 votes

    We have recently published an article, https://docs.microsoft.com/en-us/azure/log-analytics/log-analytics-data-ingestion-time, that details various aspects of data ingestion time for Log Analytics and clarifies the distinction between the financially-backed SLA and our Service-Level Objectives. In fact, the typical latency to ingest data into Log Analytics is between 3 and 10 minutes, with 95% of data ingested in less than 7 minutes.

    We are also actively working to bring this latency down even further; many customers already report a significant improvement, and more is coming.

  2. Use Windows Event Forwarding (WEF) to send events to OpInsights

    Wouldn't it be cool if you could configure Windows Server WEF (Windows Event Forwarding - http://technet.microsoft.com/en-us/library/cc748890.aspx ) to send events to Advisor for the Log Management scenario, without using the SCOM agent?
    Alternatively, if one already has a forwarder/collector (WEF/WEC) architecture in place, could it be possible to use just one SCOM agent/gateway on the collector to push the 'forwarded' logs stored on that single box to the cloud?

    333 votes
  3. Collect IIS Advanced logs

    Allow the collection and addition of custom fields using IIS Advanced Logging or custom IIS modules. An example is adding x-forwarded-for to IIS logs in W3C format.

    208 votes

    Let’s see how many come here and vote for this, but we probably won’t special-case this one log type ourselves.

    We are anyhow doing work to enable a per-tenant schema (since your fields would be different from mine) – tracked as part of the ‘custom fields’ work http://feedback.azure.com/forums/267889-azure-operational-insights/suggestions/6519270-allow-to-perform-parsing-and-custom-fields-extract
    to be followed eventually by ‘custom logs’ http://feedback.azure.com/forums/267889-azure-operational-insights/suggestions/7113030-collect-text-log-files
    and
    http://feedback.azure.com/forums/267889-azure-operational-insights/suggestions/7928931-collect-data-from-custom-containers-in-storage-acc

    which will enable this scenario – and many more!

  4. Collect IIS Logs from Windows Azure Diagnostics storage (WAD) for Azure Web Sites

    Azure WebSites write to WAD in a different folder structure. The work of this other idea http://feedback.azure.com/forums/267889-azure-operations-insights/suggestions/6519377-collect-iis-logs-from-windows-azure-diagnostics-st enables reading those IIS logs for Azure Cloud Services (i.e. web role instances) but not for Azure Web sites.
    This new idea is for the latter scope.

    166 votes

    Cloud Services / Virtual Machines write with a different container/folder structure in Azure blob than Azure WebSites. Our current ingestion processes the former, not the latter.

    Anyhow, also consider the ‘generic’ idea of a platform feature to ingest your own logs http://feedback.azure.com/forums/267889-azure-operational-insights/suggestions/7928931-collect-data-from-custom-containers-in-storage-acc

  5. Collect Azure Web Apps IIS Logs and Application Logs

    OMS can collect IIS logs for web roles. Extend this capability to Azure Web Apps IIS logs, as well as Azure Web Apps application logs.

    123 votes
  6. Populate ComputerIP field with agent manager Computer IP address

    ComputerIP is populated with the IP address from which Azure Log Analytics receives data. For nodes behind a firewall/proxy or the OMS Gateway, this means ComputerIP holds the external IP address of the proxy.
    ComputerIP should instead contain the IP address(es) collected by the agent on the computer hosting it, to enable compliance and security scenarios in the console.
    A RemoteIPAddress field could be added to hold the external IP address for proxy-based agents, and would contain the same address as ComputerIP for agents not behind a proxy/firewall/gateway.
    The current behavior has a serious impact on compliance.

    100 votes
  7. Data Retention Intervals By Data Type

    Would like to request a data retention interval by data type (Similar to what is done in SCOM.) Specifically, the ability to set retention timeframes on "Performance Data", "Event data", and "Analytic Data."

    100 votes
  8. Modify Extraction for Existing Custom Fields

    Need to have the ability to modify the extraction criteria for existing Custom Fields. I have added a handful based on SharePoint ULS, but they aren't always matching properly. The only way that I have found to "improve" them is to remove the column and re-add it again.

    79 votes
  9. Log Filtering

    I want to be able to filter stuff I don't want to collect in logs. For example with ACS (in SCOM) I could apply filters that didn't collect system logins. I would like this functionality in all logs, for example I would want to filter IIS logs to remove data from certain IP addresses.
    I can see customers wanting to use this type of functionality when the costs of data start to pile up.

    76 votes
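As a stopgap until such filtering exists service-side, entries can be dropped client-side before forwarding. A minimal sketch, assuming W3C-format IIS logs; the `c-ip` field name is read from the #Fields directive, and the excluded addresses are made-up examples:

```python
# Client-side pre-filter for IIS W3C logs. Not an OMS feature: a sketch of
# dropping entries from unwanted client IPs before forwarding anywhere.
EXCLUDED_IPS = {"10.0.0.5", "192.168.1.10"}  # illustrative addresses

def filter_iis_lines(lines):
    """Yield W3C log lines whose client IP (c-ip) is not excluded."""
    fields = []
    for line in lines:
        if line.startswith("#Fields:"):
            fields = line.split()[1:]   # column names follow the directive
            yield line
            continue
        if line.startswith("#"):        # other directives pass through
            yield line
            continue
        values = dict(zip(fields, line.split()))
        if values.get("c-ip") not in EXCLUDED_IPS:
            yield line

sample = [
    "#Fields: date time c-ip cs-method cs-uri-stem",
    "2020-01-01 12:00:00 10.0.0.5 GET /health",
    "2020-01-01 12:00:01 203.0.113.7 GET /index.html",
]
kept = list(filter_iis_lines(sample))
```

The entry from 10.0.0.5 is dropped; the directive line and the remaining request survive.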
  10. Collect Azure Storage Logging files

    On Microsoft Azure you can enable Azure Storage logging. The logging information is saved in a $logs container in your StorageAccount. It would be great if we can add this log information to OpInsights. More information about how you can enable this type of logging: https://msdn.microsoft.com/en-us/library/azure/dn782840.aspx

    72 votes
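Pending native support, these entries can be processed after downloading them from the $logs container with a storage SDK. A minimal parser sketch for the semicolon-delimited Storage Analytics entries; the field positions follow the Storage Analytics log format documentation, the sample line is illustrative, and the naive split does not handle quoted fields that contain semicolons:

```python
# Sketch: parse Azure Storage Analytics log entries from the "$logs"
# container. Only the first few documented field positions are mapped;
# quoted fields containing ';' would need a real CSV-style parser.
def parse_storage_log_line(line):
    parts = line.split(";")
    return {
        "version": parts[0],
        "request_start_time": parts[1],
        "operation_type": parts[2],
        "request_status": parts[3],
        "http_status_code": parts[4],
    }

entry = parse_storage_log_line(
    "1.0;2020-01-01T12:00:00.0000000Z;GetBlob;Success;200;rest-of-fields"
)
```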
  11. Delete logs from OMS workspace by type/date

    Provide the ability to delete logs by type/date. For example, if an enormous amount of logs/custom logs is generated by accident, it can increase the bill by thousands of dollars. And if the plan is Premium, with a 1-year retention policy, the billing amount can become huge.

    64 votes

    Log Analytics cost is incurred when data is sent into the service.
    Once the data has been made available for searching, there is no cost saving in deleting it (except in the case of retention).

    That said, there are cases where it might be useful to delete logs.

    Specifically, to support GDPR, we have introduced the Purge API: https://docs.microsoft.com/en-us/azure/log-analytics/log-analytics-personal-data-mgmt#how-to-export-and-delete-private-data
    Please note that this API must not be treated as a general-purpose data deletion API; it should be used for GDPR purposes only.
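For orientation, a purge request is a POST against the workspace's ARM resource with a table/filter body. The sketch below only builds the payload; the table name, column, filter value, and api-version are illustrative assumptions, so check the linked docs before using any of them:

```python
# Sketch of a Log Analytics purge request payload (GDPR scenarios only).
# Resource IDs, table name, and api-version below are placeholders.
import json

workspace_resource_id = (
    "/subscriptions/<sub>/resourceGroups/<rg>"
    "/providers/Microsoft.OperationalInsights/workspaces/<workspace>"
)
# api-version is an assumption; verify against the current docs
purge_url = (
    "https://management.azure.com"
    + workspace_resource_id
    + "/purge?api-version=2015-03-20"
)

body = {
    "table": "Heartbeat",  # assumed table name, for illustration
    "filters": [
        {"column": "TimeGenerated", "operator": "<",
         "value": "2020-01-01T00:00:00Z"}
    ],
}
payload = json.dumps(body)
# POST `payload` to `purge_url` with an ARM bearer token to submit the purge.
```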

  12. Collect ETW Trace Logs

    Windows Events collected today come only from the 'classic' NT-style event logs (Application/System), as well as from the Crimson logs (Vista and above) that are saved in EVTX format.

    It would be nice to enable collection of ETW trace logs too (.etl), like the /Analytics and /Debug logs.

    60 votes
  13. Delete Custom Logs sent via the HTTP Data Collector API

    Hi,

    Is it possible to delete Custom Logs sent via the HTTP Data Collector API?

    Thanks in advance :)

    Best Regards

    Hugo Faria

    56 votes
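For context, custom logs reach the workspace through a signed POST to the HTTP Data Collector API. A sketch of the documented signing scheme; the workspace ID, shared key, and log type are placeholders:

```python
# Sketch of an HTTP Data Collector API request, per the public docs.
# WORKSPACE_ID, SHARED_KEY, and LOG_TYPE are placeholders, not real values.
import base64
import hashlib
import hmac
import json
from datetime import datetime, timezone

WORKSPACE_ID = "<workspace-id>"
SHARED_KEY = base64.b64encode(b"<not-a-real-key>").decode()  # placeholder
LOG_TYPE = "MyCustomLog"   # records land in the MyCustomLog_CL table

def build_signature(date_rfc1123, content_length):
    """HMAC-SHA256 over the canonical string, per the API docs."""
    string_to_sign = (
        f"POST\n{content_length}\napplication/json\n"
        f"x-ms-date:{date_rfc1123}\n/api/logs"
    )
    decoded_key = base64.b64decode(SHARED_KEY)
    digest = hmac.new(decoded_key, string_to_sign.encode("utf-8"),
                      hashlib.sha256).digest()
    return f"SharedKey {WORKSPACE_ID}:{base64.b64encode(digest).decode()}"

body = json.dumps([{"Message": "hello", "Level": "Info"}])
rfc1123_date = datetime.now(timezone.utc).strftime("%a, %d %b %Y %H:%M:%S GMT")
headers = {
    "Content-Type": "application/json",
    "Log-Type": LOG_TYPE,
    "x-ms-date": rfc1123_date,
    "Authorization": build_signature(rfc1123_date, len(body)),
}
url = (f"https://{WORKSPACE_ID}.ods.opinsights.azure.com"
       "/api/logs?api-version=2016-04-01")
# POST `body` with `headers` to `url` using your HTTP client of choice.
```

As the admin response under idea 11 notes, data ingested this way cannot be deleted for cost savings; only the GDPR-scoped Purge API removes it.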
  14. 49 votes

    Here the requirement is clear/obvious. We just have not prioritized this work yet.

    The overall ‘performance’ data collection needs to be refined – not just for Linux.

    Right now we only collect/provide hourly aggregates of some specific performance counters related to Hyper-V for the ‘Capacity Intelligence Pack’ scenario.

    Real-time monitoring scenarios might need a different shape of performance data to start with, before we enable this for Linux or for Windows alike, i.e. http://feedback.azure.com/forums/267889-azure-operational-insights/suggestions/6519061-collect-custom-windows-performance-counters

  15. Make the OMS agent work on Linux with ARM architecture

    Since there are devices like the Raspberry Pi with ARM architecture, it would be great if you provided binaries for ARM-based Linux systems as well. Currently, I am unable to run the agent on a Raspberry Pi with Raspbian, despite the tutorials available on various sites.

    47 votes
  16. Custom Logs (import and delete) and add custom timestamps

    One great idea would be to create custom fields during the custom log sample process. Another good idea is to add more timestamp samples (like the ISO 8601 format YYYYMMDDThhmmss.fffK, where YYYY: year, MM: month, DD: day of month, T: delimiter, hh: hour, mm: minutes, ss: seconds, fff: milliseconds, K: time zone offset) or to add the possibility of creating a custom timestamp.
    Would it also be possible to delete some imported custom logs in order to run tests?
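The format spelled out above can be produced like this; `basic_iso8601` is a hypothetical helper name, not an OMS setting:

```python
# Sketch: emit the ISO 8601 "basic" timestamp the idea describes
# (YYYYMMDDThhmmss.fffK). Purely illustrative, not an agent feature.
from datetime import datetime, timezone

def basic_iso8601(dt):
    """Format as YYYYMMDDThhmmss.fffK, e.g. 20200101T120000.123+0000."""
    millis = f"{dt.microsecond // 1000:03d}"        # fff: milliseconds
    offset = dt.strftime("%z") or "+0000"           # K: time-zone offset
    return dt.strftime("%Y%m%dT%H%M%S.") + millis + offset

stamp = basic_iso8601(
    datetime(2020, 1, 1, 12, 0, 0, 123000, tzinfo=timezone.utc)
)
```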

    45 votes

    We’re planning on allowing you to import/export Custom Logs & Fields via the UI & ARM templates. We’re currently implementing ARM support for most of the settings in OMS.

    Thanks for sharing some of the timestamps you need. Feel free to e-mail them to me here: evanhi(at)microsoft.com

    We’re actively planning a way for you to specify timestamps yourselves.

  17. User specified delimiter for custom logs

    Request to introduce a user-defined delimiter for custom logs.

    We run into issues where we're unable to parse the RabbitMQ log timestamp format
    dd-MMM-yyyy::HH:mm:ss
    Unfortunately, there is no configuration option to change that timestamp format in RabbitMQ, so we have to implement a heavy workaround to convert it to a date-time format supported by Microsoft before forwarding it to OMS.
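The conversion workaround described above boils down to a parse-and-reformat step. A minimal sketch, with an illustrative sample value:

```python
# Sketch of the RabbitMQ timestamp conversion described in the idea:
# parse dd-MMM-yyyy::HH:mm:ss and re-emit ISO 8601 before forwarding.
from datetime import datetime

def rabbitmq_to_iso(ts):
    """'31-Dec-2019::23:59:58' -> '2019-12-31T23:59:58'."""
    return datetime.strptime(ts, "%d-%b-%Y::%H:%M:%S").isoformat()

converted = rabbitmq_to_iso("31-Dec-2019::23:59:58")
```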

    44 votes
  18. NLog target for OMS data collector API

    Please implement an NLog target for the OMS Data Collector API.

    43 votes
  19. Recursive Log Collection paths

    Recursive log collection paths for Custom Logs.

    This will help users like me who have folders containing logs as well as subfolders with more logs.
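The requested behavior can be approximated client-side today. A sketch using Python's `pathlib`, with a throwaway temp directory standing in for a real log folder:

```python
# Sketch: recursive log discovery, mimicking the requested agent behavior.
# The directory layout here is fabricated purely for demonstration.
import tempfile
from pathlib import Path

root = Path(tempfile.mkdtemp())
(root / "app" / "deep").mkdir(parents=True)
(root / "top.log").write_text("a")
(root / "app" / "deep" / "nested.log").write_text("b")

# rglob("*.log") walks every subfolder, unlike a flat directory pattern
found = sorted(p.name for p in root.rglob("*.log"))
```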

    39 votes
  20. SCCM support / Configuration Manager

    A connection for SCCM 2012 R2 so that we can see all hardware inventory data on all managed servers.

    32 votes

    Have you checked out the new ‘change tracking’ feature? http://blogs.technet.com/b/momteam/archive/2014/09/24/wish-you-knew-which-configuration-change-caused-the-issue-or-what-changed-on-a-server.aspx
    Our differentiating angle at this point is to provide supporting information and context for troubleshooting. We are more interested in ‘what changed’ and ‘what happened’ (time-based information) than in the ‘state’ or ‘snapshot’ (=‘current’ state) of objects at the moment…

    ConfigMgr is generally more focused on ‘pure’ inventory (=current state of things).

