Now it automatically adjusts - i.e. when looking at 7 days, each bar becomes 6 hours. It would be nice to be able to decide what interval to use.
6 hours is an odd interval. If I am looking at 7 days, I would rather see how many of those results there are per day (24-hour intervals/buckets).
If I am querying 1 or 2 days, I probably want an hourly breakdown.
The idea is to offer a drop-down that allows selecting specific aggregation intervals.
5 votes
Thanks for suggesting this feature. The current plan is to upgrade the portal with many new features, and the timeline is being re-designed as part of that.
Until then, I can only recommend using the query language to generate charts that describe this in whatever manner fits your data best.
We’ve recently upgraded the query language. Here’s an example of the new syntax, using 3-hour bins over the last two days of events:
Event
| where TimeGenerated > now(-2d)
| summarize count() by bin(TimeGenerated, 3h)
| render timechart
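For the 7-day view with daily buckets described in the idea above, the same pattern applies with a different bin size (the `Event` type is used here as an example source; substitute whichever record type you are charting):

```kusto
Event
| where TimeGenerated > now(-7d)
| summarize count() by bin(TimeGenerated, 1d)
| render timechart
```

Changing the `bin()` argument (1h, 1d, etc.) is effectively the per-query equivalent of the aggregation-interval drop-down requested here.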
We need more status detail than "let it run overnight" or "wait several hours". It would be great to get additional status, for example: "we've successfully connected to your on-premises System Center server", or "you need to set up a connection before we can pull data", or "we're currently pulling data (1 GB out of 10 GB)".
7 votes
There are a few different requirements, and different things can happen for each intelligence pack. Some of that troubleshooting needs to be done on-premises (the service doesn’t know what it has never seen…). Refer to our troubleshooting blog post for the latest guidance: http://blogs.technet.com/b/momteam/archive/2014/05/29/advisor-error-3000-unable-to-register-to-the-advisor-service-amp-onboarding-troubleshooting-steps.aspx
Nevertheless, onboarding has been greatly simplified several times in the last year (leading up to GA of the service), including the introduction of the ‘Settings’ tile (hub), and the scale of the service has improved to deal with higher data rates (basically, not making you wait too long – see http://feedback.azure.com/forums/267889-azure-operational-insights/suggestions/6519343-real-time-near-realtime-data-collection ).
We think we are in a much better place than when you filed this idea, do you agree?
I would like to be able to provide a summary for a business service. For instance, if I had a 3-tier distributed application defined in Operations Manager, report on configuration, performance, and security against the DA.
25 votes
This isn’t really a prioritized scenario at this point, as we don’t really bring all of the OpsMgr data to the cloud, but only for specific scenarios (to which you can opt in/out by adding/removing ‘intelligence packs’).
I updated the category of this idea to reflect that it is a new Intelligence Pack/scenario suggestion.
Under the direct attached storage tab it would be helpful to have another column listing the servers with the top throughput. Perhaps another column showing the lowest would also be helpful.
9 votes
I had a lot of confusion over how the Malware tile shows the worst status of the last 7 days, yet neither indicates this in the view nor can be configured to show fewer days or the current status. Only by going to the detail pane and selecting a shorter time frame can you see the near-term status. It would be OK if the top-level tile said "worst state in last 7 days", and even better if it saved my preference, for example "worst state in last 24 hours".
10 votes
If you notice, the more recent IPs are starting to indicate what time window the data refers to. There might never be a ‘global’ time window that all scenarios can snap to, but we are trying to make the tiles more informative about what period they are showing.
Also in ‘my dashboard’ (where there IS a global time selector) you will have to deal with the time dimension, which can’t always be global – see the consideration that Stas wrote on his blog here https://cloudadministrator.wordpress.com/2014/10/19/system-center-advisor-restarted-time-matters-in-dashboard-part-6/
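In the meantime, the time window can be controlled directly in search. This is a sketch assuming the `ProtectionStatus` record type collected by the antimalware solution (field names may differ in your workspace); it returns the latest status per computer over the last 24 hours instead of the tile’s fixed 7-day worst case:

```kusto
ProtectionStatus
| where TimeGenerated > now(-24h)
| summarize arg_max(TimeGenerated, ThreatStatus, ThreatStatusRank) by Computer
```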
Currently I'm testing our QA environment in SCA, and when I'm ready to start moving our production gear over, it would be nice to have the option to reset/delete all data, as opposed to closing the account, typing in a reason why, and then setting everything back up.
9 votes
Real-time cleanup would be expensive in the current architecture. When you close the account, data isn’t deleted immediately, but within several days (detailed in the terms of service and/or privacy statement), as part of grooming.
Right now you’d have to close and re-open account for this.
Anyhow, I am leaving the idea open, but we feel that at this point this is a ‘nice to have’ – not a critical one – and it would take many cycles and resources away from much higher priority work.
Hi all. I have been trying to get my security team to allow us to join the preview, but they have been pushing back. Is there a way to restrict users to viewing data only from inside the corporate network, i.e. not over the web?
Also, within the product, can you provide role-based access, e.g. so application teams only have access to app data?
50 votes
There is no differentiation between local disk on Hyper-V servers and clustered storage. There is also no apparent intelligence about VM performance when running on SMB3 file shares. I'm looking for things like "average latency for my VMs' disk access" when those VMs happen to be running on an SMB3 file share.
9 votes
Certain info in event logs, especially the security log, could be useful to hackers. It needs to be treated as sensitive info – assume it could be compromised.
8 votes
This would be invaluable for investigating failure issues and correlating them to external problems (i.e., SAN problems).
27 votes
Windows Server 2003 and earlier used plain-text log files: http://support.microsoft.com/kb/168801
Windows Server 2008 and later use ETL traces: http://blogs.msdn.com/b/clustering/archive/2008/09/24/8962934.aspx
Also refer to these generic ideas:
Text log file collection is tracked here: http://feedback.azure.com/forums/267889-azure-operational-insights/suggestions/7113030-collect-text-log-files
Collection of ETW traces is tracked here: http://feedback.azure.com/forums/267889-azure-operational-insights/suggestions/6691402-collect-etw-trace-logs
A connection for SCCM 2012 R2 so that we can see all hardware inventory data on all managed servers.
32 votes
Ability to pull in Foursquare check-ins, Twitter feeds, weather, etc., so sentiment analysis can be added to log analysis for business-related event analysis.
10 votes
Not our immediate priority, as we are focusing more on operational insights, not business ones.
For merging with non-machine, non-operational data, we think the right place, directionally, could be Power BI – check the idea of providing a ‘connector’ to it: http://feedback.azure.com/forums/267889-azure-operational-insights/suggestions/6519374-integrate-with-powerbi-allow-to-query-and-refres
Add APM collection to log management.
19 votes
This (assuming it refers to SCOM APM events?) is currently not prioritized, but let’s see where it lands.
Wouldn't it be cool if you could configure Windows Server WEF (Windows Event Forwarding – http://technet.microsoft.com/en-us/library/cc748890.aspx ) to send to Advisor for the Log Management scenario, without using the SCOM agent?
Alternatively, if one already has a forwarder/collector (WEF/WEC) architecture in place, could it be possible to use just a single SCOM agent/gateway on the collector to push the 'forwarded' logs stored on that one box up to the cloud?
359 votes
This is currently under development, scheduled to be in preview later in 2018.
Here the requirement is clear/obvious. We just have not prioritized this work yet.
The overall ‘performance’ data collection needs to be refined – not just for Linux.
Right now we only collect/provide hourly aggregates of some specific performance counters related to HyperV for the ‘Capacity Intelligence Pack’ scenario.
A real-time monitoring scenario might need a different shape of performance data to start with, before we enable this for Linux and Windows alike – i.e. http://feedback.azure.com/forums/267889-azure-operational-insights/suggestions/6519061-collect-custom-windows-performance-counters
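To illustrate, once arbitrary counters are collected, the hourly aggregates described above could be charted with a query along these lines (a sketch assuming a `Perf` record type with `ObjectName`, `CounterName` and `CounterValue` fields; the counter names shown are examples and your schema may differ):

```kusto
Perf
| where ObjectName == "Processor" and CounterName == "% Processor Time"
| summarize avg(CounterValue) by bin(TimeGenerated, 1h), Computer
| render timechart
```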
This could put your KPIs in perspective and show how 'others' are doing – i.e. "the average virtual:physical core ratio across companies of the same size is X, and in your environment it is Y", or similar.
11 votes