3 votes · 3 comments · Azure Monitor-Log Analytics » Solutions / Packs Gallery and new IP ideas · Admin →
‘owned’ currently means ‘added’ to this workspace.
We chose the term for consistency with the wording in the Windows Store, but there is no concept of a ‘user’ acquiring an IP and then adding it to their workspace(s). You always just add it to the workspace, and that’s it.
But we’ll consider a better/clearer terminology if this is unclear, thanks for the feedback.
You seem to like just ‘Added’?
Plus, very simply put, the IPs 'owned' by a certain workspace (== those that have been added to that workspace) are the ones for which you see tiles on the overview screen.
There is NO concept of a USER owning an IP and ADDING it to his various workspaces.
The ONLY thing that you do is ADD to the workspace - when you do, it shows as 'owned' (in the workspace).
The workspace is the unit of configuration (adding/removing - if added, the workspace 'owns' it - not the user); there is no concept of a user purchase of an IP - at least not at the moment.
Hope it clarifies,
125 votes · 3 comments · Azure Monitor-Log Analytics » Solutions / Packs Gallery and new IP ideas · Admin →
No current plans for BizTalk support; however, this is something in our team's backlog and we will revisit it in the future. Thank you.
I briefly looked at the BizTalk 2013 MP here http://www.microsoft.com/en-us/download/details.aspx?id=39617
You have an opportunity to do what I described in this blog post http://blogs.msdn.com/b/dmuscett/archive/2014/11/05/iis-mp-event-alerting-rules-s-opinsights-searches-equivalents.aspx and extract some of those rules and build the search equivalents (typically a much smaller, more readable one-liner rather than a large XML fragment - my goal here is to drastically simplify intelligence/knowledge authoring compared to SCOM).
As we introduce more data sources (i.e. performance counters http://feedback.azure.com/forums/267889-azure-operational-insights/suggestions/6519061-collect-custom-windows-performance-counters ), a lot more scenarios will be possible (there are a lot of perf collection rules and monitors based on perf counters in that MP).
Some of those other technology-related IPs (i.e. SQL Assessment) are not about real-time monitoring of those workloads at this stage - they currently are periodic assessments of health and risk based on Microsoft support best practices.
Currently, for a more 'stateful' and reactive type of monitoring that brings the best of both worlds, you can try the 'Alert Management' IP - http://blogs.technet.com/b/momteam/archive/2014/11/12/manage-your-operations-manager-alerts-from-azure-operational-insight-with-the-new-alert-management-intelligence-pack.aspx - have you considered just using SCOM with the BizTalk MP and triage your alerts in OpInsights (soon even on your mobile phone) ?
13 votes · under review · 3 comments · Azure Monitor-Log Analytics » Browser Support · Admin →
These are certainly 'nice to have's, but not trivial to do and not strategic.
We need to allow the infra for multiple dashboards http://feedback.azure.com/forums/267889-azure-operational-insights/suggestions/6651387-allow-to-create-multiple-dashboards
and then export of those http://feedback.azure.com/forums/267889-azure-operational-insights/suggestions/6519372-allow-to-export-an-intelligence-pack-bundle-that-c
and community publishing http://feedback.azure.com/forums/267889-azure-operational-insights/suggestions/6519273-allow-me-to-submit-an-intelligence-pack-bundle-to
We need more specific scope about dashboard improvements such as tile size, positioning, etc. Eventually those dashboards (simple for now) will be THE intelligence pack pages/drill-downs that today are 'coded' pages.
This is 'extensibility' which I thought you needed; multiple panes is more of an OS/browser functionality and doesn't carry us much forward...
11 votes · 4 comments · Azure Monitor-Log Analytics » Solutions / Packs Gallery and new IP ideas · Admin →
See comments below.
Also, for 'state', I'll explain better/some more what I mean by 'inferring it from the data'.
You could decide to use the worst severity in the Windows 'System' event log as an indicator of health (the worse of warning or error - lower is worse in Windows, although 0 is 'success' and 4 is 'information'... for weird backwards compatibility reasons) - so we use the MIN function:
Type=Event EventLog=System (EventLevel=1 OR EventLevel=2) | Measure Min(EventLevel) by Computer
Now you have to look at the grid and mentally map those '1's to RED and those '2's to YELLOW. Or you can throw another filter into the query and only pick Critical - so you only get 'critical' computers - implying that any other computer that does not appear in the list must be in a 'better' state.
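For example (a sketch in the same search syntax - the only change versus the query above is tightening the filter to the worst level), a query that returns only the 'critical' computers could look like:

Type=Event EventLog=System EventLevel=1 | Measure Min(EventLevel) by Computer

Any computer this returns has logged at least one level-1 event in the selected time range; everything else can be assumed 'better'.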
Some other data types have similar properties that let you 'rank' the worst 'known info' about a given computer (grouping by the 'Computer' field). For example, in Malware Assessment there are special 'rank' fields precisely for this purpose; higher is worse in this case, so we use the MAX function:
Type=ProtectionStatus | measure max(ThreatStatusRank) as WorstRank by Computer
Type=ProtectionStatus | measure max(ProtectionStatusRank) as WorstRank by Computer
See? You can basically 'derive' something like a 'state' by applying statistical functions to the data!
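To make the 'derive state from data' idea concrete, here is a minimal Python sketch of what the Min(EventLevel)-by-Computer aggregation does (the event rows and hostnames are made up for illustration):

```python
# Hypothetical event rows: (Computer, EventLevel); as in the System log
# example above, lower EventLevel is worse (1 = critical, 2 = warning).
events = [
    ("web01", 2), ("web01", 1), ("db01", 2), ("app01", 2),
]

# Equivalent of: ... | Measure Min(EventLevel) by Computer
worst = {}
for computer, level in events:
    worst[computer] = min(worst.get(computer, level), level)

# Map the aggregated number to a 'state' - nothing is persisted anywhere,
# it is recomputed from the raw data every time the query runs.
state = {c: ("RED" if lvl == 1 else "YELLOW") for c, lvl in worst.items()}
print(state)  # {'web01': 'RED', 'db01': 'YELLOW', 'app01': 'YELLOW'}
```

The point is that 'state' here is just a statistical function over the collected events, not a record written anywhere.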
Hope it makes sense/clarifies what I meant by 'not persisting state'.
You can save those searches and pin them to your dashboard and soon see them on your phone!
State in Operations Manager is persisted and updated continuously (with a LOT of database activity - and a performance hit), based on a number of 'monitors' present in management packs. In order to know 'state', you need criteria that determine what 'state' even means (when is it 'green'? when is it 'red'?).
Do you intend to SYNCHRONIZE what is in SCOM to the Cloud? (i.e. like it's now done for the Alert management IP?)
That might be doable, but it would still be only a 'copy' of what's in SCOM, for consultation purposes... that's why I was asking, but I didn't understand the answer.
In the current thinking and with the type of backend we use, we don't really intend to PERSIST any *state* information in the cloud. We don't even have *objects*. It's not like SCOM. This is all entirely based on DATA.
We have 'types' of data, but they are really not object types - a 'type' is just a field - described here http://blogs.msdn.com/b/dmuscett/archive/2014/10/19/advisor-search-first-steps-how-to-filter-data-part-i.aspx
We'd rather want to be able to INFER STATE by looking at the data and the KPI's that matter to you.
I have described some of this - and some converstion between SCOM alerting rules and 'searches' equivalent syntax in this blog post http://blogs.msdn.com/b/dmuscett/archive/2014/11/05/iis-mp-event-alerting-rules-s-opinsights-searches-equivalents.aspx
The simplest example I can give of this is to look at when a machine last reported some data - if the most recent piece of data is OLDER than 4 hours, I want to see the computer name in the results:
* | measure Max(TimeGenerated) as LastData by Computer | Where LastData < NOW-4HOURS
and if you have results... well, that IS showing you machines in a 'bad state' (=not sending frequently enough).
And you can pin that to a tile on the dashboard and make it change color if there are more than ZERO results.
There's your 'state' but we have not WRITTEN it anywhere.
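The same 'last data' check can be sketched in Python (the timestamps and computer names are invented for the example):

```python
from datetime import datetime, timedelta

# Pretend 'now' and some collected records: (Computer, TimeGenerated).
now = datetime(2015, 1, 10, 12, 0)
records = [
    ("web01", now - timedelta(hours=6)),
    ("web01", now - timedelta(hours=1)),
    ("db01", now - timedelta(hours=7)),
]

# Equivalent of: * | measure Max(TimeGenerated) as LastData by Computer
last_data = {}
for computer, ts in records:
    last_data[computer] = max(last_data.get(computer, ts), ts)

# ... | Where LastData < NOW-4HOURS
stale = sorted(c for c, ts in last_data.items()
               if ts < now - timedelta(hours=4))
print(stale)  # ['db01'] - db01 has not sent anything for over 4 hours
```

Again, the 'bad state' (not reporting frequently enough) is computed on the fly from the data, never stored.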
Or you can just look for a set of 'bad' events or conditions that you know should not happen. As soon as you see a result, that is your 'bad state'.
You just have to PIN the query that shows the 'state' (or rather, the criteria to get to that state) that you are interested in. You are essentially calculating it every time, but with this type of architecture it is actually way faster to do it this way.
In the future those searches could be running real time and produce alerts - http://feedback.azure.com/forums/267889-azure-operational-insights/suggestions/6519198-long-running-saved-searches-or-scheduled-that-ca
Let us know if this clarifies the current thinking at least a little. We understand this is a shift from previous/traditional/stateful monitoring in Operations Manager, and it is very deliberate.
Great feedback. We are looking at potentially delivering a solution in this area. Thank you!
"Microsoft-Windows-DSC/Operational" log works, anyhow. But I am not sure how much info that alone has.
Also commented on the ETW idea: our team already has an implementation of an ETL parser module for the agent, but right now this is specialized to collect some very specific telemetry from the VMM stack in Cloud Platform Systems - learn more about CPS at http://www.microsoft.com/cps
If there is enough interest we will think of making this code more generic to support other scenarios such as this one.
Well, yes Stefan - in theory. Except that currently we only pick up 'classic' and EVTX event logs, not those /analytics and /debug logs that are ETL traces under the hood - vote for this one for that: http://feedback.azure.com/forums/267889-azure-operational-insights/suggestions/6691402-collect-etw-trace-logs
then yes, after that is in place, this scenario can probably use that information as data source.
Would you consider this a part of 'Change Tracking' ? Or a separate IP? Can you elaborate a little?
60 votes · under review · 5 comments · Azure Monitor-Log Analytics » Log Management and Log Collection Policy · Admin →
Our team has an implementation of an ETL parser module for the agent, but right now this is specialized to collect some very specific telemetry from the VMM stack in Cloud Platform Systems - learn more about CPS at http://www.microsoft.com/cps
If there is enough interest we will think of making this code more generic to support other scenarios.
Daniele Muscetta shared this idea
19 votes · 1 comment · Azure Monitor-Log Analytics » Log Management and Log Collection Policy · Admin →
This (assuming it is from SCOM APM events?) is currently not prioritized but let’s see where it lands.
Gordon have you checked the 'Alert Management' IP yet?
With that, APM alerts ("Server Application Exception" and "Server Performance Exception") will also be pulled in and visible in search.
They carry the full XML payload of the original alert.
e.g. a sample query:
Type:Alert AlertSeverity:Warning AlertState!=closed AlertName:"Server Application Exception"
or I can even search the full-text index for a specific exception, or function name, etc.
Of course a shape that would offer faceting over 'request time' or 'exception class' would be better suited - but I thought I'd mention it as a step in that direction...
49 votes · 1 comment · Azure Monitor-Log Analytics » Log Management and Log Collection Policy · Admin →
Here the requirement is clear/obvious. We just have not prioritized this work yet.
The overall ‘performance’ data collection needs to be refined – not just for Linux.
Right now we only collect/provide hourly aggregates of some specific performance counters related to HyperV for the ‘Capacity Intelligence Pack’ scenario.
Real-time monitoring scenarios might need a different shape of performance data to start with, before we enable this for Linux or Windows alike, i.e. http://feedback.azure.com/forums/267889-azure-operational-insights/suggestions/6519061-collect-custom-windows-performance-counters
49 votes · 0 comments · Azure Monitor-Log Analytics » Workspace Settings / Administration · Admin →
25 votes · 0 comments · Azure Monitor-Log Analytics » Solutions / Packs Gallery and new IP ideas · Admin →
This isn’t really a prioritized scenario at this point, as we don’t bring all of the OpsMgr data to the cloud - only data for specific scenarios (to which you can opt in/out by adding/removing ‘intelligence packs’).
I updated the category of this idea to reflect that it is a new Intelligence Pack/scenario suggestion.
19 votes · 1 comment · Azure Monitor-Log Analytics » Log Management and Log Collection Policy · Admin →
We are doing work at the moment on custom fields – http://feedback.azure.com/forums/267889-azure-operational-insights/suggestions/6519270-support-regular-expressions-regex-or-xpath-to-pe – which represent a stepping stone to allowing custom data types into the system.
The first iteration will only extract new (per-tenant) fields for existing types, but later we also need to address the collection/gathering aspect (i.e. is your custom data already in Azure? http://feedback.azure.com/forums/267889-azure-operational-insights/suggestions/7928931-collect-data-from-custom-containers-in-storage-acc Or does your data come from an existing log? http://feedback.azure.com/forums/267889-azure-operational-insights/suggestions/7113030-collect-text-log-files) to allow customers to define what logs they want, where they are, what they look like, how to parse them, etc.
Basically, we might or might not address this item as an out of the box ‘solution’ but the current thinking is to leave it open until the generic platform capabilities can support it (this and any other logs, at that point).
10 votes · 0 comments · Azure Monitor-Log Analytics » Workspace Settings / Administration · Admin →
If you notice, the more recent IPs are starting to provide information about what time the data refers to. There might never be a ‘global’ time window that all scenarios can snap to, but we are trying to make the tiles more informative about what period they are showing.
Also, in ‘my dashboard’ (where there IS a global time selector) you will have to deal with the time dimension, which can’t always be global – see the considerations that Stas wrote on his blog here https://cloudadministrator.wordpress.com/2014/10/19/system-center-advisor-restarted-time-matters-in-dashboard-part-6/
We continue to investigate this. Due to the current roadmap and strategy this will remain in our backlog. Please continue to share feedback related to this topic to help us make an informed decision at a later time.
293 votes · 13 comments · Azure Monitor-Log Analytics » Log Management and Log Collection Policy · Admin →
This feature request is still under review and the team is actively prioritizing it against the existing backlog. We will keep the thread updated as we move forward.