We have identified an issue with Windows Server 2016 when it is booted in Secure Boot mode that causes this error. We are working with the signing team to resolve this issue.
We are still actively working through a resolution for this with internal signing teams. Thanks for your patience.
It should only take a minute or so. Are you trying to see data from the Gateway specifically?
What are you looking to see on your server?
Thank you for reporting this. A fix for this issue has been deployed. Please let us know if you still see the issue.
Thanks for your feedback. This work is now planned.
How many are interested in this?
Also see the partially-related idea http://feedback.azure.com/forums/267889-azure-operational-insights/suggestions/6756464-add-copy-and-paste-in-windows-phone-app
Hi, do you see this as an extension of the analysis of the ‘SQL Assessment’, or as a completely new solution tailored just for Azure SQL instances/databases?
What is most important to you in this area?
Sorry for the tardy reply.
Yes, indeed it would be a different solution with different best practices, etc.
We (OMS) are not currently working on that ourselves, nor is the CSS organization that owns the SQL Assessment.
We had actually spoken with the Azure SQL team in the past and there was some interest there in doing that themselves as 'first class', but I am not sure where that idea stands. I would try suggesting the same idea (either 'standalone' or 'in OMS' - your preference :-)) in the Azure SQL forum: http://feedback.azure.com/forums/217321-sql-database
Please review the steps in the following wiki for troubleshooting guidance:
The Capacity Planning solution provides capacity planning for System Center Virtual Machine Manager (VMM) private clouds, so you will need VMM connected to System Center Operations Manager (SCOM) and SCOM connected to the Operations Management Suite.
In the search window, if you do a search for:
You should get 3 subheadings (we call them perspectives):
1/ Logs - this is the default and will show the counters as log entries.
2/ Minify - this doesn't currently apply to perf data
3/ Metrics - clicking on this will show the perf counters as graphs
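For example, a query along these lines (just an illustration - the object and counter names are assumptions, substitute the ones you actually collect):
Type=Perf ObjectName=Memory CounterName="Available MBytes"
Switching to the Metrics perspective on that result set should render each counter/instance combination as a chart.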
For monitoring VMs in Azure you do not need a SCOM Server.
There are a couple of solutions that are specific to SCOM: Capacity Planning and Alert Assessment. If you do not have SCOM you can remove these solutions.
Setting up OMS to monitor Azure VMs is described here:
To collect performance counters you will need to install the agent.
Related feedback to this request includes:
Collect perf counters from Azure storage (azure diagnostics)
Collect perf counters from UNIX/Linux
Capacity Planning without using VMM
Even if ApplicationProtocol is reported as Unknown, you should be able to see the underlying transport/network protocols as TCP, UDP, etc.
At this stage, ApplicationProtocol is limited to a well-known set of protocols based on standard port mapping logic in the MP. If the traffic does not use the standard port number, the application protocol is shown as Unknown.
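For example, a search of this shape (a sketch - the wire data field names here are assumptions about the shape of the collected records) would show the traffic that only got classified at the transport level:
Type=WireData ApplicationProtocol=Unknown | measure sum(TotalBytes) by ProtocolName
where ProtocolName would still come back as TCP, UDP, etc.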
It’s on our radar to see how this can be improved in future iterations. We only recently enabled this first version of wire data and are gathering feedback on how to evolve it; we are still in the process of documenting it.
We can't special case too much (i.e. look up the machine's configuration) just for SQL or any other workload - it's not a scalable approach to maintain on our end. Also, it would only work on the RECEIVING (server) side, not on the client.
It’s on our radar to see how protocol detection can be improved in future iterations.
On a parallel track, if you know your ports, you might also be interested in TAGGING. This is not currently tracked as an idea here, but it represents a variation of the 'custom fields' concept we are thinking about (http://blogs.technet.com/b/momteam/archive/2015/08/18/create-your-own-fields-in-oms-with-custom-fields.aspx). The difference is that you wouldn't COPY the extracted value over to a new field, but would use the extracted value to determine how to 'TAG' certain records (= give a static value to a new field) - in this case with a string representing the 'SQL' protocol for your known, chosen custom ports.
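To make the tagging idea concrete with a rough sketch (TAGGING itself does not exist yet; the port number and field names below are assumptions used only for illustration), today you could already isolate the traffic by its known port:
Type=WireData (LocalPortNumber=1433 OR RemotePortNumber=1433)
The TAGGING variation would let you stamp a static value such as 'SQL' into a new field for exactly those records, so you could later search on that tag directly.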
Can you elaborate?
On the free tier there is no SLA for the time it takes to process the data. Some scenarios (events/logs) are more ‘real time’ than others – i.e. some of the assessments run once a week… It depends; it’s not clear what you are seeing.
We have an equivalent idea here for the SQL/AD assessment - those that run once a week - http://feedback.azure.com/forums/267889-azure-operational-insights/suggestions/8673676-manually-trigger-sql-assessments
Shall I merge it?
This is not something 'global' - it needs to be implemented on a per-solution or per-scenario basis, so I need to understand whether the ask is mainly for those CSS assessments.
We now support retaining data for up to 2 years:
Thanks for your feedback. Indeed. In SCOM, you can look at those rules.
The old ‘Configuration Assessment’ – that was opaque. But all the new solutions just import MPs whose rules you can see in SCOM, take apart, and study. All in all, we think this is a much more transparent approach than the old one… does that make sense?
Understood, but today we don't have the infrastructure to show those things in the cloud.
The start of that is the customer-defined policy they can setup/decide themselves (i.e. Logs, and soon Performance Counters).
But for the other 'solutions', these are not stored as individual config items per tenant - they are just the MPBs that go down (to both SCOM and the Direct agent) at this stage - so the only way to know *for sure* is to look at them.
While documentation can always follow, transparency is a good place to start - as opposed to the previous opacity, don't you think? Sure, not everybody might have the skills to look at the XML - but you *can* do that. And with the Direct agent, while you don't have the SCOM console and Authoring pane, you can still look at those MPs' XML from disk directly (maybe open them in MPViewer?).
Not saying 'no' - just that to surface in a way similar to SCOM will need a lot more infrastructure that has not been built yet.
Verifying what's already downloaded in SCOM is another point too - and what about TEST and validation environments?
Please also check this related thread, http://feedback.azure.com/forums/267889-azure-operational-insights/suggestions/7161777-intelligence-pack-updates, which might be a cheaper/faster thing to do to help drive those conversations.
You can do conversions in the search language, for example:
Type=Perf CounterName="Available Memory MB" | measure avg(div(CounterValue, 1024)) as MemoryGB by Computer
Do you ALSO need to do this type of format change/math against AGGREGATED results (from Measure)?
With datetime you already have some form of that today with measure count() by TimeGenerated interval 1DAY/6HOURS/whatever interval - which lets you 'bucketize' times without trimming them down as strings.
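For example, something along these lines (a sketch using the interval support mentioned above; adjust the filter to your own data):
Type=Event EventLevelName=error | measure count() interval 6HOURS
buckets the matching records into 6-hour slices without having to manipulate TimeGenerated as a string.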
But besides that, supporting the same with other data types might be very expensive initially and we were so far considering it more of a P2, where the P1 would be the support in 'SELECT' that I described in the previous comment.
Let us know your point of view, it is always appreciated!
There *IS* some limited DateTime MATH today, anyhow - but that is for choosing the dates to use in filters; this part is documented.
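For instance, a filter of the shape below (an illustration of that documented relative-date support, not a new capability):
Type=Event TimeGenerated>NOW-7DAYS
restricts the results to the last seven days without needing any conversion function.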
In the case of conversion/formatting functions, we were thinking they would belong in the 'SELECT' command by introducing an 'AS' option to it (like you can assign a name to the 'AggregatedResult' column with 'AS' in a MEASURE command). A bunch of these formatting capabilities are also present in Select-Object in PowerShell, so it could feel similar (as usual, it cannot be identical).
I.e. it could be something similar to this:
Type=Event | Select Computer as MachineName
csUriStem="/foo/bar" | select TimeTaken/1000 as Seconds
CounterName="Available MBytes" | Select Average/1024 as Giga
would that work?
Just gathering feedback on whether the current thinking makes sense.
If the workspace is closed, the web service API doesn’t allow the management group – which is still ‘connected’ from its point of view – to do anything.
If you re-register the group to a new workspace, though, it should start working again.
Those errors could be transient – MP download is attempted every 10 minutes – how often do you see those errors?
Have you checked the actual list of MPs imported in the environment? See “Verify if things are working” – procedure 1 here: http://blogs.technet.com/b/momteam/archive/2014/05/29/advisor-error-3000-unable-to-register-to-the-advisor-service-amp-onboarding-troubleshooting-steps.aspx
Also check that the MPBs that come with update rollups are imported (Windows Update does not automatically update those, as explained in the KB article for each rollup).
If a rule runs every 5 minutes:
60 (min) * 24 (hours) / 5 = 288
Unless you see close to 288 failures a day, 10 a day are just network glitches, VIP swaps, redeployments of part of the service in Azure, a hiccup at your provider or along the way... stuff like that.
With 10 failures, you should still have 278 attempts that worked :)
Ok, I got this answer from the engineers and confirmed the current limitation – we will have to make a change for this:
WorkspaceName is used to register the portal DNS, and it should be globally unique.
We cannot reuse the workspace name right now, because we don’t yet remove the DNS mapping when we do the CloseWorkspace operation.
In the current implementation, management groups CAN already be removed, but only once they are ‘stale’ (== have not reported ANY data for more than 14 days) – then the link to remove them will appear.
The number of directly reporting agents on the ‘Settings’ page is the actual number of servers registered, but the drill-down will take you to search (where servers’ presence is inferred from the data).
We will be working on options to de-register directly connected servers, similar to what we offer for SCOM management groups.
Yes, *eventually* - as the retention period elapses, not because you removed it. We only groom based on TIME and your data plan.
For Management Groups, see this older thread http://feedback.azure.com/forums/267889-azure-operational-insights/suggestions/6519336-how-to-remove-a-management-group-from-advisor
We also need to provide a means to UN-authorize/disconnect agents (like you can now REMOVE non-reporting SCOM management groups).
Deletion of DATA is NOT a goal, anyhow.
Our ‘big data’ architecture doesn’t make it easy to delete things selectively. Eventually data will age out and be groomed based on the retention period.
We do need to provide a means to de-authorize and ‘force disconnect’ agents, anyhow.
There is some more information about this topic in this other thread (I think I will later merge this with it, as they are essentially the same ask): http://feedback.azure.com/forums/267889-azure-operational-insights/suggestions/6519218-purging-non-reporting-servers - read the comment thread, as it contains information about how to get this info from search.
Also, on the 'Servers and Usage' page, those tiles for the direct agent count are now based on search data. If machines simply 'go away' and it was intentional (i.e. scale back, removed the server, whatever), they'll also 'fall off' the count on the tile after no data has been seen from them for 14 days. Everything is currently seen in terms of 'data flow'.
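If it helps, a quick way to see from search which machines have sent data in your selected time range (just a generic query built from the patterns shown elsewhere in this forum, not a dedicated feature) is:
* | measure count() by Computer
Machines that have stopped reporting will simply no longer appear in that list for the chosen interval.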
This feature is already in progress; a limited preview is expected later in 2018.
If you use Operations Manager, you can do this with Windows Events by defining your rules onprem - http://blogs.technet.com/b/momteam/archive/2014/08/27/anatomy-of-an-event-collection-rule-for-advisor-preview-advanced-targeting.aspx - rather than in the cloud policy (which is more simplified).
About the Security Intelligence Pack, I agree there needs to be some level of filtering there (certain EventIDs are useful for one scenario but not for another - even within the same security space), so we are planning to have some configuration of what to collect in that sense.
Not sure of the feasibility for IIS - today we just copy over the FILES without looking at/parsing them on-prem... so looking for IP addresses might be overkill there. But we could allow something like an 'only certain websites' type of filtering; for now it's all or nothing.
Bottom line - it depends on the data type, but yes we understand what you are saying :-)
This idea is quite broad – states, distributed applications, RBAC….
The distributed application/health state part sounds like this other one http://feedback.azure.com/forums/267889-azure-operational-insights/suggestions/6519314-business-service-distributed-application-health – should we merge them?
There is also another idea filed for 'access control' around just the (future, multiple) custom dashboards created http://feedback.azure.com/forums/267889-azure-operational-insights/suggestions/6657570-per-user-access-control-for-dashboards
For general 'scoping' (but not 'enforcing') to data of only a certain set of machines, etc - please also look at the recently enabled subquery functionality as another building block/stepping stone in that direction: http://feedback.azure.com/forums/267889-azure-operational-insights/suggestions/6519234-filter-groups-of-computers-thru-subqueries-in-n
I'll leave this as is and not merge for now, as there are multiple voters... but RBAC is a totally different beast from showing the state of objects - those scopes likely cannot be combined.
There are specific ideas tracking the things RBAC should better protect/limit, e.g. http://feedback.azure.com/forums/267889-azure-operational-insights/suggestions/6519299-only-allow-administrators-not-users-to-onboa - feel free to file additional specific ones.
Another thing we might be able to create access controls for is dashboards (i.e. specific drill-downs/solutions/dashboards - right now there is only one 'My Dashboard', but we intend all pages to eventually be dashboards): http://feedback.azure.com/forums/267889-azure-operational-insights/suggestions/6651387-allow-to-create-multiple-dashboards
But actually separating data in search (i.e. a given user sees or doesn't see certain records) would be yet another gigantic architecture change.
What is likely more feasible in the future is allowing a federation of workspaces, or an 'uber' tenant that sees multiple smaller workspaces, rather than separating data within the same workspace - see this: http://feedback.azure.com/forums/267889-azure-operational-insights/suggestions/6519233-improve-multitenancy-for-managed-services-provider
We’re looking at using information from the Windows Security Center to collect status from non-Microsoft antimalware products.
Have you also checked this thread http://feedback.azure.com/forums/267889-azure-operational-insights/suggestions/6519211-windows-server-2008-r2-sp1-servers-are-shown-as-n ?
As per the admin response above:
[...] Right now we’re only detecting Windows Defender and System Center Endpoint Protection (SCEP) real-time clients.
If we don’t find one of these clients we use data from the Malicious Software Removal Tool and mark the server as not having real-time protection.
Signatures out of date will only show for servers that haven’t updated their AV/malware signatures in 7 days or more.
If SCEP is detected and real time monitoring is disabled we’ll report this as “no real time protection” instead of “not reporting”. [...]
That's all that is supported by the AntiMalware IP at the moment. The data is produced by the agent through scripts using various WMI and other API calls, to give you a rich shape of data that is easy to query.
Richard's point below is just that you CAN do some of the same things in a 'lightweight' manner (i.e. from Azure PaaS, getting events from Windows Azure Diagnostics), and we are exploring this as an alternative with broader applicability - but it's not the current implementation. It's just something you can do on your own with Log Management today.
Of course, if you have other products/workloads/software that log events, you can use Log Management for a multitude of scenarios: monitoring, troubleshooting, auditing, etc. You just need to know where your software logs and what it logs. E.g. see this blog post about emulating SCOM's alert rules with searches (the example is for IIS, but the idea applies to anything that logs, really): http://blogs.msdn.com/b/dmuscett/archive/2014/11/05/iis-mp-event-alerting-rules-s-opinsights-searches-equivalents.aspx
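As a small illustration of that pattern (the EventLog and Source values below are placeholders - point them at whatever your own software actually writes):
Type=Event EventLog="Application" EventLevelName=error Source="MyApplication"
Save a search like that and you effectively have the criteria of an alert rule that you can re-run on demand or keep an eye on from your dashboard.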
But this is not currently native functionality of the specialized 'AntiMalware' IP.