6 votes · 0 comments · Azure Monitor-Application Insights » Metrics & charting
6 votes · 1 comment · Azure Monitor-Application Insights » Metrics & charting
Having different retention policies for different telemetry items is a very interesting feature request. Enabling it is fundamentally non-trivial, so please keep upvoting and adding the use cases where it is necessary. Please keep in mind that today you can already retain data beyond 90 days by using continuous export to archive subsets of the telemetry to external storage or data warehouses of your choice.
-AppInsights Product Management
We have not started on this yet due to higher priorities. Will keep you posted.
Thank you for the feedback. The team is reviewing this item to prioritize it against other features planned for the semester. We will continue to revisit it and update the status accordingly.
102 votes · 6 comments · Azure Monitor-Application Insights » Service monitoring and diagnostics
Thanks for your feedback. We are working on prioritizing this work, and will post an update soon.
Great! StackExchange is going to have me covered. My team can stop custom logging of our Redis dependency once this is in place.
This feature request is very similar to what OMS supports in its alert rule configuration. We expect to have similar capabilities over the next few months. I do not have an ETA at this time, and will keep you posted.
I voted for this, though it might add confusion since the different telemetry types have different colors too. Either way, there really should be some better visualization to differentiate traces based on severity.
We are exploring how to implement this feature without compromising privacy. Thank you for all the feedback!
- Matt, Product Manager
I'll also add, that I have created a custom ITelemetryInitializer class that adds request form parameter key/value pairs to request telemetry. I have a dictionary of keys that are considered sensitive and get masked so they don't get written in plain text to AI. It would be nice to get something like this out of the box.
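The masking approach described above is language-agnostic. As a rough sketch of the core idea (in Python rather than the .NET `ITelemetryInitializer` API the commenter used; the names `SENSITIVE_KEYS` and `mask_form_parameters` are illustrative, not part of any SDK):

```python
# Illustrative sketch: mask sensitive form-parameter values before they are
# attached to request telemetry. Key names and function names are hypothetical.

SENSITIVE_KEYS = {"password", "ssn", "creditcard"}

def mask_form_parameters(form: dict) -> dict:
    """Return a copy of the form data with sensitive values masked."""
    masked = {}
    for key, value in form.items():
        if key.lower() in SENSITIVE_KEYS:
            masked[key] = "*****"  # never write the raw value to telemetry
        else:
            masked[key] = value
    return masked

print(mask_form_parameters({"user": "alice", "password": "hunter2"}))
# → {'user': 'alice', 'password': '*****'}
```

In the commenter's setup, the equivalent logic would run inside a telemetry initializer so the masked pairs land on the request telemetry's custom properties.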
The kind of scenario this helps with is recurring but intermittent errors, where it isn't easy to determine the root cause. If you can see the request body, you can look for commonality between the failed and successful requests and see whether, for example, all the failures occur with a particular request argument. This has helped us a couple of times.
I like Eduard Los' idea about opting in to specific requests that get the request body logged. Logging the whole body as a single property may not be as queryable as splitting out request parameters, but it is certainly more applicable across different request types. Maybe multiple options? (LogBodyForPath, LogBodyForRoute, LogFormParametersForPath, LogFormParametersForRoute)
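The path-based opt-in options suggested above would essentially boil down to a matching rule that decides, per request, whether the body gets logged. A minimal sketch of that decision, assuming glob-style path patterns (the function and option names here are hypothetical, not a real SDK API):

```python
# Hypothetical sketch of a LogBodyForPath-style opt-in check: the request body
# is logged only for paths that match an explicitly configured pattern.
from fnmatch import fnmatch

def should_log_body(request_path: str, opt_in_paths: list) -> bool:
    """Return True if the request path matches any opted-in glob pattern."""
    return any(fnmatch(request_path, pattern) for pattern in opt_in_paths)

opt_in = ["/api/orders/*", "/checkout"]
print(should_log_body("/api/orders/42", opt_in))  # → True
print(should_log_body("/api/users/7", opt_in))    # → False
```

A route-based variant (LogBodyForRoute) would match against the resolved route template instead of the raw path, but the opt-in shape is the same.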
This would be great! However, it would need to be tunable, for example, to mask sensitive data being sent in.
14 votes · 8 comments · Azure Monitor-Application Insights » Service monitoring and diagnostics
Would a URL link and/or an email work for such clients?
-AI Product Team
This is also helpful just for getting the details to support staff or other people who may not be able to get to a TFS/VSTS work item, or who, for that matter, don't want to. As consultants, we often work with clients who are disinclined to get into the ALM tool but may still need some of the event details for one reason or another.
This is a feature currently being investigated.
I’m coordinating with the proper teams to get a more comprehensive update on this item. Someone will follow up on the thread ASAP.
I’ve discussed an idea like this with a few of the web tooling Program Managers recently, and we all agree that the feature is indeed a legitimate one we should consider. The discussions are really early.
We do have support for uploading files to blob storage accounts directly from within VS, and we also have web publishing capabilities. Add to that the support we have in Razor for loading things like jQuery from disk when the site's running in debug mode and from a CDN otherwise, and it seems like something we could develop.
We’re looking into the options now. I’ll keep this thread updated in the future.
We’ve added the ability to set a retention policy on data from the Windows Azure Diagnostic agent to our backlog.
Thank you for the suggestion. This seems like a reasonable way to make sure that you collect all of the diagnostic data from your virtual machines. I have passed this idea on to the WAD team.
Many of the SDKs released since this original post have been updated to support more unit-testing scenarios. The management libraries are testable, and we have aspirations to provide mocking capabilities, but this isn't fully complete.
I’m not sure about other areas of the SDK, for instance being able to mock the role environment or instances. We’ve discussed this in the past (and the discussion is ongoing), and you know I’m a supporter of more testing features across the SDK landscape.
I’m marking this as “Started,” though, because all the areas of Windows Azure are committed to providing better testing features, and it is a very big priority for the ADX team that our SDKs provide the development community with good unit testing support. I’m not sure this one would ever be “completable,” as the platform keeps growing, but you can rest assured we’re addressing this more aggressively now.