Support for up to five read-only replicas within the same region is now in public preview. This feature is aimed at scaling out read-heavy workloads.
Check out the documentation to learn more about replicas:
- Overview: docs.microsoft.com/en-us/azure/mysql/concep..
- How to create replicas from the portal: docs.microsoft.com/en-us/azure/mysql/howto-..
Azure CLI support will follow shortly.
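Once a replica is provisioned, applications scale out reads by sending writes to the master and read-only queries to a replica. A minimal sketch of that pattern in Node.js (assuming the mysql2 package; server names, credentials, and the schema are placeholders):

```typescript
// Minimal read/write split: writes go to the master, reads to a replica.
// All server names and credentials below are placeholders.
import { createPool } from "mysql2/promise";

const writer = createPool({
  host: "myserver.mysql.database.azure.com",          // master
  user: "admin@myserver",
  password: "<password>",
  database: "app",
});

const reader = createPool({
  host: "myserver-replica1.mysql.database.azure.com", // read-only replica
  user: "admin@myserver-replica1",
  password: "<password>",
  database: "app",
});

async function demo(): Promise<void> {
  // Writes must target the master; replicas are read-only.
  await writer.execute("INSERT INTO orders (sku, qty) VALUES (?, ?)", ["A1", 3]);

  // Read-heavy queries can be offloaded to the replica. Replication is
  // asynchronous, so reads may lag slightly behind the latest writes.
  const [rows] = await reader.query("SELECT sku, qty FROM orders WHERE qty > ?", [1]);
  console.log(rows);
}
```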
Andrea Lam on behalf of the Azure Database for MySQL product team
Read Replica isn't replication; it's lock-in. It prevents us from keeping a replica outside Azure, so it's not at all useful.
The last octet is dropped by Application Insights by design, since it represents PII (personally identifiable information). As you mentioned, you can collect it through the SDK or enrichment (i.e. customers can proactively collect PII, but it isn't collected out of the box). Can you describe a scenario where the full IP is required to triage/diagnose a production issue?
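For anyone who needs it in the meantime, a minimal sketch of the SDK/enrichment route (assuming the Node.js applicationinsights package; the client_ip property name is our own choice, not a built-in):

```typescript
// Sketch: copy the caller's full IP into a custom dimension so it is not
// lost to the default last-octet scrubbing. "client_ip" is our own
// property name, not an Application Insights built-in.
import * as appInsights from "applicationinsights";
import type { IncomingMessage } from "http";

appInsights.setup("<instrumentation-key>").start();

appInsights.defaultClient.addTelemetryProcessor((envelope, context) => {
  const request = context?.["http.ServerRequest"] as IncomingMessage | undefined;
  const baseData = (envelope.data as any).baseData;
  if (request && baseData) {
    const ip =
      (request.headers["x-forwarded-for"] as string | undefined) ??
      request.socket?.remoteAddress ??
      "unknown";
    baseData.properties = { ...baseData.properties, client_ip: ip };
  }
  return true; // keep the telemetry item
});
```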
The whole purpose of Application Insights is to find failures and attacks!! How on earth do we find the IP of an attacker constantly trying to harm the server? Is that not one of the biggest potential benefits of using Application Insights?
There are natural PII concerns in auto-collecting POST parameters, and, as Dan mentioned, it’s something you can do yourself with a telemetry initializer (a sketch follows below). That said, please keep upvoting so we can see how many folks are impacted by not having a more streamlined way of collecting POST parameters.
-AppInsights Product Management
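For those who want the do-it-yourself route today, here is one way it can look with the Node.js SDK (assuming Express with its built-in JSON body parser; the field whitelist is illustrative):

```typescript
// Sketch: attach selected POST fields to the current request's telemetry
// via the correlation context. The SAFE_FIELDS whitelist is illustrative;
// never capture credentials or other PII this way.
import * as appInsights from "applicationinsights";
import express from "express";

appInsights.setup("<instrumentation-key>").start();

const app = express();
app.use(express.json());

const SAFE_FIELDS = ["plan", "quantity"];

app.use((req, _res, next) => {
  const context = appInsights.getCorrelationContext();
  if (context && req.method === "POST" && req.body) {
    for (const field of SAFE_FIELDS) {
      if (field in req.body) {
        // Shows up as a custom dimension on the request telemetry.
        context.customProperties.setProperty(field, String(req.body[field]));
      }
    }
  }
  next();
});
```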
We’d like to hear a bit more about what specifically you have in mind with this suggestion. Please reply with as much detail as you can about what you’d like.
5 votes, 1 comment · Application Insights » Service monitoring and diagnostics
473 votes, 8 comments · Networking » Domain Name Service (DNS, Traffic Manager)
We’re tracking this on our long-term backlog. However, it’s unlikely that we’ll support this in the near future.
Thank you for the suggestion and please keep the votes and ideas coming.
2,629 votes, 110 comments · Networking » Domain Name Service (DNS, Traffic Manager)
Thanks for the suggestion. We recognise the strategic importance of DNSSEC and it is a key feature on our long-term backlog.
DNSSEC represents a very large engineering investment, and hence we have to prioritize it carefully against other work. The more customer data we have supporting the need for DNSSEC, the better prioritization decisions we can make. We appreciate your votes and your comments.
6 votes, 1 comment · Networking » Security (ACLs, Firewalls, Intrusion Detection)
This is really good feedback. We will look into this.
— Anavi N [MSFT]
Thanks for your feedback! We have plans to improve our delete experience in the coming year. In addition, we now require you to type the name of your Storage account to confirm deletion.
Soft delete + two-factor delete is different from typing the name of the storage account:
1. Mostly we copy and paste the name, which is easily visible.
2. A hacked account or an angry administrator could destroy blob storage with useful data. We did lose important data stored in S3 when our account credentials were leaked.
We need more details on what you are looking for regarding “multiplexing HTTP traffic to a single host”. The CDN is built as a reverse proxy and can be used for both static and dynamic content. For dynamic content that you don’t want cached by the CDN, you can either set the appropriate cache-control header (e.g. max-age) or use the bypass-cache capability in the rules engine of Azure CDN Premium to control this for specific content. Longer term, we are working on enabling this capability in Azure CDN Standard as well.
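To make the cache-control route concrete, here is a minimal origin-side sketch (assuming an Express app; the routes and max-age values are illustrative):

```typescript
// Sketch: mark dynamic responses as non-cacheable at the origin so a
// reverse-proxy CDN passes them through, while static assets stay cached.
import express from "express";

const app = express();

// Static assets: allow the CDN (and browsers) to cache for a day.
app.get("/assets/*", (_req, res) => {
  res.set("Cache-Control", "public, max-age=86400");
  res.send("/* static content */");
});

// User-specific content: tell the CDN and browsers not to cache it.
app.get("/api/profile", (_req, res) => {
  res.set("Cache-Control", "private, no-store");
  res.json({ user: "example" });
});

app.listen(8080);
```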
Let's say I have a website located on the US East coast, which may or may not be hosted with Azure, and people access it at secure.myportal.com. Every static resource is already served by the CDN, but dynamic user-specific content is served from the US East coast to every point in the world.
As an alternative, we can set up a different website in each Azure region and have all of the websites use a connection string to the same SQL database, which is not an ideal situation. It requires a different App Service endpoint name for each region, plus additional DNS management to serve a different CNAME based on location. That is a lot of work to make a website available in multiple locations.
Just like the CDN, if each POP could serve as a proxy to my origin website, each POP could keep an open HTTP/2 connection and multiplex the traffic of all end clients. This would significantly improve speed for end users, since each end user would connect to a local POP, and the local POP would keep one persistent HTTP/2 connection with a certain TTL.
Let's say we have a website, master.myapp.com, hosted in the US East region. The CDN only benefits us for static content served without cookies. For users in other geographic regions, master.myapp.com performs a little slower (authenticated sessions, etc.). If there were a cdn.myapp.com that routed all traffic to master.myapp.com (including cookies and authenticated sessions), then in the same region the connection to cdn.myapp.com would be quick, and if your nodes kept DNS caches and keep-alives to master.myapp.com, it would be superfast.
Thank you for your feedback. We are currently in public preview of blob storage lifecycle management. The feature set offers a rich, rule-based policy engine which you can use to transition your data to the best access tier and to expire data at the end of its lifecycle. See our post on the Azure Blog to get started: https://azure.microsoft.com/en-us/blog/azure-blob-storage-lifecycle-management-public-preview/.
For any further questions, or to discuss your specific scenario, send us an email at DLMFeedback@microsoft.com.
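To give a feel for the rule-based policy engine, here is a sketch of a policy, written as a TypeScript literal mirroring the JSON policy schema (the prefix and day thresholds are illustrative):

```typescript
// Sketch of a lifecycle management policy: tier aging blobs down and
// delete them at end of life. Prefix and thresholds are illustrative.
const lifecyclePolicy = {
  rules: [
    {
      name: "age-off-logs",
      enabled: true,
      type: "Lifecycle",
      definition: {
        filters: {
          blobTypes: ["blockBlob"],
          prefixMatch: ["logs/"], // only blobs under logs/
        },
        actions: {
          baseBlob: {
            tierToCool: { daysAfterModificationGreaterThan: 30 },
            tierToArchive: { daysAfterModificationGreaterThan: 90 },
            delete: { daysAfterModificationGreaterThan: 365 },
          },
        },
      },
    },
  ],
};
```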
854 votes, 30 comments · Networking » Domain Name Service (DNS, Traffic Manager)
This remains on our long-term backlog as something we want to offer as part of the Azure DNS service in due course. Thank you everyone for the feedback so far, and please continue to share your comments.
Along with a powerful cloud DNS ...
Wildcard support, if possible.
Automated deployment of multi-VM solutions is something we would like to add to IaaS.
A common need for users of Azure Table Storage is searching data in a Table using query patterns other than those that Table Storage provides efficiently, namely key lookups and partition scans. Using Azure Search, you can index and search Table Storage data (using full text search, filters, facets, custom scoring, etc.) and capture incremental changes in the data on a schedule, all without writing any code. To learn more, check out Indexing Azure Table Storage with Azure Search: https://docs.microsoft.com/en-us/azure/search/search-howto-indexing-azure-tables
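If you prefer to set this up programmatically rather than through the portal wizard, the same steps can be driven through the Search REST API. A rough sketch (service name, keys, table, and index names are all placeholders, and the target index is assumed to exist already):

```typescript
// Sketch: wire an Azure Search indexer to a storage table over the REST
// API. The portal's "Import data" wizard performs these same steps.
const service = "https://<search-service>.search.windows.net";
const headers = {
  "Content-Type": "application/json",
  "api-key": "<admin-key>",
};

async function wireUpTableIndexer(): Promise<void> {
  // 1. Data source pointing at the storage table.
  await fetch(`${service}/datasources?api-version=2017-11-11`, {
    method: "POST",
    headers,
    body: JSON.stringify({
      name: "orders-ds",
      type: "azuretable",
      credentials: { connectionString: "<storage-connection-string>" },
      container: { name: "orders" },
    }),
  });

  // 2. Indexer that pulls incremental changes on a schedule.
  await fetch(`${service}/indexers?api-version=2017-11-11`, {
    method: "POST",
    headers,
    body: JSON.stringify({
      name: "orders-indexer",
      dataSourceName: "orders-ds",
      targetIndexName: "orders-index", // index must already exist
      schedule: { interval: "PT2H" },  // re-crawl every two hours
    }),
  });
}
```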
Storing any textual data in Table Storage is useless until this feature is provided. Also, we want this on Table Storage, not on SQL Azure.
In addition to on-premises SQL Server, are folks interested in syncing SQL Azure to SQL Server in an Azure VM or on AWS? Or to other non-Azure properties?
@LiamCa, replication is different from Data Sync. Replication is a must because, when Azure's DB gets corrupted or data loss happens, MS will say it's your fault: "we only promise uptime; you should have taken a backup!!!" That's what all hosting providers do!! Replication is the smarter choice: you always have a most-recent backup, and you can also do some local reporting and get other similar benefits.