Internal load balancer VNet peering
Currently, when you connect two VNets using global VNet peering, you cannot access an internal load balancer across the peering. For example, if you have a resource behind an internal load balancer in vnet1 and you try to connect to that load balancer from vnet2, the connection fails.
This causes problems for SQL Server Availability Groups spanning two regions, because you need an internal load balancer in each region. If you then have a web farm spread over the two regions, only the web servers in the region hosting the listener address can connect to the listener. This removes one of the main benefits of using Always On availability groups in SQL Server.
Standard Load Balancer is supported across Global VNet Peering; Basic Internal Load Balancer is not. This is documented here: https://docs.microsoft.com/en-us/azure/virtual-network/virtual-networks-faq#what-are-the-constraints-related-to-global-vnet-peering-and-load-balancers
We recommend using Standard Load Balancers.
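For reference, a Standard SKU internal load balancer can be created with the Azure CLI; a minimal sketch (the resource group, VNet, subnet, and resource names below are placeholders):

```shell
# Create a Standard SKU internal load balancer. Unlike the Basic SKU,
# its private frontend is reachable across global VNet peering.
# Specifying --vnet-name/--subnet (rather than a public IP) makes
# the frontend internal. All names here are placeholders.
az network lb create \
  --resource-group myResourceGroup \
  --name myInternalLB \
  --sku Standard \
  --vnet-name vnet1 \
  --subnet subnet1 \
  --frontend-ip-name myFrontEnd \
  --backend-pool-name myBackEndPool
```

Existing Basic load balancers cannot be upgraded in place to Standard; the frontend and rules have to be recreated on a Standard SKU resource.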
- Anavi N [MSFT]
Hélder Pinto commented
Looks like this feature has been implemented in Standard Load Balancer:
Michael Lecuona commented
This feedback has a status of started in the networking forum:
Wanted to use Global VNet Peering between two regions for our SQL Server AlwaysOn Availability Group only to find it doesn't work. Having to revert back to VPNs which are more expensive. Would be much better if this worked...
Jeb Garcia commented
This seems like it would be a common use case to do for people using AKS where there's an internal load balancer.
Ben Parry commented
The lack of peering for internal load balancers is preventing us from routing to services in a kubernetes cluster.
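For context, an internal load balancer in AKS is typically requested by annotating the Kubernetes Service; a minimal sketch (service and selector names are placeholders, and reachability over global VNet peering assumes the cluster uses a Standard SKU load balancer):

```shell
# Sketch: expose a service through an Azure *internal* load balancer
# by annotating the Service manifest. Names are placeholders.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: my-internal-svc
  annotations:
    # Tells the Azure cloud provider to provision an internal
    # (private-IP) load balancer instead of a public one.
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
  - port: 80
EOF
```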
I would like this feature because it would allow us to use Azure Internal Load Balancers in front of active/active NVAs reached from peered VNets.