How can we improve Azure Networking?

Increase Idle Timeout on Internal Load Balancers to 120 Mins

We use Azure Internal Load Balancers to front services with direct port mappings, where backend connections can remain idle for longer than the ILB's 30-minute idle timeout upper limit. That is, our ILBs accept connections on a nominated set of ports and pass those connections to the backend services running on the same ports.
We are experiencing dropped TCP connections from clients connecting to the backend services via the ILB. After investigating the issue in collaboration with the Azure Networking Team, it was verified that lowering the default OS TCP keepalive time to below 30 minutes would mitigate the issues arising from the DNAT performed by the ILB. However, reducing the keepalive time below 30 minutes at the OS level on every server and container across our estate is undesirable due to the scale of the impact it would have.
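For context, the per-socket form of that keepalive mitigation would look roughly like the sketch below. This is a minimal illustration only, assuming Linux clients and Python; the helper name and the 25-minute idle value are our assumptions, not something Azure prescribes, and TCP_KEEPIDLE is a Linux-specific socket option:

    import socket

    # Hypothetical helper: opens a TCP connection that sends keepalive
    # probes before the ILB's 30-minute idle timeout can fire.
    def connect_with_keepalive(host: str, port: int,
                               idle_secs: int = 1500) -> socket.socket:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        # Enable TCP keepalive on this socket only, avoiding an
        # estate-wide OS-level change.
        s.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
        # First probe after 25 minutes idle (assumed value), i.e. safely
        # inside the ILB's 30-minute idle timeout.
        s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, idle_secs)
        # Then probe every 60 seconds, giving up after 5 failed probes.
        s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 60)
        s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 5)
        s.connect((host, port))
        return s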
Therefore we would like the idle timeout upper limit on the ILB to be increased to 120 minutes, bringing it in line with the default keepalive time used by the Linux network stack.
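(For reference, 120 minutes matches the Linux kernel default of 7200 seconds for tcp_keepalive_time, which can be confirmed on any Linux host; a minimal check in Python:)

    from pathlib import Path

    # Linux kernel default is 7200 seconds (120 minutes).
    secs = int(Path("/proc/sys/net/ipv4/tcp_keepalive_time").read_text())
    print(f"tcp_keepalive_time = {secs}s ({secs // 60} minutes)")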

126 votes
T Collins shared this idea

2 comments

  • Dave Paddon commented:

    We have the same issue; I have raised a support case because our transfers sometimes fail. AWS supports an idle timeout of up to 4000 seconds (~66 minutes), so something in that range would work for me.
