Yearly update: the above is still accurate.
This continues to be unplanned. Please keep the votes coming!
This remains unplanned.
This is an awesome idea, and we’re exploring a few options to make it a reality.
However, the 600-connection limit per instance should be enough for most applications if you're reusing or closing connections. If you truly need 600 simultaneously open connections, you are likely to hit the 10-minute per-execution timeout first.
Even after we add this you will still need to be mindful of your connection management.
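The reuse pattern the reply above recommends can be sketched as follows. This is a hedged illustration, not Azure SDK code: `ServiceClient` is a hypothetical stand-in for any outbound client (HTTP, storage, database), and the point is simply that it is created once per worker process and shared across invocations, rather than constructed inside the handler.

```python
# Sketch of the connection-reuse pattern, with a hypothetical client class.
class ServiceClient:
    instances = 0  # counts how many clients (and thus connection pools) exist

    def __init__(self):
        ServiceClient.instances += 1

    def call(self):
        return "ok"


# Created once when the worker loads the module; shared by every invocation.
_client = ServiceClient()


def handler(event):
    # The anti-pattern would be `client = ServiceClient()` here, which would
    # open a fresh connection pool on every invocation and exhaust the limit.
    return _client.call()


# Many invocations still use a single client / connection pool.
results = [handler(i) for i in range(100)]
```

With per-invocation construction, 100 calls could mean 100 connection pools; with module-level reuse it stays at one, which is what keeps you under the instance limit.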
Keep the votes coming!
—Alex

Stephen Wing commented
Understood that finite resources must be protected. However, Dan has an excellent point that putting a limit on the connections is frustrating from our perspective, because we expect Functions to scale out as needed without us having to worry about the nuts and bolts of the resources used in the background. In our own case, we're running up against the limit even though 95%+ of our connections are being used to connect to other Azure services (as opposed to third-party services).
Is there any way Functions could be engineered so that additional connections beyond the 300 could still be made available to us, paid for dynamically on an as-needed basis, as we do with function execution (in GB-s)? If this were built into the pricing model, it would be far more palatable and more in keeping with the spirit of "serverless" computing. It would also encourage users to investigate excessive connection usage and reduce it where possible, since their monthly bill would partially reflect the number of connections used.
On a related note, it would be really helpful if Microsoft could put out a document explaining what types of operations actually consume these "connections", along with best practices for efficiently managing connections and disposing of them when done. It's my understanding, for example, that each connection remains in use for 4 minutes even after the function execution has completed; it feels unfair that this is outside our control, yet we are in effect penalized for it. Perhaps the Microsoft recommendation engine built into Azure could also include (voluntary) machine-based inspection of Function code that makes a recommendation when sockets are not being used efficiently. For example, see:
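The disposal practice this comment asks about can be sketched in a few lines. This is a hedged illustration with a hypothetical `Connection` class, not any real SDK: the point is that wrapping a one-off connection in a `with` block guarantees it is released as soon as the call finishes, instead of lingering idle against the instance limit.

```python
# Sketch of prompt connection disposal, using a hypothetical Connection class.
class Connection:
    open_count = 0  # how many connections are currently held open

    def __init__(self):
        Connection.open_count += 1
        self.closed = False

    def close(self):
        if not self.closed:
            self.closed = True
            Connection.open_count -= 1

    def __enter__(self):
        return self

    def __exit__(self, *exc):
        # Runs even if the body raises, so the connection is always released.
        self.close()


def one_off_call():
    # `with` releases the connection the moment the call completes,
    # rather than leaving it counted against the limit.
    with Connection() as conn:
        return "done"


result = one_off_call()
```

For connections that are used on every invocation, reuse a shared client instead; explicit disposal is for genuinely one-off connections.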
Update: Still planned!
This is something we have enabled internally, and we are planning to surface TCP connection usage for customers in the "Diagnose and solve problems" tab. However, we do not have an ETA yet.
Thanks for the feedback!
Azure Functions Team

Stephen Wing commented
Yes, this is something that's badly needed; we're running into issues with this in Functions as well. Maarten, you may wish to look at the following link to possibly resolve your underlying issue:
Unfortunately, this does not solve our issue, because it's primarily calls to other Azure services that are using up all of our connections...