The dynamic tier should never run out of sockets
If you have too many connections, you can get SocketExceptions. The dynamic tier was meant to stop us from having to think about server instances, but with a connection limit, the dynamic tier is useless and we are back to the standard service plans.
We’re looking at a lot of improvements around this experience. However, because of the nature of Functions (you control the code), you will always need to be mindful of socket exhaustion.
My case is even more frustrating: my C# (pseudo)code looks like this:
And when I get 10k items in the container, I start encountering all kinds of socket exceptions, failed Azure Functions, silent function failures, etc. I'm using the .NET Azure Storage SDK 9.1.1, and it spawns an insane number of connections without reusing them, closing them in time, or perhaps closing them at all.
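Since the original code isn't shown, this is an assumption on my part, but the symptoms above usually come from constructing a new storage client per item or per invocation. A minimal sketch of the fix with the 9.x Storage SDK (`Microsoft.WindowsAzure.Storage`): create the client once, statically, and reuse it everywhere.

```csharp
using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

public static class BlobHelper
{
    // Created once per host instance and reused by every invocation,
    // so the underlying connection pool is reused as well instead of
    // a fresh set of sockets being opened per item.
    private static readonly CloudBlobClient Client =
        CloudStorageAccount
            .Parse(Environment.GetEnvironmentVariable("AzureWebJobsStorage"))
            .CreateCloudBlobClient();

    public static CloudBlobContainer GetContainer(string name) =>
        Client.GetContainerReference(name);
}
```

The same pattern applies to `HttpClient` or any other client type: a `static readonly` field outlives individual executions, which is exactly what you want on the consumption plan.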
Stephen Wing commented
Understood that finite resources must be protected. However, Dan has an excellent point that putting a limit on the connections makes things frustrating from our perspective, because we expect Functions to scale out as needed without us having to worry about the nuts and bolts of the resources used in the background. In our own case, we're running up against the limit even though 95%+ of our connections are being used to connect to other Azure services (as opposed to third-party services).
Is there any way Functions could be engineered so that additional connections beyond the 300 could still be made available to us, but we pay for them dynamically on an as-needed basis, as we do with function execution (in GB-seconds)? If this were built into the pricing model, it would be far more palatable and more in keeping with the spirit of "serverless" computing. It would also encourage users to investigate and reduce excessive connection usage, since their monthly bill would partially reflect the number of connections used.
On a related note, it would be really helpful if Microsoft could put out a document explaining what types of operations actually use up these "connections", as well as best practices for efficiently managing connections and disposing of them when done (it's my understanding, for example, that each connection remains in use for 4 minutes even after the function execution has completed--it seems unfair for this to happen outside of our control, yet we are in effect penalized for it). Perhaps the Microsoft recommendation engine built into Azure could also include (voluntary) machine-based inspection of Function code that makes a recommendation when sockets are not being used efficiently. For example, see:
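While we wait for such a document, the connection-lifetime behavior described above can be influenced from user code on the classic .NET Framework stack. This is a hedged sketch, not official guidance, and the specific timeout values are illustrative assumptions:

```csharp
using System;
using System.Net;

public static class ConnectionTuning
{
    public static void Apply(Uri serviceUri)
    {
        // Cap concurrent connections per endpoint (the classic
        // .NET Framework default outside ASP.NET is only 2).
        ServicePointManager.DefaultConnectionLimit = 50;

        ServicePoint sp = ServicePointManager.FindServicePoint(serviceUri);

        // Recycle pooled connections after 60 s instead of holding
        // them open for the full default idle period.
        sp.ConnectionLeaseTimeout = 60 * 1000;

        // Close connections that have sat idle for 30 s.
        sp.MaxIdleTime = 30 * 1000;
    }
}
```

These `ServicePointManager`/`ServicePoint` knobs only affect the `HttpWebRequest`-based stack (which the older Storage SDKs use); they don't remove the platform's connection limit, but they can shorten how long finished executions keep sockets counted against it.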