How can we improve Microsoft Azure Functions?

The dynamic tier should never run out of sockets

If you have too many connections, you can get SocketExceptions. The dynamic tier was meant to stop us from having to think about server instances, but with a connection limit, the dynamic tier is useless and we are back to the standard service plans.

23 votes

Dan Friedman shared this idea  ·  Admin response:

This is an awesome idea, and we're exploring a few options to make it a reality.
However, the 600-connection limit per instance should be enough for most applications if you're reusing or closing connections. If you truly need 600 open connections, you are likely to run into the 10-minute timeout per execution.
Even after we add this, you will still need to be mindful of your connection management.

Keep the votes coming!
—Alex
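
The connection reuse the response above recommends usually comes down to sharing one HttpClient across invocations instead of creating one per execution. A minimal sketch (the function name and URL are illustrative, not from the thread):

```csharp
using System.Net.Http;
using System.Threading.Tasks;

public static class MyFunction
{
    // A single static HttpClient reuses pooled connections across
    // invocations instead of opening a new socket per execution.
    private static readonly HttpClient Client = new HttpClient();

    public static async Task<string> Run(string url)
    {
        // Each call draws from the shared connection pool,
        // keeping the instance well under the socket limit.
        return await Client.GetStringAsync(url);
    }
}
```

Creating a new HttpClient per invocation leaves each connection in TIME_WAIT after disposal, which is how instances exhaust their socket quota even at modest request rates.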

2 comments

  • oleg commented:

    My case is even more frustrating; my C# (pseudo)code looks like this:

    await Task.WhenAll(container.GetAllBlobs().Select(blob => blob.DownloadToFileAsync()));

    And when I get 10k items in the container, I start encountering all kinds of socket exceptions, failed Azure Functions, silent function failures, etc. I'm using the .NET Azure Storage SDK 9.1.1, and it spawns an insane number of connections without reusing them, closing them in time, or perhaps not closing them at all.
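
One way to mitigate this kind of unbounded fan-out (not suggested in the thread itself; a sketch with a hypothetical blob interface standing in for the storage SDK type) is to gate the parallel downloads with a SemaphoreSlim, so only a bounded number of connections are open at any moment:

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

// Hypothetical stand-in for the storage SDK's blob type.
public interface IBlob
{
    Task DownloadToFileAsync();
}

public static class ThrottledDownload
{
    // Cap concurrent downloads so 10k blobs never means 10k sockets;
    // 50 is an illustrative limit, not an Azure-documented value.
    private static readonly SemaphoreSlim Gate = new SemaphoreSlim(50);

    public static Task DownloadAllAsync(IEnumerable<IBlob> blobs) =>
        Task.WhenAll(blobs.Select(async blob =>
        {
            await Gate.WaitAsync();
            try
            {
                await blob.DownloadToFileAsync();
            }
            finally
            {
                Gate.Release(); // free the slot for the next blob
            }
        }));
}
```

The Task.WhenAll still awaits every download; the semaphore only limits how many are in flight simultaneously, trading some throughput for predictable socket usage.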

  • Stephen Wing commented:

    Understood that finite resources must be protected. However, Dan has an excellent point that putting a limit on the connections is frustrating from our perspective, because we expect Functions to scale out as needed, without us having to worry about the nuts and bolts of the resources used in the background. In our own case, we're running up against the limit even though 95%+ of our connections are being used to connect to other Azure services (as opposed to third-party services).

    Is there any way Functions could be engineered so that additional connections beyond the 300 could still be made available to us, but we pay for them dynamically on an as-needed basis, as we do with function execution (in GB-seconds)? If this were built into the pricing model, it would be much more palatable and in keeping with the spirit of "serverless" computing. It would also encourage users to investigate excessive connection usage and reduce it where possible, since their monthly bill would partially reflect the number of connections used.

    On a related note, it would be really helpful if Microsoft could put out a document explaining what types of operations actually use up these "connections", as well as best practices for efficiently managing these connections and disposing of them when done (it's my understanding, for example, that each connection remains in use for 4 minutes even after the function execution has completed--it's kind of unfair for this to happen outside of our control, yet we are still in effect penalized for it). Perhaps the recommendation engine built into Azure could also include (voluntary) machine-based inspection of Function code that makes a recommendation when sockets are not being used efficiently. For example, see:

    https://aspnetmonsters.com/2016/08/2016-08-27-httpclientwrong/
