Add max calls per day/hour/minute configuration for throttling
Right now, any light DDoS attack that Azure does not recognize will affect me and my account. Even if I know that my service shouldn't receive more than 10,000 calls per day, I can't set up a limit on incoming requests.
"Daily Usage Quota (GB-Sec)" is not a bad idea, but to me it's an internal, synthetic measure. Calls per day is a much more natural metric for users.
Moving this work item to unplanned, as it is clear that this request is now for a global throughput limit.
We do now offer the ability to limit your maximum instances in the Premium plan, which will allow you to avoid swamping downstream resources. https://docs.microsoft.com/en-us/azure/azure-functions/functions-premium-plan#plan-and-sku-settings
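For reference, a minimal sketch of setting that cap with the Azure CLI (resource names and region are placeholders; see the linked page for the plan and SKU settings):

```bash
# Create an Elastic Premium plan that never scales beyond 10 instances.
az functionapp plan create \
  --resource-group MyResourceGroup \
  --name my-premium-plan \
  --location westus2 \
  --sku EP1 \
  --min-instances 1 \
  --max-burst 10
```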
We prioritized that above a max-calls-per-X limit, because our only control for limiting throughput is to outright deny some number of requests above a threshold.
Keep the feedback coming!
Azure Functions Team
Keith van der Meulen commented
I have a storage queue to which I submit jobs that triggers a function that sends them to a third party rate-limited API. I would like an option to rate-limit my function so that if a burst of jobs are submitted to the queue, the function will only run once every `n` seconds so that I don't start getting 429 errors back from the API. The current singleton option in the host.json is global which would affect all my functions, not just the job submission function.
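As a stopgap, the queue trigger's per-instance concurrency can be dialed down in host.json; per-instance parallelism is batchSize + newBatchThreshold, so the values below mean one message at a time on each instance. This is only a sketch: it does not enforce a cross-instance rate, and the once-every-`n`-seconds pacing would still have to be a delay inside the function body:

```json
{
  "queues": {
    "batchSize": 1,
    "newBatchThreshold": 0
  }
}
```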
Merging this with a duplicate request for throttling.
Could we have a setting that limits the number of outgoing dependency calls per second? I can see this being used to cap the number of calls per second to a downstream API so as to avoid flooding it.
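Until something like that exists in the platform, the throttle has to live in user code. A minimal per-instance sketch (Python; the 5-calls-per-second figure and `call_downstream` are made up for illustration), with the obvious caveat that N scaled-out instances still give N times the rate, which is exactly why a platform-level control is being asked for:

```python
import threading
import time

class RateLimiter:
    """Allows at most `rate` calls per second from this process only."""

    def __init__(self, rate: float):
        self._interval = 1.0 / rate
        self._lock = threading.Lock()
        self._next_allowed = 0.0

    def acquire(self) -> None:
        # Reserve the next slot under the lock, then sleep outside it
        # so waiting callers don't block each other's bookkeeping.
        with self._lock:
            now = time.monotonic()
            wait = self._next_allowed - now
            self._next_allowed = max(now, self._next_allowed) + self._interval
        if wait > 0:
            time.sleep(wait)

# Hypothetical usage: cap calls to a downstream API at 5/sec per instance.
limiter = RateLimiter(rate=5)

def call_downstream(payload):
    limiter.acquire()
    # ... issue the actual HTTP request here ...
```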
Olav Tollefsen commented
This does not appear to solve the issue of setting a maximum request rate (per day, per second, or whatever) in order to avoid having to deal with massive numbers of 429 errors when the function calls a service that has a limit on the number of requests it can handle.
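In the meantime, callers are stuck handling those 429s themselves. A minimal sketch of that handling (Python with the requests library; the URL, payload, and retry counts are illustrative):

```python
import time
import requests

def call_with_backoff(url, max_retries=5):
    """POST to a rate-limited API, backing off on HTTP 429 responses."""
    delay = 1.0
    for attempt in range(max_retries):
        resp = requests.post(url, json={"job": "example"})
        if resp.status_code != 429:
            return resp
        # Prefer the server's hint when provided, else exponential backoff.
        retry_after = resp.headers.get("Retry-After")
        time.sleep(float(retry_after) if retry_after else delay)
        delay *= 2
    raise RuntimeError("gave up after repeated 429 responses")
```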
I think you're missing the point; this was a request for the Consumption plan.
The problem is that in the Consumption plan there is no maximum instance count,
and this creates problems downstream, for example with open SQL connections, Redis cache connections, etc.
Is there going to be a solution for the Consumption plan?
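For what it's worth, there is an app setting that caps Consumption-plan scale-out, though the docs describe it as best-effort rather than a hard guarantee. A sketch with the Azure CLI (resource names are placeholders):

```bash
# Best-effort cap on Consumption-plan scale-out; not a hard guarantee.
az functionapp config appsettings set \
  --resource-group MyResourceGroup \
  --name my-function-app \
  --settings WEBSITE_MAX_DYNAMIC_APPLICATION_SCALE_OUT=5
```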
Merging this with a duplicate related to global throttling controls. We believe that an actual per-execution throttle will be more effective than a server limit.
This is how AWS solves it.
But do not repeat their mistakes: they tie together the solutions to two separate problems:
- reserving instances for each function;
- limiting the number of instances for a particular function.
These two features are both very important (see the sketch below).
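For reference, the AWS knob in question: a single reserved-concurrency number both sets capacity aside for a function and caps it, which is exactly the coupling described above. A sketch with the AWS CLI (function name is a placeholder):

```bash
# One number both reserves capacity for this function and caps it.
aws lambda put-function-concurrency \
  --function-name my-function \
  --reserved-concurrent-executions 50
```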
Interesting. I can't seem to get past 20 nodes, but I would like to provision a given number of nodes before running a simulation.
Today, a function app can scale out to up to 200 servers and there's no way to limit that. That's a problem when your functions connect to connection-limited resources such as SQL databases. When a queue-triggered function connects to a SQL DB to perform a task, having 200 servers fire up the same function (even with batchSize=1) creates a lot of connections to the DB. This means you need to move your DB to a more expensive plan to allow it. In my case, even 4000 DTUs weren't enough, since my app has five connection-hungry functions and the queue had >40K messages. It'd be useful if I could limit it to 30 servers and not have to pay for a super-expensive DB.
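To make the scale of the problem concrete, a hedged back-of-envelope count (the assumed two pooled connections per function per instance is illustrative; real pools are often larger):

```
 30 instances x 5 functions x 2 pooled connections =   300 connections
200 instances x 5 functions x 2 pooled connections = 2,000 connections
```

Capping the pool in the connection string (e.g. ADO.NET's Max Pool Size keyword) only bounds each instance; without a cap on instance count, the aggregate still grows with scale-out.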
I'd like to have this ability to protect our functions from misuse.