Add max calls per day/hour/minute configuration for throttling
Right now, any light DDoS attack that Azure does not recognize will affect me and my account. I know my service shouldn't receive more than 10,000 calls per day, but I can't set up limits on incoming requests.
"Daily Usage Quota (GB-Sec)" is not a bad idea, but to me it's an internal, synthetic unit. Calls per day is a much more natural metric for users.
The current plan is to provide throttling controls on a per-instance basis, which would enable limiting the number of executions. This can cover scenarios where downstream resources cannot be strained, and even DoS attacks.
Merging this with a duplicate related to global throttling controls. We believe that an actual per-execution throttle will be more effective than a server limit.
This is how AWS solves it, but do not repeat their mistake: they tie together a single solution for two separate problems:
1. Reserving instances for each function.
2. Limiting the number of instances of a particular function.
These two features are very important.
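For reference, the AWS mechanism being described is Lambda's reserved concurrency, which in one setting both caps a function's concurrent instances and reserves that capacity from the account pool. A sketch of the CLI call (`my-function` is a placeholder name):

```shell
# Cap (and reserve) concurrency for one function at 30 instances.
aws lambda put-function-concurrency \
    --function-name my-function \
    --reserved-concurrent-executions 30
```

The coupling complained about above is visible here: there is no way to set the cap without also taking the reservation.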
Interesting: I can't seem to get past 20 nodes, but I would like to provision x nodes before running a simulation.
Today, a function app can scale out to up to 200 servers and there's no way to limit that. That's a problem when your functions connect to connection-limited resources such as SQL DBs. When a queue-triggered function connects to a SQL DB to perform a task, having 200 servers fire up the same function (even with batchSize=1) creates a lot of connections to the DB. This means you need to configure your DB on a more expensive plan to allow it. In my case, even 4000 DTUs weren't enough, since my app has 5 connection-heavy functions and the queue had >40K messages. It'd be useful if I could limit it to 30 servers and not have to pay for a super expensive DB.
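As a partial workaround today (assuming the Consumption plan), the `WEBSITE_MAX_DYNAMIC_APPLICATION_SCALE_OUT` app setting can cap scale-out, e.g. to 30 instances, though it is best-effort rather than a hard guarantee. Per-instance queue concurrency can also be tightened in host.json; a sketch using the v2+ layout (in v1 the `queues` section sits at the top level):

```json
{
  "extensions": {
    "queues": {
      "batchSize": 1,
      "newBatchThreshold": 0
    }
  }
}
```

With `batchSize` 1 and `newBatchThreshold` 0, each instance processes one queue message at a time, so the DB connection count is roughly bounded by the instance cap.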
I'd like to have this ability to protect our functions from misuse.