Add max calls per day/hour/minute configuration for throttling
Right now, any light DDoS attack that Azure does not recognize will affect me and my account. I know that my service shouldn't receive more than 10,000 calls per day, but I can't set up limits on incoming requests.
"Daily Usage Quota (GB-Sec)" is not a bad idea, but to me it's internal and synthetic. Calls per day is a much more natural metric for users.
Here’s the latest. There seem to be two types of ask here, and so two separate updates. Comments are welcome on whether this issue should be narrowed to focus on one or the other:
1. I want to control how many calls my function can make to another API (third-party API rate limiting).
– In all plans we now have a way to specify the maximum number of instances. This can limit how far a function app can scale out: https://docs.microsoft.com/en-us/azure/azure-functions/functions-scale#limit-scale-out
2. I want to stop my function from triggering more than x times an hour.
– Nothing is planned for this in the short term. For HTTP functions, our recommendation would be API Management with throttling policies; there is nothing out of the box for non-HTTP triggers yet.
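For case 1, the scale-out limit linked above can be applied as an application setting on the function app. A sketch (the value `5` is illustrative; check the linked docs for which plans honor this setting):

```json
{
  "WEBSITE_MAX_DYNAMIC_APPLICATION_SCALE_OUT": "5"
}
```

This caps how many instances the platform will add, but it does not throttle individual executions, so it addresses case 1 and not case 2.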
Keith van der Meulen commented
I have a storage queue to which I submit jobs; this triggers a function that sends them to a third-party, rate-limited API. I would like an option to rate-limit my function so that if a burst of jobs is submitted to the queue, the function will run only once every `n` seconds and I don't start getting 429 errors back from the API. The current singleton option in host.json is global, which would affect all my functions, not just the job-submission function.
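As a partial workaround under current behavior, the queue trigger's concurrency can be reduced per instance via the documented queue settings in host.json. A sketch (this serializes message processing on each instance; it is not a true once-every-`n`-seconds throttle, and scale-out to multiple instances can still defeat it):

```json
{
  "version": "2.0",
  "extensions": {
    "queues": {
      "batchSize": 1,
      "newBatchThreshold": 0
    }
  }
}
```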
Merging this with duplicate request for throttling.
Could we have a setting that limits the number of outgoing dependency calls per second? This could be used to limit the rate of calls to a downstream API so as to avoid flooding it.
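Until something like this exists in the platform, an outgoing-call throttle can be sketched in the function code itself. A minimal token-bucket limiter under stated assumptions (all names here are illustrative, not part of any Functions API; this only throttles within one instance):

```python
import threading
import time

class TokenBucket:
    """Simple token-bucket limiter: allows `rate` calls per second on average."""

    def __init__(self, rate, capacity=None):
        self.rate = float(rate)              # tokens added per second
        self.capacity = float(capacity or rate)  # maximum burst size
        self.tokens = self.capacity
        self.updated = time.monotonic()
        self.lock = threading.Lock()

    def acquire(self):
        """Block until a token is available, then consume it."""
        while True:
            with self.lock:
                now = time.monotonic()
                # Refill tokens based on elapsed time, capped at capacity.
                self.tokens = min(self.capacity,
                                  self.tokens + (now - self.updated) * self.rate)
                self.updated = now
                if self.tokens >= 1:
                    self.tokens -= 1
                    return
                wait = (1 - self.tokens) / self.rate
            time.sleep(wait)

# Allow at most 5 outgoing dependency calls per second across threads.
limiter = TokenBucket(rate=5)

def call_downstream(payload):
    limiter.acquire()            # throttle before each outgoing call
    # ... issue the real HTTP request to the downstream API here ...
    return "sent"
```

Because the limiter lives in process memory, each scaled-out instance gets its own budget; a platform-level setting would still be needed for a global cap.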
Olav Tollefsen commented
This does not appear to solve the issue of setting a maximum request rate (per day, per second, or whatever) in order to avoid having to deal with massive numbers of 429 errors when the function calls a service that has a limit on the number of requests it can handle.
I think you're missing the point; it was a request for the Consumption plan.
The problem is that in the Consumption plan there is no maximum number of instances,
and this creates problems later on, for example: open SQL connections, Redis cache, etc.
Is there going to be a solution for the Consumption plan?
Merging this with a duplicate related to global throttling controls. We believe that an actual per-execution throttle will be more effective than a server limit.
This is the solution in AWS:
But do not repeat their mistakes;
they tie together a solution for two separate problems:
1. Reserved instances for each function.
2. Limiting the number of instances of a particular function.
These two features are very important.
Interesting. I can't seem to get past 20 nodes, but I would like to provision x nodes before running a simulation.
Today, a function app can scale out to up to 200 servers, and there's no way to limit that. That's a problem when your functions connect to connection-limited resources such as SQL databases. When a queue-triggered function connects to a SQL DB to perform a task, having 200 servers fire up the same function (even with batchSize=1) creates a lot of connections to the DB. This means you need to move the DB to a more expensive plan to handle it. In my case, even 4000 DTUs weren't enough, since my app has 5 connection-heavy functions and the queue had >40K messages. It'd be useful if I could limit it to 30 servers and not have to pay for a super-expensive DB.
I'd like to have this ability to protect our functions from misuse.