Dynamic "newBatchThreshold" on queue-triggered functions
When I host a queue-triggered Azure Function on an App Service, I would like the ability to programmatically alter "newBatchThreshold" for that function at runtime.
That way, when extraordinary conditions occur, I can decrease the number of concurrent messages putting stress on the system.
I have many messages passing through a queue-triggered function. Most of the time they are easy to process, so I have "newBatchThreshold" set to 64.
Processing these messages normally generates 20-30% database load.
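For context, in the v2+ host schema this setting is configured statically in host.json under the queues extension (the exact nesting can vary by host version; "batchSize" shown here is illustrative):

```json
{
  "version": "2.0",
  "extensions": {
    "queues": {
      "batchSize": 32,
      "newBatchThreshold": 64
    }
  }
}
```

Because host.json is only read at startup, there is currently no supported way to change this value while the host is running, which is what motivates this request.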
We recently processed a batch of data in which every message triggered a substantially more complex workflow than normal (instead of roughly 1 in 1000 triggering it), and our database was snowed under.
There was nothing I could do to tell our App Service to back off.
Feature request: in a manner similar to the dynamic scale-out rules of an App Service, it would be nice if I could say 'on a timed-out database connection, decrease "newBatchThreshold" by 2.'
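A minimal sketch of the rule I have in mind, as plain Python. Nothing here is an existing Azure Functions API; the function name, the step of 2, and the floor of 1 are all hypothetical, chosen to illustrate the requested behavior:

```python
def adjust_threshold(current: int, db_timeout: bool,
                     step: int = 2, floor: int = 1) -> int:
    """Return the new newBatchThreshold after observing one event.

    Hypothetical rule: each database connection timeout lowers the
    threshold by `step`, but never below `floor`; otherwise the
    threshold is left unchanged.
    """
    if db_timeout:
        return max(floor, current - step)
    return current
```

Applied repeatedly, this would let the host throttle itself down from 64 toward 1 as timeouts keep occurring, instead of continuing to pull full batches against a struggling database.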
A second scenario where this would have been useful: there was a system outage while there were many messages in the queue.
With a 100% failure rate (after several rapid retries), every message was poisoned. We were poisoning messages at about 100x our normal throughput for the duration of the outage.
Feature request: in a manner similar to the dynamic scale-out rules of an App Service, it would be nice if I could say 'if the error rate exceeds 95% of all messages, set "newBatchThreshold" to 1.'
That way, during an outage, you don't attempt 100 operations in parallel when you suspect all of them will fail.
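The second rule could be sketched the same way. Again this is purely illustrative: the function, the 95% cutoff as a parameter, and the "normal" value of 64 are assumptions standing in for whatever the platform would actually expose:

```python
def threshold_for_error_rate(failed: int, total: int,
                             normal: int = 64, cutoff: float = 0.95) -> int:
    """Hypothetical rule: collapse newBatchThreshold to 1 when more
    than `cutoff` of the recently observed messages failed, and
    restore the normal value otherwise."""
    if total > 0 and failed / total > cutoff:
        return 1
    return normal
```

Evaluated over a sliding window of recent messages, this would make the host probe the downstream system with single messages during an outage, and spring back to full throughput once messages start succeeding again.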
(Incidentally, we managed to hit the 100 GB Azure Application Insights logging cap in 3 hours with all the error messages logged; we normally produce 5-10 GB of logs per day.)
This is great feedback!
We have no immediate plans to implement this, but we do plan to tweak the batching behavior in our triggers, and I will update this item if it is easy to add this functionality at the same time.
Azure Functions PM Team