Concurrency limit should avoid queueing new runs
I have a pipeline with a basic time-based trigger that runs it hourly. I only want one instance of it running at a time, so I set the concurrency property to 1. However, instead of skipping the new run, ADF queues it if the pipeline is already running. This is not the behavior I want. The trigger already ensures the pipeline runs hourly, and that is sufficient for my purposes. Queueing new instances just builds up a backlog and eventually causes pipeline failures.
I know I can accomplish what I want with a tumbling window trigger, but that is more complex to set up, plus it means I need an instance of the tumbling window trigger for every pipeline. It would be great if I could just tell ADF to not queue new instances if I hit my concurrency limit.
ra ra commented
That's a basic, necessary feature!
Airflow has this out of the box: you just declare a DAG (the Airflow equivalent of an ADF pipeline) with the appropriate option.
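The comment doesn't say which Airflow setting is meant; a plausible reading is the DAG-level `max_active_runs` and `catchup` parameters, which do exist in Airflow's DAG API. A minimal sketch, assuming Airflow 2.x (the `hourly_pipeline` name and empty body are illustrative placeholders):

```python
from datetime import datetime, timedelta

from airflow import DAG

with DAG(
    dag_id="hourly_pipeline",
    schedule_interval=timedelta(hours=1),  # fire once per hour
    start_date=datetime(2023, 1, 1),
    catchup=False,          # don't backfill intervals missed while not running
    max_active_runs=1,      # at most one active run of this DAG at a time
) as dag:
    ...  # tasks would be declared here
```

Note that `max_active_runs=1` caps concurrency but, like ADF, Airflow may still queue scheduled runs beyond the cap rather than discard them, so this is at best an approximation of the behavior being requested.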
I agree; I have the same issue.
At least it would be nice if one could choose whether overlapping runs should be queued or discarded. It's easy to see a need for both scenarios.
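To make the two proposed policies concrete, here is a small toy model (not ADF code; all names are hypothetical) of what "queue" versus "discard" would mean when a trigger fires while a run is already active:

```python
class PipelineRunner:
    """Toy model of the two overlap policies discussed above.

    policy="queue":   a trigger that arrives mid-run is backlogged and
                      starts as soon as the current run finishes
                      (ADF's current behavior at concurrency = 1).
    policy="discard": a trigger that arrives mid-run is simply dropped
                      (the behavior requested in this thread).
    """

    def __init__(self, policy="discard"):
        self.policy = policy
        self.running = False
        self.backlog = 0    # triggers waiting to run (queue policy)
        self.discarded = 0  # triggers dropped (discard policy)

    def trigger(self):
        """Called by the schedule. Returns True if a run starts now."""
        if self.running:
            if self.policy == "queue":
                self.backlog += 1
            else:
                self.discarded += 1
            return False
        self.running = True
        return True

    def finish(self):
        """Called when the active run completes."""
        self.running = False
        if self.policy == "queue" and self.backlog > 0:
            # Immediately start the next queued run.
            self.backlog -= 1
            self.running = True


# Usage: two triggers arrive while the first run is still going.
d = PipelineRunner(policy="discard")
d.trigger()   # starts a run
d.trigger()   # dropped: nothing queued, nothing pending after finish()
d.finish()

q = PipelineRunner(policy="queue")
q.trigger()   # starts a run
q.trigger()   # backlogged: starts automatically when finish() is called
q.finish()
```

The backlog counter shows why the discard policy matters for a slow hourly pipeline: under "queue", every hour the run overshoots adds one more pending instance, and the backlog only grows.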