Add pay-as-you-go connector APIs rather than rate limiting
It would be very useful to have the option to exceed the current limits set for some of the connectors. For example, sending messages to a queue one by one can very quickly hit the limit.
Depending on the connector, rate limiting either protects the backend service from abuse or prevents a single user from monopolizing the shared application resources behind the service. Rate limits are re-evaluated based on usage and scale needs. If there’s a particular connector that is not meeting your throughput requirements, let us know. We also recommend the Circuit Breaker enterprise integration pattern for handling endpoint rate limits.
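The Circuit Breaker pattern mentioned above can be sketched roughly as follows. This is a minimal, illustrative implementation (the class name, thresholds, and exception types are assumptions, not any specific library's API): after a configurable number of consecutive failures, the breaker "opens" and rejects calls immediately until a reset timeout elapses, sparing the rate-limited endpoint from further traffic.

```python
import time


class CircuitBreaker:
    """Minimal circuit-breaker sketch; names and defaults are illustrative."""

    def __init__(self, failure_threshold=3, reset_timeout=30.0):
        self.failure_threshold = failure_threshold
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None  # None means the circuit is closed

    def call(self, func, *args, **kwargs):
        # While open, reject calls until the reset timeout has elapsed.
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_timeout:
                raise RuntimeError("circuit open: endpoint is rate limited")
            # Timeout elapsed: half-open, allow one trial call through.
            self.opened_at = None
        try:
            result = func(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.failure_threshold:
                self.opened_at = time.monotonic()  # trip the breaker
            raise
        self.failures = 0  # a success resets the failure count
        return result
```

In practice the breaker would wrap each connector call (for example, a queue send), so repeated 429 responses trip it and downstream retries back off instead of hammering the limit.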