Great ask. Keep the votes coming. Nothing planned short term.
Please fast-track this. IMO this is the next big thing for .NET Azure Functions. It's an even bigger enabler than the (ever-popular) DI ask.
One easy example of how this would help is being able to run code before and after functions to handle cross-cutting concerns. Both Function Proxies and Function Filters are severely limited in this way, and this is a capability that every proper runtime should support.
Supporting ASP.NET Core would allow us to implement middleware and filters AND give us access to the many popular, battle-tested middleware components that already exist (IdentityServer, Ocelot, graphql-aspnetcore, etc.).
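For what it's worth, the before/after hook being asked for is trivial to express as ASP.NET Core middleware today. A minimal sketch (the timing concern and the registration call are just illustrative):

```csharp
using System;
using System.Diagnostics;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;

// Standard ASP.NET Core middleware: code runs before and after the rest of
// the pipeline — exactly the before/after hook Functions lacks today.
public class TimingMiddleware
{
    private readonly RequestDelegate _next;

    public TimingMiddleware(RequestDelegate next) => _next = next;

    public async Task InvokeAsync(HttpContext context)
    {
        var sw = Stopwatch.StartNew();           // before the function
        await _next(context);                    // the function itself
        sw.Stop();                               // after the function
        Console.WriteLine($"{context.Request.Path} took {sw.ElapsedMilliseconds} ms");
    }
}
// Wired up in Startup.Configure with: app.UseMiddleware<TimingMiddleware>();
```

That's the whole contract — no proxies.json, no attribute gymnastics.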
Great suggestion – currently planned and design is underway.
That's excellent news, Jeff. Thank you.
There are many documented performance and reliability issues on the Durable Functions GitHub repo where the root cause is Azure Storage. Making it easy to choose and use alternative data stores such as Redis or Cosmos DB would be a good thing (lower latencies for all the storage activities, at the very least).
This is an awesome idea, and we’re exploring a few options to make it a reality.
However, the 600-connection limit per instance should be enough for most applications if you're reusing or closing connections. If you truly need 600 simultaneously open connections, you are likely to run into the 10-minute timeout per execution as well.
Even after we add this you will still need to be mindful of your connection management.
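To illustrate the connection-reuse point: the usual pattern is to share a client across invocations instead of creating one per execution. A minimal sketch assuming an HTTP-calling function (the class and method names are illustrative):

```csharp
using System.Net.Http;
using System.Threading.Tasks;

public static class OutboundCall
{
    // A single HttpClient shared by every invocation on this instance, so
    // executions reuse pooled sockets instead of opening new connections.
    private static readonly HttpClient Http = new HttpClient();

    public static Task<string> Run(string url)
    {
        // new HttpClient() per invocation would leak sockets into TIME_WAIT
        // and chew through the ~600-connection instance limit.
        return Http.GetStringAsync(url);
    }
}
```

The same idea applies to DocumentClient, SqlConnection pooling, and any other client that holds sockets.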
Keep the votes coming!
Update: Still planned!
This is something we have enabled internally, and we are planning to surface TCP connection data for customers in the “Diagnose and solve problems” tab. However, we do not have an ETA yet.
Thanks for the feedback!
Azure Functions Team
The TCP connections pane has disappeared in the new version of the "Diagnose and solve problems" page. Worse, the pane wasn't very useful even in the first version: it showed the number of connections (good) but gave absolutely no details to help you diagnose what was creating them, e.g. counts of connections per remote IP address, or per Azure service (usually the culprit).
Exceeding the host connection thresholds is one of the most frustrating and frequent problems we run into when doing anything remotely high-volume with Azure Functions (we hit it on three consecutive projects with enterprise clients). There needs to be better diagnostics around this.
Nothing planned short term, but a great ask. This is an item that could potentially be community-contributed if there is interest in doing so earlier. Hoping we get this planned soon.
Reopening this UserVoice item, as our support for Skip/Take (Offset/Limit) was limited to single-partition queries.
The newly released .NET SDK v3 now includes support for cross-partition queries using Offset/Limit. You can learn more about the v3 SDK, try it out, and provide feedback on our GitHub repo here.
We will also be back-porting this functionality to our .NET v2 SDK. This work will begin shortly, and we anticipate a release in September.
Once that is released we will mark this feature as complete.
Thank you for your patience and votes.
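For anyone trying this out, a cross-partition Offset/Limit query in the v3 SDK looks roughly like this (client/container setup omitted; `MyItem` is a placeholder type):

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

public class MyItem
{
    public string Id { get; set; }
}

public static class PagedQuery
{
    public static async Task RunAsync(Container container)
    {
        // OFFSET/LIMIT now works across partitions in the v3 SDK.
        var query = new QueryDefinition(
                "SELECT * FROM c ORDER BY c._ts DESC OFFSET @skip LIMIT @take")
            .WithParameter("@skip", 20)
            .WithParameter("@take", 10);

        using (FeedIterator<MyItem> iterator =
            container.GetItemQueryIterator<MyItem>(query))
        {
            while (iterator.HasMoreResults)
            {
                foreach (MyItem item in await iterator.ReadNextAsync())
                    Console.WriteLine(item.Id);
            }
        }
    }
}
```

Note that deep OFFSET values still cost RUs proportional to the rows skipped, so this is best for shallow pagination.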
16 votes · Mayo shared this idea
Yes, please! This use case is vital: Event Hubs Archive --Data Factory--> Data Lake Store <--U-SQL ingest <--Scheduler. Right now there are many blockers on the ADLA side, around Avro support and the empty Avro files (file header, no blocks) generated by Event Hub Capture.
Even better, it would be a wow moment to extract this data (from Avro) into a table and automatically keep it up to date as new files arrive.
Using Data Lake Analytics to process Event Hub Capture files (Avro) is a huge use case, and right now it's a fairly awful experience on the Data Lake Analytics side.
There are multiple versions of the MS Avro libraries floating around (with different bugs, e.g. seekable vs. non-seekable streams), and none of them currently handle the empty Avro file (header but no blocks) produced by Event Hub Capture. It's a mess.
We almost had a wow moment with Event Hub Capture --> Data Lake --> Data Lake Analytics. It fell apart on the Data Lake Analytics side.
Please implement this. Please.
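In the meantime, one workaround for the header-only files is to filter them out before submitting jobs. A rough sketch assuming the Apache.Avro package rather than the MS libraries (the helper name is illustrative):

```csharp
using Avro.File;
using Avro.Generic;

public static class CaptureFileFilter
{
    // Returns false for the header-only Avro files (valid header, zero data
    // blocks) that Event Hub Capture emits for idle time windows.
    public static bool HasDataBlocks(string path)
    {
        using (var reader = DataFileReader<GenericRecord>.OpenReader(path))
        {
            return reader.HasNext();
        }
    }
}
```

Skipping these files up front keeps the U-SQL extractors from choking on zero-block inputs.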