Mayo

My feedback

  1. 79 votes
    10 comments  ·  Azure Functions » Feature
    Mayo supported this idea  · 
    Mayo commented  · 

    Please fast track this. IMO this is the next big thing for .NET Azure Functions. It's an even bigger enabler than the (ever popular) DI ask.

    One easy example of how this would help is being able to run code before and after functions to handle cross-cutting concerns. Both Function Proxies and Function Filters are severely limited in this respect, and this is a capability that every proper runtime should support.

    Supporting ASP.NET Core would allow us to implement middleware and filters AND give us access to the many popular, battle-tested middleware components that already exist (IdentityServer, Ocelot, graphql-aspnetcore, etc.).
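    To illustrate the kind of cross-cutting behaviour the comment is asking for, here is a minimal ASP.NET Core middleware sketch. The `TimingMiddleware` class and its logging are hypothetical examples of "code before and after" a handler, not part of any Azure Functions API:

```csharp
// Hypothetical cross-cutting middleware: runs code before and after the
// wrapped handler, timing each request and logging the result.
using System.Diagnostics;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.Logging;

public class TimingMiddleware
{
    private readonly RequestDelegate _next;
    private readonly ILogger<TimingMiddleware> _logger;

    public TimingMiddleware(RequestDelegate next, ILogger<TimingMiddleware> logger)
    {
        _next = next;
        _logger = logger;
    }

    public async Task InvokeAsync(HttpContext context)
    {
        var sw = Stopwatch.StartNew();   // before the handler
        await _next(context);            // the wrapped function/endpoint
        sw.Stop();                       // after the handler
        _logger.LogInformation("{Path} took {Ms} ms",
            context.Request.Path, sw.ElapsedMilliseconds);
    }
}
```

    In a regular ASP.NET Core app this would be registered with `app.UseMiddleware<TimingMiddleware>();` — exactly the kind of hook point the comment says Function Proxies and Filters can't provide.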

  2. 417 votes
    planned  ·  11 comments  ·  API Management » Integration
    Mayo supported this idea  · 
  3. 7 votes
    2 comments  ·  Azure Functions
    Mayo commented  · 

    That's excellent news, Jeff. Thank you.

    Mayo supported this idea  · 
    Mayo commented  · 

    +1

    There are many documented performance and reliability issues on the Durable Functions GitHub repo where the root cause is Azure Storage. Making it easy to choose and use alternative data stores such as Redis or Cosmos DB would be a good thing (lower latencies for all the storage activity, at the very least).
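    A sketch of how such pluggable backends could be selected in host.json. The property names and the connection-string name below are illustrative assumptions, not a confirmed schema:

```json
{
  "extensions": {
    "durableTask": {
      "storageProvider": {
        "type": "mssql",
        "connectionStringName": "DurableStorageConnection"
      }
    }
  }
}
```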

  4. 57 votes
    2 comments  ·  Additional Services » ClearDB
    Mayo supported this idea  · 
  5. 27 votes
    2 comments  ·  Azure Functions

    This is an awesome idea, and we’re exploring a few options to make it a reality.
    However, the 600-connection limit per instance should be enough for most applications if you’re reusing or closing connections. If you truly need 600 open connections, you are likely to run into the 10-minute timeout per execution.
    Even after we add this you will still need to be mindful of your connection management.

    Keep the votes coming!
    —Alex
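    The "reusing or closing connections" advice above usually comes down to sharing one `HttpClient` across invocations instead of creating one per call. A minimal C# sketch (the class and URL parameter are illustrative):

```csharp
using System.Net.Http;
using System.Threading.Tasks;

public static class MyFunction
{
    // A single static HttpClient is shared across invocations, so the
    // underlying sockets are pooled rather than opened per call.
    private static readonly HttpClient Http = new HttpClient();

    public static async Task<string> RunAsync(string url)
    {
        // Creating a new HttpClient here on every invocation would leak
        // sockets and quickly exhaust the per-instance connection limit.
        return await Http.GetStringAsync(url);
    }
}
```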

    Mayo supported this idea  · 
  6. 25 votes
    6 comments  ·  Azure Functions

    Update: Still planned!

    This is something we have enabled internally, and we are in the planning process of surfacing TCP connections for customers in the “Diagnose and Solve problems” tab. However, we do not have an ETA yet.

    Thanks for the feedback!
    Alex
    Azure Functions Team

    Mayo commented  · 

    The TCP connections pane has disappeared in the new version of the "Diagnose & Solve Problems" page. What's more, the pane wasn't very useful in the first version of the page: it showed the number of connections (good) but gave absolutely no details to help you diagnose what was creating them, e.g. counts of connections per remote IP address, or counts of connections to other Azure services (usually the culprit).

    Exceeding the host connection thresholds is one of the most frustrating and frequent problems we run into when doing anything remotely high-volume with Azure Functions (we hit it on three consecutive projects with enterprise clients). There needs to be better diagnostics around this.

    Mayo supported this idea  · 
  7. 55 votes
    3 comments  ·  Azure Functions
    Mayo supported this idea  · 
  8. 3,842 votes
    128 comments  ·  Azure Cosmos DB

    Reopening this UserVoice item, as our support for Skip/Take (Offset/Limit) was limited to single-partition queries.

    Update:

    The newly released .NET SDK v3 now includes support for cross-partition queries using Offset/Limit. You can learn more about the v3 SDK, try it out, and provide feedback on our GitHub repo:
    github.com/azure/azure-cosmos-dotnet-v3

    We will also be back-porting this functionality to our .NET v2 SDK. That work will begin shortly, and we anticipate a September release.

    Once that is released we will mark this feature as complete.

    Thank you for your patience and votes.
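    The Offset/Limit support described above maps to the `OFFSET … LIMIT` clause in Cosmos DB SQL. A v3 SDK sketch, assuming an existing `cosmosClient` and with illustrative database/container names:

```csharp
using Microsoft.Azure.Cosmos;

// Cross-partition paging with OFFSET/LIMIT in the v3 .NET SDK.
// "mydb" / "orders" and the query are illustrative, not from the source.
Container container = cosmosClient.GetContainer("mydb", "orders");

var query = new QueryDefinition(
    "SELECT * FROM c ORDER BY c._ts DESC OFFSET 20 LIMIT 10");

using FeedIterator<dynamic> iterator =
    container.GetItemQueryIterator<dynamic>(query);
while (iterator.HasMoreResults)
{
    foreach (var item in await iterator.ReadNextAsync())
    {
        // process one page: items 21–30 by descending timestamp
    }
}
```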

    Mayo supported this idea  · 
  9. 174 votes
    under review  ·  14 comments  ·  Data Lake
    Mayo supported this idea  · 
  10. 16 votes
    0 comments  ·  Service Bus
    Mayo shared this idea  · 
  11. 25 votes
    under review  ·  4 comments  ·  Data Lake
    Mayo commented  · 

    Yes, please! This use case is vital: Event Hubs Archive --Data Factory--> Data Lake Store <--U-SQL ingest <--Scheduler. Right now there are many blockers on the ADLA side around Avro support and the empty Avro files (file header, no blocks) generated by Event Hub Capture.

    Even better, it would be a wow moment to extract this data (from Avro) into a table and automatically keep it up to date as new files arrive.

    Mayo supported this idea  · 
  12. 95 votes
    under review  ·  17 comments  ·  Data Lake
    Mayo supported this idea  · 
    Mayo commented  · 

    Using Data Lake Analytics to process Event Hub Capture files (Avro) is a huge use case, and right now it's a fairly awful experience on the Data Lake Analytics side.

    There are multiple versions of the MS Avro libraries floating around (with different bugs, e.g. seekable vs. non-seekable streams), and none of them currently handles the empty Avro file (header but no blocks) sent by Event Hub Capture… it's a mess.

    We almost had a wow moment with event hub capture --> data lake --> data lake analytics. It fell apart on the data lake analytics side.

    Please implement this. Please.
