lpunderscore

My feedback

  1. 159 votes

    Hello,

    We are working on a new capability that will allow customers to move Azure resources from one region to another. Could you please fill out this short survey so that we can learn more about your requirements and assist you soon? :)

    aka.ms/region-move-survery

    Cheers,
    Rajani,
    Program Manager – Azure Team

    lpunderscore supported this idea  · 
  2. 53 votes
    12 comments  ·  Data Lake
    lpunderscore supported this idea  · 
  3. 1,790 votes
    70 comments  ·  Azure Cosmos DB » Management

    We’re excited to announce that we are making this a lot easier with our preview of Autopilot. With Autopilot, Azure Cosmos DB will automatically manage and scale the RU/s of your containers based on the usage. This eliminates the need for custom scripting to change RU/s and makes it easier to handle bursty, unpredictable workloads.

    You can try out Autopilot in your Cosmos accounts by going to the Azure Portal and enabling the feature in the “Preview Features” blade.
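
    For developers who prefer code to the Portal, here is a minimal sketch of provisioning a container with autoscale throughput, assuming the autoscale support that later shipped in the azure-cosmos Python SDK (4.x); the endpoint, key, and names are placeholders, not taken from this thread:

        # Sketch: create a container with autoscale (Autopilot) RU/s via the Python SDK.
        # Assumes azure-cosmos >= 4.3, where ThroughputProperties supports autoscale.
        from azure.cosmos import CosmosClient, PartitionKey, ThroughputProperties

        client = CosmosClient("https://<your-account>.documents.azure.com:443/", "<your-key>")
        database = client.create_database_if_not_exists("appdb")

        # The service scales RU/s between 10% of the max and the max set here,
        # based on usage -- no custom scaling scripts required.
        container = database.create_container_if_not_exists(
            id="orders",
            partition_key=PartitionKey(path="/customerId"),
            offer_throughput=ThroughputProperties(auto_scale_max_throughput=4000),
        )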

    lpunderscore commented  · 

    We finally wrapped everything in a custom Cosmos DB client which removes the retry policies of the Microsoft SDK and uses our own policies: it checks for throttled requests and scales up as needed by placing scale-up requests in a queue, where a function picks them up and raises the RUs. A separate process then scales back down after a timed period with no scale-up requests (see the sketch below).

    Bam: elastic RU scaling with a maximum and minimum RU setting... one month of development effort for something that could be done in your server-side logic with almost no effort. And now we have to maintain an entire piece of infrastructure /clap. But hey, it works.

    thanks for nothing
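
    A minimal sketch of that queue-based workaround, assuming the azure-cosmos Python SDK; the queue, thresholds, and step size are illustrative stand-ins, not the actual client:

        # Sketch of the queue-based workaround described above (not the real client):
        # a 429 enqueues a scale-up request, a worker drains the queue and raises RU/s,
        # and a timer scales back down after a quiet period with no requests.
        # (The real client also disables the SDK's built-in 429 retries; omitted here.)
        import queue
        import time

        from azure.cosmos import CosmosClient, exceptions

        MIN_RU, MAX_RU, STEP_RU = 400, 20000, 1000
        scale_requests = queue.Queue()  # stand-in for a real message queue

        client = CosmosClient("https://<your-account>.documents.azure.com:443/", "<your-key>")
        container = client.get_database_client("appdb").get_container_client("orders")

        def write_item(item):
            """Write an item; on throttling (429), request a scale-up instead of retrying blindly."""
            try:
                container.upsert_item(item)
            except exceptions.CosmosHttpResponseError as err:
                if err.status_code == 429:
                    scale_requests.put(STEP_RU)
                raise  # let the caller retry once the worker has scaled up

        def scale_worker():
            """Drain scale-up requests; scale down after a quiet period with none."""
            last_scale_up = time.monotonic()
            while True:
                current = container.get_throughput().offer_throughput
                try:
                    step = scale_requests.get(timeout=60)
                    container.replace_throughput(min(current + step, MAX_RU))
                    last_scale_up = time.monotonic()
                except queue.Empty:
                    # No scale-up requests lately: step back down toward the minimum.
                    if time.monotonic() - last_scale_up > 300 and current > MIN_RU:
                        container.replace_throughput(max(current - STEP_RU, MIN_RU))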

    lpunderscore commented  · 

    Then you need to review your plans; this has got to be the most important feature for us... There is no reason whatsoever for you not to support this other than to milk us for more money, which makes you look very bad... remember Micro$oft?

    If we can control auto-scaling in our own code (onThrottle() -> scale up by 1000 -> retry) and it is instant, you can also do it automagically. <- not a typo.

    This just becomes a code maintenance **** when it has to be included in hundreds of functions...
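
    For reference, that per-function onThrottle boilerplate looks roughly like this (a sketch, assuming the azure-cosmos Python SDK; the step size and names are illustrative):

        # Sketch of the inline onThrottle -> scale -> retry pattern.
        from azure.cosmos import CosmosClient, exceptions

        client = CosmosClient("https://<your-account>.documents.azure.com:443/", "<your-key>")
        container = client.get_database_client("appdb").get_container_client("orders")

        def upsert_with_scale(item, step=1000):
            try:
                container.upsert_item(item)
            except exceptions.CosmosHttpResponseError as err:
                if err.status_code != 429:
                    raise
                # Throttled: bump provisioned RU/s, then retry once.
                current = container.get_throughput().offer_throughput
                container.replace_throughput(current + step)
                container.upsert_item(item)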

    lpunderscore commented  · 

    The RU/s needs to scale elastically. Your engine already supports "live" RU scaling through the portal, with instant, no-impact scaling.

    It needs to work like this: we set the MAX RU/s we want to support and we get charged for actual RU/s usage. There is no other way this will work for us.

    We use Functions a LOT and they scale elastically based on required throughput. If they scale, we need our DB to scale together with them. Right now we keep the portal open, monitor throughput, and adjust manually every 5 minutes or so (ewwww). This could also be done inside each function, checking for throttled requests and scaling, but that quickly becomes clunky as ****.

    lpunderscore supported this idea  · 
  4. 4 votes
    0 comments  ·  Azure Functions » Bindings
    lpunderscore shared this idea  · 
  5. 698 votes
    32 comments  ·  Azure Cosmos DB » Gremlin API

    Update on this item.

    Bytecode implementation is now targeting the first half of 2020 to provide stability and performance improvements in the platform.

    Apologies for the delay. We are continuing to work on this and will announce here when it becomes available.

    Thanks.
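
    For context, "bytecode" support would let the Gremlin language variants (e.g. gremlinpython) send fluent traversals instead of query strings. A rough sketch of what that enables, with a placeholder endpoint and Cosmos-specific auth/serializer settings omitted:

        # Sketch: a fluent (bytecode) traversal with gremlinpython, equivalent to
        # submitting the string "g.V().hasLabel('person').count()".
        from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection
        from gremlin_python.process.anonymous_traversal import traversal

        conn = DriverRemoteConnection(
            "wss://<your-account>.gremlin.cosmos.azure.com:443/gremlin", "g"
        )
        g = traversal().withRemote(conn)  # traversal source backed by bytecode

        person_count = g.V().hasLabel("person").count().next()
        print(person_count)
        conn.close()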

    lpunderscore supported this idea  · 
  6. 486 votes
    13 comments  ·  Azure Search
    lpunderscore supported this idea  · 
  7. 742 votes
    10 comments  ·  Azure Search » Indexing
    lpunderscore commented  · 

    Honestly, how about some feedback? Are you looking at this? Considering it at least? This is a HUGE deal given the amount of effort and resources it takes to load data into the index in the first place. Just consider indexing document contents: it takes weeks to index millions of small txt documents, for example, even with 24 indexers running on 24 search units. Asking us to rebuild all of this every time we need to change a checkbox is not reasonable.

    MS feedback after 2 years would be appreciated...

    lpunderscore supported this idea  · 
  8. 60 votes
    5 comments  ·  Azure Active Directory » B2C
    lpunderscore supported this idea  · 
  9. 30 votes
    1 comment  ·  Azure Search » Crawlers
    lpunderscore commented  · 

    This would also be a solution to my problem: https://feedback.azure.com/forums/263029-azure-search/suggestions/20233354-make-the-blob-indexer-faster

    If we had this implemented, we could scale the indexing process using Azure Batch, Hadoop, or whatever... please implement something, because it is seriously affecting our progress.

  10. 17 votes
    2 comments  ·  Azure Search » Indexing
    lpunderscore commented  · 

    As I think about this, another solution would be to let us call a high-throughput API that can extract document content. We could then scale workers with Azure Batch, for example, and call that API to extract content from documents at scale.

    Blob storage could be integrated with the service (by passing a blob storage URI + credentials).

    Documents could also be on-prem, which would allow for more throughput than blob storage if needed...

    We could then index the extracted content using the search indexing API (which could use more performance as well, imo). A rough sketch of that flow follows.
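
    The sketch below is illustrative only: the content-extraction endpoint is hypothetical (it is the feature being requested), the indexing side uses the azure-search-documents Python SDK, and every name, URL, and field is a placeholder:

        # Hypothetical extraction service + real push-indexing API (azure-search-documents).
        import requests
        from azure.core.credentials import AzureKeyCredential
        from azure.search.documents import SearchClient

        search = SearchClient(
            endpoint="https://<your-search-service>.search.windows.net",
            index_name="docs",
            credential=AzureKeyCredential("<admin-key>"),
        )

        def extract_and_index(blob_uri, doc_id):
            # Hypothetical high-throughput extraction API: hand it a blob URI (or an
            # on-prem path) and get the extracted text back. This endpoint does not exist.
            resp = requests.post(
                "https://<hypothetical-extraction-service>/extract",
                json={"source": blob_uri, "credentials": "<sas-or-key>"},
                timeout=60,
            )
            resp.raise_for_status()
            content = resp.json()["content"]

            # Push the extracted content into the index with the existing indexing API.
            search.upload_documents(documents=[{"id": doc_id, "content": content}])

        # Each Azure Batch task could run extract_and_index() over its own slice of
        # blobs, scaling extraction independently of the built-in indexers.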
