lpunderscore

My feedback

  1. 134 votes
    8 comments  ·  Azure Migrate
    lpunderscore supported this idea
  2. 368 votes
    7 comments  ·  Azure Cosmos DB » SDK
    lpunderscore supported this idea
  3. 53 votes
    12 comments  ·  Data Lake
    lpunderscore supported this idea
  4. 1,569 votes
    58 comments  ·  Azure Cosmos DB » Management
    lpunderscore commented

    We finally wrapped everything in a custom Cosmos DB client that removes the Microsoft SDK's retry policies and uses our own policies to check for throttled requests and scale up as needed: scale-up requests are placed on a queue, a function picks them up and raises the RUs, and a separate process scales back down after a timed period with no scale-up requests.

    Bam, elastic RU scaling with a maximum and minimum RU setting... one month of development effort for something that could be done in your server-side logic with almost no effort. And now we have to maintain an entire extra infrastructure. But hey, it works.

    Thanks for nothing.
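
    Roughly, the pattern looks like this (a simplified sketch, not our production code: the account/queue/container names, the 1000-RU step, and the fixed backoff are all illustrative; only the azure-cosmos and azure-storage-queue calls themselves are real SDK APIs):

      # Sketch of the wrapper + queue pattern described above (Python).
      # Requires: pip install azure-cosmos azure-storage-queue
      import time

      from azure.cosmos import CosmosClient, exceptions
      from azure.storage.queue import QueueClient

      COSMOS_URL = "https://myaccount.documents.azure.com:443/"  # placeholder
      COSMOS_KEY = "<cosmos-key>"                                # placeholder
      STORAGE_CONN_STR = "<storage-connection-string>"           # placeholder
      MAX_RU, MIN_RU, SCALE_STEP_RU = 20000, 1000, 1000          # illustrative limits

      queue = QueueClient.from_connection_string(STORAGE_CONN_STR, "ru-scale-requests")
      container = (CosmosClient(COSMOS_URL, COSMOS_KEY)
                   .get_database_client("mydb")
                   .get_container_client("mycontainer"))

      def read_item_with_scaling(item_id, partition_key):
          # Replaces the SDK's own retry policy: on a 429 (throttled),
          # enqueue a scale-up request and retry after a short backoff.
          while True:
              try:
                  return container.read_item(item=item_id, partition_key=partition_key)
              except exceptions.CosmosHttpResponseError as e:
                  if e.status_code != 429:
                      raise
                  queue.send_message("scale-up")
                  time.sleep(1)  # real code would honor the retry-after header

      def scale_up_worker():
          # Queue-triggered function: raise throughput, capped at MAX_RU.
          # A separate timer steps back toward MIN_RU once no scale-up
          # messages have arrived for a while (the scale-down process).
          current = container.get_throughput().offer_throughput
          container.replace_throughput(min(current + SCALE_STEP_RU, MAX_RU))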

    lpunderscore commented

    Then you need to review your plans; this has got to be the most important feature for us... There is no reason whatsoever for you not to support this other than to milk us for more money, which is making you look very bad... remember Micro$oft?

    If we can control auto-scaling in our own code (onThrottle() -> scale up 1000 RU/s -> retry) and it takes effect instantly, you can also do it automagically (not a typo) on the server side.

    This just becomes a code-maintenance nightmare when it has to be included in hundreds of functions...
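
    Concretely, the per-call pattern being pushed onto us looks something like this (an illustrative sketch: on_throttle and request_scale_up are hypothetical names, not SDK APIs), and it has to wrap every single Cosmos call in every function:

      import functools
      import time

      from azure.cosmos import exceptions

      def on_throttle(scale_step_ru=1000, max_retries=5):
          # Hypothetical decorator: on a 429, ask for more throughput and
          # retry. This boilerplate must be repeated around every call site.
          def wrap(fn):
              @functools.wraps(fn)
              def inner(*args, **kwargs):
                  for _ in range(max_retries):
                      try:
                          return fn(*args, **kwargs)
                      except exceptions.CosmosHttpResponseError as e:
                          if e.status_code != 429:
                              raise
                          request_scale_up(scale_step_ru)  # hypothetical helper
                          time.sleep(1)                    # crude backoff
                  raise RuntimeError("still throttled after retries")
              return inner
          return wrap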

    lpunderscore commented

    The RU/s needs to scale elastically. Your engine already supports "live" RU scaling through the portal, instantly and with no impact.

    It needs to work like this: we set the MAX RU/s we want to support, and we get charged for actual RU/s usage. There is no other way this will work.

    We use Functions a LOT, and they scale elastically based on required throughput. If they scale, we need our DB to scale together with them. Right now we keep the portal open, monitor throughput, and adjust manually every 5 minutes or so (ewwww). Each function could also check for throttled requests and scale up itself, but that quickly becomes clunky.

    lpunderscore supported this idea
  5. 4 votes
    0 comments  ·  Azure Functions » Bindings
    lpunderscore shared this idea
  6. 640 votes
    30 comments  ·  Azure Cosmos DB » Gremlin API
    lpunderscore supported this idea
  7. 464 votes
    13 comments  ·  Azure Search
    lpunderscore supported this idea
  8. 721 votes
    9 comments  ·  Azure Search » Indexing
    lpunderscore commented

    Honestly, how about some feedback? Are you looking at this? Considering it, at least? This is a HUGE deal given the effort and resources it takes to load data into the index in the first place. Take indexing document contents: it can take weeks to index millions of small txt documents, even with 24 indexers running on 24 search units. Asking us to rebuild all of that every time we need to change a checkbox is not reasonable.

    Feedback from MS after 2 years would be appreciated...

    lpunderscore supported this idea
  9. 59 votes
    5 comments  ·  Azure Active Directory » B2C
    lpunderscore supported this idea
  10. 30 votes
    1 comment  ·  Azure Search » Crawlers
    lpunderscore commented

    This would also be a solution to my problem: https://feedback.azure.com/forums/263029-azure-search/suggestions/20233354-make-the-blob-indexer-faster

    If we had this, we could scale the indexing process ourselves using Azure Batch, Hadoop, or whatever fits. Please implement something, because this is seriously affecting our progress.

  11. 16 votes
    2 comments  ·  Azure Search » Indexing
    lpunderscore commented

    As I think about this, another solution would be to let us call a high-throughput API that extracts document content. We could then scale workers with Azure Batch, for example, and call that API to extract content from documents at scale.

    Blob storage could be integrated with the service (by passing a blob storage URI + credentials).

    Documents could also sit on-prem, which would allow more throughput than blob storage if needed...

    We could then index the extracted content using the search indexing API (which could use more performance as well, imo).
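
    The extraction API proposed here is hypothetical, but the second half already exists: once content is extracted (by whatever scaled-out workers), it can be pushed straight into an index through the documents API. A minimal sketch with the azure-search-documents Python SDK, assuming an illustrative index with "id" and "content" fields:

      # Push side of the proposal: index already-extracted text directly,
      # bypassing the built-in blob indexer.
      # Requires: pip install azure-search-documents
      from azure.core.credentials import AzureKeyCredential
      from azure.search.documents import SearchClient

      search = SearchClient(
          endpoint="https://myservice.search.windows.net",  # placeholder
          index_name="docs",                                # illustrative index
          credential=AzureKeyCredential("<admin-key>"))     # placeholder

      def index_extracted(doc_id, text):
          # Upload one extracted document; scaled-out workers would batch
          # these (the push API accepts up to 1000 documents per request).
          search.upload_documents(documents=[{"id": doc_id, "content": text}])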
