[DocumentDB] Allow Paging (skip/take)
Allowing paging would be huge. By the way, thank you for rolling out this feature at all! It looks wonderful, and I can't wait to try it! However, I won't be able to use it for actual work projects until it supports paging and sorting.
Reopening this user voice item, as our support for Skip/Take (Offset/Limit) was limited to single-partition queries.
The newly released .NET SDK v3 now includes support for cross-partition queries using Offset/Limit. You can learn more about the v3 SDK, try it out, and provide feedback on our GitHub repo here.
We will also be back-porting this functionality to our .NET v2 SDK. This work will begin shortly, and we anticipate a release in September.
Once that is released we will mark this feature as complete.
Thank you for your patience and votes.
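For reference, the Offset/Limit support announced above uses the OFFSET LIMIT clause of the Cosmos DB SQL syntax (the same form quoted in the comments below). A minimal sketch, where `c` is the conventional container alias and `c._ts` is the built-in timestamp property:

```sql
-- Skip the first 100 documents, then return the next 10.
-- An ORDER BY is advisable so page boundaries are deterministic.
SELECT * FROM c
ORDER BY c._ts
OFFSET 100 LIMIT 10
```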
McNamara, Jeff commented
Agreed with Russ: this implementation needs work. The fact that it does not support cross-partition queries is also a big problem.
Noah Stahl commented
Following on from Russ's comment, this doesn't seem to be functioning on the database side currently. I'm seeing the client retrieve the entire data set and discard the OFFSET rows, rather than just receiving the intended LIMIT amount. More details:
This is an unusable implementation.
As the skip/limit is done in the aggregation pipeline, my experience is that your RU cost will be 1/10 of the number of items matched by your filter. For example, if 10,000 items are matched by your filter, the aggregation pipeline will charge you 1,000 RUs.
If you have a large collection and you don't know how many records will be matched by a given search criterion, then you are in for some pain.
I am going to have to move to a MongoDB database, as this is just too much of a limitation.
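The cost growth described above can be illustrated with a toy model. The 0.1 RU-per-item figure below is just the commenter's reported 1/10 ratio, not an official number, and the two functions are simplifications for comparison, not the actual billing logic:

```python
# Toy cost model of the reported behavior: aggregation-pipeline
# skip/limit still charges for every document matched by the filter,
# so the RU cost tracks the match count rather than the page size.

RU_PER_ITEM = 0.1  # assumed from the 1/10 ratio reported above

def offset_page_cost(matched_items: int, limit: int) -> float:
    """Cost when the server walks all matched items before paging."""
    return matched_items * RU_PER_ITEM

def token_page_cost(limit: int) -> float:
    """Cost when a continuation token resumes directly at the page."""
    return limit * RU_PER_ITEM

print(offset_page_cost(10_000, 25))  # about 1,000 RUs for a 25-item page
print(token_page_cost(25))           # a few RUs, regardless of position
```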
Diego Oliveira Sanchez commented
Related to @Vince, who posted right before me: I am querying using the MongoDB API. I've noticed that querying for the 3,000th item consumes many more RUs than querying for the first few items, roughly 50 times as many. I am doing something like model.find(<query>).skip(3000).limit(25). Am I doing something wrong, or do we have to wait for the Cosmos team to implement an efficient skipping mechanism?
Question: is this an "efficient" implementation? To be specific, is it just as efficient, in terms of latency and RUs consumed, to read from offset 50,000 as from offset 5? Did you implement this in a way that, under the covers, the server pages through results in a tight loop until it has burned through N results, and then returns the next page of items? Or did you actually build an index for this, so that it is efficient to jump immediately to the 50,000th item and read the next page of results?
Looking forward to your answer, thanks.
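A common general workaround for deep skips (a standard pattern, not something the Cosmos team has confirmed here) is range-based paging: remember the last sort-key value of the previous page and filter past it, so the server can seek via an index instead of walking every skipped document. A minimal sketch, with an in-memory list and a hypothetical monotonically increasing key `ts` standing in for the collection:

```python
# Hypothetical documents sorted by an increasing sort key "ts".
docs = [{"id": i, "ts": i} for i in range(100)]

def page_by_skip(skip: int, limit: int) -> list:
    """Emulates .skip(skip).limit(limit): the server must pass over
    `skip` documents before it can return anything."""
    return docs[skip:skip + limit]

def page_by_range(last_ts: int, limit: int) -> list:
    """Emulates .find({"ts": {"$gt": last_ts}}).limit(limit): an index
    seek can land directly on the first document of the page."""
    return [d for d in docs if d["ts"] > last_ts][:limit]

# Both strategies return the same page; only the server-side work differs.
assert page_by_skip(30, 5) == page_by_range(29, 5)
```

The trade-off is that range-based paging only supports "next page" navigation from a known position, whereas skip/offset allows jumping to an arbitrary page number.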
Anargyros Tomaras commented
This doesn't work correctly when passing "SELECT * FROM t OFFSET 0 LIMIT 1" to the Cosmos SDK v2.0. Does this require a specific version of the SDK to work with sqlExpressions (not LINQ)?
Oleg Gliznutsa commented
Offset/Limit doesn't work correctly with the EnableCrossPartitionQuery option: e.g., the database returns the first 15 results when requesting OFFSET 10 LIMIT 5.
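To make the reported symptom concrete, here is a small sketch (plain Python lists standing in for query results; the numbers mirror the comment above) contrasting what OFFSET 10 LIMIT 5 should return with what was reportedly observed:

```python
results = list(range(100))  # stand-in for the full, ordered result set
offset, limit = 10, 5

# Correct OFFSET 10 LIMIT 5 semantics: rows 10 through 14, five rows.
expected = results[offset:offset + limit]

# The reported behavior: the first offset + limit = 15 rows come back,
# as if the offset were never applied server-side.
reported = results[:offset + limit]

assert expected == [10, 11, 12, 13, 14]
assert len(reported) == 15
```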
Shayan Khan commented
It does not seem to be implemented correctly; I am getting incorrect results when sending a query from my Node application.
Dipak Yadav commented
Mani Gandham commented
Looks like this has been released now for the SQL API: https://docs.microsoft.com/en-us/azure/cosmos-db/sql-api-query-reference
Graeme Christie commented
Any Update on this?
Leng Yang commented
Any ETA on this feature? Or is MongoDB the alternative?
It's just the bad communication around it that bothers me.
Microsoft is notorious for creating half-baked products and customer lock-in. Now, we're stuck waiting for this feature.
This is the worst. No updates for almost a year.
Jonathan Chase commented
Another comment asking for an ETA. I can't stress enough how much this single issue, combined with the limitation of not being able to use continuation tokens in cross-partition queries, stops Cosmos DB from being a viable option.
This has been taking more than a year already. Get your **** together; we are paying for this...
Rasmus Larsen commented