
If I remember correctly, there was some talk about backup/restore functionality for Datomic Cloud being in the works, but I can't find any news about it. Is that still being worked on?


Am I misunderstanding the :limit option to index-pull?

 (d/index-pull db {:index :avet
                   :selector [:db/id]
                   :start [:story/group group-id]
                   :limit 5})
I would expect to get no more than 5 results back. I get back 10 results (the total number of matching results) no matter what limit I specify.


Is this feature implemented only for Datomic Cloud? It's described in the on-prem index-pull documentation.

Joe Lane 19:02:02

You can call `(take 5 (d/index-pull ...))`


Yes, I know, but I was planning to use :limit in conjunction with :offset to do pagination without realizing the full collection of results. (`:offset` does not appear to have any effect either for me.)

Joe Lane 19:02:13

@enn Do you have a link to the docs you're reading?

Joe Lane 19:02:39

And you're using on-prem, correct?

Joe Lane 19:02:08

peer api or client api?


This is on a peer


Hi @enn, the peer API does not include :limit. This is implemented in the client API, which is accessible in the latest client-pro release.


The reason for this is documented at the top level of the client API:


Functions that support offset and limit take the following
additional optional keys:

  :offset    Number of results to omit from the beginning
             of the returned data.
  :limit     Maximum total number of results to return.
             Specify -1 for no limit. Defaults to -1 for q
             and to 1000 for all other APIs.
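So with the client API, the original query could be paginated with :offset and :limit directly. A minimal sketch, assuming a client connection `conn` (via `datomic.client.api`, aliased as `d`) and a `group-id` in scope:

```clojure
;; Sketch: :offset/:limit pagination via the Datomic client API.
;; Assumes `conn` is a datomic.client.api connection and `group-id` exists.
(require '[datomic.client.api :as d])

(d/index-pull (d/db conn)
              {:index    :avet
               :selector [:db/id]
               :start    [:story/group group-id]
               :offset   5    ; skip the first 5 matches
               :limit    5})  ; return at most 5 results
```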


I can see how this is confusing in our docs, given that the example shows the usage of :limit without the added context above. I will update the docs to reflect that.


I also need to discuss with the team whether the peer API will ever support index-pull with :limit, but as Joe said, you can still `take 5`, etc.


Thanks @jaret, this was a point of confusion for my team as well.


Is the pull realized by advancing the outer seq, or only by reading each entry? E.g. if we go (drop 100 result-of-index-pull), does that do the work of 100 pulls or 0?


(I’m trying to discern if drop is an exact workalike to :offset or potentially much more expensive in the peer api)


My understanding is it does the work of 100 pulls. But I need to validate that understanding and am running that by Stu.

Joe Lane 20:02:29

@enn Ideally, when implementing a pagination API, you wouldn't use offset like that. Rather, you would grab the last element of the prior page and use that in the value position of :start (or, in your case, the group-id).
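Joe's suggestion could be sketched roughly like this. A hypothetical helper, assuming the peer API with `db` in scope, and that the :avet index is ordered [attribute value entity], so the :db/id of the last entity on the prior page can seed :start:

```clojure
;; Hypothetical cursor-based page function (peer API sketch).
;; `cursor` is the :db/id of the last entity on the previous page
;; (nil for the first page). Because :start is inclusive, the row
;; matching the cursor is skipped with `rest`.
(defn fetch-page [db group-id cursor page-size]
  (let [start   (if cursor
                  [:story/group group-id cursor]
                  [:story/group group-id])
        results (d/index-pull db {:index    :avet
                                  :selector [:db/id]
                                  :start    start})]
    (->> (if cursor (rest results) results)
         (take page-size))))
```

The next call passes the :db/id of the last returned entity as the cursor; unlike an offset, each page only realizes the entries it returns.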


What you're suggesting would be more complex than this in the general case. You would have to retain the last pull of the page, transform it back into the next :start vector (which may have grown longer if, e.g., a group spans multiple pages), serialize that as a cursor for the client, then rehydrate it when it comes back and know to skip the first result if it is an exact match for :start. I can definitely see not wanting to take all that on in an initial implementation. It also makes it difficult to have a non-opaque cursor: a client may indeed want to skip 100 items, or pipeline multiple fetches and be OK with the potentially inconsistent read.


IOW simple offset and limit still has its uses

Lennart Buit 20:02:52

“Cursor based pagination” is the concept Joe Lane is referring to :)!

Lennart Buit 20:02:04

I was pretty mind-blown when I first saw it in GraphQL land; pretty cool actually!


Sure, ideally. But obviously it's important enough that the client API has :limit and :offset 🙂


@jaret thanks for the clarification, I appreciate it. If you hear anything back on whether this will be supported in the future, I'd love to know.