This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2020-06-03
Channels
- # babashka (17)
- # beginners (166)
- # calva (97)
- # cider (4)
- # clara (2)
- # clj-kondo (46)
- # cljsrn (5)
- # clojure (334)
- # clojure-canada (1)
- # clojure-dev (144)
- # clojure-europe (14)
- # clojure-germany (5)
- # clojure-nl (10)
- # clojure-spec (1)
- # clojure-uk (46)
- # clojurescript (50)
- # conjure (1)
- # core-async (52)
- # core-typed (5)
- # cursive (3)
- # datomic (3)
- # emacs (11)
- # figwheel (16)
- # figwheel-main (9)
- # fulcro (29)
- # graalvm (19)
- # graphql (14)
- # helix (46)
- # hoplon (4)
- # hugsql (2)
- # jobs (2)
- # jobs-discuss (1)
- # juxt (15)
- # kaocha (6)
- # off-topic (9)
- # pedestal (7)
- # portkey (7)
- # re-frame (10)
- # reagent (29)
- # shadow-cljs (13)
- # spacemacs (70)
- # sql (13)
- # tools-deps (26)
- # xtdb (23)
What's the fastest way to count all entries in a node?
You can try https://opencrux.com/docs#_attribute_stats. This was from earlier in the Slack log: > Note that you are able to add :full-results? true to the query map to easily retrieve the source documents relating to the entities in the result set. For instance, to retrieve all documents in a single query:
{:find [e]
:where [[e :crux.db/id _]]
:full-results? true}
Yep - (:crux.db/id (crux.api/attribute-stats node))
will get you a good estimation of the number of documents indexed (all versions of all entities).
If you want a count of the entities, the above query is good. But if you're just going to (count (crux/q ...))
this query, I'd recommend omitting :full-results? true,
as it will make the query significantly slower
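Putting the two suggestions together, a minimal sketch (assuming a started Crux `node` is in scope; `node` itself is not defined here):

```clojure
(require '[crux.api :as crux])

;; Fast estimate: counts all indexed document versions carrying :crux.db/id
;; (includes every version of every entity, per the message above).
(:crux.db/id (crux/attribute-stats node))

;; Exact entity count: omit :full-results? so only entity ids are returned,
;; which keeps the query fast when you only need (count ...).
(count (crux/q (crux/db node)
               '{:find  [e]
                 :where [[e :crux.db/id _]]}))
```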
Hey @U0S3YK6HK, thanks for raising this - could you post a stack trace?
I have since blown away the DB, as every time I ran the node it would blow up with that error
That'd be great, thanks - I've raised a Github issue: https://github.com/juxt/crux/issues/902
So interestingly, I have played around with this trying to replicate it and I can't. I have transacted 1 million super-small docs in 1 transaction and it succeeds. Granted, before I was using native Windows Postgres and this Postgres is in a Docker container, but that really shouldn't affect things
In order to dig into this more, I want to replicate the transaction that triggered this by using the original project, but I don't want to bork my DB again. What would be the best way for me to backup and restore my DB and Crux index? Just backup my RocksDB folder and dump the db using PGbackup?
Hey @U0S3YK6HK, sorry to keep you waiting for an official response. I think your plan is/was good, assuming your tx-log really is stored only in the Postgres. However, dumping the Postgres db alone should be enough, as Rocks will always rebuild itself from the log, and it would be a good test to verify your dump and restore work correctly. I.e. check that the fully re-built node looks roughly the same as the old one (comparing the output from attribute-stats
is a safe bet)
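That plan can be sketched with standard Postgres tooling. The database name, user, and paths below are placeholders, not from the original messages, and the node should be stopped before snapshotting the index directory:

```shell
# Dump the Postgres-backed tx-log (custom format, compressed)
pg_dump -Fc -h localhost -U crux cruxdb > cruxdb.dump

# Optional: also snapshot the RocksDB index dir, though Crux can
# always rebuild the index from the tx-log on a fresh node
# cp -r /var/lib/crux/rocksdb rocksdb-backup

# Restore into a fresh database to test the dump works
createdb -h localhost -U crux cruxdb_restored
pg_restore -h localhost -U crux -d cruxdb_restored cruxdb.dump
```

Pointing a fresh node at the restored database and comparing attribute-stats output against the old node is then the verification step suggested above.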
Following up on this thread again I have managed to trigger the bug again, this time in a different project. I have added a stacktrace and will not touch the project until I get direction on how to best preserve or debug the problem.
My assumption is that the 123647 is the number of docs that I am trying to send in a single transaction
but it seems like limits such as this should be documented with the JDBC backend config
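If the doc count per transaction really is the trigger, one workaround (a sketch only: the batch size of 1000 is arbitrary, and `node` and `docs` are assumed to exist) is to split the submission into several smaller transactions:

```clojure
(require '[crux.api :as crux])

(defn submit-in-batches!
  "Submit `docs` as several smaller transactions instead of one huge one,
   to stay under whatever payload limit the JDBC backend imposes."
  [node docs batch-size]
  (doseq [batch (partition-all batch-size docs)]
    (crux/submit-tx node (mapv (fn [doc] [:crux.tx/put doc]) batch))))

;; e.g. (submit-in-batches! node docs 1000)
```

Note this trades atomicity for reliability: the docs no longer land in a single transaction.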