Hi 🙂 The ave and aev indexes are sorted using the KV store's built-in binary ordering, which is helpful for attributes and range queries over them. However, the results of a range query aren't necessarily related to the ultimate sort order of the final result, as that depends on the join order as well. The query planner is currently optimised for joining, not sorting. One thing you could use to help is the external sort capability in
to do more advanced sorting on top of the lazy seq response, but this isn't in the public API as it stands today.
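In the meantime, a plain Clojure sort over the realised result seq works for small result sets. This is a hedged sketch: the result shape below is a hypothetical seq of `[entity-id value]` tuples, not Crux's actual return type, and unlike a true external (on-disk) sort it realises everything in memory.

```clojure
;; Hypothetical query results: a lazy seq of [entity-id attribute-value] tuples.
;; Note: sort-by realises the whole seq in memory, so this only suits
;; result sets that fit in RAM -- unlike an external sort.
(def results
  (lazy-seq [[:id/a 3] [:id/b 1] [:id/c 2]]))

(def sorted-results
  (sort-by second results))
;; => ([:id/b 1] [:id/c 2] [:id/a 3])
```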
I'm trying to follow the road through https://juxt.pro/blog/posts/a-bitemporal-tale.html#_setup
@hoppy we have not seen this error before. I'm seeing some RocksDB issues on GitHub with this description; going to read through them.
the memory KV produces this trace; however, it only does so when fired up in a jacked-in Calva (CIDER) REPL. Starting from a command-line REPL seems to work (at least it loads)
that error I have seen before. Any chance I can get you to try it out with crux "19.04-1.0.4-alpha-SNAPSHOT" 🙂 ?
I'm digging a bit into the RocksDB thing. They are playing the parlor trick of building the .so with crossbuild, stuffing it in the jar, and extracting it on the fly. Ask me how I know this is a bitey dog. They built it on a CentOS container, so likely the .so isn't quite so tasty on Arch, but you say you are getting away with this?
openjdk 11.0.3 2019-04-16
OpenJDK Runtime Environment (build 11.0.3+4)
OpenJDK 64-Bit Server VM (build 11.0.3+4, mixed mode)
ok so I just reproduced the error by having a global
(def system (crux/start-...
in a namespace that was getting AOT compiled. I'm assuming this is what you are doing too.
but then I have an understanding of what that bug is. I wonder if I can reproduce the RocksDB error under the same condition
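One way to avoid starting the system as a side effect of AOT compilation is to defer the start behind a `delay`, so the top-level `def` only builds a recipe and nothing runs until first use. A hedged sketch: `start-system!` here is a hypothetical stand-in for the real `crux/start-...` call.

```clojure
;; Hypothetical system constructor standing in for (crux/start-...).
(defn start-system! []
  {:started? true})

;; A bare top-level (def system (crux/start-...)) runs at compile time
;; under AOT. Wrapping the call in a delay means nothing starts until
;; the first deref.
(defonce system (delay (start-system!)))

;; At this point the system has NOT been started:
(realized? system)  ;; => false

;; First use starts it:
@system             ;; => {:started? true}
```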
I started with rocksdb out of the AUR, then built it myself last night, but that didn't change anything
the undefined symbol in the error is from a library
It's available to install from pacman. Can you try installing it?
I did not have it installed myself and could not find it on my system so not sure. I got the impression that it should be statically linked into the rocksdb .so file but maybe I'm wrong about that
right, yes, you provide it yourself. Either version should work; I have tried both today on my machine (the other version being the one in the blog)
linux-vdso.so.1 (0x00007fff4b7f3000)
libpthread.so.0 => /usr/lib/libpthread.so.0 (0x00007fcaecf60000)
librt.so.1 => /usr/lib/librt.so.1 (0x00007fcaecf50000)
libstdc++.so.6 => /usr/lib/libstdc++.so.6 (0x00007fcaecdc0000)
libm.so.6 => /usr/lib/libm.so.6 (0x00007fcaecc78000)
libgcc_s.so.1 => /usr/lib/libgcc_s.so.1 (0x00007fcaecc58000)
libc.so.6 => /usr/lib/libc.so.6 (0x00007fcaeca90000)
/usr/lib64/ld-linux-x86-64.so.2 (0x00007fcaed9a0000)
Hi there again, I know I’m not using the right tool for the job but I’m storing small binary blobs as base64 strings in crux. I read that all top-level attributes are indexed automatically? What exactly does that mean? Can I use that to not have crux index an attribute by nesting it in another map? Thank you 🙂
Hey! "Can I use that to not have crux index an attribute by nesting it in another map" -- yep that should work
Thank you for your quick reply! That’s perfect. So crux checks the type of the value and only indexes simple types?
Only top-level fields (attributes) in your documents are indexed, and that means the aev indexes are populated accordingly
crux will index any key-value combination that conforms to the spec...let me find you the details on that
I think the answer is somewhere here: https://github.com/juxt/crux/blob/master/src/crux/codec.clj#L412
Yes, seems like it’s cut off at some point… maybe that is enough for my performance concerns and I will worry about it again if it’s actually a problem 😄 If I get this correctly, wrapping it in a map might not help but it will “freeze” the whole map instead: https://github.com/juxt/crux/blob/master/src/crux/codec.clj#L243-L245 But please don’t worry for now. The part where the buffer size is limited is totally enough safety for what I’m doing right now. Thank you again!
Hot off the press:
Strings over a certain length (128) only get indexed as hashes. This disables range queries for the attribute, but an exact match (should one want it) still works. You can see this limit in `crux.codec/max-string-index-length` (and its usage)
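The two document shapes being discussed can be sketched as plain Clojure data (a hedged sketch; nothing here touches Crux itself, and the documents are hypothetical):

```clojure
;; Blob as a top-level attribute: it would be indexed -- and, being over
;; 128 chars, only as a hash (no range queries, exact match still works).
(def doc-indexed
  {:crux.db/id :my/doc-1
   :blob (apply str (repeat 200 "A"))})

;; Blob nested under another map: it is no longer a top-level attribute,
;; so only :payload as a whole is considered for indexing.
(def doc-nested
  {:crux.db/id :my/doc-2
   :payload {:blob (apply str (repeat 200 "A"))}})

(> (count (:blob doc-indexed)) 128)  ;; => true
```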
I’m hoping to share the little pet project I’m working on at some point but no guarantees when that will be 😄
Slack is fine, although we also have a public juxt-oss Zulip account which has quite a bit of this sort of activity too
One more thing I’m trying to figure out is how to implement transactional constraints properly. I suppose my struggles are trivial for someone with more experience implementing distributed data stores 😃
CAS transactions work great on single entities.
When trying to implement for example a unique constraint for a certain attribute across entities, I can do that by cleaning up duplicate data later on using “fix-on-read” or “fix-on-write” (not ideal but possible). Also works.
What I’m struggling with right now is deletes with possible references:
If I have an entity and I like to delete it only if no one references it anymore,
I first do a query to check that there are no references, then I can issue a delete transaction.
I cannot ensure that no one created a new reference between my query and the deletion.
The only thing I can come up with is doing “fix-on-read” using the historical data.
Guess this is not really specific to crux and I’m also glad about general resources on this topic 🙂
Also, are there any future plans to build a transactional layer on top of crux or is this simply the wrong use-case?
Never mind. I’m sure I will learn more about this some day. But for now I found a simpler solution for my use case (thanks to the talk at Clojure/north https://youtu.be/3Stja6YUB94?t=2349). I can simply create a single transactor node/thread. That way I can keep all guarantees on write. I’m sure at some point those patterns will also be listed in a place like the FAQ section 🙂
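The single-writer idea can be sketched in plain Clojure: if the "check references, then delete" step runs as one atomic operation, no one can insert a reference in between. This is a hedged sketch with a hypothetical in-memory `store`; in Crux the single transactor thread would run a query and then submit the delete, with no other writer interleaved.

```clojure
;; Hypothetical in-memory store: a map of doc-id -> {:refs #{...}}.
(def store (atom {:a {:refs #{}}
                  :b {:refs #{:a}}}))

(defn delete-if-unreferenced!
  "Delete id only if no remaining doc references it. swap! makes the
  check and the delete one atomic step, standing in for the single
  writer thread."
  [id]
  (swap! store
         (fn [docs]
           (if (some #(contains? (:refs %) id) (vals docs))
             docs                 ;; still referenced: keep it
             (dissoc docs id))))) ;; safe to delete

(delete-if-unreferenced! :a)  ;; :a is referenced by :b -> kept
(delete-if-unreferenced! :b)  ;; nothing references :b -> deleted
```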
Hi again - sorry to keep you waiting on a response! Your use-case of "CAS deletion when no nodes contain a given attribute" is a good example of a higher-level transactional constraint and your single-writer solution is the correct answer for now. We would definitely like to see libraries and decorators emerge to address this area. Someone on reddit asked a similar question: https://www.reddit.com/r/Clojure/comments/bohl4a/clojurenorth_the_crux_of_bitemporality/enqodda/ Also this HN thread was interesting to read a couple of days ago: https://news.ycombinator.com/item?id=19907771 ...we will definitely be investing more thought and energy in this as well!