2020-05-19
Could you use backup/restore to kickstart a lambda instance running a crux node?
If your node is always going to be small (e.g. < 100 MB) and you don't have strong latency demands then I expect it could be viable, I've not yet heard of anyone trying 🙂
I'm just thinking about those low-volume hobby projects you want to “drop” somewhere at minimal cost.
One could also use ddb for the kv store, but not sure if that’s viable. You’d need to ensure there’s only one indexer running at a time
I’ve got an experimental ddb tx log already which seems to work.
ddb as a kv backend will be slow, as the indexes are designed to be local to the query engine, but perhaps that's also good enough for hobbyists
regarding ddb kv, a query in crux will hit the kv store many times, right?
that's correct, it's "chatty" because the various ranges of index tuples are lazily and individually (i.e. one at a time) streamed out as the query executes
does a memkv need to fit into memory?
I guess so
sorry for that
ah, yes it does need to fit into memory indeed. The memkv implementation also makes very little attempt to be compact, performance-tuned or otherwise efficient, as we only built it for testing purposes, though there may be valid memory-only use-cases that justify us revisiting it with more engineering energy in future
so in my (toy) use case using rocksdb restore from a backup/snapshot on S3 might work best I guess.
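The restore-then-start idea can be sketched roughly like this. The node options match the 2020-era Crux topology style, but the S3 bucket, snapshot path, and the aws-cli sync step are all assumptions for illustration:

```clojure
(ns hobby.restore
  (:require [clojure.java.shell :as shell]
            [crux.api :as crux]))

;; Pull a previously uploaded RocksDB index snapshot down from S3
;; before starting the node (bucket and paths are hypothetical).
(shell/sh "aws" "s3" "sync" "s3://my-bucket/crux-backup" "/tmp/crux-indexes")

;; Start a node whose KV store points at the restored directory;
;; it then only needs to replay transactions newer than the snapshot,
;; rather than re-indexing the whole tx log from scratch.
(def node
  (crux/start-node
    {:crux.node/topology '[crux.standalone/topology
                           crux.kv.rocksdb/kv-store]
     :crux.kv/db-dir "/tmp/crux-indexes"}))
```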
@U899JBRPF is there a test suite for crux Document Stores and TxLogs? I know about crux-bench
but I found it non-trivial to run, as it seems to interfere with my own AWS credentials when trying to download some bench data from S3.
Hey, yep we have some protocol level test namespaces. See the recently added s3 module https://github.com/juxt/crux/blob/cfeb368a5978b3a42d6b2a8f7c698d489e0f988b/crux-s3/test/crux/s3_test.clj
the doc_store_test specifically: https://github.com/juxt/crux/blob/cfeb368a5978b3a42d6b2a8f7c698d489e0f988b/crux-test/src/crux/doc_store_test.clj we don't have an equivalently narrow-scoped version for tx-log right now
ok, thanks!
Do these tests exercise concurrent access? I'm not talking about Jepsen-level verification, rather at least simple invariants.
I don't think so. Not yet. The doc store test shouldn't really care, as writing & reading docs ought to be idempotent and naively concurrent (since docs either don't exist, exist, or are permanently tombstoned -- there should be no mutations or possibility for race conditions), but a hypothetical tx-log implementation test would ideally prove the essential linearisable behaviour in the face of large amounts of concurrent writes & reads
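A minimal invariant check along those lines might submit transactions from several threads and assert a simple property of the tx-ids the log hands back. This is only a sketch of the idea, not anything from the project's actual test suite (node setup elided):

```clojure
(require '[crux.api :as crux])

;; Fire off 100 concurrent writes against the tx-log.
(def results
  (->> (range 100)
       (mapv (fn [i]
               (future
                 (crux/submit-tx node
                   [[:crux.tx/put {:crux.db/id (keyword (str "e" i))}]]))))
       (mapv deref)))

;; Simple invariant: every submitted transaction must have been
;; assigned its own distinct tx-id by the log.
(assert (apply distinct? (map :crux.tx/tx-id results)))
```

A real test would go further and verify ordering, e.g. that reading the log back yields the same transactions in tx-id order.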
I'd like to do a transaction analogous to a SQL delete-where, matching on some attribute. What's the right way to do this in crux? I could query and then delete, but that seems inefficient over the HTTP api.
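For reference, the two-stage approach looks like this when embedded; the same two steps apply over HTTP, just as two round trips. The attribute and value here are made up for illustration:

```clojure
(require '[crux.api :as crux])

;; Stage 1: query for the ids of every entity matching the attribute
;; (:status and :archived are hypothetical).
(def ids
  (map first
       (crux/q (crux/db node)
               '{:find [e]
                 :where [[e :status :archived]]})))

;; Stage 2: submit a single transaction deleting all of them.
(crux/submit-tx node (vec (for [e ids] [:crux.tx/delete e])))
```

The transaction-function support mentioned below collapses this into one server-side step, avoiding the race between the query and the delete.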
Hi, are you looking to do everything over HTTP in general? We have some new transaction function capabilities in the works currently that will make your requirement very simple to solve, but it's not released quite yet. In the meantime though, you are not going to be able to avoid a two stage process to achieve this over HTTP
My specific use case is a little peculiar: I'm actually writing an emacs lisp org-mode based client to crux, of sorts. So yes I think I am stuck with HTTP for the foreseeable future 😛 It also means I can live with slowness, but I'm also happy to try out the new capabilities.
Actually, I was leaning towards extending or replacing the built-in REST api with more expressive endpoints, but it sounds like that may be in the works on your end already.
Wow, that sounds pretty cool! If you're feeling especially curious you could try building and using the in-flight PR as the main chunk of work was literally just finished today, but otherwise please do check back in a week or so for the next release.
I think virtually everyone is just embedding Crux, as it's so much easier to do many things. Forking crux-http-server to build your own application-specific HTTP module could be sensible :)
Apparently I might not be stuck with HTTP, having stumbled upon https://github.com/clojure-emacs/clomacs which appears to be FFI done through nrepl. I think I'll play around with this while waiting for the next release. Thanks!
fyi, I'm not sure how to try master using deps.edn. Going off https://github.com/bhauman/rebel-readline/issues/176 it seems like crux may need to add a deps.edn at the top level, if you feel such an accommodation is appropriate.
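For what it's worth, tools.deps can point a git dependency at a subdirectory of a monorepo via :deps/root, which may sidestep the missing top-level deps.edn. A sketch, using the commit sha from the links above (swap in whichever commit you want to try):

```clojure
;; deps.edn — depend on the crux-core subdirectory of the git repo
{:deps {juxt/crux-core
        {:git/url   "https://github.com/juxt/crux"
         :sha       "cfeb368a5978b3a42d6b2a8f7c698d489e0f988b"
         :deps/root "crux-core"}}}
```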