2020-02-19
Channels
- # announcements (10)
- # aws (3)
- # aws-lambda (1)
- # babashka (24)
- # beginners (57)
- # boot (5)
- # calva (20)
- # chlorine-clover (3)
- # cider (14)
- # clj-kondo (37)
- # clojars (17)
- # clojure (200)
- # clojure-dev (40)
- # clojure-europe (9)
- # clojure-france (7)
- # clojure-gamedev (5)
- # clojure-hungary (4)
- # clojure-italy (8)
- # clojure-losangeles (2)
- # clojure-nl (9)
- # clojure-uk (97)
- # clojurebridge (1)
- # clojured (3)
- # clojuredesign-podcast (23)
- # clojurescript (13)
- # code-reviews (2)
- # component (22)
- # core-typed (7)
- # cursive (64)
- # datascript (12)
- # datomic (60)
- # emacs (6)
- # fulcro (54)
- # graalvm (11)
- # graphql (3)
- # hoplon (25)
- # jobs (1)
- # joker (85)
- # juxt (5)
- # kaocha (10)
- # klipse (8)
- # malli (2)
- # off-topic (36)
- # parinfer (1)
- # pathom (1)
- # re-frame (9)
- # reagent (4)
- # reitit (1)
- # remote-jobs (1)
- # shadow-cljs (24)
- # spacemacs (1)
- # sql (39)
- # tools-deps (10)
- # tree-sitter (18)
- # xtdb (18)
How do you evict all the documents from Crux, including the already deleted ones? Especially the already deleted ones. Thanks
Hmm, finding all deleted documents across all of time is an interesting problem! What's the use-case? Testing / dev?
Ah, fair enough. I think (sh ...) is your best bet for the moment, unfortunately. There might be a ns somewhere in the repo with something you could copy-paste /cc @U050V1N74
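A minimal sketch of that (sh ...) approach, assuming a node whose data lives on the local filesystem - the directory paths here are illustrative placeholders, so substitute whatever db-dir and event-log dir your node was started with:
```clojure
;; Stop the node, shell out to remove its data directories, then start a fresh node.
(require '[clojure.java.shell :refer [sh]])

(defn wipe-local-data! []
  (sh "rm" "-rf" "data/db-dir")         ; hypothetical KV store directory
  (sh "rm" "-rf" "data/event-log-dir")) ; hypothetical event-log directory
```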
So the way I came across it is this:
- I added a document, with today’s valid-time
- deleted the document
- added it again, but with a much older valid-time
Now because the id of the document is the same, the most recent document in history is deleted, so “it’s gone”.
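For reference, roughly that sequence expressed as Crux transaction operations (the API as it looked around this time; `node` and the example document are assumptions, not taken from the conversation):
```clojure
(require '[crux.api :as crux])

;; `node` stands for an already-started Crux node (crux.api/start-node ...)

;; put at today's valid-time
(crux/submit-tx node [[:crux.tx/put {:crux.db/id :my-doc :version 1}
                       #inst "2020-02-19"]])
;; delete it (at the current valid-time)
(crux/submit-tx node [[:crux.tx/delete :my-doc]])
;; re-add it with a much older valid-time
(crux/submit-tx node [[:crux.tx/put {:crux.db/id :my-doc :version 2}
                       #inst "2010-01-01"]])

;; The earlier delete still shadows the entity at the current valid-time,
;; so this returns nil:
(crux/entity (crux/db node) :my-doc)
```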
When using put with a start valid time and an already crowded timeline you almost certainly want to specify an end valid time also (which could be MAX), to avoid that kind of "it's gone" behaviour
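A sketch of that suggestion, using the same illustrative `node` and document as above - the put supplies both a start and an end valid time, so the later delete no longer hides it:
```clojure
(crux/submit-tx node [[:crux.tx/put {:crux.db/id :my-doc :version 2}
                       #inst "2010-01-01"      ; start valid-time
                       #inst "9999-12-31"]])   ; end valid-time ("MAX")
```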
only similar thing we use is a test fixture that creates a data directory, runs its tests, then removes the directory when it's done, if that's helpful? https://github.com/juxt/crux/blob/master/crux-test/src/crux/fixtures.clj#L55
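For context, a rough sketch of that fixture pattern (not the linked code itself): create a throwaway directory, run the tests, then delete it afterwards. The directory layout is an assumption, and the node-starting step is left as a comment because the exact start-up options vary by Crux version:
```clojure
(require '[clojure.java.shell :refer [sh]])

(defn with-tmp-data-dir [f]
  (let [dir (str "target/crux-test-" (System/currentTimeMillis))]
    (try
      ;; start a node pointed at `dir` here and bind it for the tests
      (f)
      (finally
        ;; remove the directory once the tests have run
        (sh "rm" "-rf" dir)))))

;; (clojure.test/use-fixtures :each with-tmp-data-dir)
```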
It might be in the future. My setup is Kafka in Docker, so I’m not sure how useful this would be in that scenario - just yanking the directory where Kafka keeps its offsets info… A Kafka version of your snippet might be going into the Kafka topics and purging the messages in them…
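One hedged sketch of what a "Kafka version of the snippet" could look like - deleting the Crux topics via the Kafka AdminClient and letting a fresh node recreate them. The topic names and bootstrap address are assumptions, not verified Crux defaults:
```clojure
(import '(org.apache.kafka.clients.admin AdminClient))

(defn delete-crux-topics! []
  (with-open [admin (AdminClient/create {"bootstrap.servers" "localhost:9092"})]
    ;; block until the topic deletions complete
    (-> (.deleteTopics admin ["crux-transaction-log" "crux-docs"])
        (.all)
        (.get))))
```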
I wonder if there is a suitable way to use Crux with AWS Lambdas. My understanding is that the lambda would need to process all messages from Kafka on spinup. Is that true?
Hi @U054UD60U - unless your data set is a few MB or less then I don't think there are any useful ways to use AWS Lambdas with Crux as it stands today.
> My understanding is that the lambda would need to process all messages from Kafka on spinup. Is that true?
Essentially yes, that is the case. Specifically, all messages from the very beginning of the tx-log need to be processed each time the lambda starts up. Sorry it's not a more exciting answer...but I would be glad to continue the conversation about Crux + "serverless" patterns
Interesting. Would there be the option to share snapshots of the KV store such that a node (not necessarily a lambda) could catch up quicker?
Yep, that's absolutely possible, and a sample with all the code to do that is already in the repo (for k8s + s3): https://github.com/juxt/crux/blob/master/docs/example/standalone_webservice/bin/restore.sh
It won't help the lambda use-case though, as the lambda would spend too much time & bandwidth backing up and restoring. The only exception is when upgrading Crux to a new version of the index - in that case you still need to replay from the beginning of the tx-log.
you could spin up a Crux node outside the lambdas, and have the lambdas talk to it over the HTTP api
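A sketch of that pattern, assuming the crux-http-client dependency is on the classpath and a long-running node is exposing the HTTP server at an illustrative URL; the lambda then holds no local index at all, and the query attribute here is made up for the example:
```clojure
(require '[crux.api :as crux])

(defn handler [event]
  ;; connect to the remote Crux node over HTTP and run a query against it
  (with-open [client (crux/new-api-client "http://crux.internal:3000")]
    (crux/q (crux/db client)
            '{:find [name]
              :where [[e :user/name name]]})))  ; :user/name is an illustrative attribute
```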
On the other hand, my preferred German hosting provider has a cloud option with 2 GB RAM for €3/month. Should be sufficient for side projects.