This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2020-06-02
Channels
- # announcements (37)
- # babashka (9)
- # beginners (172)
- # calva (7)
- # cestmeetup (28)
- # chlorine-clover (27)
- # clj-kondo (2)
- # cljs-dev (45)
- # cljsrn (8)
- # clojure (185)
- # clojure-dev (27)
- # clojure-europe (6)
- # clojure-finland (3)
- # clojure-nl (5)
- # clojure-uk (13)
- # clojuredesign-podcast (4)
- # clojurescript (54)
- # conjure (19)
- # core-typed (1)
- # cursive (40)
- # datomic (9)
- # emacs (5)
- # figwheel-main (34)
- # fulcro (238)
- # graphql (14)
- # hugsql (3)
- # leiningen (4)
- # malli (6)
- # off-topic (12)
- # pedestal (5)
- # portkey (19)
- # protorepl (8)
- # rdf (2)
- # re-frame (23)
- # reagent (3)
- # reitit (16)
- # shadow-cljs (29)
- # spacemacs (12)
- # sql (1)
- # xtdb (15)
@jarohen with the (shared) S3 KV store, could it make sense to have a single tx-indexer running that would update the S3 KV store? So nodes would run not the base topology, but only the query engine and the necessary stores? :thinking_face:
We don't have an S3 KV store I'm afraid - we make a distinction between the document store (a golden store, shared between all nodes), which just stores raw documents, and the local, per-node KV store, which stores the query indices. We haven't written a remote KV store as yet, largely because of an assumption that it wouldn't be performant enough for queries to have the indices remote - the query engine makes a lot of small requests of the indexes.
If we could find an implementation that was sufficiently fast, then yes, it would seem sensible to share that work between multiple nodes 🙂
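The split described above can be caricatured in a few lines: one golden document store shared by all nodes, and a local, per-node KV of query indices built from it. This is an illustrative Python sketch, not Crux's actual API; the `DocumentStore`/`Node` names and methods are assumptions.

```python
class DocumentStore:
    """Golden store, shared between all nodes: stores raw documents only."""
    def __init__(self):
        self._docs = {}

    def put(self, doc_id, doc):
        self._docs[doc_id] = doc

    def get(self, doc_id):
        return self._docs.get(doc_id)


class Node:
    """Each node builds its own local KV of query indices from the shared docs."""
    def __init__(self, doc_store):
        self._doc_store = doc_store
        self._indices = {}  # local, per-node: (attr, value) -> set of doc ids

    def index(self, doc_id):
        doc = self._doc_store.get(doc_id)
        for attr, value in doc.items():
            self._indices.setdefault((attr, value), set()).add(doc_id)

    def query(self, attr, value):
        # the query engine makes many small lookups against the local indices
        return self._indices.get((attr, value), set())
```

The point of the sketch: documents are shared, but each `Node` must do its own indexing work, which is what the single-tx-indexer idea above would avoid.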
Oh I see. I mistook the doc store for the kv store somehow. Makes sense now.
What about a layered architecture with a shared slow kv store and a memory cache on nodes. Could that work? (For the pathological use case)
I don't see why not - there's a good few interesting implementation questions with that around what to cache, at what granularity, in what format, and how to invalidate it
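The layered idea above can be sketched as a read-through cache in front of a slow shared store. This is a minimal illustrative sketch in Python, not Crux code; the `SlowSharedStore`/`CachedStore` names and the get/put interface are assumptions, and invalidation is left as a manual operation (one of the open questions mentioned above).

```python
class SlowSharedStore:
    """Stand-in for a slow KV store shared between nodes (e.g. remote storage)."""
    def __init__(self):
        self._data = {}

    def get(self, k):
        return self._data.get(k)

    def put(self, k, v):
        self._data[k] = v


class CachedStore:
    """Per-node layer: serve hits from local memory, fall back to the shared store."""
    def __init__(self, backing):
        self._backing = backing
        self._cache = {}

    def get(self, k):
        if k not in self._cache:
            v = self._backing.get(k)
            if v is not None:
                self._cache[k] = v  # populate the local cache on a miss
            return v
        return self._cache[k]

    def put(self, k, v):
        self._backing.put(k, v)
        self._cache[k] = v  # write-through keeps this node's cache coherent

    def invalidate(self, k):
        # how/when to call this across nodes is exactly the hard question
        self._cache.pop(k, None)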
Does crux support running queries with hypothetical information - stuff that isn't and shouldn't actually be added to the real DB? (Like datascript)
Hey @U08JKUHA9 🙂 not just yet I'm afraid, but it is on our roadmap, and we've recently been refactoring the query engine internals to make this easier - hopefully it's not too far away
Hello. I'm not actually a #crux user, but can't @U08JKUHA9 create a file/in-memory db, install stuff, run queries and remove/delete this db/file?
My guess is that it would work, but the process wouldn't be fast enough to answer queries in real time
I worked on a #datomic app that did a "bulk import" like this:
- create an "empty" in-memory db
- transact the new data into it (takes some minutes)
- query this in-memory db and write the "computed" results into the "main" db
- delete the temporary in-memory db
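Stripped of the Datomic specifics, that workflow can be sketched generically, with plain Python dicts standing in for the dbs; the `bulk_import` function and record shape are purely illustrative.

```python
def bulk_import(raw_records, main_db):
    # 1. create an "empty" in-memory db
    temp_db = {}
    # 2. transact the new data into it
    for rec in raw_records:
        temp_db[rec["id"]] = rec
    # 3. query the temp db and write the "computed" results into the main db
    for rec_id, rec in temp_db.items():
        main_db[rec_id] = {"id": rec_id, "total": sum(rec["values"])}
    # 4. the temporary db is discarded once this function returns
    return main_db
```

The main db only ever sees the computed results, never the raw intermediate data.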
Ah I see. My thing is I would also need to see all data in the main db, so I worry it would take a long time to initialize a temporary db like that unless there is explicit support for it, or some other mechanism like forkdb
@U08JKUHA9 maybe write a kv store that delegates to the memory store and if not found to the main db?
@U054UD60U :thinking_face: cool idea
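The delegating store suggested above could look roughly like this: an in-memory overlay checked first, with reads falling through to the main db, and writes going only to the overlay so hypothetical data never touches the real db. Illustrative Python only; the `OverlayStore` name and interface are assumptions, not Crux's API.

```python
class OverlayStore:
    """KV store that checks an in-memory overlay first, then the main db.

    Writes go only to the overlay, so speculative/hypothetical data never
    reaches the main db and can be discarded wholesale afterwards.
    """
    def __init__(self, main_db):
        self._main = main_db  # read-only from the overlay's point of view
        self._overlay = {}    # hypothetical writes live here

    def get(self, k):
        if k in self._overlay:
            return self._overlay[k]
        return self._main.get(k)

    def put(self, k, v):
        self._overlay[k] = v

    def discard(self):
        # throw away everything hypothetical in one go
        self._overlay.clear()
```

Queries against the overlay see all of the main db's data plus the hypothetical additions, which is the behaviour asked about above.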