This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2018-03-02
Channels
- # aleph (6)
- # beginners (57)
- # boot (1)
- # cider (27)
- # clara (23)
- # cljs-dev (166)
- # clojure (287)
- # clojure-dev (23)
- # clojure-greece (1)
- # clojure-italy (2)
- # clojure-russia (13)
- # clojure-spec (34)
- # clojure-uk (36)
- # clojurescript (68)
- # core-async (63)
- # core-logic (1)
- # cursive (1)
- # data-science (1)
- # datomic (26)
- # duct (1)
- # emacs (10)
- # figwheel (8)
- # fulcro (2)
- # garden (16)
- # graphql (8)
- # hoplon (20)
- # jobs (2)
- # leiningen (10)
- # off-topic (16)
- # onyx (2)
- # portkey (5)
- # quil (1)
- # re-frame (63)
- # reagent (95)
- # reitit (6)
- # remote-jobs (1)
- # ring (6)
- # rum (1)
- # shadow-cljs (76)
- # spacemacs (26)
- # specter (11)
- # sql (7)
- # unrepl (68)
- # vim (2)
- # yada (2)
@marshall Brilliant! We'll look into it.
When Datomic connects to a SQL database via :datasource
, the first datasource used gets cached, and after that all calls are routed to that first datasource. It's fine with a normal JDBC string, but it did not work with :datasource.
Just to check, has anybody open sourced a Datomic exporter to other (SQL) databases? I've asked before, but hoping someone wrote one since then 😛
@marshall If custom txInstants can't come before the first timestamp in the database, should we start every database with an init transaction dated 1900 or so? Sorry if I didn't understand the docs correctly.
@clojurians873 custom txInstants cannot come before the last tx's instant, i.e. they must be strictly increasing
the point being, if you're recreating a db of past transactions, you need to be careful to transact in historical order
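As a sketch of that replay pattern (names here are assumptions: `conn` is a Datomic connection and `events` is a hypothetical seq of `{:at #inst "..." :tx-data [...]}` maps), setting :db/txInstant explicitly while sorting by domain time keeps the instants strictly increasing:

```clojure
(require '[datomic.api :as d])

;; Hypothetical replay loop: sort events by their domain timestamp first,
;; then assert :db/txInstant on the reified transaction entity ("datomic.tx")
;; so each tx carries the historical time. Datomic rejects a txInstant that
;; is not later than the previous transaction's, hence the sort.
(doseq [{:keys [at tx-data]} (sort-by :at events)]
  @(d/transact conn
               (cons {:db/id "datomic.tx" :db/txInstant at}
                     tx-data)))
```

This is a sketch, not a complete importer; error handling and batching are omitted.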
@favila ah, that actually makes sense. Sounds like we should view databases more as temporary artifacts that can be reconstructed from data on a whim, rather than as a single 'common source of truth' database.
but the use case here is constructing easier-to-use derived views of whatever that source of truth is
@favila interesting
for example, you could have a source-of-truth datomic db where tx is time of record and you explicitly model problem domain times
but every once in a while for analysis you can generate another db from that one where the tx time is the problem domain time
so as-of and since views of this db exactly correspond to a reconstructed record of historical events
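A minimal sketch of what that buys you, assuming `conn` is a connection to such a derived database (the date is illustrative): because the txInstants are domain-event times, an as-of view at a domain timestamp is the reconstructed historical state at that moment.

```clojure
(require '[datomic.api :as d])

;; In the derived db, tx time == problem-domain time, so this as-of view
;; corresponds to the state of the domain as of that date, not merely
;; "what the database happened to contain" then.
(def db-then (d/as-of (d/db conn) #inst "2017-06-01"))
```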
@favila interesting idea... i'll ponder that for a bit. We're basically pulling from kafka and other log sources because they're in some cases too large to put in datomic
@favila good idea on the problem domain time...
(sorry for the spam) but the post above had us scared for a bit; your solution sounds like the best approach
in the post he basically says you can't model your problem domain times in datomic, etc
to be fair, I don't think @U06GS6P1N is saying that you can't model your problem domain times in Datomic in this post; he's (quite coherently imho) pointing out that using Datomic's transactor time (`:db/txInstant`) for domain-event times is not a good idea, and that you could/should model domain event times using your own schema.
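A minimal sketch of that "own schema" approach, with a hypothetical attribute name (`:event/occurred-at` is an assumption, not from the post):

```clojure
;; Hypothetical attribute for a domain-event timestamp, kept separate
;; from Datomic's transactor time (:db/txInstant).
(def schema
  [{:db/ident       :event/occurred-at
    :db/valueType   :db.type/instant
    :db/cardinality :db.cardinality/one
    :db/doc         "When the event happened in the problem domain."}])
```

With an attribute like this, the transactor's :db/txInstant remains the time of record, and queries over domain time go through your own attribute.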
Confirmed, and I agree with what @favila said above
Well, the hype when Datomic was first released was that we would finally not need all those extra time-related columns all over our data. But what Datomic actually solves is more specific
well I would say it depends on what these columns were intended for in the first place. The way I see it, Datomic is great for change detection, but not especially helpful for managing versions. People hoped for the latter, but the former is a much more common need