This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2018-01-31
Channels
- # aleph (9)
- # bangalore-clj (1)
- # beginners (115)
- # cider (16)
- # clara (20)
- # cljs-dev (47)
- # cljsrn (50)
- # clojure (70)
- # clojure-dusseldorf (2)
- # clojure-italy (16)
- # clojure-sanfrancisco (1)
- # clojure-spec (9)
- # clojure-uk (37)
- # clojurescript (132)
- # cursive (21)
- # datomic (36)
- # dirac (53)
- # fulcro (34)
- # graphql (6)
- # hoplon (96)
- # jobs (2)
- # juxt (2)
- # keechma (2)
- # leiningen (5)
- # off-topic (3)
- # om (2)
- # om-next (3)
- # parinfer (3)
- # re-frame (17)
- # remote-jobs (1)
- # shadow-cljs (57)
- # specter (12)
- # sql (43)
- # unrepl (11)
- # yada (5)
@stuarthalloway / @marshall. We’re about ready to remove CSS/HTML from the Database (due to missing blob support), but I assume I’ll have to nuke the history of these entities in order to reduce the memory load. How do I go about that?
@laujensen how old is your db? you may have a better time of it if you build a new db by replaying the transaction log and eliding the data you want removed from the transactions. @stuartsierra calls this ‘decanting’
the only way to actually remove data is to excise it, which has very poor performance semantics. it’s not meant for large data removals
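For reference, excision is requested by transacting a :db/excise assertion. A minimal sketch, where conn, page-eid, and the attribute idents are placeholders, not names from this conversation:

```clojure
(require '[datomic.api :as d])

;; Excision permanently removes the matching datoms, including their history.
;; `conn`, `page-eid`, :page/html and :page/css are illustrative placeholders.
@(d/transact conn [{:db/excise      page-eid
                    :db.excise/attrs [:page/html :page/css]}])
```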
Yikes. Hope I won't have to set :no-history, dump the db and re-import it, then re-add history
do you care about the history of this data, or only the present state, @laujensen? because you could just build a new db with the present state
you can always keep the original db around to answer history questions
I do care about the history, but the large blobs we’ve saved are killing performance, so if I have to make a choice, I choose speed
i guess what i’m asking is, does your app use historical data, or is it only accessed ad-hoc manually
if only ad-hoc, i’d say start afresh and archive the current one 🙂
We use it for many things, among others you can pull up past versions of any webpage on your site
yeah, then decanting is your best long-term option
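A rough sketch of decanting as described above: replay the source transaction log into a fresh database, dropping the blob attributes along the way. The names source-uri, target-uri, and blob-attr? (a predicate on attribute ids) are placeholders; a real decant also has to remap entity ids to tempids and keep schema transactions intact, which this sketch omits:

```clojure
(require '[datomic.api :as d])

;; Minimal decanting sketch: replay the log, eliding unwanted datoms.
;; Omitted for brevity: tempid remapping, schema-tx handling, batching.
(defn decant! [source-uri target-uri blob-attr?]
  (let [source (d/connect source-uri)
        _      (d/create-database target-uri)
        target (d/connect target-uri)]
    (doseq [{:keys [data]} (d/tx-range (d/log source) nil nil)]
      (let [tx-data (for [datom data
                          :when (not (blob-attr? (:a datom)))]
                      [(if (:added datom) :db/add :db/retract)
                       (:e datom) (:a datom) (:v datom)])]
        (when (seq tx-data)
          @(d/transact target tx-data))))))
```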
@val_waeselynck sync-schema would fail on a forked connection, no?
@U09QBCNBY I'm assuming you're talking about Datomock? Why would it fail?
oh I was just poking in the dark really. I was testing a migration against a forked connection that added unique, which of course requires adding an index first and sync-schema’ing. I had an odd failure, but it must be something on my end
In Datomock, sync is supported and easy to implement (since the coordination is only local). From what I understand, sync-schema does less work than sync, and is therefore supported as well. However, the burden of not forking too early is on you, so maybe you'll want to sync-schema then datomock.api/fork.
@U09QBCNBY I'm not sure what you're doing so it's hard to answer. If you're dry-running a migration on a connection that was forked via Datomock, there's no syncing to do - just wait for the transaction to return and you're good. If you've run a migration on an actual Transactor and need to see the effects of that in a forked Connection, then you can sync-schema the real connection then fork.
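The "sync-schema the real connection, then fork" pattern above might look like this. A sketch assuming Datomock's datomock.core/fork-conn; real-conn is a connection to the actual transactor:

```clojure
(require '[datomic.api :as d]
         '[datomock.core :as dm])

;; Wait until real-conn reflects the schema changes, then fork a
;; local in-memory connection for dry-running migrations against.
(defn migrated-fork [real-conn]
  (let [t (d/basis-t (d/db real-conn))]
    @(d/sync-schema real-conn t)
    (dm/fork-conn real-conn)))
```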
ah. hmm. so this is a dry run migration that adds unique to an existing attribute. That requires first transacting :db/index true for the existing attr, then calling sync-schema, and then asserting unique.
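The two-step migration just described could be sketched as follows, using a hypothetical attribute :user/email (index first, sync-schema, then assert uniqueness):

```clojure
(require '[datomic.api :as d])

;; Step 1: add the AVET index required by uniqueness.
(let [{:keys [db-after]} @(d/transact conn [{:db/id    :user/email
                                             :db/index true}])]
  ;; Step 2: wait until the schema change is visible on this connection.
  @(d/sync-schema conn (d/basis-t db-after))
  ;; Step 3: now the uniqueness constraint can be asserted.
  @(d/transact conn [{:db/id     :user/email
                      :db/unique :db.unique/identity}]))
```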
@U09QBCNBY if you want to master the principles behind Datomock, here's a crash course 🙂 Imagine Datomic connections are Clojure agents holding db values, and (defn fork-conn [conn] (agent @conn)). That's it!
@U09QBCNBY but do keep me posted if my assumptions seem incorrect, maybe Datomic does some dark magic in this case that I don't know about. Datomock basically gives you everything that datomic.api/with provides.
@laujensen one other option to consider: store the blobs in Datomic with a content-address prefix in the value itself
you would have to write application logic everywhere such values were used to strip/add the prefixes
pretty gross, but in the spirit of covering all options…
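The prefix idea could be sketched with a pair of illustrative helpers (not Datomic API, just plain Clojure): prepend a content hash to the value on write, strip it off on read.

```clojure
(import 'java.security.MessageDigest)

;; Prefix stored values with a SHA-1 content address, separated by "|".
(defn sha1-hex [^String s]
  (->> (.digest (MessageDigest/getInstance "SHA-1") (.getBytes s "UTF-8"))
       (map #(format "%02x" (bit-and % 0xff)))
       (apply str)))

(defn add-prefix [^String html]
  (str (sha1-hex html) "|" html))

(defn strip-prefix [^String stored]
  (subs stored (inc (.indexOf stored "|"))))

;; (strip-prefix (add-prefix "<p>hi</p>")) returns "<p>hi</p>"
```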
What’s the natural way to truncate a Datomic database? I know that I could retract every entity, but that seems like a lot of work. So far I’ve been deleting the database and creating another with the same name. That definitely “truncates” the database, but it leaves any currently-connected clients in a bad state. Truncation is useful because I want to migrate data into a clean database each time.
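The delete-and-recreate "truncation" described above could be sketched like so; note that existing connections become invalid and clients must call d/connect again afterwards:

```clojure
(require '[datomic.api :as d])

;; "Truncate" by recreating the database at the same URI.
;; `uri` is a placeholder; currently-connected clients will error
;; and need to reconnect.
(defn truncate! [uri]
  (d/delete-database uri)
  (d/create-database uri)
  (d/connect uri))
```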