2018-04-25
Channels
- # architecture (4)
- # bangalore-clj (3)
- # beginners (11)
- # chestnut (1)
- # cider (24)
- # cljs-dev (14)
- # clojure (97)
- # clojure-finland (1)
- # clojure-gamedev (19)
- # clojure-italy (11)
- # clojure-nl (31)
- # clojure-norway (1)
- # clojure-uk (52)
- # clojurescript (71)
- # core-async (4)
- # cursive (60)
- # datascript (8)
- # datomic (115)
- # emacs (29)
- # figwheel (11)
- # fulcro (3)
- # garden (1)
- # hoplon (1)
- # lein-figwheel (1)
- # leiningen (7)
- # luminus (13)
- # mount (1)
- # off-topic (51)
- # onyx (31)
- # pedestal (2)
- # portkey (1)
- # re-frame (22)
- # reagent (22)
- # reitit (6)
- # remote-jobs (1)
- # schema (1)
- # shadow-cljs (73)
- # specter (2)
- # sql (1)
- # unrepl (3)
- # vim (11)
- # yada (4)
:repositories {"" {:url ""
:username [:gpg :env/datomic_username]
:password [:gpg :env/datomic_password]}}
then set DATOMIC_USERNAME
and DATOMIC_PASSWORD
in your CircleCI environment variables
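For context, a minimal project.clj sketch of the same setup — the project name, repository name, and URL below are illustrative assumptions (they were left blank in the snippet above); only the credential vectors come from it:
;; project.clj (sketch; repo name and URL are assumed)
(defproject my-app "0.1.0-SNAPSHOT"
  :repositories {"my.datomic.com"
                 {:url      "https://my.datomic.com/repo"
                  ;; Leiningen tries GPG-encrypted credentials first, then falls
                  ;; back to the DATOMIC_USERNAME / DATOMIC_PASSWORD env vars:
                  :username [:gpg :env/datomic_username]
                  :password [:gpg :env/datomic_password]}})
The two environment variables are then set in the CircleCI project settings rather than committed to the repository.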
For those who worry about GDPR: this Gist demonstrates an alternative to Excision for erasing data from Datomic. Hope this helps, feedback welcome. https://gist.github.com/vvvvalvalval/6e1888995fe1a90722818eefae49beaf
count me among those worried
@octo221 I take it you are not happy with this solution?
but I don't understand why cloud doesn't support excision since it's such an important feature
given that on-prem does
Neither do I. I will soon publish an article which should help alleviate the lack of Excision on Cloud.
have you thought about the possibility of using multiple DBs as an alternative too ?
a db per user
No, my approach is rather a complementary mutable KV store, turns out you can get a loooong way with that.
oh wait i just saw another thread on that!
hmm i want datomic only
if possible
@val_waeselynck what’s the biggest database you’ve used this on?
@robert-stuttaford BandSquare's, about 500k txes and 37M datoms, took about 4 hours to complete on dev storage on my local machine (note that this does not mean 4 hours of downtime).
Note that pipelining could theoretically be used to speed things up (as soon as you're confident there won't be errors), but I could not get it to work. Unfortunately, I suspect this is a Datomic concurrency bug, but have not yet worked through a minimal repro.
i think it's important to mention the implications in your gist, @val_waeselynck - that this is a much slower process than excision - similar to replacing an engine in a car, rather than removing a tiny piece while it's driving
i wonder how long it’d take to process our 72,891,554 txes
From my measurements, you probably won't do better than 30k tx/min
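As a quick back-of-the-envelope check of the question above, assuming that 30k tx/min figure holds for a dataset of that size (my arithmetic, not from the thread):
(/ 72891554 30000.)    ;=> ~2430 minutes
(/ 72891554 30000. 60) ;=> ~40.5 hours
i.e. very roughly a day and a half of processing.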
@robert-stuttaford You're right, I added a comment. https://gist.github.com/vvvvalvalval/6e1888995fe1a90722818eefae49beaf#gistcomment-2569689
@U09LZR36F I tried to explain it in the Gist - please tell me if it's not clear?
My intention for new systems is to try to avoid this problem by designing around it - using multiple databases
Sure, on the other hand you may not always get things right upfront 🙂 (I know I haven't) in which case you will probably need a safety net
@U9MKYDN4Q you mean a mutable one for personal data?
Possibly, but not necessarily. Could also be interesting to use multiple datomic databases - maybe even a database per user + one that has all the interconnections. Won’t work in all circumstances though
totally can have multiple databases on a transactor, just like you can have multiple dbs on a mysql server or mongo server
there is peer memory overhead for each database of course
@U0JUM502E no, since in such cases you need to specify explicitly in which db you are matching a particular Datalog clause.
@val_waeselynck Not sure I understand, say you do a query such as
[:find ?e ?like
:in $db1 $db2
:where [?e :user/likes ?like]]
wouldn't you just get a mix of entity id's?
You'd have to write it as
[:find ?e ?like
:in $db1 $db2
:where [$db1 ?e :user/likes ?like]]
I think Datalog simply won't let you do what you suggested
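A sketch of how the corrected multi-database query might be run — conn1 and conn2 are assumed connections, and inputs are bound to the $-names positionally:
(require '[datomic.api :as d])

(d/q '[:find ?e ?like
       :in $db1 $db2
       :where [$db1 ?e :user/likes ?like]]
     (d/db conn1)  ; bound to $db1
     (d/db conn2)) ; bound to $db2
Each data pattern names the database source it matches against, so a single clause never mixes entity ids from the two databases.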
Say, I have a question about rules. Is it possible to have variables that only exist inside of the rule be exported to the calling query? Eg, if I have something like this:
(def rules '[[(tracks ?artist)
[?artist :artist/albums ?album]
[?album :album/tracks ?tracks]]])
...can I then access the ?tracks value outside of the rule, or bind it to another var or something?
Ah, so input to a rule doesn't need to already be bound to something?
'[(tracks ?artist ?track)
[?artist :artist/albums ?album]
[?album :album/tracks ?track]]
Right, that makes sense. I'll mess around with it, thanks!
if you want to require a parameter to be bound (sometimes important for performance), surround the arguments with a vector
'[(tracks [?artist] ?track)
[?artist :artist/albums ?album]
[?album :album/tracks ?track]]
that means this rule can only run "in one direction" from a bound artist to an unbound track
Ah, ok. I think I was confused about what that syntax meant
so the rule name is bad, it's really describing a constraint you want satisfied among all rule parameters, not input-output
but both 'tracks for artists' and 'artists for tracks' are valid names, because the rule expresses both
I see what you mean. artist-tracks
would make sense if I used the vector args to make the artist required, yes?
I was hoping the name expressed the bidirectionality better (i.e. not with a required-bound arg)
Gotcha, yeah
you have to name the constraint the rule itself expresses, not the "output" (because there isn't really any)
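To tie the rule discussion together, a sketch of calling the two-argument rule from a query — db, rules, and artist-id are assumed bindings:
(require '[datomic.api :as d])

(def rules
  '[[(tracks ?artist ?track)
     [?artist :artist/albums ?album]
     [?album :album/tracks ?track]]])

;; % carries the rule set into the query; ?artist arrives bound,
;; ?track is resolved by the rule body:
(d/q '[:find ?track
       :in $ % ?artist
       :where (tracks ?artist ?track)]
     db rules artist-id)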
if you d/delete-database, is there any way to get it back?
in theory you could shut down the transactor and manipulate the proper values in storage to "resurrect" it
Cloud
Can you issue a support ticket to the portal at http://support.cognitect.com
@marshall I haven't done it, I was wondering
what happens
actually I was hoping that delete meant delete
presumably dbs in cloud are stored in S3
and delete-database would delegate deletion to whatever AWS deletion mechanism they have
meaning it's out of datomic's hands right ?
bottom line is delete-database
means the db becomes api-inaccessible, but does not guarantee that all bits that back the db were erased.
on on-prem, there is a separate process that does that, which you run at will. not sure yet what cloud does, but it's probably a similar process
not to be too pedantic about it, but "bits are erased" is itself just a guarantee that whatever storage-level api you have cannot access them anymore. e.g. with an sql storage, you may still need to vacuum to remove the bits from the db's storage; and then you may need to write over the blocks on disk; etc
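A small sketch of that API-level vs storage-level distinction, with a hypothetical on-prem dev-storage URI:
(require '[datomic.api :as d])

;; Makes the database unreachable through the API, but does not by itself
;; erase the underlying bits in storage:
(d/delete-database "datomic:dev://localhost:4334/my-db")

;; On-prem, reclaiming storage for deleted databases is the separate,
;; run-at-will step mentioned above (a shell command from the transactor
;; distribution; see the on-prem docs for the exact URI form):
;;   bin/datomic gc-deleted-dbs <storage-uri>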
that’s one of the things that I suspect is going to make the GDPR stuff so hard to enforce/define/resolve
I thought that if you issued a 'delete' command to an AWS service, then it's Amazon's responsibility to ensure that the deletion is done correctly
I don't know what guarantees they make. At a minimum that is a guarantee that a "read" of that same item will not succeed via the s3/dynamodb/whatever api
and of course they could always be copying everything anyway
you wouldn't be able to know
if you wanted to be sure, could you chase the individual segments and delete them ?