This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2019-02-14
Channels
- # announcements (1)
- # beginners (206)
- # calva (2)
- # cider (64)
- # cljs-dev (12)
- # clojars (2)
- # clojure (177)
- # clojure-europe (2)
- # clojure-finland (1)
- # clojure-italy (2)
- # clojure-losangeles (5)
- # clojure-nl (7)
- # clojure-russia (69)
- # clojure-spec (41)
- # clojure-uk (92)
- # clojurescript (60)
- # core-async (16)
- # cursive (48)
- # data-science (6)
- # datomic (73)
- # duct (5)
- # events (2)
- # figwheel-main (5)
- # fulcro (29)
- # hoplon (1)
- # off-topic (52)
- # pathom (11)
- # reagent (4)
- # reitit (5)
- # remote-jobs (1)
- # rum (7)
- # shadow-cljs (58)
- # slack-help (10)
- # spacemacs (3)
- # testing (3)
- # tools-deps (5)
Question regarding Transaction
- I have 3 attributes, say x, y, and z, where z = (+ x y), and z will be evaluated and committed only once it has the values of both x and y. Now, at t=0, my db has just x = 3. At t=1, y=5 gets into the system, so z gets evaluated and becomes z=8, and now both y and z are committed in a read & write transaction (composed together). At the same time (at t=1), some other thread changes x to 4. But since Datomic doesn't have any read transaction that can be composed with the write transaction, how can I make sure that my read & write transaction retries? Note that I am not committing x in my write transaction.
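The usual Datomic answer to this read-then-write race (not spelled out in this thread) is a transaction function: it runs on the transactor against the db value current at commit time, so the read of x and y and the write of z cannot be interleaved with another transaction. A minimal sketch, where the attribute idents :demo/x, :demo/y, :demo/z and the entity id are hypothetical:

```clojure
(require '[datomic.api :as d])

;; A transaction function is stored in the db as a :db/fn value and
;; executed by the transactor, making the read + write atomic.
(def derive-z
  {:db/ident :demo/derive-z
   :db/fn
   (d/function
    '{:lang     :clojure
      :requires [[datomic.api :as d]]
      :params   [db e]
      ;; Reads the current x and y; emits the z assertion only when
      ;; both are present, recomputed from commit-time values.
      :code     (let [{x :demo/x y :demo/y} (d/pull db [:demo/x :demo/y] e)]
                  (when (and x y)
                    [[:db/add e :demo/z (+ x y)]]))})})

;; After transacting derive-z itself, invoke it inside a transaction:
;; (d/transact conn [[:demo/derive-z entity-id]])
```

Because the function re-reads x at commit time, the scenario above would compute z from x=4, with no client-side retry needed.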
Hi, I have a syntax/feature question. Is there an equivalent in Datomic d/q for a T-SQL IN clause?
[(ground [1 2 3]) [?match ...]] [?e ?a ?match] is better when ?a is indexed and ?match is selective
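Spelled out as a complete query, the suggested pattern looks like this (the db value and the attribute :demo/col are hypothetical stand-ins):

```clojure
(require '[datomic.api :as d])

;; Datalog equivalent of SQL's "WHERE col IN (1, 2, 3)":
;; ground binds ?match to each element of the collection, and the
;; [?e ?a ?match] data pattern can then use the AVET index when the
;; attribute is indexed, probing only the three selective values.
(d/q '[:find ?e
       :in $ ?a
       :where
       [(ground [1 2 3]) [?match ...]]
       [?e ?a ?match]]
     db          ;; a db value from (d/db conn)
     :demo/col)  ;; hypothetical indexed attribute
```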
Thanks Francis, will give it a look
e.g. :where [?e :col in [1,2,3]]
e.g. I don't know that every possible type of mysql backup will give me a consistent snapshot
@favila I see, so how do you deal with the "real time backup" issue? I can make backups every night but what about the data that is being inserted the next day before the backup?
yeah, by storage I mean something like aws dynamo and postgresql point-in-time recovery features
if you want "if this machine catches on fire I don't lose data" that's clustering/replication territory not backups
It looks like postgresql point in time recovery will make something safe to restore directly to postgresql
but usually I think of a backup as something to either stand up the same data in another storage system or to correct some catastrophic developer error
e.g. I deliberately do not want to restore to an up-to-the-second backup because an hour ago someone fatfingered a DELETE
we keep both kinds of backups, mysql (done automatically by google's hosted mysql backups) and datomic
this is mostly an "I don't want to lose today's work because I only run backups at night" issue
datomic restore of an empty storage will likely be much slower than that storage's native backup method
in the era of hosted, clustered, SLAed database-as-a-service I don't really see the point of the storage's own backup for something like datomic
but the datomic backup is "your" copy, you know you can use it even if your storage host goes up in flames
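For reference, the Datomic-side backup discussed above is driven by the bin/datomic CLI that ships with on-prem Datomic; a sketch with example URIs (the database names, hosts, and paths here are hypothetical):

```shell
# Back up a database to a file-based backup URI. backup-db is
# incremental: re-running it against the same target copies only
# segments written since the last run, so it can run far more often
# than nightly.
bin/datomic backup-db \
  "datomic:sql://mydb?jdbc:mysql://db-host:3306/datomic" \
  file:/backups/mydb

# Restore that backup into a (possibly empty) storage later:
bin/datomic restore-db \
  file:/backups/mydb \
  "datomic:sql://mydb?jdbc:mysql://new-host:3306/datomic"
```

This backup is storage-independent, which is what makes it "your" copy: the file-based backup can be restored into a different storage than the one it came from.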