This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2018-04-09
Channels
- # beginners (108)
- # boot (14)
- # cider (8)
- # clara (13)
- # cljs-dev (63)
- # cljsrn (5)
- # clojure (57)
- # clojure-brasil (1)
- # clojure-italy (69)
- # clojure-losangeles (10)
- # clojure-nl (6)
- # clojure-poland (2)
- # clojure-spec (6)
- # clojure-uk (50)
- # clojurescript (116)
- # core-async (1)
- # cursive (9)
- # data-science (8)
- # datascript (4)
- # datomic (43)
- # duct (2)
- # editors (1)
- # fulcro (29)
- # instaparse (7)
- # jobs (6)
- # keechma (3)
- # mount (16)
- # off-topic (61)
- # om (10)
- # onyx (5)
- # parinfer (17)
- # pedestal (2)
- # portkey (5)
- # quil (2)
- # re-frame (84)
- # reagent (9)
- # remote-jobs (2)
- # ring-swagger (2)
- # shadow-cljs (17)
- # slack-help (1)
- # tools-deps (29)
- # vim (23)
I'm starting datomic via this command:
bin/run -m datomic.peer-server -h localhost -p 8998 -a myaccesskey,mysecret -d hello,datomic:
per a tutorial. I think I have to specify which logback.xml
file I'm using. What's the flag for that? I'm looking for something like this:
bin/run -L ./bin/logback.xml -m datomic.peer-server -h localhost -p 8998 -a myaccesskey,mysecret -d hello,datomic:
Context in case I'm going about this the wrong way:
I'm in cider-repl, and I want the logging there to be less noisy; right now it's at least 30kb of logging per Datomic REPL command. I have commented out sections of the logback.xml file in the bin directory of datomic-pro-0.9.5561, but those changes aren't showing up in my cider-repl. And as a test, I changed this line
<pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} %-5level %-10contextName %logger{36} - %msg%n</pattern>
to this
<pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} %-5level %-10contextName %logger{36} - testfoobar - %msg%n</pattern>
and didn't see testfoobar in my cider-repl logging, so I think the datomic run program isn't picking up my logback.xml file. If there's comprehensive documentation for Datomic Pro Starter, I'm happy to read it. I couldn't find it. Thanks everyone!
logback.xml is probably loaded as a resource so you should just put it on your classpath
Thanks for your response! I thought I had a much more tricky problem than I did. I had put logback.xml in my classpath, but it wasn't picking up my modified logback. Turns out, there was already a logback.xml in my project, which was clobbering my modified one. Silly problem!
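For reference, a minimal classpath logback.xml along the lines discussed above might look like this (the WARN level on the datomic logger is an assumption about how much quieting you want; adjust to taste):

```xml
<configuration>
  <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
      <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} %-5level %logger{36} - %msg%n</pattern>
    </encoder>
  </appender>

  <!-- Quiet Datomic's internal logging; only WARN and above get through -->
  <logger name="datomic" level="WARN"/>

  <root level="INFO">
    <appender-ref ref="STDOUT"/>
  </root>
</configuration>
```

Because logback picks up the first logback.xml it finds on the classpath, a stray copy earlier on the classpath (as happened here) silently wins.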
Any way to pull the tx-instant from an entity’s db/id using the pull or pull-many function?
@caleb.macdonaldblack no, neither entity nor pull support access to the ‘t’ in ‘eavt’. you need to use d/q, d/datoms, or d/tx-range. entity and pull start with ‘e’ and give you ways to discover ‘v’ through ‘a’. ‘t’ is never a ‘v’ in this scheme.
Ahh ok cheers
Do I understand correctly that a consequence of this would be that if your schema doesn’t expose synthetic timestamps for things (e.g., if you rely upon :db/txInstant as a proxy for “when did thing happen” instead of explicitly storing a :thing/created-at datom), then you cannot use the Datomic Client API to know when things happened in your db?
no, you just need to query for it, not pull
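A sketch of that query approach: bind the datom's tx position and join it to :db/txInstant. Here :thing/name is a hypothetical attribute, and db is assumed to be a database value from an existing connection:

```clojure
(require '[datomic.api :as d])

;; For each :thing/name datom, find the wall-clock instant
;; of the transaction that asserted it.
(d/q '[:find ?e ?v ?inst
       :where
       [?e :thing/name ?v ?tx]
       [?tx :db/txInstant ?inst]]
     db)
```

The same shape works against the Client API's q; only pull/entity lack access to the tx component.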
but I think it’s also helpful to be clear in your thinking about the difference between “when did thing happen” and “when did I record that thing happened”
ah, I see, thanks for the clarification
it can be a useful thing to conflate the two, but don’t forget that you’re doing so :)
I am very clear in the difference between the two, but I also have inherited a medium-sized schema where the decision to conflate the two things was made long ago. I think it’s a reasonable choice for the data at hand since the events in question don’t have a need for high resolution to wall-clock time, but I like to keep a bead on what all we can and can’t do with Client
(for example we have a lot of logic in DB fns, so I’m keeping my on-prem-on-AWS footprint up-to-date and waiting to see what Cloud will offer to handle them - this restriction would have been another thing I had to know about while doing the Indiana-Jones-sandbag-and-idol dance trying to decide when to migrate to Cloud :D)
well if you haven’t, that’s probably something to talk to @marshall about
It’s on my list. I’m very interested in Datomic Cloud but I have other dragons to slay before I get to the point of looking at upgrading our stack.
Hi, I'm not running Datomic - I am interested in some consistency edge cases around GC and excision. What happens if your db 'value' holding one root pointer sees GC'd or excised nodes? Do you get an exception? Do you just not see the datoms? Let's say I (def dbval (db conn)) and then run a GC job or excise some data; several days later, I seek all datoms in dbval - what happens?
“The reason that garbage is not deleted immediately on creation of a new tree is that not all consumers will immediately adopt the latest tree. Garbage collection should not be run with a current or recent time, as this has the potential to disrupt index values used by long-running processes. Except during initial import, garbage collection (gcStorage) older-than value should be at least a month old.”
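Following the month-old guidance quoted above, a storage GC invocation might be sketched like this (conn is assumed to be an existing peer connection):

```clojure
(require '[datomic.api :as d])
(import '(java.util Calendar))

;; Collect only garbage segments older than one month, so that
;; long-lived db values still referencing older trees are not disrupted.
(let [one-month-ago (doto (Calendar/getInstance)
                      (.add Calendar/MONTH -1))]
  (d/gc-storage conn (.getTime one-month-ago)))
```

A db value captured before that cutoff could otherwise hit missing segments when it lazily reads index nodes that GC has deleted.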
Hey all. Inside the tx-data of a transaction I have a list of datoms, e.g. #datom[17592220704601 134 17592186149960 13194174193496 true]. I realise I can retrieve the eav as well as the tx. My understanding is that the true denotes whether it was actually transacted. My question is, how do I get at that value?
covered here: https://docs.datomic.com/on-prem/clojure/index.html#datomic.api/history
more specifically here: https://docs.datomic.com/on-prem/javadoc/index.html
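Concretely, that fifth datom position is the added? flag (true for an assertion, false for a retraction), and a datom pattern against a history db can bind it. A sketch, assuming db is an existing database value and :thing/name is a hypothetical attribute:

```clojure
(require '[datomic.api :as d])

;; Against (d/history db) the fifth position of the datom pattern
;; binds added?: true = asserted, false = retracted.
(d/q '[:find ?e ?v ?tx ?added
       :where [?e :thing/name ?v ?tx ?added]]
     (d/history db))

;; On a Datom object itself the flag is also available directly:
;; (:added some-datom) ;; true or false
```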
Does anyone know of an easy way to do something like d/touch but include all reverse references? Context: I find myself typing (inspect-tree (d/touch (d/entity db 1234))) and would like to be able to traverse reverse references instead of having to look them up.
@bmaddy The set of reverse references from any entity is an open set (since any entity can have any attribute). I’d probably use the Datoms API and the VAET index to find what you’re looking for
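A sketch of that Datoms/VAET approach (db and eid assumed to be in scope): the VAET index is keyed by value, so seeking on the entity id as a value yields every reference pointing at it.

```clojure
(require '[datomic.api :as d])

;; Every datom whose value is eid, i.e. every reverse reference:
;; each result's :e is a referring entity and :a the referring attribute.
(seq (d/datoms db :vaet eid))
```

This enumerates the open set of inbound references without needing to know the referring attributes in advance.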
I have designed a cross-region failover capability like so:
- backups are taken every 10 minutes automatically to an S3 bucket in us-east-1
- those backups are copied to a different bucket in us-east-2
by cross-region-replication
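The per-10-minute backup step above would be an invocation along these lines (the table, db name, and bucket names are assumed placeholders, not the poster's actual values):

```shell
# Back up an on-prem db on DynamoDB to S3; CRR then mirrors
# the bucket contents to the us-east-2 bucket.
bin/datomic backup-db \
  datomic:ddb://us-east-1/my-table/my-db \
  s3://my-backup-bucket-us-east-1/my-db
```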
Now I am trying to exercise it:
- create new, empty DynamoDB table in us-east-2
- bin/datomic restore-db <the us-east-2 bucket> <the new, empty table> (<optionally, a t-value from bin/datomic list-backups)
I see the restore fail like this:
bin/datomic restore-db datomic: the-t-value
Copied 0 segments, skipped 0 segments.
Copied 0 segments, skipped 0 segments.
java.util.concurrent.ExecutionException: java.lang.Exception: Key not found: 583bfcc4-3b77-4dec-b0ba-2c0bda16dbed
at java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.util.concurrent.FutureTask.get(FutureTask.java:192)
at clojure.core$deref_future.invokeStatic(core.clj:2292)
at clojure.core$future_call$reify__8097.deref(core.clj:6894)
at clojure.core$deref.invokeStatic(core.clj:2312)
at clojure.core$deref.invoke(core.clj:2298)
at datomic.backup_cli$status_message_loop.invokeStatic(backup_cli.clj:24)
at datomic.backup_cli$status_message_loop.invoke(backup_cli.clj:18)
at datomic.backup_cli$restore.invokeStatic(backup_cli.clj:53)
at datomic.backup_cli$restore.invoke(backup_cli.clj:43)
restores from the backups in us-east-1 appear to work okay (I haven’t let one run to completion yet but they do at least begin to copy segments)
Does this likely mean that all my backups in Ohio are corrupted somehow? Is there something about how backup-db puts data into S3 that would not be amenable to CRR?
to be clear, bin/datomic list-backups shows the same set of t values available in both the us-east-1 and us-east-2 buckets, but restoring from the us-east-2 bucket for any of the t values I’ve tried leads to the above error (and the key that is not found is the same each time)