This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2016-09-01
Channels
- # admin-announcements (1)
- # aws (1)
- # beginners (14)
- # boot (19)
- # cljs-dev (10)
- # cljsrn (2)
- # clojure (64)
- # clojure-android (4)
- # clojure-dev (5)
- # clojure-greece (7)
- # clojure-italy (10)
- # clojure-russia (42)
- # clojure-spec (117)
- # clojure-uk (78)
- # clojurescript (160)
- # cloverage (1)
- # conf-proposals (1)
- # cursive (8)
- # datomic (93)
- # editors (8)
- # editors-rus (5)
- # figwheel (1)
- # flambo (14)
- # hoplon (95)
- # jobs (2)
- # jobs-rus (1)
- # lambdaisland (4)
- # lein-figwheel (6)
- # leiningen (3)
- # om (106)
- # onyx (33)
- # planck (6)
- # proton (3)
- # protorepl (2)
- # random (2)
- # re-frame (9)
- # reagent (5)
- # ring (1)
- # untangled (61)
- # yada (50)
@marshall No, it was not a restore of a db. I created a database from scratch (using datomic-pro 0.9.5372), then I added the schema-definitions, and then the data. Everything fresh from scratch.
@mrmcc3 you had a nifty aws cli command that listed the datomic amis -- i can't find it in the history here -- what was it, again, please? 🙂
@robert-stuttaford aws ec2 describe-images --owners 754685078599
maybe that?
that's the one! thank you!
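As a sketch of how you might narrow that AMI listing down: the owner id comes from the command above, but the `--query` expression (sorting by creation date to surface the newest images) is an assumed refinement, not something from this conversation. The AWS call needs credentials, so it is shown as a comment; the runnable part below just demonstrates the "pick the newest by date" step on sample lines.

```shell
# Real command (needs AWS credentials); the JMESPath --query sorts the
# Datomic AMIs by creation date and keeps the newest five. Owner id is
# from the message above; the query expression is an assumption.
# aws ec2 describe-images --owners 754685078599 \
#   --query 'sort_by(Images, &CreationDate)[-5:].[ImageId, Name, CreationDate]' \
#   --output table

# Offline stand-in: given date-prefixed lines, sort and take the newest.
printf '2016-07-01 ami-952c6a94\n2016-08-30 ami-bf2f69be\n' | sort | tail -n 1 | awk '{print $2}'
```

The same sort-then-take-last idea is what the `sort_by(...)[-5:]` query does server-side, which avoids paging through the full image list by hand.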
karol.adamiec are you the person who had this issue?
user-data: pid is 2173
user-data: ./startup.sh: line 26: kill: (2173) - No such process
/dev/fd/11: line 1: /sbin/plymouthd: No such file or directory
initctl: Event failed
did you overcome it?
yes 🙂
my current theory is my ami is too old
99% it was malformed license key
oh. bugger. yes. that's quite possible
seeing as i've just replaced it
would expect a nice WRONG licence in there though 🙂
thank you, karol
no prb
giving back 🙂
let us know if it was license when done and dusted
how did you know which among the AMIs returned by that command to use, @karol.adamiec ?
i did not use that in the end. i picked amis from CloudFormation template json
ok, cool
"AWSRegionArch2AMI":
{"ap-northeast-1":{"64p":"ami-952c6a94", "64h":"ami-bf2f69be"},
"us-west-1":{"64p":"ami-3a9fa47f", "64h":"ami-789fa43d"},
"ap-southeast-1":{"64p":"ami-ecfaa8be", "64h":"ami-92faa8c0"},
"us-west-2":{"64p":"ami-1b13652b", "64h":"ami-f51264c5"},
"eu-central-1":{"64p":"ami-e0a4a9fd", "64h":"ami-e2a4a9ff"},
"us-east-1":{"64p":"ami-34ae4c5c", "64h":"ami-82a94bea"},
"eu-west-1":{"64p":"ami-6d67a11a", "64h":"ami-a566a0d2"},
"ap-southeast-2":{"64p":"ami-2d41da17", "64h":"ami-c942d9f3"},
"sa-east-1":{"64p":"ami-df238ec2", "64h":"ami-ad238eb0"}}},
recent run on newest datomic from yesterday
thanks -- the one we're using is present there, so we're good. just gotta get this license key right
figuring out how to include it in a CFN Fn::Join
is fun
yeah, i had to figure out how to include it in terraform. ;/
fun as hell
how's the terraform going?
it's too late for us to switch now, but i'm planning to revise things again with that
managed to put up datomic in dev env with a press of a button so i would say nice 🙂
took some time though
module from @mrmcc3 is gold, just needs a bit of ironing… i have put that into my future tasks
the thing for me is making the infrastructure code approachable for others. right now our CFN codebase is hella scary
peeking at datomic cftemplate is definitely scary 🙂
terraform looks a lot simpler to reason about, even if it ends up doing the same stuff
yeah, for me it is. i do not even want to look at the aws console. it is easier to trace sec groups and put together the complete flow in your mind if it is in terraform
a recent change to the module means terraform looks up the correct AMI for you. You can verify it with terraform plan
before you apply changes. pretty slick
What's needed to connect a Java program to an existing database, besides adding the "datomic-free" dependency and calling Peer.connect with the right URI? Doing so throws an exception for me about an internal Datomic class not being found or something.
Here is our java Mbrainz example. Maybe it will serve as a comparison to catch anything you might have missed: https://github.com/Datomic/mbrainz-sample/blob/master/examples/java/datomic/samples/Mbrainz.java
@jaret, i'm having the same issue that karol had earlier, where the transactor instance dies during initialisation right after extracting the datomic runtime. i've made very sure that my license key is represented correctly; i used ensure-* to generate cf via datomic and my license string is identical. how do i diagnose further, given that i can't ssh in and dig around?
this is in our UAT env. not a prod problem
user-data: inflating: datomic-pro-0.9.5394/datomic-pro-0.9.5394.jar
user-data: inflating: datomic-pro-0.9.5394/README-CONSOLE.md
user-data: pid is 2180
user-data: ./startup.sh: line 26: kill: (2180) - No such process
/dev/fd/11: line 1: /sbin/plymouthd: No such file or directory
initctl: Event failed
Stopping atd: [ OK ]
tested the key locally, it works fine
maybe there's a variant of s3://$DATOMIC_DEPLOY_BUCKET/$DATOMIC_VERSION/startup.sh
that logs a lot more?
@robert-stuttaford I am asking around to see what we can do. Usually testing locally is my cure-all
thank you
i'm pretty sure i'm doing something dumb, but i'm at a loss as to what it could be
have you looked at generated user data script to verify the licence?
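One way to do that check without ssh is to pull the instance's user data back out of EC2 and grep it. The instance id below is a placeholder and the `aws` call is an assumption about how you would fetch it; the decode-and-grep step itself is demonstrated on a sample base64 payload so it can run anywhere.

```shell
# Assumed fetch (placeholder instance id, needs AWS credentials):
# aws ec2 describe-instance-attribute --instance-id i-0123456789abcdef0 \
#   --attribute userData --query 'UserData.Value' --output text \
#   | base64 --decode | grep -i 'license'

# The decode-and-grep step, shown on a sample payload (not a real key):
sample=$(printf 'license-key=XXXX-not-a-real-key\n' | base64)
printf '%s' "$sample" | base64 --decode | grep -c 'license-key'
```

Comparing the decoded user data against the license string you fed into the template is a quick way to catch quoting or join mistakes introduced by CFN `Fn::Join`.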
I'm getting two simultaneous exceptions: ExceptionInInitializerError and org.fressian.handlers.IWriteHandlerLookup
It's getting the second one when trying to load a class via URLClassLoader.findClass.
Do other deps possibly conflict in the project? You can inspect with mvn dependency:tree -Dverbose
@jaret could i show you the UserData my CFN produces? perhaps you spot something?
@teng Can you provide a repro case to me (possibly offline - email me at <mailto:[email protected]|[email protected]> )? I have tried creating multiple databases with large imported data sets and I don't see that behavior if I'm using 0.9.5372
Personally, we eventually decided against using the AMIs, especially because of the lack of ssh access for diagnosing install problems. It's not too difficult to just fire up a plain ubuntu ec2 instance and install the transactor files on there, and you can still use the ensure-transactor bits to set up the AWS permissions
What I would really like is a docker container with the transactor in it, but that doesn't appear to exist
@timgilbert : our friends at Pointslope do maintain this: https://github.com/pointslope/docker-datomic-example
blog post about ^ : https://pointslope.com/blog/datomic-pro-starter-edition-in-15-minutes-with-docker/
Hmm, will look into that, thanks @marshall.
Is it guaranteed that all the tx ids in a transaction report are the same? e.g.
(= 1
   (count (reduce (fn [txs datom]
                    (conj txs (.tx datom)))
                  #{}
                  (:tx-data tx-report))))
Is the code I posted always true, given that tx-report is a transaction report returned from transact?
For example, here is a tx-report:
{:status :ready,
:val {:db-before datomic.db.Db
:db-after datomic.db.Db
:tx-data [#datom[17592186045421 67 17592186045418 13194139534313 true]
#datom[17592186045421 67 17592186045419 13194139534313 true]
#datom[17592186045421 67 17592186045420 13194139534313 true]],
:tempids {-9223350046625933567 17592186045418,
-9223350046625933568 17592186045419,
-9223350046625933569 17592186045420,
-9223350046625933566 17592186045421}}}
Is it guaranteed that the tx value for each datom in :tx-data is the same?
Yes, it is always true. tx-report is recorded as :tx-data in EAVT form and the tx values returned from transact should be the same. I will double check with @marshall but that is my understanding.
@jaret About the .keySet: I was referring to calling .keySet on an entity returning a set of strings instead of keywords. For example:
(.keySet (d/entity db-init 17592186045419))
=> #{":foo/bar"}
Yes @kenny , @jaret is correct. All the datoms in tx-data returned by transact are in the same transaction, and therefore all have the same tx value
I need to (very) occasionally do a sweep and update "large" amounts of data (committing ~14k datoms). Should I break that update up into chunks of a certain size?
@codonnell: yes, I wouldn't suggest 14k datoms in a single transaction. I'd recommend keeping transactions to the hundreds of datoms. Low thousands would be ok, but not ideal
@marshall: alright, thanks. I'm guessing one at a time would also not be ideal? I can play with batch size to see what works best.
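To put rough numbers on the batching advice above: with an assumed chunk size of 500 datoms (just one point inside the "low hundreds to low thousands" range mentioned, not a recommendation from this thread), the 14k-datom sweep becomes a few dozen transactions.

```shell
# 14,000 datoms at an assumed 500 datoms per transaction:
echo $(( 14000 / 500 ))   # number of transactions in the sweep; prints 28
```

One-at-a-time (14,000 transactions) maximizes per-transaction overhead, while one giant transaction stresses the transactor; chunking lands in between, which is why tuning the batch size empirically makes sense.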