#datomic
2016-09-01
tengstrand09:09:10

@marshall No, it was not a restore of a db. I created a database from scratch (using datomic-pro 0.9.5372), then I added the schema-definitions, and then the data. Everything fresh from scratch.

robert-stuttaford12:09:57

@mrmcc3 you had a nifty aws cli command that listed the datomic amis -- i can't find it in the history here -- what was it, again, please? 🙂

karol.adamiec12:09:21

@robert-stuttaford aws ec2 describe-images --owners 754685078599

robert-stuttaford12:09:23

that's the one! thank you!

robert-stuttaford12:09:54

karol.adamiec are you the person who had this issue?

robert-stuttaford12:09:56

user-data: pid is  2173 
user-data: ./startup.sh: line 26: kill: (2173) - No such process
/dev/fd/11: line 1: /sbin/plymouthd: No such file or directory
initctl: Event failed

robert-stuttaford12:09:08

did you overcome it?

robert-stuttaford12:09:16

my current theory is my ami is too old

karol.adamiec12:09:20

99% it was a malformed license key

robert-stuttaford12:09:30

oh. bugger. yes. that's quite possible

robert-stuttaford12:09:40

seeing as i've just replaced it

karol.adamiec12:09:44

would expect a nice WRONG licence in there though 🙂

karol.adamiec12:09:31

giving back 😄

karol.adamiec12:09:58

let us know if it was the license, once it's all done and dusted

robert-stuttaford12:09:51

how did you know which among the AMIs returned by that command to use, @karol.adamiec ?

karol.adamiec12:09:30

i did not use that in the end. i picked the AMIs from the CloudFormation template json

karol.adamiec12:09:30

"AWSRegionArch2AMI":
  {"ap-northeast-1":{"64p":"ami-952c6a94", "64h":"ami-bf2f69be"},
   "us-west-1":{"64p":"ami-3a9fa47f", "64h":"ami-789fa43d"},
   "ap-southeast-1":{"64p":"ami-ecfaa8be", "64h":"ami-92faa8c0"},
   "us-west-2":{"64p":"ami-1b13652b", "64h":"ami-f51264c5"},
   "eu-central-1":{"64p":"ami-e0a4a9fd", "64h":"ami-e2a4a9ff"},
   "us-east-1":{"64p":"ami-34ae4c5c", "64h":"ami-82a94bea"},
   "eu-west-1":{"64p":"ami-6d67a11a", "64h":"ami-a566a0d2"},
   "ap-southeast-2":{"64p":"ami-2d41da17", "64h":"ami-c942d9f3"},
   "sa-east-1":{"64p":"ami-df238ec2", "64h":"ami-ad238eb0"}}},

karol.adamiec12:09:51

a recent run on the newest datomic, from yesterday

robert-stuttaford12:09:44

thanks -- the one we're using is present there, so we're good. just gotta get this license key right

robert-stuttaford12:09:59

figuring out how to include it in a CFN Fn::Join is fun

karol.adamiec12:09:18

yeah, i had to figure out how to include it in terraform. ;/

robert-stuttaford12:09:11

how's the terraform going?

robert-stuttaford12:09:25

it's too late for us to switch now, but i'm planning to revise things again with that

karol.adamiec12:09:14

managed to put up datomic in a dev env with the press of a button, so i would say nice 🙂

karol.adamiec12:09:21

took some time though

karol.adamiec12:09:22

the module from @mrmcc3 is gold, it just needs a bit of ironing… i have put that into my future tasks

robert-stuttaford12:09:44

the thing for me is making the infrastructure code approachable for others. right now our CFN codebase is hella scary

karol.adamiec13:09:06

peeking at the datomic cf template is definitely scary 🙂

robert-stuttaford13:09:14

terraform looks a lot simpler to reason about, even if it ends up doing the same stuff

karol.adamiec13:09:33

yeah, for me it goes even further: i do not want to look at the aws console at all. it is easier to trace sec groups and put the complete flow together in your mind if it is in terraform

mrmcc314:09:30

a recent change to the module means terraform looks up the correct AMI for you. You can verify it with terraform plan before you apply changes. pretty slick

sdegutis14:09:13

What's needed to connect a Java program to an existing database, besides adding the "datomic-free" dependency and calling Peer.connect with the right URI? Doing so throws an exception for me about an internal Datomic class not being found or something.

jaret15:09:40

@sdegutis did you "import datomic.Peer"?

jaret15:09:42

Here is our java Mbrainz example. Maybe it will serve as a comparison to catch anything you might have missed: https://github.com/Datomic/mbrainz-sample/blob/master/examples/java/datomic/samples/Mbrainz.java
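For reference, the Clojure equivalent of Peer.connect is d/connect. A minimal sketch, assuming a local datomic-free transactor and a hypothetical database named "mydb":

(require '[datomic.api :as d])

;; the free protocol, host, and db name here are assumptions -- substitute your own URI
(def conn (d/connect "datomic:free://localhost:4334/mydb"))

;; with a connection in hand, db and transact work as usual
(def db (d/db conn))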

robert-stuttaford15:09:13

@jaret, i'm having the same issue that karol had earlier, where the transactor instance dies during initialisation right after extracting the datomic runtime. i've made very sure that my license key is represented correctly; i used ensure-* to generate cf via datomic and my license string is identical. how do i diagnose further, given that i can't ssh in and dig around?

robert-stuttaford15:09:42

this is in our UAT env. not a prod problem

robert-stuttaford15:09:02

user-data:   inflating: datomic-pro-0.9.5394/datomic-pro-0.9.5394.jar  
user-data:   inflating: datomic-pro-0.9.5394/README-CONSOLE.md  
user-data: pid is  2180 
user-data: ./startup.sh: line 26: kill: (2180) - No such process
/dev/fd/11: line 1: /sbin/plymouthd: No such file or directory
initctl: Event failed
Stopping atd: [  OK  ]

robert-stuttaford15:09:03

tested the key locally, it works fine

robert-stuttaford15:09:04

maybe there's a variant of s3://$DATOMIC_DEPLOY_BUCKET/$DATOMIC_VERSION/startup.sh that logs a lot more?

jaret15:09:44

@robert-stuttaford I am asking around to see what we can do. Usually testing locally is my cure-all

robert-stuttaford15:09:21

i'm pretty sure i'm doing something dumb, but i'm at a loss as to what it could be

karol.adamiec15:09:01

have you looked at the generated user data script to verify the licence?

sdegutis15:09:24

@jaret yes, did not help

sdegutis15:09:35

I'm getting two simultaneous exceptions: ExceptionInInitializerError and org.fressian.handlers.IWriteHandlerLookup

sdegutis15:09:11

It's getting the second one when trying to load a class via URLClassLoader.findClass.

marshall15:09:19

@sdegutis: what versions of Datomic and of Clojure?

sdegutis15:09:34

Clojure 1.8, Datomic-free 0.9.5372

marshall15:09:15

Other deps possibly conflict in the project? You can inspect with mvn dependency:tree -Dverbose

sdegutis15:09:33

That could be it.

robert-stuttaford15:09:41

@jaret could i show you the UserData my CFN produces? perhaps you spot something?

sdegutis15:09:39

Fwiw it was the default Jooby project. I will try it in a fresh Java project now.

sdegutis15:09:09

Great, it works in a fresh Maven project.

sdegutis15:09:17

Turns out Jooby just has some sort of conflicting dependency.

sdegutis15:09:20

Dang 😞

marshall15:09:46

You can probably track it down with that mvn command and a bit of time

marshall15:09:03

Might be something you can exclude or upgrade

sdegutis16:09:30

Thanks a ton marshall.

marshall17:09:05

@teng Can you provide a repro case to me (possibly offline - email me at [email protected])? I have tried creating multiple databases with large imported data sets and I don't see that behavior if I'm using 0.9.5372

timgilbert19:09:39

Personally, we eventually decided against using the AMIs, especially because of the lack of ssh access for diagnosing install problems. It's not too difficult to just fire up a plain ubuntu ec2 instance and install the transactor files on there, and you can still use the ensure-transactor bits to set up the AWS permissions

timgilbert19:09:53

What I would really like is a docker container with the transactor in it, but that doesn't appear to exist

timgilbert19:09:12

Hmm, will look into that, thanks @marshall.

kenny19:09:17

Why is there no .keySet function in the Datomic Clojure API?

kenny20:09:03

Is it guaranteed that all the tx ids in a transaction report are the same? e.g.

(= 1
   (count (reduce (fn [txs datom]
                    (conj txs (.tx datom))) #{} (:tx-data tx-report))))

jaret21:09:17

@kenny you should be able to use java.util.HashMap.keySet().

jaret21:09:27

I am also not sure I understand your second question

kenny21:09:02

Is the code I posted always true, given that tx-report is a transaction report returned from transact?

kenny21:09:48

For example, here is a tx-report:

{:status :ready,
 :val {:db-before datomic.db.Db 
       :db-after datomic.db.Db
       :tx-data [#datom[17592186045421 67 17592186045418 13194139534313 true]
                 #datom[17592186045421 67 17592186045419 13194139534313 true]
                 #datom[17592186045421 67 17592186045420 13194139534313 true]],
       :tempids {-9223350046625933567 17592186045418,
                 -9223350046625933568 17592186045419,
                 -9223350046625933569 17592186045420,
                 -9223350046625933566 17592186045421}}}
Is it guaranteed that the tx value for each datom in :tx-data is the same?

kenny21:09:25

In this case it is ^

kenny21:09:05

(map #(.tx %) (:tx-data tx-report))
=> (13194139534313 13194139534313 13194139534313)

kenny21:09:18

Is that always true?

jaret21:09:50

Yes, it is always true. The transaction's datoms are recorded in :tx-data in EAVT form, and the tx values returned from transact should all be the same. I will double check with @marshall, but that is my understanding.

kenny21:09:44

It makes sense that they would be, just wanted to confirm

kenny21:09:00

@jaret About the .keySet: I was referring to calling .keySet on an entity returning a set of strings instead of keywords. For example:

(.keySet (d/entity db-init 17592186045419))
=> #{":foo/bar"}

kenny21:09:12

Pff.. I'm silly:

(keys (d/entity db-init 17592186045419))
=> (:foo/bar)

marshall22:09:26

Yes @kenny , @jaret is correct. All the datoms in tx-data returned by transact are in the same transaction, and therefore all have the same tx value
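A quick REPL check of that guarantee, as a sketch: datoms support keyword lookup, so :tx reads each datom's transaction id. conn and tx-vec here are assumed stand-ins for a live connection and a valid transaction.

(require '[datomic.api :as d])

;; every datom in a single transact result carries the same tx id
(let [{:keys [tx-data]} @(d/transact conn tx-vec)]
  (apply = (map :tx tx-data)))
;; => true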

Chris O'Donnell22:09:22

I need to (very) occasionally do a sweep and update "large" amounts of data (committing ~14k datoms). Should I break that update up into chunks of a certain size?

marshall23:09:06

@codonnell: yes, I wouldn't suggest 14k datoms in a single transaction. I'd recommend keeping transactions to the hundreds of datoms. Low thousands would be ok, but not ideal

marshall23:09:49

Of course it depends on the size of individual datoms as well as schema

Chris O'Donnell23:09:30

@marshall: alright, thanks. I'm guessing one at a time would also not be ideal? I can play with batch size to see what works best.
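For reference, a minimal batching sketch along the lines marshall suggests; conn and assertions (the full seq of tx data) are assumed, and the batch size of 500 is just an arbitrary starting point to tune:

(require '[datomic.api :as d])

;; submit a few hundred datoms per transaction instead of one ~14k-datom tx
(doseq [batch (partition-all 500 assertions)]
  @(d/transact conn batch))

For larger imports, d/transact-async with a bounded number of in-flight futures can improve throughput, at the cost of a little more bookkeeping.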