#datomic
2017-11-07
timgilbert05:11:13

That forum software is super slick, nice job

lmergen08:11:07

the real question of course is whether the datomic forum is powered by datomic :)

val_waeselynck09:11:51

My computer crashed and now I can't start my dev transactor anymore without it crashing. I'm seeing:

val_waeselynck09:11:57

Launching with Java options -server -Xms1g -Xmx1g -XX:+UseG1GC -XX:MaxGCPauseMillis=50
Starting datomic:, storing data in: ../data/datomic/data ...
System started datomic:, storing data in: ../data/datomic/data
Critical failure, cannot continue: Heartbeat failed

val_waeselynck09:11:27

The logs show :

2017-11-07 10:20:38.670 INFO  default    datomic.lifecycle - {:event :transactor/heartbeat-failed, :cause :conflict, :pid 4647, :tid 25}
2017-11-07 10:20:38.672 ERROR default    datomic.process - {:message "Critical failure, cannot continue: Heartbeat failed", :pid 4647, :tid 23}
2017-11-07 10:20:38.674 INFO  default    datomic.process-monitor - {:MetricsReport {:lo 1, :hi 1, :sum 1, :count 1}, :HeartbeatMsec {:lo 5001, :hi 5005, :sum 20010, :count 4}, :Alarm {:lo 1, :hi 1, :sum 1, :count 1}, :AlarmHeartbeatFailed {:lo 1, :hi 1, :sum 1, :count 1}, :SelfDestruct {:lo 1, :hi 1, :sum 1, :count 1}, :AvailableMB 628.0, :event :metrics, :pid 4647, :tid 23}
2017-11-07 10:20:38.759 INFO  default    o.a.activemq.artemis.core.server - AMQ221002: Apache ActiveMQ Artemis Message Broker version 1.4.0 [dc68a7d7-c39c-11e7-9092-12754bafa2ff] stopped, uptime 24.966 seconds

val_waeselynck09:11:43

How can I fix this?

marshall12:11:53

@val_waeselynck Are you able to run a dev txor against a new storage (i.e. move/rename the data dir)?

val_waeselynck08:11:38

@U05120CBV it works if I start from a clean data dir, I guess data corruption occurred when my machine crashed.

val_waeselynck08:11:50

I guess I'll just restore a backup

val_waeselynck09:11:07

Restoring to a clean data dir worked. Thanks @U05120CBV @U06GLTD17!
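For reference, the restore that worked here can be done with the Datomic CLI's backup-db/restore-db commands. A sketch, run from the Datomic distribution root; the database name and backup path are placeholders, not from this thread:

```shell
# Back up a database to a local directory (URI and path are illustrative).
bin/datomic backup-db "datomic:dev://localhost:4334/my-db" "file:/backups/my-db"

# Later, restore into a fresh storage directory, then restart the transactor
# and peers (peers must be restarted after a restore).
bin/datomic restore-db "file:/backups/my-db" "datomic:dev://localhost:4334/my-db"
```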

Ben Kamphaus14:11:08

@val_waeselynck zombie transactor process somewhere? :cause :conflict can mean multiple transactors running without a license that supports HA; otherwise it's possibly due to the local H2 store that backs dev not being robust against many failure cases (it's not intended for that purpose), i.e. failing without all acked writes to storage having been persisted to disk and available.

wds_17:11:31

Hey guys, having an issue retrieving large amounts of nodes using datomic pull. I saw in the documentation that there is a limit of 1000 nodes. How would I use the (limit :attr nil) syntax in this query below?

['(pull ?e [* {:pid/a        [:db/ident]
               :pid/b        [:db/ident]
               :problem/root [* {:problem/foo [:db/ident]
                                 :problem/bar [:db/ident]}]}])])
we need to pull more than 1000 nodes under problem/root

favila17:11:22

Replace :problem/root with (limit :problem/root nil)

wds_17:11:49

thank you, trying now

igor.i.ges17:11:12

['(pull ?e [* {:pid/a [:db/ident]
               :pid/b [:db/ident]
               (limit :problem/root nil) [* {:problem/foo [:db/ident]
                                             :problem/bar [:db/ident]}]}])])
still 1000
['(pull ?e [* {:pid/a [:db/ident]
               :pid/b [:db/ident]
               :problem/root [(limit * nil) {:problem/foo [:db/ident]
                                             :problem/bar [:db/ident]}]}])])
still 1000
['(pull ?e [* {:pid/a [:db/ident]
               :pid/b [:db/ident]
               :problem/root (limit [* {:problem/foo [:db/ident]
                                        :problem/bar [:db/ident]}] nil)}])])
;syntax error

wds_17:11:49

@U2XL48J00 and I tried the suggested solution to no avail

favila17:11:56

@U2XL48J00 That limit is in the wrong spot

favila17:11:10

map-spec = { ((attr-name | limit-expr) (pattern | recursion-limit))+ }
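Per that grammar, the limit-expr takes the place of the attr-name as the key of a map-spec entry. A sketch of the intended shape (attribute names reused from the thread; `db` and `eid` are assumed bindings, not from the original messages):

```clojure
;; The limit expression is the MAP KEY, replacing the bare attribute name.
;; A number caps the results; nil means "no limit" (overriding the 1000 default).
(d/pull db
        '[* {:pid/a [:db/ident]
             :pid/b [:db/ident]
             (limit :problem/root nil) [* {:problem/foo [:db/ident]
                                           :problem/bar [:db/ident]}]}]
        eid)
```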

favila17:11:13

(from the docs)

igor.i.ges17:11:11

the last example was pure frustration. The first example replaces attr-name with limit-expr in the outer map spec; the second replaces attr-name with limit-expr in the list spec. So when you say "in the wrong spot", what exactly do you mean? (As I understand it, attr-expr = limit-expr | default-expr, but it does not seem to produce the intended effect.) Thank you.

favila17:11:25

the first one should work

igor.i.ges17:11:39

and yet it didn't

favila17:11:06

try with d/pull instead of query pull

favila17:11:14

maybe it is a bug

favila17:11:16

you can also try a very small limit (e.g. 2) to see if it is working at all

igor.i.ges17:11:36

turns out nil aka no limit doesn't seem to work in this case, but setting a specific number works (in example 1)

favila17:11:22

same behavior with d/pull?

favila17:11:56

possible workaround? [* (limit :problem/root nil) {:problem/root [*]}]

favila17:11:11

anyway that is definitely a bug

favila17:11:38

(to ignore nil limit on map key)

igor.i.ges18:11:57

does not work with d/pull either

conan17:11:57

Hi all, looking for a bit of help getting a transactor running on AWS against DynamoDB storage. I've created the DDB table using ensure-transactor, and I've created the CloudFormation stack using ensure-cf, create-cf-template and create-cf-stack. I now have a stack in place, but the EC2 instance it creates shuts down as soon as it starts, and no logs make it into my S3 bucket (although the bucket itself was created fine). Can anyone point me in the direction of some resources about debugging this? Thanks

conan17:11:00

i'm using a t2.small instance, which appears in the list of supported instances in my cf-template.json

marshall18:11:39

The t2.small is pretty limited for resources; what are you using for xmx, and your other memory settings (mem index, obj cache)?

conan11:11:23

1G for Xmx

conan11:11:29

the others are set to the developer defaults

conan11:11:07

What would be really useful is knowing how I can debug this; there doesn't seem to be any way of getting logs unless I happen to request them at exactly the right time
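One way to get logs from an instance that dies before shipping anything to S3 is the EC2 console output, which AWS retains for a while even after the instance terminates. A sketch using the AWS CLI; the instance id below is a placeholder you'd replace with the one from your stack:

```shell
# List recent instances (including terminated ones) to find the transactor's id.
aws ec2 describe-instances \
  --query "Reservations[].Instances[].[InstanceId,State.Name,LaunchTime]" \
  --output text

# Dump the boot/console log, which often shows why startup failed
# (e.g. JVM OOM, bad storage credentials) even after shutdown.
aws ec2 get-console-output --instance-id i-0123456789abcdef0 --output text
```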