#datomic
2016-02-08
kschrader00:02:31

got it thanks

kschrader00:02:52

@bkamphaus: is there any way to know how much memory an index will take up?

pesterhazy10:02:46

I'm seeing

Transaction error clojure.lang.ExceptionInfo: :db.error/transactor-unavailable Transactor not available {:db/error :db.error/transactor-unavailable}
pretty regularly

pesterhazy10:02:16

it always recovers, but this is a bit worrying. (This is on AWS, official AMIs with DynamoDB)

pesterhazy10:02:38

could this be related to GC pauses in the peer?

dm311:02:29

yes, that could trigger it

dm311:02:43

same way as a broken network

pesterhazy13:02:37

the GC pauses we see are only 5 seconds, though -- would that be sufficient?

pesterhazy13:02:02

not that 5-second GC pauses aren't indicative of a problem in our code :simple_smile:

dm313:02:09

is there a timeout parameter of some sort?

bkamphaus14:02:43

@pesterhazy also large transactions on the peer, or indexing not keeping up on the transactor. A GC pause of just 5 seconds could possibly impact it if timed poorly or occurring in quick succession.

pesterhazy14:02:29

this peer is processing hardly any transactions

bkamphaus14:02:58

Timeout tolerance can be set by upping the transactor heartbeat (Datomic level), or on the peer by raising datomic.peerConnectionTTLMsec (HornetQ level)
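[For reference, those two knobs would look something like this — the heartbeat property name follows the Datomic transactor properties file convention, and all values here are illustrative, not recommendations; check the Datomic docs for defaults and minimums:]

```
# transactor properties file (Datomic level) -- raise the heartbeat interval
# (value illustrative)
heartbeat-interval-msec=10000

# peer JVM startup flag (HornetQ level) -- raise the connection TTL
# (value illustrative)
-Ddatomic.peerConnectionTTLMsec=20000
```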

bkamphaus14:02:45

@pesterhazy: if the peer isn’t processing many transactions, and the transactor (verified from metrics or logs) is heartbeating fine and not reporting alarms, peer GC is the most likely culprit. If you’re not using non-default JVM GC settings on the peer app, you could adopt settings similar to those on the transactor if the goal is to avoid pauses. Or tolerate the GC by upping one or both of the settings mentioned above.
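[A sketch of what "GC settings similar to the transactor" might look like when launching the peer app. The G1 flags mirror what the Datomic transactor's startup script has shipped with (an assumption — verify against your Datomic version); the heap sizes and jar name are placeholders:]

```
# Launch the peer app with a GC configuration aimed at short pauses.
# my-peer-app.jar and the heap sizes are placeholders; tune for your workload.
java -Xmx4g -Xms4g \
     -XX:+UseG1GC -XX:MaxGCPauseMillis=50 \
     -jar my-peer-app.jar
```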

pesterhazy14:02:40

like Stu Halloway says in his debugging talk, the culprit is always the GC

pesterhazy14:02:01

looking at the transactor metrics, the heartbeating looks fine

pesterhazy14:02:36

I guess there's no way around finding where those GC pauses are coming from
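[One common way to track the pauses down — flag names are standard JDK 8-era HotSpot options, which matches when this chat took place; the log path is a placeholder — is to turn on GC logging on the peer and scan the log for stop-the-world collections:]

```shell
# Enable GC logging on the peer JVM with standard JDK 8 HotSpot flags:
#   -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:peer-gc.log
# Then look for stop-the-world full collections in the resulting log:
grep "Full GC" peer-gc.log
```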

pesterhazy14:02:40

thanks for your help!