#datomic
2016-08-10
podviaznikov 00:08:57

I have these logs in the transactor:

2016-08-09 23:43:59.604 WARN  default    org.hornetq.core.client - HQ212040: Timed out waiting for netty ssl close future to complete
2016-08-09 23:44:00.573 WARN  default    org.hornetq.core.server - HQ222190: Disallowing use of vulnerable protocol: SSLv2Hello. See  for more details.
2016-08-09 23:44:00.573 WARN  default    org.hornetq.core.server - HQ222190: Disallowing use of vulnerable protocol: SSLv3. See  for more details.
Those are just warnings, right? I assume I can ignore them

yonatanel 13:08:05

Is there any advantage, other than convenience, to having :where clauses that don't use any index? Maybe caching of those extra filters?

jimmyrcom 13:08:06

Does anyone know what would cause this timeout to show up in the logs when transacting files a few MB in size: "PoolingHttpClientConnectionManager - Closing connections idle longer than 60 SECONDS"?

hans 13:08:37

jimmyrcom: datomic is not really good at storing large amounts of data in a single transaction. its sweet spot is transactions with at most a few hundred datoms, at a few kilobytes per datom.

jimmyrcom 13:08:17

Thanks hans, so the number of items per transaction could trigger this?

hans 13:08:34

in the end, it is the overall size of the transaction that matters. if you put too many datoms into one transaction, indexing can have trouble keeping up. if the datoms are too large, datomic's assumptions about segment sizes become invalid, making it less efficient.

hans 13:08:21

also remember that if you have large transactions, you're blocking out other writers for the duration of the transaction. you mentioned "a few megabytes", and that is a lot of data to be committed in one transaction.

hans 13:08:48

the general advice is: make your transactions smaller, store blobs somewhere else (e.g. directly in the backing store without using datomic for it).
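(A minimal batching sketch of that advice, assuming a conn from d/connect and a seq of transaction maps named tx-data; the names and the batch size are illustrative, not prescriptive:)

(require '[datomic.api :as d])

;; split a large import into several smaller transactions instead of
;; one multi-megabyte call to d/transact
(defn transact-in-batches [conn tx-data batch-size]
  (doseq [batch (partition-all batch-size tx-data)]
    @(d/transact conn (vec batch))))

;; e.g. (transact-in-batches conn tx-data 500)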

jimmyrcom 13:08:46

Thanks for the advice, Hans

colindresj 14:08:50

@atroche, in your example, is that assuming I’ve already queried for a company entity and a person entity? I’d like to be able to solve my case within the query alone. For more background: a company has a members attribute, which is a cardinality-many ref. I managed to have some success doing something like this:

[:find [?name ...]
 :in $ % ?p-name ?co-name
 :where [?p :person/name ?p-name]
        [?c :company/name ?co-name]
        [?pc :company/members ?p]
        [(= ?pc ?c)]
        [?c :company/members ?m]
        [?m :person/name ?name]]
Something in my head is telling me, however, that I should be working with contains?

yonatanel 14:08:32

@colindresj: I don't think you need ?pc. If you use ?c instead, you can drop the equality check.
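(Concretely, the simplified query would look something like this; an untested sketch of that suggestion, with the unused % rules binding dropped as well:)

[:find [?name ...]
 :in $ ?p-name ?co-name
 :where [?p :person/name ?p-name]
        [?c :company/name ?co-name]
        [?c :company/members ?p]
        [?c :company/members ?m]
        [?m :person/name ?name]]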

robert-stuttaford 14:08:36

@marshall: great article on the blog. am i correct that queries on d/log do not interact with the peer query cache mechanism? or do they indeed cache as well?

marshall 14:08:01

The log is a separate index, so the segments retrieved via log access are different from those retrieved when you access one of the other indexes (e.g. AVET, EAVT, etc.). If you have a query that uses both the Log API (via helper functions) and other datalog clauses, the query engine will still use the other indexes as appropriate to satisfy the query, and those will be accessed the ‘regular’ way (i.e. with caching)
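(For example, a sketch of a query mixing the Log API helpers tx-ids and tx-data with an ordinary datalog clause; conn, t1, and t2 are assumed to exist:)

(require '[datomic.api :as d])

;; tx-ids/tx-data walk the log index; the :db/ident clause
;; hits the regular (cached) indexes
(d/q '[:find ?tx ?aname
       :in $ ?log ?t1 ?t2
       :where [(tx-ids ?log ?t1 ?t2) [?tx ...]]
              [(tx-data ?log ?tx) [[?e ?a]]]
              [?a :db/ident ?aname]]
     (d/db conn) (d/log conn) t1 t2)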

robert-stuttaford 14:08:29

right. so queries that only work with d/log are, in essence, not cacheable

robert-stuttaford 15:08:11

e.g. if i used d/log with a filter on the datom :a values, reversed to make an activity-stream view, that'd be bad from a performance perspective, because no caching happens on the log segments in the peer library

marshall 15:08:04

Log segments are cached (see http://docs.datomic.com/caching.html#object-cache). Whether or not certain segments are in the cache at a given time is, of course, dependent on usage

robert-stuttaford 15:08:30

for some reason i had this idea that only the covering indices were cacheable, and something told me to double-check

marshall 15:08:48

well, the log is a covering index 😉

robert-stuttaford 15:08:58

yaknowwhatimean 🙂

robert-stuttaford 15:08:02

eavt avet aevt vaet

robert-stuttaford 15:08:22

rather than .... teav?

marshall 15:08:37

or at least t___

marshall 15:08:55

i’d have to check, but i don’t think the log provides any ordering within a transaction

robert-stuttaford 15:08:13

i know tx datoms come first

robert-stuttaford 15:08:31

which is contrary to storage indexes, i think

bhagany 15:08:25

this is excellent news, I also thought d/log didn't cache

robert-stuttaford 17:08:25

totally changes my perception of it, actually

kschrader 18:08:59

is the easiest (only?) way to allocate more memory to peers just to set -Xmx8g -Xms8g (for example) from the command line?

kschrader 18:08:30

which will then allocate 50% of that to the object cache?

jgdavey 18:08:55

It depends on what you want. 50% of the JVM’s max heap is a good start, but for your particular needs you might be able to change that to something else. You can set it to a custom value (in bytes) with the datomic.objectCacheMax Java system property.

jgdavey 18:08:13

But for the JVM instance itself, -Xmx would need to be high enough as well

marshall18:08:01

Your system will throw an exception if you request an objectCacheMax greater than 75% of your JVM heap
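(Putting the thread together, a sketch of launching a peer with an 8 GB heap and a 4 GB object cache; the jar and main class are placeholders, and 4g stays under the 75%-of-heap limit:)

java -Xmx8g -Xms8g -Ddatomic.objectCacheMax=4g -cp my-peer.jar my.peer.Main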

timgilbert 19:08:03

Hey, quick question: given a Datomic connection object, is there a quick way to get the URI it connects to? I scanned the Java API for a .getUri() method or similar, but didn't see anything

timgilbert 19:08:20

I just want to use it for logging in the case of a failed connection

bhagany 19:08:45

I don't see anything relevant upon reflecting the connection either, but I agree this would be useful
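(In the absence of an accessor, one workaround is to hold on to the URI yourself at connect time; connect-with-uri below is a hypothetical helper, not part of the Datomic API:)

(require '[datomic.api :as d])

;; keep the URI next to the connection so it is available for logging;
;; the public Connection interface does not expose it
(defn connect-with-uri [uri]
  {:uri uri :conn (d/connect uri)})

;; e.g. (let [{:keys [uri conn]} (connect-with-uri db-uri)] ...)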