2016-08-10
I have these logs in the transactor:
2016-08-09 23:43:59.604 WARN default org.hornetq.core.client - HQ212040: Timed out waiting for netty ssl close future to complete
2016-08-09 23:44:00.573 WARN default org.hornetq.core.server - HQ222190: Disallowing use of vulnerable protocol: SSLv2Hello. See for more details.
2016-08-09 23:44:00.573 WARN default org.hornetq.core.server - HQ222190: Disallowing use of vulnerable protocol: SSLv3. See for more details.
Those are just warnings, right? I assume I can ignore those.
Is there any advantage other than convenience for having :where clauses that don't use any index? Maybe caching of those extra filters?
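For context on that question, the kind of clause involved is a plain predicate, which is evaluated against already-bound values in the peer rather than resolved through an index. A minimal sketch (the :person/age attribute and the db value are hypothetical):
(require '[datomic.api :as d])
;; the first clause is resolved via Datomic's indexes;
;; the second is a plain predicate applied to the bound ?age values, no index involved
(d/q '[:find ?e
       :where [?e :person/age ?age]
              [(> ?age 21)]]
     db)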
Does anyone know what would cause this timeout to show up in the logs when transacting files of a few MB in size: "PoolingHttpClientConnectionManager - Closing connections idle longer than 60 SECONDS"
jimmyrcom: datomic is not really good at storing large amounts of data in single transactions. its sweet spot is transactions of at most a few hundred datoms, with a few kilobytes per datom.
in the end, it is the overall size of the transaction that matters. if you put too many datoms into one transaction, indexing can have trouble keeping up. if the datoms are too large, datomic's assumptions about segment sizes become invalid, making it less efficient.
also remember that if you have large transactions, you're blocking out other writers for the duration of the transaction. you mentioned "a few megabytes", and that is a lot of data to be committed in one transaction.
the general advice is: make your transactions smaller, store blobs somewhere else (e.g. directly in the backing store without using datomic for it).
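As a sketch of the "make your transactions smaller" advice, one way to split a large seq of assertions into modest transactions (the batch size and the shape of tx-data here are assumptions, not Datomic recommendations):
(require '[datomic.api :as d])
(defn transact-in-batches
  "Submit tx-data to conn in chunks of batch-size, derefing each
  transaction so indexing can keep up before the next one starts."
  [conn tx-data batch-size]
  (doseq [batch (partition-all batch-size tx-data)]
    @(d/transact conn (vec batch))))
;; usage sketch, e.g. a few hundred assertions per transaction:
;; (transact-in-batches conn assertions 500)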
@atroche, in your example is that assuming I’ve already queried for a company entity and a person entity? I’d like to be able to solve my case within the query alone. For more background, a company has a members attribute which is a cardinality many of type ref. I managed to have some success doing something like this:
[:find [?name ...]
 :in $ % ?p-name ?co-name
 :where [?p :person/name ?p-name]
        [?c :company/name ?co-name]
        [?pc :company/members ?p]
        [(= ?pc ?c)]
        [?c :company/members ?m]
        [?m :person/name ?name]]
Something in my head is telling me, however, that I should be working with contains?
@colindresj: I don't think you need ?pc. If you use ?c instead you can drop the equality check.
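A sketch of the query with that simplification applied, using the attributes from the question above (the unused % rules input is dropped here as well):
[:find [?name ...]
 :in $ ?p-name ?co-name
 :where [?p :person/name ?p-name]
        [?c :company/name ?co-name]
        [?c :company/members ?p]
        [?c :company/members ?m]
        [?m :person/name ?name]]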
@marshall: great article on the blog. am i correct that queries on d/log do not interact with the peer query cache mechanism? or do they indeed cache as well?
The log is a separate index, so the segments retrieved via log access are different from those retrieved when you access one of the other indexes (i.e., AVET, EAVT, etc.). If you have a query that uses both the Log API (via helper functions) and other datalog clauses, the query engine will still use the other indexes as appropriate to satisfy the query, and those will be accessed the ‘regular’ way (i.e. with caching)
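For a concrete picture, a sketch in the style of the Log API docs that mixes the tx-ids and tx-data helper functions with an ordinary datalog clause (conn, t1, t2, and the :person/name attribute are assumptions here):
(require '[datomic.api :as d])
;; transactions between t1 and t2 come from the log;
;; the :person/name clause is satisfied through the regular indexes (and their cache)
(d/q '[:find ?e ?name ?tx
       :in $ ?log ?t1 ?t2
       :where [(tx-ids ?log ?t1 ?t2) [?tx ...]]
              [(tx-data ?log ?tx) [[?e]]]
              [?e :person/name ?name]]
     (d/db conn) (d/log conn) t1 t2)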
right. so queries that only work with d/log are, in essence, not cacheable
e.g. if i used d/log with a filter on the datom :a values and reverse to make an activity stream view, that'd be bad from a performance perspective, because no caching happens on the log segments in the peer library
Log segments are cached (see http://docs.datomic.com/caching.html#object-cache). Whether or not certain segments are in cache at a given time is, of course, dependent on usage.
ok, awesome!!
for some reason i had this idea that only the covering indices were cacheable, and something told me to double-check
yaknowwhatimean 🙂
eavt avet aevt vaet
rather than .... teav?
i’d have to check, but i don’t think the log provides any ordering within a transaction
i know tx datoms come first
which is contrary to storage indexes, i think
i know, right
totally changes my perception of it, actually
is the easiest (only?) way to allocate more memory to peers just to set -Xmx8g -Xms8g (for example) from the command line?
It depends on what you want. 50% of the JVM’s max heap is a good start, but for your particular needs, you might be able to change that to something else. You can set it to a custom value (in bytes) with the datomic.objectCacheMax java property.
Your system will throw an exception if you request an objectCacheMax greater than 75% of your JVM heap
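A hedged sketch of what that looks like in practice (the sizes below are placeholders, not recommendations):
;; start the peer JVM with an explicit heap and object cache, e.g.
;;   java -Xmx8g -Xms8g -Ddatomic.objectCacheMax=2147483648 -cp ... my.peer.main
;; from the running peer you can check what was actually passed:
(System/getProperty "datomic.objectCacheMax")   ;; nil unless set explicitly
(.maxMemory (Runtime/getRuntime))               ;; approximate max heap, in bytes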
Hey, quick question: given a datomic connection object, is there a quick way to get the URI it connects to? I scanned the Java API for a .getUri() method or similar but didn't see anything
I just want to use it for logging in the case of a failed connection
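One workaround, as a minimal sketch assuming you control the call to d/connect, is to keep the URI next to the connection when you create it (the function name and map shape here are hypothetical):
(require '[datomic.api :as d])
(defn connect-with-uri
  "Connect and keep the uri around so it can be logged if the connect fails."
  [uri]
  (try
    {:uri uri :conn (d/connect uri)}
    (catch Exception e
      (println "failed to connect to" uri ":" (.getMessage e))
      (throw e))))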