This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2016-02-04
Channels
- # aatree (5)
- # admin-announcements (37)
- # alda (1)
- # announcements (4)
- # architecture (1)
- # aws (3)
- # beginners (82)
- # boot (230)
- # braid-chat (14)
- # cider (48)
- # cljs-dev (8)
- # cljsrn (31)
- # clojars (47)
- # clojure (72)
- # clojure-austin (2)
- # clojure-russia (396)
- # clojurescript (72)
- # community-development (3)
- # component (6)
- # core-async (6)
- # cursive (26)
- # datomic (42)
- # emacs (6)
- # events (35)
- # hoplon (57)
- # immutant (3)
- # jobs (2)
- # jobs-discuss (10)
- # ldnclj (16)
- # luminus (2)
- # off-topic (50)
- # om (181)
- # parinfer (285)
- # proton (68)
- # re-frame (19)
- # reagent (2)
- # ring-swagger (23)
- # yada (36)
I'm seeing very slow queries and my database is not even that large. Any suggestions for how to proceed?
If I have an expensive query that returns 5MB of data, I can see that the first time the query is made it takes about 3 seconds. But the second time that same query is made, shouldn't it be much faster because of caching?
I'm wondering if I've set up something incorrectly.
I thought perhaps the peer does not have enough memory, but based on New Relic I can see that I haven't hit the max heap size yet, so that's probably not the cause.
Nevermind, turns out to be a different issue.
@currentoor: if you revisit this and can share the query (or an obfuscated form of it), there are common issues like clause ordering, typos in variable bindings, and inclusion of unrelated clauses that lead to cartesian-product intermediate sets of tuples, all of which result in inefficient queries (and sometimes those inefficiencies only become glaringly obvious at scale).
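To illustrate the clause-ordering point, here is a hedged sketch; the schema and attribute names (`:order/total`, `:order/customer`, `:customer/email`) are hypothetical, not from the discussion:

```clojure
;; Slow ordering: the first clause binds *every* order in the database,
;; producing a large intermediate set before narrowing by customer.
'[:find ?total
  :where
  [?order :order/total ?total]
  [?order :order/customer ?customer]
  [?customer :customer/email "a@example.com"]]

;; Faster ordering: start with the most selective clause so the
;; intermediate sets stay small throughout the join.
'[:find ?total
  :where
  [?customer :customer/email "a@example.com"]
  [?order :order/customer ?customer]
  [?order :order/total ?total]]
```

Both queries return the same result; only the size of the intermediate tuple sets differs.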
also note that index or log segments go into the object cache and won't by default consume the entire heap; you can change the objectCacheMax system property (defaults to half of the heap). More on that here: http://docs.datomic.com/caching.html#object-cache
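As a sketch of setting that property on a peer JVM (the heap size, cache value, and jar/namespace names are illustrative assumptions, not recommendations):

```
# Launch a peer with a 4 GB heap but allow the object cache
# to use 2 GB instead of the default half-of-heap.
java -Xmx4g -Ddatomic.objectCacheMax=2g -cp my-peer.jar my.app.main
```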
@bkamphaus: much obliged!
Does Datomic do okay with the transactor and storage on the other side of the country (~100 ms)? Our west coast people are trying to get started with Datomic and are reporting slowness.
From a newly started peer JVM, a query that takes 5 s within the same AWS region as the servers takes 99 s from laptops in our Portland, Oregon, office (~100 ms ping times). And d/connect takes 80 s.
It's much better for subsequent queries, as the peer starts to cache most of what it needs. But is this expected behavior, and is it network latency that is the determining factor? Or is something wrong? Should I be looking for Couchbase connection problems?
@ljosa: it sounds like network latency is certainly a contributing factor and that’s not a configuration I would typically recommend. Is there also a cross-regional consistency setting (i.e. replication or something) that’s a confounding factor as well?
no, no couchbase xdcr, as it doesn't guarantee the consistency that Datomic requires. just a transactor and a couchbase cluster, both in us-east-1.
@ljosa: what are your memory-index settings?
ok, that looks reasonable.
The reason I ask about these two things: (1) a really common cause of sudden latency spikes for users on e.g. Cassandra is cross-datacenter consistency/replication; I've seen a two-orders-of-magnitude jump in latency from that. (2) peers have to accommodate the memory index (and read all log/memory-index segments into memory) with the initial call to connect, so that could be a contributing factor where even a small amount of latency has a big impact.
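For reference, the memory-index settings mentioned above live in the transactor properties file; a sketch with illustrative values (not a tuning recommendation, larger memory-index values mean more log to ingest on a peer's first connect):

```
memory-index-threshold=32m
memory-index-max=256m
object-cache-max=128m
```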
I did some couchbase testing from my house in Massachusetts. Ping times around 25 ms. Connecting takes a few seconds. The query that they used in Oregon takes 30 s. Also tested directly with Couchbase, and things look reasonable: 200 ms to create cluster, 930 ms to open bucket, 30 ms to read a small document. No errors from Datomic or Couchbase.
@ljosa: it’s certainly true that (especially with the cross-country latency contributing) a warm query will be significantly faster as it won’t be retrieving segments from storage. If the entire database or most frequently accessed segments can be held in the object cache on the peer, performance should be fine after the warm up period.
do you have peer logging enabled?
the concurrency of peer reads can be adjusted, also: http://docs.datomic.com/system-properties.html#peer-properties
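A hedged example of adjusting that peer property as a JVM flag (the value 32 is arbitrary and for illustration only):

```
# Peer system property governing concurrent reads against storage.
java -Ddatomic.readConcurrency=32 -cp my-peer.jar my.app.main
```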
yes, after my 30 s query I get the first metrics:
[Datomic Metrics Reporter] INFO datomic.process-monitor - {:tid 22, :AvailableMB 2590.0, :StorageGetMsec {:lo 26, :hi 389, :sum 33313, :count 857}, :pid 37440, :event :metrics, :ObjectCache {:lo 0, :hi 1, :sum 75, :count 944}, :LogIngestMsec {:lo 0, :hi 601, :sum 601, :count 2}, :MetricsReport {:lo 1, :hi 1, :sum 1, :count 1}, :DbAddFulltextMsec {:lo 0, :hi 29, :sum 29, :count 2}, :PodGetMsec {:lo 54, :hi 76, :sum 186, :count 3}, :LogIngestBytes {:lo 0, :hi 3581246, :sum 3581246, :count 2}, :StorageGetBytes {:lo 67, :hi 48478, :sum 10179767, :count 857}}
hm, the average StorageGetMsec time for the peer (39 msec) doesn't seem notably slow from the Datomic peer's view
the same query shows an order-of-magnitude increase? I would only expect that from latency if e.g. the baseline StorageGetMsec time is extremely fast (i.e. an order of magnitude lower, if we're talking 3 vs. 30 sec), though this assumes storage reads dominate.
For the cold and hot query comparisons, are the system configs identical re: heap and object-cache size? (i.e., not crossing a memory threshold for intermediate representations on differently configured systems?)
The query takes 5.3 s from an AWS instance in the east. Metrics:
{:tid 19, :PeerAcceptNewMsec {:lo 1, :hi 1, :sum 1, :count 1}, :AvailableMB 1200.0, :StorageGetMsec {:lo 0, :hi 5, :sum 444, :count 846}, :pid 12134, :event :metrics, :ObjectCache {:lo 0, :hi 1, :sum 81, :count 936}, :LogIngestMsec {:lo 1, :hi 619, :sum 620, :count 2}, :MetricsReport {:lo 1, :hi 1, :sum 1, :count 1}, :PeerFulltextBatch {:lo 1, :hi 1, :sum 1, :count 1}, :DbAddFulltextMsec {:lo 0, :hi 35, :sum 35, :count 2}, :PodGetMsec {:lo 12, :hi 31, :sum 71, :count 3}, :LogIngestBytes {:lo 0, :hi 5165426, :sum 5165426, :count 2}, :StorageGetBytes {:lo 67, :hi 48478, :sum 10071059, :count 846}}
wow, StorageGetMsec average is 0.52 msec, vs. 39 msec in the other example, so I’d say that could certainly account for the difference (very good fit actually to 5.3 second versus 30 second ratio).
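The averages quoted here come straight from the :StorageGetMsec entries in the two metrics maps (sum divided by count); a quick check at the REPL:

```clojure
;; Laptop peer vs. same-region EC2 peer, from the reported metrics:
(double (/ 33313 857))  ;=> ~38.9 ms average per storage get
(double (/ 444 846))    ;=> ~0.52 ms average per storage get

;; ~850 reads at ~38 ms extra each is roughly 33 s of wall time if the
;; reads were serialized, which lines up with 30 s vs. 5.3 s overall.
```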
I also tried -Ddatomic.readConcurrency=1000, without much effect. (Well, it went from 30.8 s to 28.8 s; not sure if I just got lucky.)
may just be luck, I think the latency is the bottleneck. The storage-retrieval component of the query is just masked by the extremely fast storage access in the primary config.
Do you have other tricks that may speed up the connect and first query? Or do our people in Oregon just have to get used to long startup times? (This is for dev work and ad-hoc analysis; we don't have Datomic peers on production servers in the west.)
the usual answer for reducing latency in populating the object cache is memcached ( http://docs.datomic.com/caching.html#memcached ), but I'm not sure you'll want to configure it for the dev work and ad-hoc analysis situation you describe. I'm also not sure where the perf costs in the queries are being incurred.
i.e. whether it's intermediate representations and joins, narrowing, etc., or whether your clauses match a ton of results that then have to be passed on. You could throw up a REST server to return query results for ad-hoc analysis and submit queries to its endpoints; that way the peer stays warm, though I'm not sure that would save you much trouble if you're getting really large result sets.
some of the costly queries may be able to be tuned via clause re-ordering, or strategies for handling time/tx provenance if those are a component?
different Datomic processes can each use a different memcached cluster
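A hedged sketch of pointing one peer process at its own memcached (the hostname is made up; jar/namespace names are illustrative):

```
# Comma-separated host:port list of memcached servers for this process.
java -Ddatomic.memcachedServers=cache-west.example.internal:11211 \
     -cp my-peer.jar my.app.main
```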