This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
@domkm it may not be part of the official API, but `(dec t)` works. I saw it used first here: http://dbs-are-fn.com/2013/datomic_history_of_an_entity/
`(tx-range log (dec t) nil)` provides the same range as `(tx-range log t nil)` if `(dec t)` isn't a real `t`. It rounds up.
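A minimal sketch of the behavior described above, assuming a connected peer (the `conn` URI and database name here are hypothetical):

```clojure
(require '[datomic.api :as d])

;; Hypothetical dev-storage connection.
(def conn (d/connect "datomic:dev://localhost:4334/example"))

(let [log (d/log conn)
      t   (d/basis-t (d/db conn))]
  ;; tx-range's start is inclusive and rounds up to the next real t,
  ;; so starting at (dec t) still yields the transaction at t even
  ;; when (dec t) does not name an actual transaction.
  (seq (d/tx-range log (dec t) nil)))
```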
...with the caveat that I am using the free storage protocol. Maybe other ones work differently, though I hope they would be consistent.
That's strange. The code in http://dbs-are-fn.com/2013/datomic_history_of_an_entity/ shouldn't work then. Maybe it no longer does.
it does round up. the only real way to find a previous `t` is to iterate `dec`, testing `d/t->tx` until you find one, @domkm
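One way that backwards walk could be sketched; note this is my interpretation, and testing for a real transaction via `:db/txInstant` on the tx entity is an assumption (the message above suggests testing with `d/t->tx` directly):

```clojure
(require '[datomic.api :as d])

(defn previous-t
  "Walk t downwards until we reach a t that corresponds to an actual
  transaction in db. Sketch only; checking :db/txInstant on the tx
  entity is an assumed way to detect a real transaction."
  [db t]
  (->> (iterate dec (dec t))
       (filter (fn [t'] (:db/txInstant (d/entity db (d/t->tx t')))))
       first))
```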
personally i think it’s annoying that the api doesn’t provide an easy way to traverse time backwards like this
we do an activity stream (most recent first) and had to deal with this too
@robert-stuttaford: Hi Robert, just an update on importing the large data set (around 340 million datoms, divided into 120 variables) from MySQL to Datomic: it hasn't finished yet, still processing, but it seems to be going well. And I request-index and sync-index after importing each variable's datoms.
Found the reason for the previous failure: it was a peer/transactor timeout error. So I raised datomic.peerConnectionTTLMsec and datomic.txTimeoutMsec from the default 10 seconds to 1 minute.
@robert-stuttaford: hmm... it failed with the error `java.lang.RuntimeException: HQ119028: Timeout waiting for LargeMessage Body` in the peer log. I guess it's because I set memory-index-threshold too high, at 256m.
@bkamphaus: over to you on this one
joseph, are you starting from scratch or resuming where it failed?
if you’re not resuming, i would highly recommend you stop and make it so you can resume before doing anything else
@joseph: do you have metrics reported on the imports anywhere (i.e. cloudwatch), or alternatively are you at least saving logs where you can grep through them?
@bkamphaus: I am using dev mode, so I just save the logs
All contained on one machine? I.e. same machine is running peer import process, transactor (and its writes to file system via dev)?
@robert-stuttaford: because I don't have any clue where it failed during the import, I'm just starting from scratch
@bkamphaus: is that a problem?
There will be intrinsic constraints due to processes running on the same box: GC-induced pauses, multi-JVM stresses, etc. It will at least introduce a level of unreliability you have to accommodate with increased timeouts, probably increased heartbeat time, and potentially GC tuning on the peer app (G1GC settings from the command line, similar to the transactor).
It won't be near the transaction/import volume you'd get using a distributed system with a dedicated transactor box and dedicated storage. The indexing overhead will be harder to bear, too, without a dedicated transactor.
But the big step back for any of these imports, in terms of assessing machine/system config, the approach taken in the peer app doing the imports, transactor settings, etc., is: what is the target throughput? In bytes, datoms, or at least transaction count?
You can use TransactionBytes or TransactionDatoms sum (grepped from metrics or monitored on cloudwatch) to see how much throughput you’re getting in terms of both (Datomic metrics are reported per minute)
Sum that over several hours or e.g. day (a long enough time period that you’re capturing the overhead introduced by indexing), and you can assess your current throughput and whether or not it fits your requirements.
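A sketch of that calculation, assuming you have already grepped the per-minute TransactionDatoms sums out of your metrics logs into a sequence of numbers (the parsing step is not shown; the one-report-per-minute cadence is from the message above):

```clojure
(defn datoms-per-second
  "Estimate sustained throughput from a seq of per-minute
  TransactionDatoms sums. Each element covers one minute of activity,
  including idle/indexing minutes, which should be kept in the seq so
  indexing overhead is captured."
  [per-minute-sums]
  (/ (reduce + per-minute-sums)
     (* 60.0 (count per-minute-sums))))

;; e.g. (datoms-per-second [600000 720000 0 540000]) averages over
;; four minutes, including the idle minute.
```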
But our expectation is that Datomic's throughput capacity, even when not optimally configured, is sufficient to get you into trouble with database size very, very quickly. I.e. we know that you can transact more than 1 billion datoms a day with a well tuned but not particularly rigorously optimized system and a fast, well provisioned storage, and that you start incurring more performance costs for ongoing operations over 10 billion datoms in size. Note these numbers are soft estimates based on hardware/networking practicalities, etc., and just a small snapshot, not a definitive statement on any actual limits or discrete performance change or level.
@bkamphaus: hmm, thanks for the very informative advice. I never thought that running the peer and transactor on the same machine could cause JVM and GC problems. I will try to put them on different machines. Since we will not store the data in the cloud, we haven't used any Amazon services or products so far.
But I am thinking of using a real storage backend instead of just dev mode; I hope that would improve the performance.
@bkamphaus: double-checking that I've understood this: essentially, throughput is almost never a problem itself except that it can mask the size-related performance issues the high throughput enables?
@curtosis: to be a little more precise, I’m commenting that any high level of sustainable Datomic throughput is fairly simple to achieve.
if you're struggling to really push throughput optimization, unless you already have e.g. a rotating time- or domain-based sharding strategy, that indicates your data may be too large to be a good fit for Datomic.
ah, I see. the "inverted" perspective is a better one. "If you're having throughput problems, that's a good sign you're gonna have a bad time for other reasons" (modulo mitigation strategies).
Right, the “I want to put 1 billion datoms in a Datomic db a day” problem indicates you should step back and say “Is Datomic a good fit for 365 billion facts?” Today, probably not so much. (And “modulo mitigation strategies”, I like that phrasing.)
and from an architect's perspective I think I'd be inclined by "mitigation strategies" to mean, in part "confidence your domain is naturally (or at least sanely) shardable/rotatable"
But import tuning, etc. is worth making sure you're taking reasonable first steps: tuning, distributing the system correctly, etc. I'm not trying to dissuade anyone from perf tuning their imports. Just run through this sanity checklist: 1. Know what your current throughput is (and how to measure it). 2. Know what your throughput target/requirements are. 3. Know what their implications are and whether they're compatible with your use of Datomic in general.
1 billion datoms/day is ~11,500 datoms/second.
I see the problem shape come up often of “What are you trying to put into Datomic?” “everything that’s coming in from somewhere else”, “how much is that?” “I don’t know”, “how fast is it going in now?” “I don’t know”, “how fast do you need it to be?” “faster”, “What problem does that solve?” “It’s not going fast enough” … just want to tease out the salient details.
10 billion datoms/year = 317 datoms/second average
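The two rates quoted above are simple arithmetic; at the REPL:

```clojure
;; Back-of-envelope checks for the rates quoted above:
(long (/ 1e9 86400))           ;; => 11574, i.e. ~11.5k datoms/sec for 1 billion/day
(long (/ 1e10 (* 365 86400))) ;; => 317 datoms/sec for 10 billion/year
```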
for comparison, you can hit 10k+ triples/sec on a tuned high-performance triplestore. But it's not exactly in the same problem space as Datomic. And your query throughput will be very different.
@curtosis: yes, and that's a peak burst rate. How many systems can sustain 10k triples/second 24 hours per day for weeks at a time?
heh.. @stuartsierra same thought track
Unfortunately, most database benchmarks only report peak burst rate. It's expensive (and less impressive) to test throughput per week.