#datomic
2015-12-17
magnars05:12:14

@domkm it may not be part of the official API, but (dec t) works. I saw it used first here: http://dbs-are-fn.com/2013/datomic_history_of_an_entity/

domkm05:12:14

@magnars: ts are not sequential (on my system).

domkm05:12:30

I wish they were 😞

magnars05:12:30

they don't have to be - (as-of (dec t)) works even if there is no such t as (dec t)

domkm05:12:40

@magnars: (tx-range log (dec t) nil) provides the same range as (tx-range log t nil) if (dec t) isn't a real t. It rounds up.

domkm05:12:28

...with the caveat that I am using the free storage protocol. Maybe other ones work differently, though I hope they would be consistent.

magnars05:12:28

That's strange. The code in http://dbs-are-fn.com/2013/datomic_history_of_an_entity/ shouldn't work then. Maybe it no longer does.

robert-stuttaford07:12:01

it does round up. the only real way to find a previous t is to iterate dec, testing d/t->tx until you find one, @domkm
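A minimal sketch of that iterate-and-test approach (not from the thread verbatim; since d/t->tx is pure arithmetic and never fails, I'm assuming the practical test is whether the resulting tx entity has a :db/txInstant):

```clojure
(require '[datomic.api :as d])

(defn previous-t
  "Greatest t strictly less than `t` that names a real transaction in `db`,
  or nil if there is none."
  [db t]
  (->> (iterate dec (dec t))
       (take-while pos?)
       (filter #(:db/txInstant (d/entity db (d/t->tx %))))
       first))
```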

robert-stuttaford07:12:33

personally i think it’s annoying that the api doesn’t provide an easy way to traverse time backwards like this

robert-stuttaford07:12:46

we do an activity stream (most recent first) and had to deal with this too

joseph14:12:55

@robert-stuttaford: Hi Robert, just an update on importing the large data set (around 340 million datoms, split into 120 variables) from MySQL to Datomic: it hasn't finished yet, still processing, but it seems to be going well. And I request-index and sync-index after importing each variable's datoms.
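A hedged sketch of the per-variable pattern joseph describes; `variables` and `import-variable!` are hypothetical stand-ins for his MySQL import code:

```clojure
(require '[datomic.api :as d])

(defn import-with-indexing!
  "For each variable, transact its datoms, then kick off an index job and
  block until the index has caught up before starting the next one.
  `import-variable!` is a hypothetical fn that transacts one variable's data."
  [conn variables import-variable!]
  (doseq [variable variables]
    (import-variable! conn variable)
    (d/request-index conn)
    @(d/sync-index conn (d/basis-t (d/db conn)))))
```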

joseph14:12:04

Found the reason for the previous failure: a peer/transactor timeout error. So I raised datomic.peerConnectionTTLMsec and datomic.txTimeoutMsec from the default 10 seconds to 1 minute
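Those are peer-side JVM system properties, so they have to be in place before the peer connects. A sketch of the change joseph describes (60000 ms = 1 minute):

```clojure
;; either -Ddatomic.txTimeoutMsec=60000 -Ddatomic.peerConnectionTTLMsec=60000
;; on the peer's command line, or programmatically before calling d/connect:
(System/setProperty "datomic.txTimeoutMsec" "60000")
(System/setProperty "datomic.peerConnectionTTLMsec" "60000")
```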

joseph14:12:11

it's working now

joseph14:12:45

BTW, each sync-index call takes around 15-25 seconds

robert-stuttaford14:12:24

that’s fantastic!

joseph14:12:43

@robert-stuttaford: hmm... it failed with the error java.lang.RuntimeException: HQ119028: Timeout waiting for LargeMessage Body in the peer log. I guess it's because I set memory-index-threshold too high, at 256m

joseph14:12:57

I changed it to 64m, and this is my other config:

joseph14:12:13

object-cache-max=2g memory-index-max=4g memory-index-threshold=64m
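For context, those settings live in the transactor's properties file; a hedged sketch of the relevant lines (the protocol/host lines are assumptions for a dev setup, only the three memory settings come from joseph's message):

```
protocol=dev
host=localhost
object-cache-max=2g
memory-index-max=4g
memory-index-threshold=64m
```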

joseph14:12:24

I'll try again, and set the timeout to 2 minutes

robert-stuttaford14:12:39

@bkamphaus: over to you on this one 🙂

robert-stuttaford14:12:34

joseph, are you starting from scratch or resuming where it failed?

robert-stuttaford14:12:56

if you’re not resuming, i would highly recommend you stop and make it so you can resume before doing anything else 🙂
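A hypothetical sketch of that advice: checkpoint the index of the last successfully transacted batch so a crashed import can resume instead of starting over. `transact-batch!` is an assumed helper that derefs the transaction future:

```clojure
(require '[clojure.java.io :as io])

(defn import-resumably!
  "Transacts `batches` in order, persisting a checkpoint after each one so a
  restart skips everything already imported."
  [conn batches checkpoint-file transact-batch!]
  (let [done (if (.exists (io/file checkpoint-file))
               (read-string (slurp checkpoint-file))
               0)]
    (doseq [[i batch] (drop done (map-indexed vector batches))]
      (transact-batch! conn batch)
      (spit checkpoint-file (pr-str (inc i))))))
```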

Ben Kamphaus14:12:29

@joseph: do you have metrics reported on the imports anywhere (i.e. cloudwatch), or alternatively are you at least saving logs where you can grep through them?

joseph14:12:46

@bkamphaus: I am using dev mode, so I just save the logs

Ben Kamphaus14:12:41

All contained on one machine? I.e. same machine is running peer import process, transactor (and its writes to file system via dev)?

joseph14:12:56

@robert-stuttaford: because I don't have any clue where it failed when importing the datoms, so I just start from scratch

joseph14:12:42

@bkamphaus: is that a problem?

Ben Kamphaus14:12:20

There will be intrinsic constraints due to the processes running on the same box: GC-induced pauses, multi-JVM stresses, etc. It will at least introduce a level of unreliability you have to accommodate with increased timeouts, probably increased heartbeat time, and potentially GC tuning on the peer app (G1GC settings similar to the transactor's from the command line).
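For reference, the G1GC settings Ben alludes to would look roughly like this on the import peer's command line (heap sizes are illustrative and `my-import.jar` is a hypothetical stand-in for the peer import process; check the current Datomic docs for recommended values):

```
java -Xms4g -Xmx4g -XX:+UseG1GC -XX:MaxGCPauseMillis=50 -jar my-import.jar
```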

Ben Kamphaus14:12:02

It won’t be near the transaction/import volume you’d get from a distributed system with a dedicated transactor box and dedicated storage. The indexing overhead will be harder to bear, too, without a dedicated transactor.

Ben Kamphaus14:12:51

but the big step back for any of these imports, when assessing machine/system config, the approach taken in the peer app doing the imports, transactor settings, etc., is: what is the target throughput? In bytes, datoms, or at least transaction count?

Ben Kamphaus14:12:19

You can use the TransactionBytes or TransactionDatoms sums (grepped from the logged metrics or monitored in CloudWatch) to see how much throughput you’re getting in terms of both (Datomic metrics are reported per minute)

Ben Kamphaus14:12:23

Sum that over several hours or, e.g., a day (a long enough time period that you’re capturing the overhead introduced by indexing), and you can assess your current throughput and whether or not it fits your requirements.
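A rough sketch of that summing step, assuming you're grepping logs where the per-minute metrics map is printed; the exact log format is an assumption, so adjust the regex to whatever your logs actually contain:

```clojure
(require '[clojure.java.io :as io])

(defn total-transaction-datoms
  "Sums the :sum field of every TransactionDatoms metric line in `log-file`."
  [log-file]
  (with-open [rdr (io/reader log-file)]
    (->> (line-seq rdr)
         (keep #(second (re-find #":TransactionDatoms \{[^}]*:sum (\d+)" %)))
         (map #(Long/parseLong %))
         (reduce + 0))))
```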

Ben Kamphaus15:12:42

But our expectation is that Datomic’s throughput capacity, even when not optimally configured, is sufficient to get you into trouble with database size very, very quickly. I.e. we know that you can transact more than 1 billion datoms a day with a well tuned but not particularly rigorously optimized system and a fast, well provisioned storage, and that you start incurring more performance costs for ongoing operations over 10 billion datoms in size. Note these numbers are soft estimates based on hardware/networking practicalities, etc. and just a small snapshot, not a definitive statement on any actual limits or discrete performance change or level.

joseph15:12:53

@bkamphaus: hmm, thanks for the very informative advice. I never thought that running the peer and transactor on the same machine could cause potential JVM and GC problems. I will try to put them on different machines. Since we will not store the data in the cloud, we haven't used any Amazon services or products so far.

joseph15:12:28

But I am thinking of using a real storage backend instead of just dev mode. Hope that would improve the performance

curtosis16:12:38

@bkamphaus: double-checking that I've understood this: essentially, throughput is almost never a problem in itself, except that it can mask the size-related performance issues that high throughput enables?

Ben Kamphaus16:12:51

@curtosis: to be a little more precise, I’m commenting that any high level of sustainable Datomic throughput is fairly simple to achieve.

Ben Kamphaus16:12:26

struggling to push throughput further through optimization (unless you already have, e.g., a rotating, time- or domain-based sharding strategy or something) indicates that your data may be too large to be a good fit for Datomic.

curtosis16:12:55

ah, I see. the "inverted" perspective is a better one. "If you're having throughput problems, that's a good sign you're gonna have a bad time for other reasons" (modulo mitigation strategies).

Ben Kamphaus16:12:26

Right, the “I want to put 1 billion datoms in a Datomic db a day” problem indicates you should step back and say “Is Datomic a good fit for 365 billion facts?” Today, probably not so much (modulo mitigation strategies) — I like that phrasing 🙂

curtosis16:12:44

and from an architect's perspective I think I'd be inclined to read "mitigation strategies" as meaning, in part, "confidence that your domain is naturally (or at least sanely) shardable/rotatable"

Ben Kamphaus16:12:02

But import tuning, etc. is worth doing; make sure you’re taking reasonable first steps: tuning and distributing the system correctly, and so on. I’m not trying to dissuade anyone from perf tuning their imports. Just run through this sanity check list: 1. Know what your current throughput is (and how to measure it). 2. Know what your throughput target/requirements are. 3. Know what their implications are and whether they’re compatible with your use of Datomic in general.

Lambda/Sierra16:12:24

1 billion datoms/day is ~11,500 datoms/second.

Ben Kamphaus16:12:02

I see this problem shape come up often: “What are you trying to put into Datomic?” “Everything that’s coming in from somewhere else.” “How much is that?” “I don’t know.” “How fast is it going in now?” “I don’t know.” “How fast do you need it to be?” “Faster.” “What problem does that solve?” “It’s not going fast enough.” … 🙂 just want to tease out the salient details.

curtosis16:12:48

lol. I'm very familiar with this sequence of questions/answers.

curtosis16:12:27

frequently reduced to just the last two.

Lambda/Sierra16:12:07

10 billion datoms/year = 317 datoms/second average
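The back-of-the-envelope arithmetic behind both figures:

```clojure
(quot 1000000000 86400)           ;=> 11574, i.e. ~11,500 datoms/second for 1e9 datoms/day
(quot 10000000000 (* 365 86400))  ;=> 317 datoms/second average for 1e10 datoms/year
```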

curtosis16:12:20

for comparison, you can hit 10k+ triples/sec on a tuned high-performance triplestore. But it's not exactly in the same problem space as Datomic. And your query throughput will be very different.

Lambda/Sierra16:12:33

@curtosis: yes, and that's a peak burst rate. How many systems can sustain 10k triples/second 24 hours per day for weeks at a time?

curtosis16:12:38

(although, also, I doubt you can sustain that load rate!)

curtosis16:12:55

heh.. @stuartsierra same thought track 🙂

Lambda/Sierra16:12:06

Unfortunately, most database benchmarks only report peak burst rate. It's expensive (and less impressive) to test throughput per week.

curtosis16:12:02

also easier to optimize for