#xtdb
2019-07-25
hoppy05:07:46

did some load testing with the rocks/rocks Crux setup. Made 1M (sorta) random documents: 4 keys that matter, about a dozen that don't. Was getting an insertion rate of about 400/sec on a craptop with an SSD.

hoppy05:07:05

DBs weighed in at about 800MB (in total) afterward.

hoppy05:07:29

both of these look pretty good

🙂 12
hoppy05:07:49

transactions were batched at 1K docs each
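
(For reference, a minimal sketch of this kind of load test, assuming a started Crux node bound to node; the doc shape, the key names, and the load-docs! helper are placeholders, and the exact :crux.tx/put shape has varied between Crux releases.)

(require '[crux.api :as crux])

(defn random-doc [i]
  ;; four keys that matter plus a dozen filler keys; all values here are
  ;; illustrative placeholders
  (merge {:crux.db/id (keyword (str "doc-" i))
          :k1 (rand-int 100)
          :k2 (rand-int 100)
          :k3 (rand-int 100)
          :k4 (rand-int 100)}
         (zipmap (map #(keyword (str "filler-" %)) (range 12))
                 (repeatedly 12 #(rand-int 1000)))))

(defn load-docs! [node n batch-size]
  ;; submit n random documents in transactions of batch-size puts each
  (doseq [batch (partition-all batch-size (map random-doc (range n)))]
    (crux/submit-tx node (vec (for [doc batch]
                                [:crux.tx/put doc])))))

;; (load-docs! node 1000000 1000) ; 1M docs, 1K docs per transaction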

refset09:07:57

@hoppy nice! Are you comparing against anything in particular? Is that insertion rate for your batched transactions or averaged out for documents?

hoppy13:07:06

yeah, it's much faster than the garbage we are using now (;->

parrot 4
hoppy13:07:02

The insertion rate is a gross number, as in it took 45 mins to get 1M inserted, so ...

hoppy13:07:39

afterwards, I played with the batch size a bit, and throughput seemed a lot less sensitive to it than one might expect.

refset13:07:26

Cool, okay good to know. We'll see what we can do to speed it up further!

hoppy13:07:04

of course, all of this was done to test query speed, so I'm getting into that now...

🤞 4
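
(A hypothetical first query over one of the keys that matter from the loading sketch above; :k1 and the value 42 are placeholders, but crux.api/q and crux.api/db are the standard query entry points.)

(require '[crux.api :as crux])

;; time a simple Datalog query against the freshly indexed documents
(time
 (crux/q (crux/db node)
         '{:find [e]
           :where [[e :k1 42]]}))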
jeroenvandijk11:07:31

Regarding insertion rate, is there a rule of thumb for what can be expected? Linear with the underlying datastore (e.g. Kafka) + x%?

refset12:07:41

@U0FT7SRLP insertion throughput across a horizontally scaled cluster of nodes will only be limited by the performance of the single-partition Kafka transaction topic (aside from some basic transaction validation, the Crux overheads are very limited), but end-to-end ingestion (retrieval + indexing at one or more nodes) will be mostly limited by the local KV store performance (i.e. RocksDB). We don't have any rules of thumb currently, but we hope to make these kinds of benchmarks available very soon.
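
(A rough way to observe the two numbers separately, assuming the node and the load-docs! sketch from above: submit-tx returns once a transaction is accepted onto the log, while sync blocks until the node has indexed everything into its local KV store; sync's exact arity has varied across Crux versions.)

(require '[crux.api :as crux])

;; submission side: limited by the Kafka transaction topic
(time (load-docs! node 1000000 1000))

;; end-to-end side: wait for local RocksDB indexing to catch up
(time (crux/sync node (java.time.Duration/ofSeconds 600)))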

jeroenvandijk12:07:26

@U899JBRPF Thanks! Sounds good

👌 4