#datahike
2021-05-12
whilo01:05:18

@arohner Nice to have you around, I was just checking the status of Spectrum 🙂. The README is slightly outdated. I think we should already be able to scale to larger setups, but we have not done tests with billions of entities yet. Write performance improved significantly in version 0.3.3 (~20k Datoms/sec bulk throughput on my machine), and we are now working on read performance. One issue we recently encountered is that the query engine we inherited from DataScript does unnecessary full range scans in some cases, which is completely prohibitive for larger databases. We also run an open reading group on Datalog and are designing a more advanced query engine. Please join our Discord if you'd like to discuss further.

Josh08:05:14

Are you doing anything special to achieve 20k Datoms/sec? I’m trying to import ~47,000 datoms and getting speeds of around 500-1k datoms/sec. I’m on version 0.3.6 and using the file backend.
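One common approach to the import-throughput question above is to transact datoms in larger batches rather than one at a time, since each `transact` call flushes to the file store. The sketch below is a minimal illustration, not an official recipe: it assumes the `datahike.api` namespace from the 0.3.x line, and the batch size of 1000 and the path `/tmp/datahike-import` are illustrative guesses.

```clojure
;; Hedged sketch: batched bulk import against Datahike's file backend.
;; Assumes datahike is on the classpath (0.3.x API).
(require '[datahike.api :as d])

;; Illustrative config; the :path value is a placeholder.
(def cfg {:store {:backend :file :path "/tmp/datahike-import"}})

(when-not (d/database-exists? cfg)
  (d/create-database cfg))

(def conn (d/connect cfg))

;; Transacting in batches amortizes the per-transaction flush to the
;; file store, which tends to dominate small-transaction import times.
(defn import-in-batches
  [conn tx-data batch-size]
  (doseq [batch (partition-all batch-size tx-data)]
    (d/transact conn (vec batch))))

;; Usage sketch (batch size of 1000 is a guess, not a tuned value):
;; (import-in-batches conn my-47k-datoms 1000)
```

Whether batching alone closes the gap to the quoted 20k Datoms/sec likely also depends on machine, schema, and Datahike version.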

arohner08:05:36

Spectrum is fairly dormant, I’m afraid. I’m still interested in working on it, but there just isn’t enough time in the day right now 🙂

arohner08:05:43

Thanks for the update. I’ll check out the Discord

👍 1