2017-05-08
Channels
- # aws (9)
- # beginners (69)
- # boot (14)
- # cider (26)
- # cljs-dev (56)
- # cljsrn (9)
- # clojars (4)
- # clojure (229)
- # clojure-brasil (1)
- # clojure-france (11)
- # clojure-greece (2)
- # clojure-italy (4)
- # clojure-mke (6)
- # clojure-serbia (6)
- # clojure-spec (83)
- # clojure-uk (38)
- # clojurescript (171)
- # core-async (3)
- # cursive (11)
- # data-science (11)
- # datomic (27)
- # emacs (113)
- # funcool (6)
- # hoplon (4)
- # jobs (1)
- # luminus (13)
- # lumo (44)
- # off-topic (148)
- # onyx (5)
- # overtone (1)
- # pedestal (4)
- # powderkeg (1)
- # proton (2)
- # re-frame (150)
- # reagent (16)
- # ring-swagger (43)
- # spacemacs (4)
- # specter (36)
- # vim (4)
- # yada (10)
@lorenlarsen why can't the peer create the database?
@favila Well, the peer could create the database, but my thinking was that this isn't such a good idea long-term when I have multiple peers. I can use that as a workaround for now, though.
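For anyone following along, creating a database from a peer is a one-liner; a minimal sketch, assuming a dev-storage setup (the URI and database name are illustrative):
(require '[datomic.api :as d])
;; Illustrative URI -- substitute your own storage's connection string.
(def uri "datomic:dev://localhost:4334/my-db")
;; Returns true if it created the database, false if it already existed,
;; so repeated calls are benign.
(d/create-database uri)
(def conn (d/connect uri))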
I’ve read the docs a few times, but I’m still confused about Datomic objectCacheMax on the Peer. I understand the cache consists of uncompressed segments on-heap and compressed segments off-heap. If so, does objectCacheMax specify the max size for the off-heap+on-heap cache? If I set e.g. Xmx500m, objectCacheMax=250m, and MaxDirectMemorySize=32m, is the Peer smart enough to not blow the direct memory limit (if it even uses direct byte buffers)?
@dm3 The Datomic Peer does not currently use any off-heap memory. objectCacheMax is the size of the on-heap cache space.
Compressed segments are kept in the Storage Service and in Memcached.
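For context, the object cache is sized via the datomic.objectCacheMax system property on the peer JVM; a minimal sketch (the flag values are illustrative):
;; Peer JVM started with, e.g.:
;;   java -Xmx500m -Ddatomic.objectCacheMax=256m ...
;; The setting can be read back at runtime:
(System/getProperty "datomic.objectCacheMax")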
@nikki I don't think that's viable, because Datomic doesn't qualify as a source control system (no forking and merging, no line/expression-level diffing) - for good reasons.
Databases and codebases are inherently different (despite all the parallels drawn between them), because in databases the 'snapshots' are derived from incremental changes (transactions), whereas in codebases the incremental changes (patches) are derived from the snapshot (i.e. a consistent codebase)
The reason for that is that we reason about writing data in terms of incremental events, whereas we reason about writing code in terms of a coherent whole
at least that's what I do 🙂
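To make the "snapshots are derived from incremental changes" point concrete in Datomic terms, a minimal sketch (assumes an existing connection conn; the basis t value is illustrative):
(require '[datomic.api :as d])
;; The transaction log is the primary record of incremental changes:
(def log (d/log conn))
;; Each entry is a map like {:t ... :data [datoms ...]}:
(take 3 (d/tx-range log nil nil))
;; Any snapshot -- current or historical -- is derived from those changes:
(def db-now (d/db conn))
(def db-then (d/as-of db-now 1000)) ; t value illustrative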
@nikki yeah would love that especially for lisps 🙂
But I don't know if there are diffing algorithms for tree structures
not an expert at all
The thing is, we do like our indentation, so a diffing/merging algorithm must preserve more structure than just the CST
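For what it's worth, structural (rather than line-based) diffing of Clojure forms is easy to play with via clojure.data/diff, though as noted it sees only the data, not indentation or other formatting; a minimal sketch:
(require '[clojure.data :as data])
;; Two versions of the same form, read as data rather than text:
(def v1 '(defn area [r] (* 3.14 r r)))
(def v2 '(defn area [r] (* Math/PI r r)))
;; Returns [only-in-v1 only-in-v2 in-both], recursing into the tree,
;; which pinpoints the 3.14 -> Math/PI change:
(data/diff v1 v2)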
Hello everyone. I watched the “Day of Datomic” videos with the presentations by Stuart Halloway. In the section on query, he briefly discusses a query that looks something like this.
[:find ?item ?price
 :where [?item :product/costs ?price]
        [(> 256 ?price)]]
Now, it seems clear to me that, if you have a billion products in your DB, then several gigabytes of data need to flow over the wire to the peer in order to process this query (assume a cold cache) even if the result set is only going to contain a few items.
Am I wrong? If so, how is this avoided?
Otherwise, what is the typical approach to solving this problem when using Datomic?
Consider: 1) network is faster than disk, 2) only the peer running a query feels the load of the query
avoiding network traffic is rarely the scalability problem when your db is on the fast datacenter network
#2 is a clear advantage. But with most databases that I might use, the data are indexed in such a way that I could run this query without ever inspecting the vast majority of the records (via network, disk, or any other mechanism). So the time to arrive at your result set, assuming it is sufficiently small, will be logarithmic in the number of records. (Hurray for B-Trees!) Does Datomic really provide no way to perform a range query without evaluating a predicate on every single item in the database?
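For what it's worth, if :product/costs is indexed (:db/index true in its schema), the peer can answer this from the AVET index without touching most of the database; a minimal sketch, carrying over the names from the query above (conn is an assumed existing connection):
(require '[datomic.api :as d])
(def db (d/db conn))

;; The Datalog version, as in the talk:
(d/q '[:find ?item ?price
       :where [?item :product/costs ?price]
              [(> 256 ?price)]]
     db)

;; The index-walking version: seeks into AVET and lazily yields only
;; datoms for :product/costs with values before 256, so the peer pulls
;; segments for the matching range rather than for every product:
(->> (d/index-range db :product/costs nil 256)
     (map (fn [datom] [(:e datom) (:v datom)])))
Note that d/index-range stops before the end value, which matches the exclusive (> 256 ?price) predicate in the query.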