This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2018-05-16
Channels
- # architecture (12)
- # aws (8)
- # bangalore-clj (1)
- # beginners (172)
- # boot (25)
- # chestnut (3)
- # cider (15)
- # cljsrn (5)
- # clojure (170)
- # clojure-india (1)
- # clojure-italy (21)
- # clojure-nl (87)
- # clojure-romania (3)
- # clojure-sg (1)
- # clojure-spec (1)
- # clojure-uk (79)
- # clojurescript (79)
- # cursive (2)
- # datomic (29)
- # dirac (26)
- # emacs (7)
- # fulcro (13)
- # jobs (4)
- # juxt (22)
- # lein-figwheel (1)
- # leiningen (2)
- # lumo (39)
- # nrepl (1)
- # off-topic (54)
- # onyx (124)
- # pedestal (1)
- # planck (4)
- # portkey (1)
- # re-frame (36)
- # reagent (2)
- # ring-swagger (8)
- # shadow-cljs (107)
- # spacemacs (1)
- # specter (25)
- # sql (7)
- # tools-deps (5)
- # vim (10)
- # yada (25)
transducers + entity api is as powerful as datalog, with some different properties. Great combination
pull is even better than entity
@U064X3EF3 @U2J4FRT2T I think the comparison deserves more nuance. Entity is good for navigating on one data path and making late-bound decisions along the way. Pull is good for collecting data along a bunch of data paths with less control and expressiveness. Entity can be a good substrate for implementing a richer version of Pull (e.g Om Next parsers or GraphQL). Finally, let's not forget that these 2 compose!
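A sketch of the "these 2 compose" point, assuming the peer API (datomic.api), a db value, and hypothetical :order/* and :line/* attributes:

```clojure
(require '[datomic.api :as d])

;; Hypothetical schema: :order/lines is a cardinality-many ref.
(defn discounted-line-data [db order-id]
  ;; 1. Navigate with entity, making a late-bound decision per line item...
  (let [order  (d/entity db order-id)
        chosen (filter :line/discount? (:order/lines order))]
    ;; 2. ...then use pull to collect a fixed shape from each chosen entity.
    (mapv #(d/pull db
                   [:line/sku :line/qty {:line/product [:product/name]}]
                   (:db/id %))
          chosen)))
```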
pull with transducers is less cool than with entity. If I swap entity for pull, I will need to write a pattern that "matches" the transducers. And with pull, I will not be able to be as lazy as I am with entity
Example
(eduction
  (map :cart/itens)
  (filter custom-item-pred?)
  (map :cart/_itens)
  (d/entity db id))
If I ask for empty? on this, it will be way faster than datalog or pull
fair enough! I’ve mostly been using client lately, which doesn’t have the entity api…
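The laziness claim is about eduction itself, not Datomic: an eduction is a delayed recipe, and empty? realizes only enough input to answer. A minimal plain-Clojure sketch (the counter is made up for illustration):

```clojure
;; Count how many inputs the pipeline actually touches.
(def realized (atom 0))

(def squares
  (eduction (map (fn [x] (swap! realized inc) (* x x)))
            (range 1000000)))

;; Nothing has run yet. empty? only needs to know whether a first
;; element exists, so it realizes one chunk of input, not a million.
(empty? squares)   ;; => false
@realized          ;; far below 1000000 (one chunk)
```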
But peers will never be deprecated, right? 😉 Peer is a different product. The Client API feels like a traditional DB (sure, with the awesomeness of the datomic model)
I’m not on the Datomic team, so can’t answer any questions about future direction (as I don’t know)
Entity is more general; pull can be implemented in terms of entity, but not vice versa
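A toy illustration of "pull can be implemented in terms of entity": walking a pull-like pattern by navigating entity maps. Heavily simplified (no reverse refs, wildcards, defaults, or recursion limits); the attribute in the usage line reuses :cart/itens from the example above:

```clojure
(defn mini-pull
  "Toy pull over a d/entity: pattern is a vector of keywords
   or {ref-attr subpattern} maps, like a stripped-down pull pattern."
  [entity pattern]
  (reduce (fn [acc p]
            (cond
              (keyword? p) (assoc acc p (get entity p))
              (map? p) (let [[k subpat] (first p)
                             v (get entity k)]
                         (assoc acc k
                                (if (set? v)            ; cardinality-many ref
                                  (mapv #(mini-pull % subpat) v)
                                  (some-> v (mini-pull subpat)))))
              :else acc))
          {}
          pattern))

;; Usage sketch, assuming a db value and an entity id:
;; (mini-pull (d/entity db id) [:db/id {:cart/itens [:db/id]}])
```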
Does Datomic cloud allow bypassing the load balancer and controlling which peer you hit? In other words, can I direct similar queries at particular peers to get the most out of the object cache?
That's my question really. The use case I'm considering it for requires high-frequency, very-low-latency reads (not write-heavy). Should I try on-prem if this kind of read optimization might be necessary?
no, Cloud will use things like sticky sessions / query affinity / etc. to route multiple subsequent queries to the same node
additionally, when Query Groups become available, you will be able to provision and specify a query group dedicated to any particular workload you have
Ok, cool. Is there documentation on how query affinity is determined?
Do I also have control over the size of memcached? I basically want memcached to be the primary storage and misses to be extremely rare.
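For context, on on-prem (as opposed to Cloud) the memcached tier is yours to size; Datomic is just pointed at it. A sketch with hypothetical hostnames:

```
# transactor properties file: comma-separated memcached endpoints
memcached=mc-a.internal:11211,mc-b.internal:11211

# peers opt in with a JVM system property
-Ddatomic.memcachedServers=mc-a.internal:11211,mc-b.internal:11211
```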
Great, sounds good. I'll have to evaluate it in practice. Thanks a lot.
Operations question about the on-prem topology running on AWS (apologies if there’s an answer in the docs that I missed, but I haven’t found one): is there a way to discover, at a given moment in time, the IP address of every peer connected to a given transactor?
I seem to recall someone saying that using SQUUIDs was no longer necessary, but having switched to regular UUID generation, I suspect that this is not the case for my Datomic version because of how slow indexing is (v 5.0.5407) - can someone confirm that?
as I understand it, squuids are not necessary in cloud, have no idea if anything changed re on-prem
@val_waeselynck Adaptive indexing is the change that I would have expected to remove most of the advantage of SQUUIDs (http://blog.datomic.com/2014/03/datomic-adaptive-indexing.html). It was released in version 0.9.4699
Ok, that's what I thought. I'll benchmark tomorrow
@U05120CBV solved it. The culprit was not slow index writes due to dispersion of values; it was actually slow index reads in a transaction function due to a high dispersion of EAVT lookups, resulting in lots of cache misses. Reordering the imports sensibly solved it. Adaptive indexing works fine AFAICT.
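For reference, the SQUUID trade-off under discussion: d/squuid embeds the current time in the UUID's high bits, so ids minted close together sort near each other in the indexes, whereas random UUIDs scatter across them. A minimal sketch (requires the peer library):

```clojure
(require '[datomic.api :as d])

;; Two SQUUIDs minted back to back share a time-based prefix,
;; keeping freshly written index segments clustered together.
(def a (d/squuid))
(def b (d/squuid))

;; The embedded timestamp can be recovered:
(d/squuid-time-millis a)

;; A fully random UUID has no such temporal locality:
(java.util.UUID/randomUUID)
```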