This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2017-05-16
is datomic a good fit for managing trees of data (predominantly)? There are relations between the data at different levels too.
@wistb so DAGs actually :)
I'd say definitely yes for reading
To be seen for writing
Would be easier if you gave us examples of data, reads and writes
have you seen anyone replacing a hibernate layer with datomic? Our situation is like that. we use spring/jpa/hibernate/postgres/oracle. In the interest of keeping the business logic intact, we are wondering if we can replace the ORM with datomic ....
@wistb AFAIK there are no Datomic ORMs (Datomic is not 'R', and its community is not too fond of 'O' 😉 ). Reimplementing all of JPA on top of Datomic would probably be a huge endeavour IMO. Out of curiosity, what leverage do you expect from Datomic if you're planning on 'hiding' it behind a JPA-like interface? Application logic is where the specifics of Datomic usually shine (datalog, rules, pull, entity API, non-remote querying etc.)
Although I do understand the desire to make a smooth transition
the read interface shouldn't be too hard, I guess you can have each Entity class have a private datomic.Entity field and implement the getters on top of that
writing is probably more problematic
(Datalog beginner) Is there a reason why Datomic/Datalog does not support hash-map return values, e.g. [:find {:id ?e :title ?title} :where [?e :product/title ?title]] => [{:id 1789... :title "Title 1"} ...]
My first guess would be to do with subquerying and set uniqueness
use [:find [(pull ?e [:id :title]) ...] :where [?e :product/title ?title]]
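For context, a sketch of how a collection find-spec wrapping `pull` yields map-shaped results. This assumes a Datomic peer connection `conn` and an illustrative attribute `:product/title`; the attribute names and output are illustrative, not from an actual run:

```clojure
;; Sketch: a collection find-spec `[(pull ?e pattern) ...]` returns a
;; vector of maps rather than a set of tuples.
;; Assumes a peer connection `conn` and an attribute :product/title.
(require '[datomic.api :as d])

(d/q '[:find [(pull ?e [:db/id :product/title]) ...]
       :where [?e :product/title ?title]]
     (d/db conn))
;; returns something like:
;; [{:db/id 17592186045418, :product/title "Title 1"} ...]
```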
Thanks @karol.adamiec! 🙂
Date/instant attribute naming best practice: :thing/arrival-date
vs :thing/arrived-on
vs :thing/arrived-at
?
I have a slight preference for :thing/arrival-date
which is more informative re: type.
The noun vs verb debate is not a big deal IMHO - clarity and searchability are the important concepts
I intermittently get:
Exception in thread "main" clojure.lang.ExceptionInfo: Error communicating with HOST 0.0.0.0 or ALT_HOST datomic on PORT 4334
when a peer / kubernetes replica starts up and attempts to connect, using SQL storage with pro license. sometimes kubernetes will restart it up to 4 times after it crashes, but it always eventually connects. is this typically thrown if it can't establish network or are there other reasons? should my peer retry a few times before crashing?
That means the peer connected to storage, but could not connect to the transactor via the provided "host=" or "alt-host=" config values set in the transactor's transactor.properties file @devth
i thought i needed it to be 0.0.0.0 in order to work, but it's been awhile since i first set it up on k8s
storage (sql) goes through a proxy (google cloud sql proxy) on localhost to the cloud mysql instance
what I mean is, how does the proxy get the destination address? I am seeing if these are determined via different systems, which would make it possible that one system is up and the other is not yet
e.g. if proxy forwarded to hardcoded IP (likely for google cloud mysql), then maybe dns is just not up yet
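If transient startup ordering is the culprit, one option is wrapping `d/connect` in a retry loop instead of letting the pod crash. A sketch under that assumption; the function and parameter names are hypothetical, not a Datomic API:

```clojure
;; Sketch: retry d/connect a few times before giving up, assuming
;; connection failures at pod startup are transient (e.g. DNS or the
;; cloud-sql proxy not yet ready). Names here are hypothetical.
(require '[datomic.api :as d])

(defn connect-with-retry
  "Try (d/connect uri) up to max-attempts times, sleeping wait-ms
  between attempts; rethrows the last exception on final failure."
  [uri {:keys [max-attempts wait-ms] :or {max-attempts 5 wait-ms 2000}}]
  (loop [attempt 1]
    (let [result (try
                   (d/connect uri)
                   (catch Exception e
                     (if (< attempt max-attempts)
                       ::retry
                       (throw e))))]
      (if (= ::retry result)
        (do (Thread/sleep wait-ms)
            (recur (inc attempt)))
        result))))
```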
@val_waeselynck I liked this : "Application logic is where the specifics of Datomic usually shine (datalog, rules, pull, entity API, non-remote querying etc.)" . It is nice take-away point .
Hi, I have an architecture question. I’ve a Datomic/Clojure based microservice. Some of its transactions are ‘domain events’ that I want to shoot off to kafka. I identify them with an attribute on the relevant transaction entity. I’d been playing around with Onyx, it’s cool, but I’m starting to think it might be overkill for this use case, which is really just grabbing relevant transactions, mapping their attributes to the event structure and sending them on their way to kafka. I’ve been looking at datomic’s tx-report-queue, as it looks like a listener on that guy would pretty much meet my needs. But I’m not clear on some of its semantics. it seems like each peer gets its own queue? if so then I’d potentially be processing (num peers) copies of the same transaction/event.
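For reference, a minimal sketch of consuming `d/tx-report-queue` for this use case. The attribute `:event/domain-event?` and the `publish-fn` callback are hypothetical names standing in for the question's tx-entity marker and kafka producer; and since each peer does get its own queue, you'd want exactly one consuming peer (or idempotent, deduplicated publishing downstream):

```clojure
;; Sketch: forward flagged transactions to a publisher, assuming a
;; hypothetical tx-entity attribute :event/domain-event? and a
;; caller-supplied publish-fn (e.g. a kafka producer wrapper).
(require '[datomic.api :as d])

(defn start-event-forwarder!
  "Consume this peer's tx-report-queue on a background thread.
  Run on exactly one peer, or make publish-fn idempotent."
  [conn publish-fn]
  (let [queue (d/tx-report-queue conn)]   ; a java BlockingQueue
    (future
      (loop []
        (let [{:keys [db-after tx-data]} (.take queue)
              tx-eid (:tx (first tx-data))          ; the tx entity id
              tx-ent (d/entity db-after tx-eid)]
          (when (:event/domain-event? tx-ent)       ; only flagged txs
            (publish-fn tx-data))
          (recur))))))
```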
If every entity needs a globally unique identifier, do you guys prefer to add it under an entity-namespaced attribute (e.g. :product/id for your product entity, or :organization/id for your organization entity) or a globally understood name (e.g. every entity has the attribute :entity/id)?
I prefer to have separate ID attributes per "type" of entity, because it allows me to query for them more easily
I don't have to "duck type", I can just look specifically for :product/id 123. Otherwise I'd have to query for :generic-id 123 and then later figure out if the facts for that entity are of the type I expect
@augustl Wouldn't your query effectively be the same either way?
'[:find ?name
:in $ ?id
:where
[?e :product/id ?id]
[?e :product/name ?name]]
'[:find ?name
:in $ ?id
:where
[?e :entity/id ?id]
[?e :product/name ?name]]
I like to just query for the id and then build entity objects, not pull out attributes in the query
Still seems pretty similar 🙂
(:product/name (d/entity db [:product/id my-id]))
(:product/name (d/entity db [:entity/id my-id]))
so your URL could be /people/5 and that could be an ID of a product, not a person, and you'd still get data from the query
Isn't the type implicit with the entity attributes? i.e. Because the entity has the :product/name attribute it is therefore a Product entity.
but if you want a full collection of attributes you'd just end up with a sloppy system, I'd say
where you would get a person with the id 5 (even though 5 is a product) and all the attributes would be nil
I think you'd still be covered tho':
(let [{:product/keys [id] :as e} (d/pull db '[*] [:product/id my-id])]
(if id
{:status 200 :body e}
{:status 404}))
(let [{:product/keys [name] :as e} (d/pull db '[*] [:entity/id my-id])]
(if name
{:status 200 :body e}
{:status 404}))
And you wouldn't end up with a map of nils. From the docs:
> attribute specifications that do not match an entity are omitted from that entity's result map
I think I see what you're getting at: you just want a globally consistent key to check for an entity's type?
IIRC, d/entity will always return an entity even if the entity does not exist. So, you'd still need the if.
Yeah, you'd still need to check if pulling :product/id off the entity was nil before proceeding.
(transact conn (into #{[:db/add (d/tempid :db.part/tx) :audit/user x] [:db/add (d/tempid :db.part/tx) :audit/ns y]} other-tx))
resolves both tempids to the same transaction id. Is that a feature? Should I use it?
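It is a feature: a tempid in the :db.part/tx partition resolves to the id of the current transaction, which is the idiomatic way to attach reified metadata to the transaction entity itself. A minimal sketch using the attribute names from the question (the values are illustrative):

```clojure
;; Sketch: every tempid in the :db.part/tx partition resolves to the
;; current transaction entity, so both facts below become transaction
;; metadata on the same entity. Attribute values are illustrative.
(require '[datomic.api :as d])

@(d/transact conn
   [[:db/add (d/tempid :db.part/tx) :audit/user "alice"]
    [:db/add (d/tempid :db.part/tx) :audit/ns "billing"]])
;; both datoms are asserted on one entity: the transaction itself
```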