2021-01-14
Does Datomic Cloud support attribute type :db.type/bytes?
I don’t see it in the valueTypes https://docs.datomic.com/cloud/schema/schema-reference.html#db-valuetype
Unfortunately, db.type/bytes is not supported in Cloud or analytics. In supporting this value type in on-prem we saw a number of problems due to the Java semantics, which we discuss here: https://docs.datomic.com/on-prem/schema.html#bytes-limitations
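One commonly used workaround when :db.type/bytes is unavailable is to Base64-encode the byte array and store it in a :db.type/string attribute. A minimal sketch, assuming the payload round-trips through a string cleanly; the :session/data attribute name and the surrounding vars are hypothetical, not an official Datomic feature:
```
(require '[datomic.client.api :as d])
(import '(java.util Base64))

;; Hypothetical attribute for an opaque, Base64-encoded payload.
(def blob-schema
  [{:db/ident       :session/data
    :db/valueType   :db.type/string
    :db/cardinality :db.cardinality/one
    :db/doc         "Opaque session payload, Base64-encoded"}])

(defn bytes->b64 ^String [^bytes bs]
  (.encodeToString (Base64/getEncoder) bs))

(defn b64->bytes ^bytes [^String s]
  (.decode (Base64/getDecoder) s))

;; Usage sketch (conn and serialized-session are assumed to exist):
;; (d/transact conn {:tx-data blob-schema})
;; (d/transact conn {:tx-data [{:session/data (bytes->b64 serialized-session)}]})
```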
Alright thanks.
If this is a feature you need I'd be happy to share the use case with the team if you want to provide details. If we can't provide that type perhaps we can provide another solution that meets your needs.
I’m using a Java lib for managing sessions and I’d like to store them in Datomic. The session instances have an attribute map <Object, Object>. I wanted to be able to serialise the attribute map and store that in a session entity.
Basically a container of data that is semantically opaque. 😛
Might have to look at using a different storage mechanism for sessions.
Unless you have a different suggestion @U1QJACBUM
I considered that. What I ended up doing was using tuples for session key / value pairs.
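A rough sketch of that tuple-based approach, assuming both keys and values can be serialized to strings; the :session/* attribute names and example values are illustrative, not taken from the actual schema:
```
;; Illustrative schema: one [key value] tuple per session attribute.
(def session-schema
  [{:db/ident       :session/id
    :db/valueType   :db.type/string
    :db/cardinality :db.cardinality/one
    :db/unique      :db.unique/identity}
   {:db/ident       :session/attr
    :db/valueType   :db.type/tuple
    :db/tupleTypes  [:db.type/string :db.type/string]
    :db/cardinality :db.cardinality/many}])

;; Usage sketch: store each entry of the session's attribute map as a tuple.
;; (d/transact conn
;;   {:tx-data [{:session/id   "abc-123"
;;               :session/attr [["user-id" "42"] ["locale" "en_AU"]]}]})
```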
Hello Datomic/Clojure experts, I am trying to pull all the relevant information about Employees in one query. First I get a vector of all the Employee maps. Then, using specter/transform (or clojure.walk/postwalk), I process the vector of Employee maps and expand the :db/id refs into full maps. The ref attributes are not defined as component attributes, but I need similar functionality, so I call (d/pull db '[*] db-id) inside the specter transform function (or inside a postwalk function). That pull takes nearly 10 seconds or more to fetch all the employee maps. The questions are: 1 - Why is it taking so much time? I have maybe 200 employees at the moment, on a Solo stack. 2 - Is there a better/faster way to get the full maps from the :db/id's? Thank you for any suggestions. See the code below; I have removed irrelevant lines.
```
(let [employees [#:employee{:email "[email protected]"
                            :last-name "smith"
                            :emplid "PLM0015"
                            :job #:db{:id 101155069757724}
                            :full-time? true
                            :first-name "Haroon"
                            :employee-type #:db{:id 79164837202211}
                            :gender-type #:db{:id 92358976735520}}
                 #:employee{:email "[email protected]"
                            :last-name "smith"
                            :emplid "PLM0025"
                            :job #:db{:id 10115506975245}
                            :full-time? true
                            :first-name "Farhan"
                            :employee-type #:db{:id 79164837202211}
                            :gender-type #:db{:id 92358976735520}}
                 ...]]
  ;; job :db/id is 101155069757724
  ;; (d/pull db '[*] 101155069757724)
  ;; I apply the pull below only to the map values with :db/id's.
  (specter/transform [ALL]
                     (fn [each-map]
                       (let [db-id (:db/id each-map)]
                         (d/pull db '[*] db-id)))
                     employees))
```
If this is datomic cloud, this is slow because it is 600 blocking request+responses in a row
This looks like employees already came out of a pull. Why not just pull everything in one go?
Thank you for the response. It is Datomic Cloud/Solo. I was doing it this way basically to make it dynamic: it could be Employee, or it could be another entity in the domain, like Benefit or Product. One function returns the complete information.
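A rough sketch of the "pull everything in one go" suggestion, assuming an :employee/emplid attribute exists to select employees and that one level of nested refs is enough; a single query with a nested pull pattern replaces the per-entity d/pull calls and avoids one round trip per entity:
```
;; One query, one round trip: the nested pull pattern expands the refs inline.
(d/q '[:find (pull ?e [* {:employee/job [*]}
                         {:employee/employee-type [*]}
                         {:employee/gender-type [*]}])
       :where [?e :employee/emplid]]
     db)
```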
I'm facing a few recurring issues with Datomic Cloud write latencies and index memory usage. In our current setup we are transacting one event at a time, throttling them to avoid overloading our transactor. I was wondering if we would benefit from grouping our events before transacting, or is that not necessarily the case?
Rule of thumb is to aim for transaction sizes of 1000-2000 datoms if you actually can control how changes are grouped
could your transactor just be undersized for your rate of novelty? is this a regular thing or something you only encounter during bulk operations?
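A minimal sketch of the batching idea above, assuming each event expands to a handful of datoms, so that a couple hundred events per transaction lands near the 1000-2000 datom rule of thumb; event->tx-data and the batch size are placeholders, not part of the original discussion:
```
;; Placeholder: assume each event is already a transaction map / tx datum.
(defn event->tx-data [event] [event])

;; Group events into larger transactions instead of one transaction per event.
(defn transact-in-batches!
  [conn events & {:keys [events-per-tx] :or {events-per-tx 200}}]
  (doseq [batch (partition-all events-per-tx events)]
    (d/transact conn {:tx-data (vec (mapcat event->tx-data batch))})))
```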