This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2021-09-28
Channels
- # announcements (92)
- # aws (7)
- # babashka (13)
- # beginners (42)
- # clj-kondo (9)
- # cljdoc (25)
- # clojure (156)
- # clojure-europe (19)
- # clojure-italy (3)
- # clojure-nl (3)
- # clojure-sg (1)
- # clojure-spec (3)
- # clojure-uk (6)
- # clojurescript (21)
- # copenhagen-clojurians (1)
- # cryogen (3)
- # cursive (9)
- # datahike (3)
- # datomic (5)
- # emacs (8)
- # graphql (4)
- # introduce-yourself (3)
- # jobs (2)
- # malli (1)
- # meander (8)
- # nrepl (3)
- # off-topic (8)
- # om-next (2)
- # pathom (11)
- # rdf (5)
- # reagent (59)
- # remote-jobs (4)
- # shadow-cljs (8)
- # tools-build (23)
- # vim (16)
I just want to double check whether I'm doing something wrong (or if this is part of what needs to be worked on for 1.0): currently my insert times average 300-600ms, which is pretty hefty!
(go
  (let [some-datom {:name (str (random-uuid)) :age (rand-int 50) :country (str (random-uuid))}]
    (time (a/<! (d/transact conn [some-datom])))))
This was going through the tutorial using the people config:
;; Define your schema
(def people-schema [{:db/ident :name
:db/cardinality :db.cardinality/one
:db/index true
:db/unique :db.unique/identity
:db/valueType :db.type/string}
{:db/ident :age
:db/cardinality :db.cardinality/one
:db/valueType :db.type/number}
{:db/ident :country
:db/cardinality :db.cardinality/one
:db/valueType :db.type/string}
{:db/ident :siblings
:db/cardinality :db.cardinality/many
:db/valueType :db.type/ref}
{:db/ident :friend
:db/cardinality :db.cardinality/many
:db/valueType :db.type/ref}])
;; Define your db configuration
(def people-idb {:store {:backend :indexeddb :id "people-idb"}
:keep-history? true
:schema-flexibility :write
:initial-tx people-schema})
;; You can also set up a schemaless db which provides schema on read
;; Create an indexeddb store.
(d/create-database people-idb)
;; Connect to the indexeddb store.
(go (def conn (a/<! (d/connect people-idb))))
(go
  (let [some-datom {:name (str (random-uuid)) :age (rand-int 50) :country (str (random-uuid))}]
    (time (a/<! (d/transact conn [some-datom])))))
The ClojureScript version did not yet have the optimizations we introduced this year, so this performance is expected. Also, Datahike does not batch transactions: each transact has to open a connection to the store and flush it, so transactions with many datoms are preferred for now.
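Following that advice, one way to amortize the per-transact overhead is to build all the entity maps up front and submit them in a single transact call. A sketch, assuming the same `conn`, `d`, and `a` (core.async) aliases as in the snippets above, and an illustrative batch size of 100:

```clojure
;; Sketch: submit many entities in ONE transact call instead of
;; one transact per entity, so the store is opened and flushed once.
;; Assumes `conn`, the `d` (Datahike) and `a` (clojure.core.async)
;; aliases, and the people schema defined earlier.
(go
  (let [entities (repeatedly 100
                             (fn []
                               {:name    (str (random-uuid))
                                :age     (rand-int 50)
                                :country (str (random-uuid))}))]
    ;; One connection open + flush for the whole batch.
    (time (a/<! (d/transact conn (vec entities))))))
```

With per-transaction overhead in the hundreds of milliseconds, batching 100 entities this way should cost roughly one transaction's overhead rather than 100.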