#datahike
2021-09-28
Lone Ranger 14:09:35

I just want to double-check whether I'm doing something wrong (or if this is part of what needs to be worked on for 1.0): my insert times are currently averaging 300-600ms, which is pretty hefty!

(go
  ;; doseq over a single map literal iterates over its key/value pairs and
  ;; breaks the transact; generate a few entity maps instead (count is arbitrary)
  (doseq [some-datom (repeatedly 10 (fn [] {:name    (str (random-uuid))
                                            :age     (rand-int 50)
                                            :country (str (random-uuid))}))]
    (time (a/<! (d/transact conn [some-datom])))))
This was while going through the tutorial, using the people config.

Lone Ranger 14:09:15

;; Assumes requires along the lines of:
;; (:require [datahike.api :as d]
;;           [clojure.core.async :as a :refer [go <!]])

;; Define your schema
(def people-schema [{:db/ident       :name
                     :db/cardinality :db.cardinality/one
                     :db/index       true
                     :db/unique      :db.unique/identity
                     :db/valueType   :db.type/string}
                    {:db/ident       :age
                     :db/cardinality :db.cardinality/one
                     :db/valueType   :db.type/number}
                    {:db/ident       :country
                     :db/cardinality :db.cardinality/one
                     :db/valueType   :db.type/string}
                    {:db/ident       :siblings
                     :db/cardinality :db.cardinality/many
                     :db/valueType   :db.type/ref}
                    {:db/ident       :friend
                     :db/cardinality :db.cardinality/many
                     :db/valueType   :db.type/ref}])

;; Define your db configuration
(def people-idb {:store              {:backend :indexeddb :id "people-idb"}
                 :keep-history?      true
                 :schema-flexibility :write
                 :initial-tx         people-schema})


;; You can also set up a schemaless db which provides schema on read
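;; (A hedged sketch of such a schemaless config; the store id
;;  "people-schemaless-idb" is made up for illustration.)
(comment
  (def people-schemaless-idb {:store              {:backend :indexeddb :id "people-schemaless-idb"}
                              :schema-flexibility :read}))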

;; Create an indexeddb store.
(d/create-database people-idb)

;; Connect to the indexeddb store.
(go (def conn (<! (d/connect people-idb))))

(go
  ;; one single-entity transact per iteration, each timed (count is arbitrary)
  (doseq [some-datom (repeatedly 10 (fn [] {:name    (str (random-uuid))
                                            :age     (rand-int 50)
                                            :country (str (random-uuid))}))]
    (time (a/<! (d/transact conn [some-datom])))))

kkuehne 14:09:26

The ClojureScript version does not yet have the optimizations we introduced this year, so that level of performance is expected. Datahike also does not batch transactions: each transact has to open a connection to the store and flush it, so transactions containing many datoms are preferred for now.
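
For example, the per-entity loop above could be collapsed into a single transact call. A sketch, assuming the same conn and aliases as in the snippet above; the count 100 and the helper name random-person are illustrative only:

(defn random-person []
  {:name    (str (random-uuid))
   :age     (rand-int 50)
   :country (str (random-uuid))})

;; one transact carrying 100 entity maps instead of 100 single-entity transacts
(go
  (time (a/<! (d/transact conn (vec (repeatedly 100 random-person))))))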