This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2018-03-01
Channels
- # aleph (4)
- # arachne (24)
- # beginners (231)
- # boot (4)
- # cider (63)
- # clara (36)
- # cljs-dev (57)
- # clojure (195)
- # clojure-dev (12)
- # clojure-gamedev (2)
- # clojure-greece (1)
- # clojure-italy (10)
- # clojure-poland (4)
- # clojure-spec (36)
- # clojure-uk (65)
- # clojurescript (133)
- # core-async (8)
- # core-logic (2)
- # cursive (18)
- # data-science (3)
- # datomic (58)
- # defnpodcast (3)
- # duct (2)
- # emacs (2)
- # fulcro (27)
- # graphql (3)
- # hoplon (18)
- # jobs (2)
- # jobs-discuss (10)
- # jobs-rus (1)
- # lumo (1)
- # mount (6)
- # nyc (2)
- # off-topic (27)
- # pedestal (13)
- # re-frame (71)
- # reagent (105)
- # reitit (4)
- # ring (2)
- # ring-swagger (1)
- # rum (10)
- # shadow-cljs (172)
- # spacemacs (24)
- # sql (26)
- # tools-deps (1)
- # uncomplicate (4)
- # unrepl (51)
- # vim (3)
- # yada (11)
a little confused as it's been a while since I played with Datomic. I am trying Pro Starter. What I remember doing in the past was setting up the license key, storage, adding the postgres driver, adding datomic-pro (not client) to my deps, running the transactor, requiring datomic.api :as d, specifying a URI, calling create-database on it, making a connection
What storage backend are you using? Besides the peer, you're also running the transactor, right?
I tried it the old way with the latest datomic-pro library using the URI, but it failed to transact my schema. now trying it with the client library and it's complaining about "SSL doesn't have a valid keystore" when trying to make the client
a minimal production Datomic setup needs 3 components:
1- Storage backend (SQL, in your case)
2- Your app, which requires and uses datomic.api
(we call it the peer)
3- The transactor, a jar that you download and run in an independent JVM (looks like it's missing).
---
Dataflow:
the peer READs from the DB
the peer sends writes to the transactor
the transactor WRITEs to the DB
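A minimal sketch of the peer setup described above, assuming a SQL-backed transactor is already running; the JDBC URL, credentials, and schema attribute are placeholders:

```clojure
(require '[datomic.api :as d])

;; Placeholder URI: database name + the JDBC connection string the
;; transactor's storage is configured with.
(def uri "datomic:sql://my-db?jdbc:postgresql://localhost:5432/datomic?user=datomic&password=datomic")

(d/create-database uri)   ; no-op if the database already exists
(def conn (d/connect uri))

;; Transact a trivial schema to confirm the transactor is reachable.
@(d/transact conn [{:db/ident       :user/name
                    :db/valueType   :db.type/string
                    :db/cardinality :db.cardinality/one}])
```

If the transactor is down (component 3 missing), the `d/connect` or the deref of the transact future is where this fails.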
got it figured out. had some problems with conflicting dependencies that caused a few of the issues, and then there was some confusion over small diffs in the API, like the client using {:tx-data ...} when transacting vs the peer library.
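The API diff mentioned above, side by side; a hedged sketch where `conn` (peer) and `client-conn` (client) are assumed to be already-established connections:

```clojure
;; Peer API (datomic.api): transact takes the tx-data vector directly
;; and returns a future.
@(d/transact conn [{:user/name "alice"}])

;; Client API (datomic.client.api): transact takes a map with the
;; data under :tx-data, and returns synchronously (no deref).
(d/transact client-conn {:tx-data [{:user/name "alice"}]})
```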
i'm back with more performance questions. last time I found that my query was slow because of some naive mistakes. now the same query is slow because of a not-join across :db/id values.
This is the part of the query that's slowing it down:
(not-join [?c ?u]
  [?v :comment-viewing/user ?u]
  [?v :comment-viewing/comment ?c])
in total the database has up to 114 ?u datoms, 1649 ?c datoms, and 6846 ?v datoms
the query takes over 5 seconds with the not-join, 500 ms without the not-join, 700 ms when I drop the [?v :comment-viewing/comment ?c] clause, and 99 ms when I drop the [?v :comment-viewing/user ?u] clause.
looks like yes: https://docs.datomic.com/on-prem/indexes.html
solved! I just need to flip the clauses around like so:
(not-join [?c ?r]
  [?v :comment-viewing/comment ?c]
  [?v :comment-viewing/rapper ?r])
Takes the time from 5 seconds to 90 ms. Only figured this out by looking at day-of-datomic:
;; This query leads with a where clause that must consider *all* releases
;; in the database. SLOW.
(dotimes [_ 5]
  (time
   (d/q '[:find [?name ...]
          :in $ ?artist
          :where [?release :release/name ?name]
                 [?release :release/artists ?artist]]
        db
        mccartney)))

;; The same query, but reordered with a more selective where clause first.
;; 50 times faster.
(dotimes [_ 5]
  (time
   (d/q '[:find [?name ...]
          :in $ ?artist
          :where [?release :release/artists ?artist]
                 [?release :release/name ?name]]
        db
        mccartney)))
@stuarthalloway / @marshall: We talked about Html and CSS blowing up the indexing service. Would prepending something like <!-- rev 0x0f32f3ffff --> fix that, if I remove all history on these entities?
@laujensen prepending something that changes (i.e. the sha of the value itself) to the value would help with the segment size issue. Note that enabling noHistory will not remove existing history from an attribute
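Enabling noHistory on an existing attribute is a plain schema alteration; a minimal sketch, where `:comment/body` is a hypothetical attribute name:

```clojure
;; Schema alteration: turn on noHistory for an existing attribute.
;; This only affects datoms going forward; history already accumulated
;; for the attribute stays in the database (excision is a separate tool).
@(d/transact conn [{:db/id      :comment/body
                    :db/noHistory true}])
```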
When I get :restore/collision from datomic restore-db, is there any way to tell it to overwrite without having to manually delete the db?
I’m upset that I have to delete it at all, because I don’t know what that would do to a datomic client/peer connected to it
@alex438 Why are you restoring into a ‘live’ database? You must restart all peers and transactor after a restore: https://docs.datomic.com/on-prem/backup.html#sec-5
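For reference, the on-prem backup/restore cycle being discussed, run from the Datomic distribution directory; the database and backup URIs here are placeholders:

```shell
# Back up a database to a local directory (could also be an s3:// URI).
bin/datomic backup-db \
  "datomic:sql://my-db?jdbc:postgresql://localhost:5432/datomic?user=datomic&password=datomic" \
  file:/backups/my-db

# Restore from that backup into a target database URI.
bin/datomic restore-db \
  file:/backups/my-db \
  "datomic:sql://my-db?jdbc:postgresql://localhost:5432/datomic?user=datomic&password=datomic"

# Per the docs linked above: after a restore, restart the transactor
# and all peers before resuming use.
```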
How to restart a transactor running with CloudFormation? There’s no restart/stop option - would I have to reboot the associated EC2 instance?
if you don't, you should get a transactor failover the first time you try to write anyway
What’s the meaning of “You do not need to do anything to a storage (e.g. deleting old files or tables) before or after restoring.“, which seems to be at odds with the :restore/collision error?
are you getting “is already in use by a different database” or “database already exists under the name”
the basis from which that database was created (call it foo) has been restored into that storage before
so if you restore foo into storage then change some stuff in it (and in the original source of foo) then try to backup the original source again and restore into the same storage you’ll see that error
is that something that can be fixed in a later release of datomic - to use new ids for restored data?
b/c those two databases share some history (whatever was there before the original restore), but diverge later
where's the right place to report an issue with the peer api? I've found that tx-range doesn't work when supplied a tx id for start (but does when using (d/tx->t tx-id))
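The workaround mentioned in that report, sketched with the peer API; `conn` and `tx-id` are assumed to be an existing connection and a transaction entity id:

```clojure
;; Grab the log from the connection.
(def log (d/log conn))

;; Reportedly fails when given the raw transaction entity id:
;; (d/tx-range log tx-id nil)

;; Works after converting the tx entity id to its t value first:
(seq (d/tx-range log (d/tx->t tx-id) nil))
```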
I saw a video where the presenter mentioned tricking the transactor into inserting custom transaction timestamps. What is the best way to do that?
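Datomic does let a transaction assert its own :db/txInstant, which is the usual trick for imports; a hedged sketch, assuming the instant supplied is later than every existing transaction and not in the future (otherwise the transactor rejects it):

```clojure
;; "datomic.tx" is the reserved tempid for the current transaction
;; entity, so this asserts an explicit :db/txInstant on the tx itself.
@(d/transact conn [[:db/add "datomic.tx" :db/txInstant
                    #inst "2017-06-15T12:00:00"]
                   {:user/name "alice"}])
```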