2018-01-30
Why is #datom[17592186045433 87 "start" 13194139534330 false] included twice in the :tx-data for the last transaction in this snippet?
(let [conn (d/connect db-uri)]
@(d/transact conn [{:db/valueType :db.type/string
:db/cardinality :db.cardinality/one
:db/ident :test/attribute}])
(let [{:keys [tempids]} @(d/transact conn [{:db/id "start"
:test/attribute "start"}])
id (get tempids "start")]
(:tx-data
@(d/transact conn [[:db/add id :test/attribute "new"]
[:db/retract id :test/attribute "start"]]))))
=>
[#datom[13194139534330 50 #inst"2018-01-30T00:58:22.745-00:00" 13194139534330 true]
#datom[17592186045433 87 "new" 13194139534330 true]
#datom[17592186045433 87 "start" 13194139534330 false]
#datom[17592186045433 87 "start" 13194139534330 false]]
@kenny you don't need to explicitly retract "start". The new value will 'upsert' and the retraction will be added automatically
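A minimal sketch of that suggestion, reusing conn, id, and :test/attribute from the snippet above (the expected output shape is inferred from the explanation, not verified):
(:tx-data
 @(d/transact conn [[:db/add id :test/attribute "new"]]))
;; for a :db.cardinality/one attribute, asserting the new value is enough:
;; the retraction of the old value ("start") is added to :tx-data automatically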
I have a theory as to why you see it in your example. If I'm correct it could be considered a bug, but won't influence anything negatively
Right but in this case the transaction data is generated based on a "test" transaction with d/with
I can explicitly de-dupe to work around it, but it doesn't seem like :tx-data was meant to include duplicate datoms.
Plus we sync all transaction data to a Kafka topic and this will produce lots of duplicate data.
Actually, I spoke too soon. DataScript doesn't appear to be affected by it. Duplicate data argument still holds, however.
Data would be duplicated in a Kafka topic and use additional bandwidth to send to every connected client.
I want to store my entity defaults in Datomic. Can I attach this to the attribute somehow? Can I create a custom attribute for my entity schema? Or do I need to have two separate attributes, like entity/attr-a and entity/attr-a-default?
@caleb.macdonaldblack you can assert additional facts onto the attr itself {:db/id :your/attr :your/attr-default-value <value>}
of course, this means you need a definition of :your/attr-default-value with the same type
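A minimal sketch of that idea, assuming the Peer API alias d, an open connection conn, and that :your/attr is a string attribute (all attribute names here are hypothetical):
;; install the extra attribute that will carry defaults
@(d/transact conn [{:db/ident       :your/attr-default-value
                    :db/valueType   :db.type/string
                    :db/cardinality :db.cardinality/one}])
;; assert the default as a fact on the attribute entity itself
@(d/transact conn [{:db/id                   :your/attr
                    :your/attr-default-value "some default"}])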
@robert-stuttaford Thanks! I think that's what I'm looking for
Question: if I want to write black-box tests for a REST API that is powered by datomic, is it possible to run a dev instance of datomic as a docker image with a canned test data-set?
yep @sleepyfox - if the transactor's storage is inside the docker image (which dev does via an h2 database)
not sure if docker filesystems are mutable though? the transactor would need to be able to add stuff if you're testing transactions
i don't know docker at all
I was wondering whether anyone had actually tried this before, or whether I am missing a trick that actually makes this unnecessary...
Don't worry about Docker; it can do everything I need it to. My question isn't really about Docker, but rather about testing (micro)services backed by Datomic
well, you could just use an in-memory database, but that's not black-box, because you'd need extra code to set the db up
it is much simpler though, because it can work basically the same as fixtures for unit tests
one option is to prepare a database with everything you need, back it up, then restore that db to a fresh transactor and provide that transactor uri to your service
that at least makes the process repeatable
and lets you iterate on the data and the tests without touching the service
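A sketch of that backup/restore flow driven from Clojure via the on-prem CLI; the database URIs and the backup path are hypothetical:
(require '[clojure.java.shell :refer [sh]])
;; one-off: back up the prepared seed database
(sh "bin/datomic" "backup-db"
    "datomic:dev://localhost:4334/seed-db"
    "file:/tmp/seed-backup")
;; per test run: restore the backup into the db the service under test will use
(sh "bin/datomic" "restore-db"
    "file:/tmp/seed-backup"
    "datomic:dev://localhost:4334/test-db")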
What are some of the things that might cause this error, from a query that pulls in a rule:
java.lang.Exception: processing rule: (q__30187 ?job-num ?latest-action-date), message: processing clause: (is-large-alt? ?e), message: java.lang.ArrayIndexOutOfBoundsException: 14, compiling:(NO_SOURCE_FILE:47:27)
Seems strange that I get "processing rule" and java.lang.ArrayIndexOutOfBoundsException after the query has been running for a while.
Taking out the attributes in the query that are also referenced in the rule seems to remove the exception. I'm new to using rules, but that doesn't seem like something that would be disallowed.
user=> (d/q '[:find ?e ?e :in $ :where [?e :db/doc _]] (d/db conn))
ArrayIndexOutOfBoundsException 1 clojure.lang.RT.aset (RT.java:2376)
user=>
nevermind, found 'em https://docs.datomic.com/on-prem/videos.html
I'm trying to use a value instead of a db like so:
(d/q '[:find ?last ?first :in [?last ?first]]
["Doe" "John"])
ExceptionInfo Query args must include a database clojure.core/ex-info (core.clj:4739)
I'm following this gist: https://gist.github.com/stuarthalloway/2645453
And I'm using (:require [datomic.client.api :as d])
with [com.datomic/client-cloud "0.8.50"]
in my :dependencies
But I get the 'Query args must include a database' error as shown above. What am I doing incorrectly?
@sleepyfox AFAIK the Client API does not contain query capabilities itself but sends the query to the peer server. Thus, it lacks the capability to process collections as a DB value. You might have to use the Peer library for that.
https://docs.datomic.com/on-prem/architecture.html#storage-services helped me to understand the bigger picture.
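For comparison, a sketch of the same query with the Peer library (datomic.api), which accepts plain collections as inputs without a database value; aliased as peer here to avoid clashing with the client alias d above:
(require '[datomic.api :as peer])
;; tuple binding: ["Doe" "John"] binds ?last and ?first directly
(peer/q '[:find ?last ?first
          :in [?last ?first]]
        ["Doe" "John"])
;; => #{["Doe" "John"]}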
Context: we want a clean and simple way to test code, and mocking the db by passing it as a value seemed like a great way.
I did some experiments with the Client API. But I got stuck when I needed an easy way to mock Datomic for testing purposes, like:
(def ^:private uri (format "datomic:mem://%s" (datascript/squuid)))
(defn- scratch-conn
"Create a connection to an anonymous, in-memory database."
[]
(d/delete-database uri)
(d/create-database uri)
(d/connect uri))
i.e. have a Datomic Cloud system for dev/testing and create a database, use it, then delete it
Yup. I was hoping to not have to switch between the Client and Peer APIs between actual code and tests
I'd rather be able to mock out a db instead of creating an actual 'test' db in Cloud
Launching a Peer Server locally to run a mem database would be a good compromise. But the current lack of support for delete-database and create-database makes creating a scratch-conn a bit messy.
take a look at the very top of this https://docs.datomic.com/on-prem/getting-started/connect-to-a-database.html
Thanks @marshall - I understand that I can spin up a Peer server to do this, but I'd prefer something more lightweight.
@marshall That's right, but wouldn't I have to either restart the Peer Server or retract the previous test facts to get a "clean slate" for the next test?
$ bin/run -m datomic.peer-server -h localhost -p 8998 -a myaccesskey,mysecret -d hello,datomic:mem://hello -d hello2,datomic:mem://hello2 -d hello3,datomic:mem://hello3
Serving datomic:mem://hello as hello
Serving datomic:mem://hello2 as hello2
Serving datomic:mem://hello3 as hello3
There's also https://github.com/vvvvalvalval/datomock which is super useful for this kind of testing scenario
Oh, but it's peer-only, never mind
internally we use a solo system that is up all the time to provide tear off dbs for that kind of thing
Speaking of datomic:mem://... connections, what does datomic.memoryIndexMax default to here? I'm currently trying to capacity plan and having trouble understanding the balance between this and the object cache...
mem databases don't have a persistent index (by definition), so they don't have indexing jobs. they are entirely "memory index"
I see. I'm able to get this error to occur in my experiments: Caused by: java.lang.IllegalArgumentException: :db.error/not-enough-memory (datomic.objectCacheMax + datomic.memoryIndexMax) exceeds 75% of JVM RAM
If objectCacheMax defaults to 50% of VM memory, I imagine memoryIndexMax must be set to something in order to exceed 75%. This is where my question is coming from.
300m I believe. Trying to "reverse engineer" how these capacity settings work, that's why it's so low.
I'm able to get it to run by setting the objectCacheMax system property really low, say 50m
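A sketch of that experiment in code; the property names come from the error message above, the values are illustrative, and they must be set before the peer library initializes:
;; for a datomic:mem:// database both caches live in the peer JVM,
;; and their sum must stay under 75% of heap
(System/setProperty "datomic.objectCacheMax" "50m")
(System/setProperty "datomic.memoryIndexMax" "256m")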
@marshall I'm thinking it's something arbitrary from my tests. Appreciate the help, your links provide plenty of context
Even though I've got write-concurrency=2 in my transactor's properties, and allocated a write capacity of 400 (!) for DynamoDB, I'm still getting throttled writes... I'm a bit surprised. How have you dealt with the limited nature of DynamoDB when running a transactor on it, how do you judge how much to bump capacity during a bulk import, etc.?
@alex438 We have customers running sustained AWS write capacity of 1500, with a setting of 4000 for bulk imports. I would say 400 is on the low end for an active production system and I'm not surprised you're getting throttled during a bulk import
@alex438 One value prop of Datomic Cloud is that it doesn't use Dynamo the same way and a similar write load against the system can be handled with a much lower Dynamo throughput setting
in many of our internal experiments the Dynamo autoscaling with Datomic Cloud rarely even hits 100 while running a large batch import