This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2016-12-15
does datomic have an assert if not exists mode? for example, in an upsert I want to generate a new :user/uuid if it’s a new user, but I wouldn’t want to replace an existing user’s uuid
@wei Unique identities: http://docs.datomic.com/identity.html#sec-4
Yeah that's probably something you'd have to do in a transactor fn. (Though I'm having trouble coming up w/ a use case for it.)
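A minimal sketch of the upsert-via-unique-identity behavior under discussion, assuming :user/email is declared :db.unique/identity (conn and the attribute values here are illustrative, not from the thread):

```clojure
(require '[datomic.api :as d])

;; Because :user/email is a unique identity, transacting a map that
;; includes it upserts onto the existing entity with that email.
;; Note that this also means the :user/uuid below would overwrite an
;; existing user's uuid -- which is exactly the problem raised above.
@(d/transact conn
   [{:user/email "dude@example.com"
     :user/uuid  (java.util.UUID/randomUUID)}])
```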
Are there any tools for making diagrams of a Datomic database? I’m used to having that for relational databases, where you can see all the entities/attributes. And maybe also relations, if those can be figured out from the ‘ref’ type + naming conventions.
not afaik, but not hard to produce a dataset that e.g. graphviz or mermaid can consume
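Not from the thread, but a rough sketch of producing a Graphviz dataset from a Datomic schema. Since Datomic refs aren't typed, the edge targets are guessed from attribute naming conventions (`schema->dot` is a hypothetical helper):

```clojure
(require '[datomic.api :as d]
         '[clojure.string :as str])

(defn schema->dot
  "Emit a Graphviz digraph: one node per attribute namespace (the
   'entity'), one edge per ref-typed attribute, where the attribute's
   name is assumed (by convention) to name the target entity type."
  [db]
  (let [attrs (d/q '[:find ?ident ?type
                     :where
                     [?a :db/ident ?ident]
                     [?a :db/valueType ?t]
                     [?t :db/ident ?type]]
                   db)]
    (str "digraph schema {\n"
         (str/join "\n"
                   (for [[ident type] attrs
                         :when (= type :db.type/ref)]
                     ;; e.g. :order/user draws an edge order -> user
                     (format "  %s -> %s;"
                             (namespace ident) (name ident))))
         "\n}")))
```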
ok, thanks.
latest datomic release breaks HTTP rest endpoint
java.lang.UnsupportedClassVersionError: org/eclipse/jetty/server/Server : Unsupported major.minor version 52.0
@mx2000 validation is the kind of thing you want to do in the Peer, using any library available to your application language
there is no implicit way to enforce this kind of constraint.
Okay, maybe I'm overthinking this, but bear with me: Let's say I have a :db.type/keyword field, and 10,000 entities all have the same keyword. Let's say that keyword is 100 bytes long. Will this take up 1MB of space in Datomic? Or do they all reference the same keyword somehow? Would I save lots of space by making it a :db.type/ref and pointing to an entity with a :db/ident instead?
@magnars i think you’d save space, because now it’s 10,000 Longs and one keyword. @marshall or @jaret may correct me
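A sketch of the ref-to-ident modeling being described, in classic Datomic schema style (the attribute and ident names here are made up): each of the 10,000 entities would then store only a Long entity id pointing at the one shared entity.

```clojure
;; Schema: one shared ident entity, plus a ref attribute pointing at it.
[{:db/id #db/id[:db.part/user]
  :db/ident :category/very-long-shared-keyword}
 {:db/id #db/id[:db.part/db]
  :db/ident :thing/category
  :db/valueType :db.type/ref
  :db/cardinality :db.cardinality/one
  :db.install/_attribute :db.part/db}]

;; Usage: the keyword resolves to the shared ident entity, but what's
;; stored per entity is just that entity's Long id, not the keyword.
[{:db/id #db/id[:db.part/user]
  :thing/category :category/very-long-shared-keyword}]
```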
@karol.adamiec I just used REST to transact and got no error. Where are you running into this? On transactions or on launching the REST service?
@jaret trying to run the service
jbin at Jarets-MacBook-Pro in ~/Desktop/Jaret/Tools/releasetest/5544/pro/datomic-pro-0.9.5544
$ bin/rest -p 8001 dev datomic:
REST API started on port: 8001
dev = datomic:
yes, but on an Amazon AMI
checked my local machine and it's fine, must be something with the AWS image then.
Quick google didn't turn up anything; anybody got any experience with periodically dumping Datomic queries into something like Redshift or Postgres?
Kinda figuring I'll have to build something with since and translate those into inserts/updates?
you could use d/log and d/tx-range
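A sketch of what that might look like: d/log plus d/tx-range yields every transaction since a basis point, so an incremental export can walk the log instead of re-querying. Here `last-t` and `export-datom!` are hypothetical — you'd persist the last processed t between runs and write your own translation into Redshift/Postgres statements.

```clojure
(require '[datomic.api :as d])

;; Each item from d/tx-range is a map with :t and :data, where :data
;; is the seq of datoms asserted/retracted in that transaction.
(doseq [{:keys [t data]} (d/tx-range (d/log conn) last-t nil)]
  (doseq [[e a v _tx added?] data]
    ;; Translate each assertion (added? true) or retraction
    ;; (added? false) into an insert/update/delete downstream.
    (export-datom! e a v added?)))
```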
Is there a way to restart a Datomic restore from a specific “segment”? My large import failed due to exceeded throughput and the last thing it printed out was "Copied 414355 segments." I don’t really want to restart the process from the very beginning because even though it skips segments it has already imported, it still takes a long time to get to the place it died at.
@jaret Okay. What is the provisioned throughput my Dynamo table should have for an import? I had it set at 300 which seemed to work well.
@potetm the problem arises when there are multiple identity properties (e.g. uuid and email). if an update comes in with {:user/email "…"}, I’d want to be smart enough to assign a :user/uuid if we can’t find an existing user, but not reassign a uuid if an existing user is found. would be cool if there were a simple way to do it without a db fn.
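For completeness, a sketch of the db fn being avoided — a transaction function that only mints a :user/uuid when no entity has that email. The attribute names come from the discussion; the function itself (:user/ensure-uuid) is hypothetical, and it assumes :user/email is unique so the lookup ref resolves.

```clojure
(require '[datomic.api :as d])

;; Install once, then call as [:user/ensure-uuid "dude@example.com"]
;; inside a transaction. Runs atomically on the transactor, so the
;; existence check and the assertion can't race.
{:db/id #db/id[:db.part/user]
 :db/ident :user/ensure-uuid
 :db/fn (d/function
          '{:lang "clojure"
            :params [db email]
            :code (if (datomic.api/entity db [:user/email email])
                    []  ;; user exists: leave the existing uuid alone
                    [{:db/id (datomic.api/tempid :db.part/user)
                      :user/email email
                      :user/uuid (java.util.UUID/randomUUID)}])})}
```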
So the recommendations are still to bump your provisioning for write if you are being throttled, but you might not need 1000
Well 300 did not work so 800? I just don’t want this restore to fail and need to start over again 😛
Also there seems to be some read reqs as well. It looks like the restore is using ~100 read capacity
You can change the number of concurrent reads and writes using the properties marshall mentioned
@jonas 👏 http://learndatalogtoday.org is still my favorite place to go to refresh my datalog skills
I'm giving a 15 minute demo of Datomic tomorrow and that movie schema is one that my practice audience caught on to the quickest
let’s say I have a Datomic database with a cardinality/many :user/aka attribute. How can I write a parameterized query to return entities that match all supplied akas? “Give me all users known as "The Dude" AND "El Duderino".”
I can make it work easily in the inline, non-parameterized version of the query, but I can’t figure out the query when I have to pass in the akas as input
I noticed with tuple binding that the results depend on the order of the elements in the input vector
so if an entity just has aka "A" but not "B": if I pass in ["A" "B"] as input it returns, but not if I pass ["B" "A"], unless I’m mistaken
(da/q '{:find [?e]
:in [$ [?aka]]
:where [[?e :aka ?aka]]}
(-> (da/db conn)
(da/with [{:db/id (da/tempid :db.part/db)
:db/ident :aka
:db/valueType :db.type/string
:db/cardinality :db.cardinality/many
:db.install/_attribute :db.part/db}])
:db-after
(da/with [[:db/add 1 :aka "A"]
[:db/add 1 :aka "B"]
[:db/add 2 :aka "A"]])
:db-after)
["A" "B"])
=> #{[1] [2]}
But if I change the input to ["B" "A"]
I get #{[1]}
@adamfrey you need to use [?aka ...]
you currently only use the first value which explains your current behavior
In the :in part
ok. But if I change my in clause to :in [$ [?aka ...]]
then it becomes a collection binding and it does OR matching instead of AND
Yes that's correct, hmm
I am on my phone, not sure how to do that off the top of my head
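One common workaround (a sketch, not from the thread): keep the collection binding's OR join, but aggregate matches per entity and keep only the entities that matched every supplied aka, so input order no longer matters.

```clojure
(require '[datomic.api :as d])

(defn entities-with-all-akas
  "Return ids of entities having every aka in `akas`. Relies on
   cardinality/many keeping :aka values distinct per entity, so the
   match count per entity equals the number of akas it satisfies."
  [db akas]
  (->> (d/q '[:find ?e (count ?aka)
              :in $ [?aka ...]
              :where [?e :aka ?aka]]
            db akas)
       (filter (fn [[_e n]] (= n (count akas))))
       (map first)))

;; (entities-with-all-akas db ["A" "B"]) keeps only entities asserting
;; both "A" and "B", and gives the same result for ["B" "A"].
```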