#datomic
2016-12-15
wei00:12:00

does datomic have an assert if not exists mode? for example, in an upsert I want to generate a new :user/uuid if it’s a new user, but I wouldn’t want to replace an existing user’s uuid

potetm04:12:38

Wait, that doesn't quite fit the bill.

potetm04:12:07

Yeah that's probably something you'd have to do in a transactor fn. (Though I'm having trouble coming up w/ a use case for it.)

potetm04:12:50

How do you know it's the same user if it doesn't have the same id?

mx200005:12:24

How do I implement a database constraint that all my emails have to be lower-case?

mx200005:12:33

I did not find any results on Google.

tengstrand07:12:33

Are there any tools for making diagrams of a Datomic database? I'm used to having that for relational databases, where you can see all the entities/attributes, and maybe also relations, if those can be figured out from the 'ref' type plus naming conventions.

robert-stuttaford08:12:49

not afaik, but not hard to produce a dataset that e.g. graphviz or mermaid can consume
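For instance, a minimal sketch of the dataset generation Robert describes, assuming a Datomic peer (`d` = `datomic.api`) and emitting one Graphviz DOT node per installed attribute; `schema->dot` and the node labeling are illustrative, and drawing ref edges between entity "types" would still need your naming conventions:

```clojure
(require '[datomic.api :as d])

;; Query all installed attributes and emit a DOT graph with one node per
;; attribute, labeled with its ident and value type. A starting point only.
(defn schema->dot [db]
  (let [attrs (d/q '[:find ?ident ?type
                     :where
                     [_ :db.install/attribute ?a]
                     [?a :db/ident ?ident]
                     [?a :db/valueType ?t]
                     [?t :db/ident ?type]]
                   db)]
    (str "digraph schema {\n"
         (apply str
                (for [[ident type] (sort attrs)]
                  (format "  \"%s\" [label=\"%s : %s\"];\n"
                          ident ident (name type))))
         "}\n")))
```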

karol.adamiec10:12:24

latest datomic release breaks HTTP rest endpoint

karol.adamiec10:12:28

java.lang.UnsupportedClassVersionError: org/eclipse/jetty/server/Server : Unsupported major.minor version 52.0

val_waeselynck10:12:06

@mx2000 validation is the kind of thing you want to do in the Peer, using any library available to your application language

val_waeselynck10:12:32

there is no implicit way to enforce this kind of constraint.
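If you do want the database itself to refuse bad writes, one option (a sketch with illustrative names, not a built-in) is to funnel email writes through a transaction function that throws on invalid input; throwing inside a transaction function aborts the whole transaction:

```clojure
(require '[datomic.api :as d])

;; Install once; then transact emails via [:user/add-email eid email].
(def add-email-fn
  {:db/ident :user/add-email
   :db/fn (d/function
            '{:lang     :clojure
              :params   [db eid email]
              :requires [[clojure.string :as str]]
              :code     (if (= email (str/lower-case email))
                          [[:db/add eid :user/email email]]
                          (throw (ex-info "Email must be lower-case"
                                          {:email email})))})})
```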

magnars12:12:45

Okay, maybe I'm overthinking this, but bear with me: Let's say I have a :db.type/keyword field, and 10 000 entities all have the same keyword. Let's say that keyword is 100 bytes long. Will this take up 1MB of space in datomic? Or do they all reference the same keyword somehow? Would I save lots of space by making it a :db.type/ref and point to an entity with a :db/ident instead?

robert-stuttaford12:12:12

@magnars i think you’d save space, because now it’s 10,000 Longs and one keyword. @marshall or @jaret may correct me
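The ref-to-ident idiom would look roughly like this (a sketch with illustrative names): each entity stores a ref, which is a long, pointing at one shared entity that carries the keyword as its :db/ident:

```clojure
;; Schema: a ref attribute plus one ident entity per allowed value.
[{:db/id                 (d/tempid :db.part/db)
  :db/ident              :order/status
  :db/valueType          :db.type/ref
  :db/cardinality        :db.cardinality/one
  :db.install/_attribute :db.part/db}
 [:db/add (d/tempid :db.part/user) :db/ident :order.status/shipped]]

;; Usage: the ident resolves to the shared entity's id, so 10,000 orders
;; each store one long rather than the keyword's bytes:
;; {:db/id order-id :order/status :order.status/shipped}
```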

jaret15:12:00

@karol.adamiec I just used REST to transact and got no error. Where are you running into this? On transactions or on launching the REST service?

karol.adamiec15:12:53

@jaret trying to run the service

jaret15:12:48

And you're using 0.9.5544?

jaret15:12:28

jbin at Jarets-MacBook-Pro in ~/Desktop/Jaret/Tools/releasetest/5544/pro/datomic-pro-0.9.5544
$ bin/rest -p 8001 dev datomic:
REST API started on port: 8001
   dev = datomic:

karol.adamiec15:12:07

yes but on amazon ami

karol.adamiec15:12:44

checked local machine and it is fine, must be something with the AWS image then.

dominicm17:12:31

Quick google didn't turn up anything, anybody got any experience with periodically dumping datomic queries into something like redshift or postgres?

dominicm17:12:29

Kinda figuring I'll have to build something with `since` and translate those into inserts/updates?

robert-stuttaford17:12:22

you could use d/log and d/tx-range
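A rough sketch of that approach, assuming a hypothetical `export-datom!` that does the SQL insert/update/delete:

```clojure
(require '[datomic.api :as d])

;; Read every transaction after last-t from the log and hand each datom to
;; the exporter; return the new basis-t so the next run can resume there.
(defn export-since! [conn last-t export-datom!]
  (let [log (d/log conn)]
    (doseq [{:keys [data]} (d/tx-range log (inc last-t) nil)
            datom data]
      (export-datom! (:e datom) (:a datom) (:v datom) (:added datom)))
    (d/basis-t (d/db conn))))
```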

dominicm17:12:53

That actually makes more sense, yes.

kenny18:12:31

Is there a way to restart a Datomic restore from a specific “segment”? My large import failed due to exceeded throughput and the last thing it printed out was "Copied 414355 segments." I don’t really want to restart the process from the very beginning because even though it skips segments it has already imported, it still takes a long time to get to the place it died at.

jaret18:12:06

@kenny No you can't restore to a specific segment.

kenny18:12:22

@jaret Okay. What is the provisioned throughput my Dynamo table should have for an import? I had it set at 300 which seemed to work well.

wei19:12:58

@potetm the problem arises when there are multiple identity properties (e.g. uuid and email). if an update comes in with {:user/email "" :user/name "bob"} I'd want to be smart enough to assign a :user/uuid if we can't find one, but not reassign a uuid if an existing user is found. would be cool if there were a simple way to do it without a db fn.
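A hedged sketch of the db-fn route for that use case (names illustrative, assuming :user/email is a :db/unique identity attribute): generate the uuid inside a transaction function so it only fires when no matching user already has one:

```clojure
(require '[datomic.api :as d])

;; Invoke in tx-data as [:user/ensure-uuid "bob@example.com"]; it asserts a
;; fresh :user/uuid only when no user with that email already carries one.
(def ensure-uuid-fn
  {:db/ident :user/ensure-uuid
   :db/fn (d/function
            '{:lang     :clojure
              :params   [db email]
              :requires [[datomic.api :as d]]
              :code     (let [uuid (d/q '[:find ?u .
                                          :in $ ?email
                                          :where
                                          [?e :user/email ?email]
                                          [?e :user/uuid ?u]]
                                        db email)]
                          (when-not uuid
                            [{:db/id      (d/tempid :db.part/user)
                              :user/email email
                              :user/uuid  (d/squuid)}]))})})
```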

jaret19:12:37

@kenny for fastest imports you will want DDB writes set to 1000 or more

kenny20:12:37

@jaret What about read capacity?

jaret20:12:29

So I got a bit confused. You're running restore right?

jaret20:12:41

Before, I assumed you were asking about import specifically

kenny20:12:59

Yes. I am running a restore-db

kenny20:12:15

Database has ~450m datoms

jaret20:12:35

So the recommendations are still to bump your provisioning for write if you are being throttled, but you might not need 1000

kenny20:12:20

Well 300 did not work so 800? I just don’t want this restore to fail and need to start over again 😛

jaret20:12:06

Well, you know it's a one-time thing, so might as well bump to 1000

kenny20:12:15

Also there seems to be some read reqs as well. It looks like the restore is using ~100 read capacity

kenny20:12:07

Yeah I’ll leave it 1000, just to be safe.

jaret20:12:30

You can change the number of concurrent reads and writes using the properties marshall mentioned

kenny20:12:50

But would those need to be changed before the restore has started?

jaret20:12:37

I think you will be ok just bumping the provisioning temporarily

kenny20:12:38

Yeah I’m trying 150 and 1000.

gdeer8122:12:56

@jonas 👏 http://learndatalogtoday.org is still my favorite place to go to refresh my datalog skills

gdeer8122:12:15

4 years later and there still isn't anything like it

wei22:12:12

it’s great for evangelizing datalog

gdeer8122:12:21

I'm giving a 15 minute demo of Datomic tomorrow and that movie schema is one that my practice audience caught on to the quickest

adamfrey22:12:24

let's say I have a Datomic database with a cardinality/many :user/aka attribute. How can I write a parameterized query to return entities that match all supplied akas? "Give me all users known as 'The Dude' AND 'El Duderino'."

adamfrey22:12:05

I can make it work easily in the inline, non-parameterized version of the query, but I can’t figure out the query when I have to pass in the akas as input

marshall22:12:44

Oh, all. Maybe look just above that section

marshall22:12:54

Tuple binding form

adamfrey22:12:40

I noticed with tuple binding that the results depend on the order of the elements in the input vector

adamfrey22:12:10

so if an entity has aka "A" but not "B": if I pass in ["A" "B"] as input it returns, but not if I pass ["B" "A"], unless I'm mistaken

adamfrey22:12:47

(da/q '{:find  [?e]
          :in    [$ [?aka]]
          :where [[?e :aka ?aka]]}
    (-> (da/db conn)
      (da/with [{:db/id                 (d/tempid :db.part/db)
                 :db/ident              :aka
                 :db/valueType          :db.type/string
                 :db/cardinality        :db.cardinality/many
                 :db.install/_attribute :db.part/db}])
      :db-after
      (da/with [[:db/add 1 :aka "A"]
                [:db/add 1 :aka "B"]
                [:db/add 2 :aka "A"]])
      :db-after)
    ["A" "B"])
=> #{[1] [2]}
But if I change the input to ["B" "A"] I get #{[1]}

mitchelkuijpers22:12:22

@adamfrey you need to use [?aka ...] you currently only use the first value which explains your current behavior

adamfrey23:12:31

ok. But if I change my in clause to :in [$ [?aka ...]] then it becomes a collection binding and it does OR matching instead of AND

mitchelkuijpers23:12:02

Yes that's correct, hmm

mitchelkuijpers23:12:49

I am on my phone, not sure how to do that off the top of my head

adamfrey23:12:47

so I guess tuple binding can give me AND, but only if I know the number of things I want to AND ahead of time and can destructure with :in [$ [?aka1 ?aka2]]. But it's still unclear to me how/if I can AND with a list of values that is variable at runtime

adamfrey23:12:30

I’m going to play around with relation binding, because it’s the only option left

marshall23:12:58

You may be able to use a rule

marshall23:12:24

I'm also on my phone so I can't try currently. I can look tomorrow morning

adamfrey23:12:38

ok. Thanks for your help
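For the archives, one way to get AND semantics over a runtime collection without building the query string dynamically (a sketch, not from the thread): use collection binding plus an aggregate, and keep only entities whose match count equals the number of required akas:

```clojure
(require '[datomic.api :as d])

;; Collection binding alone gives OR; counting how many of the supplied
;; (distinct) akas each entity matched turns it into AND.
(defn users-with-all-akas [db akas]
  (let [akas (distinct akas)]
    (->> (d/q '[:find ?e (count ?aka)
                :in $ [?aka ...]
                :where [?e :user/aka ?aka]]
              db akas)
         (keep (fn [[e n]] (when (= n (count akas)) e))))))
```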