Fork me on GitHub

if you’ve got a fixed list of application types, you could make an attribute for each:

:application/A (valueType ref, cardinality one)
:application/B (valueType ref, cardinality one)


@mrmcc3: In general, I am confused about cardinality and uniqueness (identity vs value).


Say I have:

:company/name (valueType string, cardinality one, db.unique/value)
:company/applications (valueType ref, cardinality many)

:application/name (valueType string, cardinality one)
:application/cool (valueType boolean, cardinality one)

I would like to make it so there can never be a duplicate :application/name belonging to a particular :company.


So transacting something like the following will fail:

[{:db/id #db/id[db.part/user]
  :company/name "Foo"
  :company/applications [{:application/name "Duplicate"}
                         {:application/name "Duplicate"}]}]


(I realize that sort of makes it look like they're component entities, but I am not using a component here, using a ref)


@d._.b: you might want to look at transaction functions


🙂 I sort of had a feeling that might be coming.


Is what I'm asking crazy talk?


Entity-level uniqueness enforcement is something you'd need to implement via something like transaction functions. Uniqueness by value can be enforced database-wide with :db.unique/value.


:db.unique/identity is for enforcing unique entity identities (i.e. your company name)
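A minimal sketch of such an entity-level check as a transaction function, assuming the schema above. The function name :company/add-application and the string tempid (Peer 0.9.5530+) are illustrative, not from the conversation:

```clojure
;; Hypothetical transaction function: adds an application to a company,
;; aborting the transaction if that company already has an application
;; with the same name. Runs serially on the transactor, so the check
;; and the write are atomic.
{:db/ident :company/add-application
 :db/fn #db/fn
 {:lang   "clojure"
  :params [db company-id app-name]
  :code   (if (seq (datomic.api/q '[:find ?a
                                    :in $ ?c ?name
                                    :where
                                    [?c :company/applications ?a]
                                    [?a :application/name ?name]]
                                  db company-id app-name))
            (throw (ex-info "Duplicate :application/name for company"
                            {:company company-id :name app-name}))
            [{:db/id "app" :application/name app-name}
             [:db/add company-id :company/applications "app"]])}}
```

Invoked as data, e.g. `[[:company/add-application [:company/name "Foo"] "Duplicate"]]`.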


@marshall: Based on my reading, the same is true of specifying that an attribute is "required", yes?


Datomic is 'inherently sparse'. If you need to ensure the presence of certain attributes, you can do that via transaction functions used as 'constructors'
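A sketch of the 'constructor' idea; the name :company/construct and the required-attribute set are assumptions for illustration:

```clojure
;; Hypothetical constructor transaction function: rejects entity maps
;; that are missing required attributes, otherwise transacts the map.
{:db/ident :company/construct
 :db/fn #db/fn
 {:lang   "clojure"
  :params [db m]
  :code   (let [required #{:company/name}
                missing  (remove #(contains? m %) required)]
            (if (seq missing)
              (throw (ex-info "Missing required attributes"
                              {:missing (vec missing)}))
              [m]))}}
```

Used as `@(d/transact conn [[:company/construct {:company/name "Foo"}]])`.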


So, I could conceivably achieve the same effect as what I was describing above by enforcing some set of attributes are present


(which are also unique)


Yep. That logic can be put in the transaction function.


Whether that's a good idea or not is another matter 🙂


"depends on what you're trying to do" of course


In general, I'd say that is the right approach for that use case. The major caveat is that transaction functions run on the transactor and can affect overall write throughput, but I'd argue that these cases (creating a new customer/user/etc) are infrequent and important, so I would tend to implement validation and enforcement that way


Yeah, in relational land I like to get out of the way and let the database do the work when I can: validation functions vs. simply enforcing constraints


In a sense, transaction functions are exactly that - letting you define the behavior the db enforces




@marshall: Perhaps I've just missed a couple of places in day-of-datomic, but a suggestion: one file containing a schema with some many/ref attributes, uniqueness, and backward references (e.g. :_foo), where all of the datoms are transacted in that same file, first using a nested map, then the list form, and finally adding onto and retracting an item from the many/ref attribute.


That might be deeply specific, but I've read the Transaction, Schema, Identity and Uniqueness, etc. docs several times over, and clicked around the day of datomic repo, and wanted to pass along the feedback.


Of course, I missed constructors 🙂


I appreciate it. I'll look at what we have and see if I can put together something along those lines


@marshall: In general, I think a fuller example of an app with a slightly-less-than-trivial data model would be much appreciated. For instance, the Best Practices documentation mentions that "Database updates often have a two-step structure: ...." The examples I've seen don't do a whole lot of this.


Last suggestion is: the mbrainz example doesn't create a partition, and it seems to me it ought to.


Along those lines, two more questions:
- Is a legal partition name?
- "Your schema should create one or more partitions appropriate for your application." <- I seem to recall hearing during the Q&A at Datomic Conf something about when it's appropriate to mess with partitions, and I thought it was "not much, if ever". If I have :foo/name, :bar/name, :baz/qux, should I be creating partitions for :foo, :bar, and :baz?


Partitions are an optimization for index locality only. If you know ahead of time that some set of data will often be accessed together, then it might make sense to put those data in a partition


You are very unlikely to suffer from not having defined your partitions perfectly.
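For concreteness, a sketch of defining a partition and opting in at transaction time (Peer API; the :communities name follows the Seattle sample):

```clojure
;; Install a partition, then place a new entity in it via its tempid.
@(d/transact conn
   [{:db/id                 #db/id[:db.part/db]
     :db/ident              :communities
     :db.install/_partition :db.part/db}])

;; Entities created with a tempid in :communities get that partition's
;; index locality.
@(d/transact conn
   [{:db/id          #db/id[:communities]
     :community/name "Greenlake"}])
```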


@marshall: I promise these are the last of my questions for the night, and I want to thank you for all of your help so far. I notice that the samples/seattle/getting-started.clj example shows creating a partition for :communities, and it transacts a :community datom using a :communities tempid. It made me wonder a couple of things:
1. The partition was created after import of the Seattle data. Can the creation of a partition after the fact change anything about the locality of datoms that were already added?
2. Along the same lines, is there any secret handshake between the partition's name :foo and attributes which use the namespace :foo? (See :community/name with a partition named :communities.) There isn't, right? I assume the only way to get the locality boost that partitions allow for is to reference that partition in the :db/id when adding a datom.


And, I suppose finally -- if you were to realize at some point: "Wow, I really need better locality..." what would you do?


Correct. The partition a datom is in can only be defined when it is transacted


And the namespaced keyword you use for the attribute ident is not related to the partition


For your last question: I've never seen that happen, but if it did, the approach would be the same as for 'I need fulltext' or 'I need to change the data type of an attribute':
- rename the existing attribute
- create a new one with the old name in the desired partition
- migrate the data from the 'old' attribute to the new one
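A hedged sketch of that rename-and-migrate recipe (attribute names illustrative; a real migration would batch the copy step):

```clojure
;; 1. Rename the existing attribute out of the way.
@(d/transact conn [{:db/id :community/name :db/ident :community/name-old}])

;; 2. Re-create the old name as a fresh attribute.
@(d/transact conn [{:db/ident       :community/name
                    :db/valueType   :db.type/string
                    :db/cardinality :db.cardinality/one}])

;; 3. Copy values from the old attribute to the new one.
(let [db (d/db conn)]
  @(d/transact conn
     (for [[e v] (d/q '[:find ?e ?v :where [?e :community/name-old ?v]] db)]
       [:db/add e :community/name v])))
```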


@marshall: I realize you get paid for it, but it's late, so I owe you a beer.


to help people out in this channel, I mean


Thanks a lot for all of the help; I really appreciate it.


Have a good night.


No worries:) you too


@robert-stuttaford: was a typo in my schema, my bad 😕 thanks for your help


@yonatanel: Per your question from yesterday at 4:31 AM EST, I talked to Stu about the 2012 post you linked. We optimized query with predicates in 2013:

## Changed in 0.9.5130
* Performance Enhancement: The query engine will make better use of AVET
  indexes when range predicates are used in the query.

In terms of your second question, the API doc is correct; it's essentially saying that :vaet contains datoms for attributes of :db.type/ref and is the reverse index.
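For example, the raw VAET index can be walked with d/datoms to find everything that points at an entity (a sketch; referenced-eid is a placeholder):

```clojure
;; All datoms whose value is a reference to referenced-eid.
;; Each datom's :e is an entity pointing at referenced-eid
;; via the ref attribute in :a.
(seq (d/datoms (d/db conn) :vaet referenced-eid))
```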


@jaret: Do you know if only a single index is used in queries? I wonder if I should cram filtering logic into queries, or have a minimum of that in queries and the rest in regular clojure code. I have reasons for both


How can I query to see if a :db.cardinality/many value is exactly equal to a passed in collection? For example, I pass in a collection ?coll and I want to find all entities whose :cardinality-many value is exactly equal to ?coll. So I write:

'[:find ?e .
  :in $ [?coll ...]
  :where [?e :cardinality-many ?coll]]

But this returns all entities whose :cardinality-many value contains an entity in ?coll.


Where the passed in collection is a set, so order does not matter in the equality check.


@kenny: you can just compare the set of coll to (:c-m (d/entity db your-e)) with =
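i.e., something like this (a sketch, using the placeholder attribute name from the question):

```clojure
;; d/entity returns a cardinality-many attribute's values as a set,
;; so plain set equality works.
(= (set your-coll)
   (:cardinality-many (d/entity db your-e)))
```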


I am trying to find your-e though


afaik datalog doesn't support this directly. to express it in datalog terms, it'd be

[:find ?e
 :in $ ?c1 ?c2 <and more>
 :where [?e :attr ?c1]
        [?e :attr ?c2]
        <and more>]

(the :where clauses are implicitly and-ed)


you could write a function

(defn has-exact-coll? [db e your-coll-as-set]
  (= your-coll-as-set (:attr (d/entity db e))))

and call it from within your datalog

(d/q '[:find ?e
       :in $ ?your-coll-as-set
       :where [?e :attr]
              (your-ns/has-exact-coll? $ ?e ?your-coll-as-set)]
     db (set your-coll))


but you'll want to find some other way to first restrict which ?es you're looking at, because otherwise you're checking every entity this way 🙂


one simple way to do that is to include a clause that first checks for ?e with :attr, as i have done above


does that make sense?


Yes. Thank you 🙂


Hi all! I'm playing around with querying Clojure data structures with Datomic's query engine. I've covered most of the things I wanted to try, but I'm finding it difficult to express one particular thing. Say I have two lists of lists, the first containing user information (account id, gender, zip code) and the second containing some replacements (account id and zip code). I want to get all users from the first list, returning the zip code from the second list if the user is in it, or the zip code from the first list otherwise. I'm not sure of the best way to express this: whether the data should be merged before I query it, or whether I can do this within the query. So far I have

(ns datalog-test.core
  (:use [datomic.api :only (db q pull) :as d]))

(q '[:find ?accid ?gender ?zip
     :in $p $r
     :where (or (and [$p ?accid ?gender _]
                     [$r ?accid ?zip])
                [$p ?accid ?gender ?zip])]
   [[1 :m 22321] [2 :f 23343] [3 :m 32431] [4 :f 34958]]
   [[2 49884] [3 4857]])

but I get the error:

:db.error/invalid-data-source Nil or missing data source. Did you forget to pass a database argument? {:input nil, :db/error :db.error/invalid-data-source}

Can anyone offer guidance on how best to achieve the above?


hello, I've installed the datomic dep in my clojure project but don't know where to find the transactor


@flipmokid: the db connection is an obligatory argument that should be passed as the last parameter



(def db (d/db conn))


@vinnyataide Hi, I'm using this directly on Clojure data structures and not using a Datomic instance.


the problem is that you are using a q function that expects a db


the datomic api expects a data source, even an in memory one


passing data structures instead of db's is a thing you can do


not sure what's up with the error, though


I see 2 data structures right?


can you do that?


maybe not?


I also think that you don't need the :in clause


@bhagany: Yes, it's an odd one (unless I'm doing something silly); I find I'm only seeing the errors when using the or/and expressions. @vinnyataide: check out the gist, I was surprised and happy that you could do datalog queries on clojure data directly


In the gist he uses multiple collections too


I suspect it's related to the or clause and multiple data sources in there? And the :in clause is definitely necessary when passing more than one data source.


I was looking at this, and noted the lack of :in… but now I realize that it's because it's implicit


ah, that's right… with or it has to be like ($ or …)


I see, so I can only refer to one data source at a time in the or


Hmm... I wonder how I could achieve what I want to with that restriction


I believe or, not, and pull expressions may all have rough edges when it comes to handling multiple data sources.


fwiw, this returns results for me:


(q '[:find ?accid ?gender ?zip
     :where (or (and [?accid ?gender]
                     [?accid ?zip])
                [?accid ?gender ?zip])]
   [[1 :m 22321]
    [2 :f 23343]
    [3 :m 32431]
    [4 :f 34958]
    [2 49884 0]
    [3 4857 0]])


I added the 0's in the last two datoms to resolve an IndexOutOfBoundsException


@bhagany: Thanks for trying, I'll give it a go now and see what results it gives


what about the transactor?


I can't find anything about the location in the documentation


it expects you to download it separately, as a standalone service


are you trying to run a dev transactor?


there's a shell script to start it up in the package you download - bin/transactor


but I downloaded it as a dep in lein


idk where it is


there are two parts - the thing you downloaded via lein is the client library. the thing you download from is needed as well, for the transactor


I'm using pro starter though, if you're using free, the process might be somewhat different


me too, I'm using pro starter


Yeah, since the transactor is only one per machine, it's kinda obvious


thanks for the help


okay, then just to put it all together, here's my whole install process:
- download a zip from and unzip it
- bin/maven-install for the client library
- modify the sample to fit my needs
- bin/transactor + appropriate args to run the transactor
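Roughly, assuming a Pro Starter zip (the version number and properties filename here are illustrative):

```shell
# Sketch of the install/run steps above; adjust version and properties.
unzip datomic-pro-0.9.5544.zip && cd datomic-pro-0.9.5544
bin/maven-install                 # installs the peer library into ~/.m2
cp config/samples/dev-transactor-template.properties dev.properties
# edit dev.properties (license-key, host, ports), then:
bin/transactor dev.properties
```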


do I need the maven install if I set up the project with gpg credentials?


I think that accomplishes the same thing, but I've never tried it


I'm gonna make a technical report about a system that I'm making in datomic with om next, so these details are really good to me 🙂


Are pull queries supposed to work with history databases?


(d/q '[:find ?p (pull ?tx [:db/txInstant]) ?added
       :in $ ?userid
       :where [?u :user/purchase ?p ?tx ?added]
              [?u :user/id ?userid]]
     (d/history db) userid)


Throws an IllegalStateException for me.


@zane: no, pull is only supported on current value of db


Or rather, not on a history db. I believe it does work on asOf dbs
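A common workaround (a sketch, not from the thread): skip pull on the history db and join the current db just to resolve :db/txInstant:

```clojure
;; $h is the history db (for the purchase datoms and the added flag);
;; $ is the current db, used only to look up each tx's instant.
(d/q '[:find ?p ?inst ?added
       :in $h $ ?userid
       :where [$h ?u :user/purchase ?p ?tx ?added]
              [$h ?u :user/id ?userid]
              [$ ?tx :db/txInstant ?inst]]
     (d/history db) db userid)
```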