
for example I have location, and project/location is a :ref (cardinality one) to location. Now I want to query all projects that have that location?


@nxqd: "You can navigate relationships in the reverse direction by prepending a '_' to the name portion of a reference attribute keyword." - pasted from the Datomic tutorial. In your case this means you get a location entity from Datomic. With that entity in hand, you get all project entities for that location simply by calling:

(:project/_location location-entity)
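A sketch of the same lookup in Datalog, for comparison (here `db` and `location-eid` are assumed to be bound to a database value and the location's entity id; `:project/location` is the ref attribute from the question):

```clojure
(require '[datomic.api :as d])

;; Reverse lookup via the entity API (underscore reverses the ref direction):
(:project/_location location-entity)

;; The equivalent Datalog query, navigating the ref in its forward direction:
(d/q '[:find [?project ...]
       :in $ ?location
       :where [?project :project/location ?location]]
     db location-eid)
```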


maybe this has been asked before, but how does t->tx work exactly? I don't see how a transaction number could map to any one transaction independently of the database.
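A sketch of the usual answer (not authoritative): `t` is the transaction's position in the log, and `d/t->tx` simply encodes the `:db.part/tx` partition into the high bits of the entity id, so the mapping is pure arithmetic and needs no database at all:

```clojure
(require '[datomic.api :as d])

;; t->tx and tx->t are pure functions; no connection required.
;; The tx entity id is the t value with the :db.part/tx partition
;; encoded in the high bits, so the round trip is lossless:
(d/t->tx 1000)            ;; an entity id in the :db.part/tx partition
(d/tx->t (d/t->tx 1000))  ;; back to 1000
```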


Just to check my rough understanding of how the transactor works. When you transact something, the transactor first writes it to the log, and updates the indexes periodically in the storage layer. The peers and memcached never have to touch the transactor, as they can follow the log along and fetch from the indexes in the storage layer once the transactor has indexed. I’m assuming there’s no “push” from the transactor to the peer to say that indexes have been updated, instead it’s pulled from the storage layer?


the txor and peers all keep a live index of the newly transacted but not yet indexed-in-storage datoms


for this reason, the txor is pushing all novelty to peers as it happens, so that queries on peers can consider the full database, not just storage index


my rough understanding is that the process is like this: 1. txor logs to tx-log in storage 2. transacting peer informed 3. live index updated; pushed to peers 4. merge of live-index to storage-index possibly triggered due to threshold reached @bkamphaus can confirm :simple_smile:


essentially. log is always durable and all data is in the live/memory index after the transaction, as in the diagram here:


transactor notifies peers of new data (logged on peer)


But the live index isn’t the actual data indexes, right? I.e. is it some reduced form of the log that notifies the peer of new data, but not the data itself? Thanks for that diagram, I thought I’d seen it somewhere


Say, if I'm going to provide a unique identifier for an entity that will be used in an external process (web service), should I create my own ID with (d/squuid) and :db/unique :db.unique/identity, or can I just use the actual entity ID?


This seems to imply that entity IDs are only for internal keys, but I don't see it explicitly stated anywhere:


@lucasbradstreet: those details are documented in the "On the Peer" section of the memory index docs in caching.


Specifically:
* A peer builds the memory index from the log before the call to connect returns.
* A peer updates the memory index when notified by the transactor of a transaction.
* A peer drops the portion of the memory index that is no longer needed when notified by the transactor that an index job has completed.


@timgilbert: you should create an identifier with (d/squuid); it’s true that an entity id is essentially an internal id. If, e.g., you ever have to migrate records to another Datomic database, a uuid will be stable in the migration. An entity id won’t be, as it’s assigned by the db and can’t be specified.
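A minimal sketch of that approach (attribute and binding names here are illustrative, not from the source):

```clojure
(require '[datomic.api :as d])

;; Schema: an externally-visible uuid, unique by identity.
(def schema
  [{:db/id                 #db/id[:db.part/db]
    :db/ident              :entity/external-id
    :db/valueType          :db.type/uuid
    :db/cardinality        :db.cardinality/one
    :db/unique             :db.unique/identity
    :db.install/_attribute :db.part/db}])

;; Assign a squuid when creating the entity:
(def new-entity
  {:db/id              (d/tempid :db.part/user)
   :entity/external-id (d/squuid)})

;; Later, resolve the external id with a lookup ref:
;; (d/entity db [:entity/external-id some-uuid])
```

Because the attribute is `:db.unique/identity`, the lookup ref form `[:entity/external-id some-uuid]` can be used anywhere an entity id is expected.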


these details are distributed through the docs in different places as you acknowledge, is there a first place you would have looked where you’d want that info to be more explicit?


Thanks @bkamphaus. I did scan the "uniqueness" page looking for this, but then I thought I remembered it from one of the datomic videos, which are a little hard to grep 😉


@bkamphaus: thanks so much. I guess I should’ve done more homework in the docs!


@lucasbradstreet: no worries, I’ll admit it’s not immediately apparent that you should jump to the caching topic to answer that question :simple_smile:


Yeah, heh. What does “notify” mean here? "A peer updates the memory index when notified by the transactor of a transaction." Is that as little as the tx-id?


yep. with peer logging on you’ll see a message like:

2016-01-26 14:16:43.046 INFO  default    datomic.peer - {:event :peer/notify-data, :msec 2, :id #uuid "56a7e23a-2e3e-41ca-bf4d-b9113aba6e41", :pid 24212, :tid 28}


Ah, cool, I am definitely going to turn peer logging on. That’s a good trick


does anyone know of a tool that will generate datomic schema from prismatic schema?


Can anyone tell me of a way to query on a date range on a non-indexed attribute?


@petr same query will work with or without index, although performance will differ. Do you mean a date for your own attribute or domain, or Datomic transaction time?


If I add a UUID attribute with :db.unique/identity, that will mean that my whole entity is stored an extra time (so four times vs. three), since it’s additionally stored in AVET, right?


bkamphaus: it’s a date for my own attribute


@lucasbradstreet: that’s partially correct, avet will be set to true for any attribute, but only that attribute/value will be indexed, not everything on the entity.


I have only found datomic.api/index-range though (


entities are derived from datoms, not directly stored in their entirety in indexes.


I basically want to find all entities that had a date attribute between two date-times


@petr you can use standard comparison, < and > etc. in query as the default case. index-range or datoms with :avet will work if you need to page through things by time.




@petr sorry re: the datoms + :avet, just remembered your condition specified not indexed. So, yes, index-range and datoms with :avet won’t work, but query will.


Oh that totally makes sense


it’s also fairly cheap to turn :avet on - especially for a regularly sized value like an inst, long, etc. Any particular motivation for keeping it off?
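If you later decide to index the attribute, a schema alteration is enough; a sketch (assuming a `conn` bound to a connection, and `:moo/some-date` as the attribute from this thread):

```clojure
(require '[datomic.api :as d])

;; Turn on AVET indexing for an existing attribute...
@(d/transact conn [{:db/id    :moo/some-date
                    :db/index true}])

;; ...then request an index job so historical values of the
;; attribute are merged into the AVET index in storage:
(d/request-index conn)
```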


bkamphaus: [(< :moo/some-date #inst "2015-10-14T17:30:00.953-00:00")] ?


So maybe it’ll be used to lookup the eid, and then if you wanted to access a bunch of attributes on that entity, they’d be accessed via EAVT


Nevermind. I just bind it to a var


@petr I would parameterize the time values myself, i.e. have ?inst1 and ?inst2 in the :in and provide values.
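A sketch of that parameterized form, using the attribute name from this thread (`db` and the two insts are assumed inputs):

```clojure
(require '[datomic.api :as d])

;; Find entities whose :moo/some-date falls between two insts,
;; with the bounds passed in via :in rather than hard-coded:
(d/q '[:find [?e ...]
       :in $ ?inst1 ?inst2
       :where [?e :moo/some-date ?date]
              [(< ?inst1 ?date)]
              [(< ?date ?inst2)]]
     db
     #inst "2015-10-01T00:00:00.000-00:00"
     #inst "2015-10-14T17:30:00.953-00:00")
```

This works with or without :avet on the attribute; indexing only changes the performance characteristics, not the result.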


Yep, I would do too. Was just testing


i'm trying to get cloudwatch monitoring to work but so far not having any luck. i wrote up the details of what i've done on a stackoverflow question. any suggestions of what i could do to get it working would be appreciated


@gworley3: I’ve only used our documented permission granularity or let it be set via the ensure-transactor process and never had any issues. i.e.,

   ["cloudwatch:PutMetricData", "cloudwatch:PutMetricDataBatch"],


in general my first troubleshooting step (if the situation allows) for any AWS config that’s not working is to try and run the transactor locally with keys in the environment with pretty wide permissions, so I can get a sanity check on my settings with the complexity of role config factored out.


Your situation may or may not allow for a troubleshooting step like that, of course.


@gworley3: also just a sanity check, you’re using Pro or Pro Starter?


ok, should be fine, it’s just Datomic Free that's not supported for cloudwatch metrics.


interesting. when i look at the iam role access advisor it says nothing has tried to access cloudwatch through the role


also, where should i expect to see them show up when it works? metrics on the ec2 box or as a separate datomic section or somewhere else?


@gworley3: re: where they’ll show up, I use CloudWatch from the AWS console, from the left drop down menu there’s a “Custom Metrics” drop down where you can select “Datomic”


ah, ok. i don't (yet) show anything like that


I would double check that the IAM Role that displays on the instance description in the EC2 Dashboard is the correct one, also. I just checked a working transactor IAM role and its inline policy for metrics is verbatim from the docs:

   ["cloudwatch:PutMetricData", "cloudwatch:PutMetricDataBatch"],


on startup I usually see it take 5 minutes or so for metrics to show up.


i changed the role to have this exact policy but still not seeing anything