#datomic
2017-01-31
mruzekw01:01:47

Has anyone thought of using the Pull API like a GraphQL query for client-side systems?
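They are close in shape: a pull pattern is plain data describing a selection tree, much like a GraphQL query. A minimal sketch, with made-up attribute names (`:user/name` etc. are assumptions, not from this log):

```clojure
;; A Datomic pull pattern is just data, much like a GraphQL selection set.
;; Attribute names below are hypothetical, for illustration only.
(def user-pattern
  [:user/name
   :user/email
   {:user/friends [:user/name]}]) ; nested join, like a GraphQL sub-selection

;; With a connection in hand you would run something like:
;; (d/pull (d/db conn) user-pattern user-eid)
```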

pesterhazy08:01:42

@wei, that seems long but connecting to datomic does take a while

pesterhazy08:01:57

With dynamo, a minute or so

pesterhazy08:01:50

Remember that datomic peers have a more active role than jdbc clients

pesterhazy08:01:26

I assume they still need to pull some segments to get started

robert-stuttaford09:01:01

they need to download the database roots, all idents, and the live index

dominicm11:01:20

I'm struggling a little bit with figuring out how to use conformity with transactor functions, particularly because edn parses ' in a way I wouldn't expect: (edn/read-string "['[db]]") returns [' [db]], whereas clojure.core/read-string returns ['[db]]. So the code (particularly the datomic query in my tx function) will not read correctly as edn.

dominicm11:01:35

Hits me after I ask, (quote [:find ...]) is the solution to the ' problem
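For reference, a sketch of the fix (runnable without Datomic): edn has no quote reader macro, so `'` is read as a stray symbol, but spelling out `(quote ...)` is ordinary list syntax that clojure.edn and clojure.core read identically:

```clojure
(require '[clojure.edn :as edn])

;; '... is Clojure reader sugar, not part of edn; (quote ...) is a plain
;; list that both readers handle the same way:
(def q-form
  (edn/read-string "(quote [:find ?e :where [?e :db/ident]])"))
;; q-form is the list (quote [:find ?e :where [?e :db/ident]])
```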

apsey11:01:35

Hi, I have two questions related to datomic’s infrastructure and provisioning: 1) Cognitect’s AMI: the AMI in use dates back to May 2014. Are there plans to release a new AMI with security updates and a newer Amazon Linux version? 2) Logback config: to change logback.xml, should I simply replace it via the AWS EC2 Userdata?

tengstrand12:01:07

After two weeks of work we now have a generic save function that can “update”, create and retract arbitrary nested data structures, originally returned from pull queries (and then modified by e.g. a web client). Are there any similar libraries already available for Datomic? Otherwise it could be an idea to open source it.

stijn13:01:32

@teng: we have built something similar for comparing data coming in from external sources. curious how you approached it

danielstockton13:01:00

@mruzekw Yes, om.next is basically that.

mruzekw13:01:01

@jeff.terrell I have. I’ve been using it client side. I’m currently investigating best ways to communicate with the server, which is why I ask about Datomic

mruzekw13:01:59

@danielstockton I like some parts of Om but am more a fan of Rum. Are om.next’s parts isolated into different libraries?

danielstockton13:01:06

Not sure I understand the question, sorry.

mruzekw13:01:34

I wondered if I could take the reconciler part and use it with Rum instead of Om

danielstockton13:01:50

Sorry, I don't have experience with rum at all. It might be possible to combine the two somehow. What do you think rum provides that om is missing?

Niki13:01:42

Simplicity :) and full control

danielstockton13:01:59

I find om (next) to be quite simple. Underneath, it's just plain react components, there isn't too much left when you take away the reconciler. For example, you can use a different templating library if you wish, although I prefer plain functions myself.

danielstockton13:01:40

I haven't tried rum, perhaps I don't know the simplicity and control I am missing.

danielstockton13:01:14

I can understand why it might seem overkill for simple applications with uncomplicated state. I also would like to see much tighter integration with datascript, instead of atoms + normalization helpers.

tengstrand13:01:31

@stijn We define every “foreign key”, how the entities are related to each other + update/retraction/creation rights. Then we can perform all CRUD operations recursively on arbitrary data structures. Very neat. Took a week to write the 40+ tests!

tengstrand13:01:06

@stijn How did you solve it?

stijn13:01:39

@teng: the idea is that data coming in from e.g. XML files that needs to be updated in the database has a 'scope' which is basically a pull spec + a query that identifies the entities. If an xml file contains entities for all users with property x=a, the query will select the entities in the db that match the xml file scope. It might be that some files contain different details about the users, so the second thing needed is the pull spec to compare to. With that info we can generate adds and retracts by walking both the data that comes from the db and the data from the xml file.
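One way to sketch the add/retract generation described above, using clojure.data/diff on one hypothetical entity map pulled from the db and one parsed from a file (the attribute names and entity id are invented for illustration):

```clojure
(require '[clojure.data :as data])

;; Hypothetical maps: one pulled from the db, one parsed from an XML file.
(def db-user   {:user/id 1 :user/name "Ann" :user/email "ann@old.example"})
(def file-user {:user/id 1 :user/name "Ann" :user/email "ann@new.example"})

;; diff returns [only-in-first only-in-second in-both]; values present only
;; in the db become retractions, values present only in the file become adds.
(def tx-data
  (let [[only-db only-file _] (data/diff db-user file-user)]
    (concat
     (for [[a v] only-db]   [:db/retract 1 a v])
     (for [[a v] only-file] [:db/add 1 a v]))))
```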

stijn13:01:59

so it's a bit different from your use case i guess

tengstrand14:01:12

@stijn Interesting! Another thing. We are adding auditing now, storing when and by whom the db was updated, like this: {:db/id #db/id[:db.part/tx] :auditing/changed-by "system-x"}. I retrieve the tx information in separate queries (with help from @favila, so thanks again!) by using find queries that have “sub” pull queries. I couldn’t figure out how to retrieve the auditing data in the same pull query as the original one. Do you know if it’s possible to also retrieve tx-related data (like auditing) for all attributes in a nested pull query, so that we don’t need to do several queries?

Lambda/Sierra14:01:58

@teng Pull queries navigate relationships between entities. Transactions are entities just like any other. If you wanted to get both transactions and other entities in a pull query, you would need to have a :db.type/ref attribute linking the transaction entity to those other entities.
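A minimal sketch of the pattern @stuartsierra describes, assuming a hypothetical :audit/entities ref attribute and string-tempid style (with the peer library of this era you would write #db/id[:db.part/tx] instead of "datomic.tx"). The sketch is plain Clojure data, so no Datomic connection is needed to read it:

```clojure
;; Hypothetical schema: a many-cardinality ref from the transaction entity
;; to the entities it is known to affect.
(def audit-schema
  [{:db/ident       :audit/entities
    :db/valueType   :db.type/ref
    :db/cardinality :db.cardinality/many}])

;; Transaction data asserting the link on the tx entity itself:
(def tx-data
  [{:db/id "user-1" :user/name "Ann"}
   {:db/id               "datomic.tx"
    :auditing/changed-by "system-x"
    :audit/entities      ["user-1"]}])
```

An entity pull could then walk back to its auditing transactions through the reverse ref, e.g. (d/pull db [:user/name {:audit/_entities [:auditing/changed-by]}] eid).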

tengstrand14:01:22

@stuartsierra Ok, thanks, I will try that!

karol.adamiec14:01:03

@stuartsierra how would that look? one needs to explicitly define that in schema and then in every assertion/retraction provide that data? or can i grab txInstant from any entity by default?

tengstrand14:01:34

@stuartsierra I’m not sure if that solves our problem. The tx id can vary between attributes for an entity. We can’t add an extra attribute for every attribute?!

karol.adamiec14:01:12

atm i am having `[:find (pull ?tx [:db/txInstant]) (pull ?e [*]) :in $ :where [?e :order/identifier "asdf" ?tx]]`

karol.adamiec14:01:37

and process that collection later to merge txInstant into order record.
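That post-processing step might look something like this minimal sketch (the row shape mirrors the two-pull find above; the keys and values are hypothetical):

```clojure
;; Hypothetical query result rows: [tx-pull entity-pull]
(def rows
  [[{:db/txInstant #inst "2017-01-31T12:00:00"}
    {:order/identifier "asdf" :order/total 10}]])

;; Merge the txInstant into each order record:
(def orders
  (for [[tx order] rows]
    (assoc order :order/tx-instant (:db/txInstant tx))))
```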

Lambda/Sierra14:01:51

Transactions are collections of datoms. A datom links an Entity, Attribute, Value, and Transaction. The pull API only knows about Entities.

Lambda/Sierra14:01:14

If you wanted to navigate associations between Entities in your data and Transaction entities, you would need to add those associations in your data when you transact it.

tengstrand15:01:52

@stuartsierra …by doubling the number of attributes in every entity?

Lambda/Sierra15:01:39

That was not the use case I had in mind. What I have seen in the past is associations between Transaction entities and the top-level entities that are known to be "affected" by that Transaction. If you need the association at the granularity of individual attributes — i.e., individual Datoms — then you will need to use the query API.

tengstrand15:01:21

@stuartsierra Ok. We need the granularity to be at the attribute level, so then I will just continue with the query API as you suggested. Thanks.