
@val_waeselynck: Ah yeah sorry, that wasn’t a given. It is in a transaction that I am using the lookup ref. I am basically doing a big data import from another database, with a lot of tables that are linked by ids. For each id I am creating a ref, which I create by finding the entity that it references. The reference I am trying to resolve should be there, but in case it isn’t (due to a data error) I still want the transaction to succeed. However, when using lookup refs, the transaction fails when the lookup ref doesn’t resolve to an entity. Currently I am using a query at the peer to find the id if present, but it is not as nice a solution as just putting a lookup ref in the transaction (and it adds the problem of making sure that the peer is synchronised with all the relevant transactions).


@ljosa: Thanks @bkamphaus. Can you say something more here? Does the order depend on the index used?


@casperc: I’d use the query in a transaction fn.


@val_waeselynck: Good point, that might be the way to go there.
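[Editor’s note] A sketch of that approach, with illustrative names (`:my/add-ref-if-exists` and the attributes are hypothetical, not from the thread): the transaction function resolves the lookup ref against the in-transaction db value, so the reference is asserted only when the target entity exists, and the rest of the transaction still succeeds when it doesn’t.

```clojure
;; Install once (tx fn names and attributes are illustrative):
[{:db/id (d/tempid :db.part/user)
  :db/ident :my/add-ref-if-exists
  :db/fn (d/function
           '{:lang :clojure
             :params [db e attr lookup-ref]
             ;; entid returns nil when the lookup ref doesn't resolve,
             ;; so a missing target simply asserts nothing.
             :code (if-let [target (datomic.api/entid db lookup-ref)]
                     [[:db/add e attr target]]
                     [])})}]

;; Later, in the import transaction data:
;; [[:my/add-ref-if-exists item-tempid :item/owner [:owner/external-id 42]]]
```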


@akiel: no order is promised, since the underlying semantics of the retrieved entities is a set. Regarding the implementation as a vector as opposed to the entity API — due to the flexibility of nested pull specifications, it’s possible to get conflicts in a set, which would reduce the count of elements and mislead about the number of matched entities. The tradeoff, as you recognize, is not being guaranteed correctness in equality, etc. of retrieved vectors, since order is not promised.


@bkamphaus: I understand your point with nested pull specifications. I also understand that promising an order would constrain future implementation changes. But is it possible to declare the ordering as undefined but consistent within a version of Datomic, and either within one point in time or over all points in time?


hi @hjrnunes


Can I add an entity like this, nesting components as shown?


@akiel: I understand the case you’re making and can make a note of the request. To set expectations, though, we’re pretty conservative on making guarantees, and consistent ordering within a few caveats like within but not between versions is not likely to be one we’ll be making. At least for the short term it’s just a tradeoff when using pull.


@bkamphaus I’m fine with this. Thanks.


@hjrnunes: I’m not sure whether you can specify nested entities. But if so, they’re missing the :db/id (d/tempid …) part.


But it’s always possible to add the entities first and reference them later by their tempid (in the same transaction)
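[Editor’s note] A sketch of that pattern (attribute names are illustrative): the same tempid resolves to the same new entity everywhere it appears within one transaction.

```clojure
;; Create the referenced entity and the referrer in one transaction,
;; linked by a shared tempid.
(def product-tempid (d/tempid :db.part/user))

(d/transact conn
  [{:db/id product-tempid
    :product/name "Chocolate"}
   {:db/id (d/tempid :db.part/user)
    :lineItem/product product-tempid   ; reference by tempid
    :lineItem/quantity 1}])
```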


so I’m trying to do two different things here, I guess


some of the components reference existing entities, others reference new entities that are to be created


I understand your point re. the second case i.e. the new ones


but what about the first case?


btw, I’m trying to use lookup refs but I tried with the actual id in long format, and still get the same error


@hjrnunes: See: — I believe the issue is that you need to use the map form, not the list form, for the parent entity (you can’t nest a map as a value in a :db/add list form).


so the entire transaction needs to be a map then?


@hjrnunes: not the entire transaction, but the nested map has to be inside a map. You could specify other [:db/add …] or [:db/retract …] forms in the transaction as a whole.
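[Editor’s note] A sketch of the distinction (idents are illustrative): the nested component map must sit inside the parent’s map form, while unrelated list forms can still appear alongside it in the same transaction.

```clojure
[;; map form: nested component maps are allowed as values
 {:db/id (d/tempid :db.part/user)
  :order/id "o-1"
  :order/lineItems [{:lineItem/quantity 1}]}
 ;; list forms remain fine elsewhere in the same transaction
 [:db/add [:product/sku "sku-1"] :product/price 10M]]

;; NOT allowed: a map as the value in a :db/add list form
;; [:db/add parent-id :order/lineItems {:lineItem/quantity 1}]
```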


I see; does the map format implicitly mean :db/add?


@hjrnunes: Yes, internally transformed into the add form, see:
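[Editor’s note] Conceptually (a sketch, not the literal internal code), the expansion looks like:

```clojure
;; One map form…
{:db/id tid
 :recipe/name   "Stew"
 :recipe/serves 4}

;; …expands into one :db/add per attribute-value pair:
[[:db/add tid :recipe/name   "Stew"]
 [:db/add tid :recipe/serves 4]]
```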


ok, so I guess the right transaction would look something like this then:


@hjrnunes: I would just put the :recipe/name assertion in the map as well, to be honest. As opposed to passing the same tempid twice.


I.e. if you’re just asserting the entity, one big map is typically the most readable form.


yeah I suppose that’s a good idea


Btw, can I use lookup refs in nested components?


I believe you should be able to. Not sure if I’ve done that specifically before, but I would expect that you can.
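[Editor’s note] A sketch of what that would look like, with illustrative idents — a lookup ref as the value of a ref attribute inside a nested component map:

```clojure
[{:db/id (d/tempid :db.part/user)
  :recipe/name "Stew"
  :recipe/ingredients
  [{:ingredient/product [:product/sku "beef-123"] ; lookup ref to an existing entity
    :ingredient/quantity 2}]}]
```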


ok, i’ll give it a try


well, I’m getting something more puzzling now


IllegalArgumentExceptionInfo :db.error/not-a-data-function Unable to resolve data function: :db/id datomic.error/arg (error.clj:57)


@hjrnunes: that error would indicate you’re using :db/id in a list form somewhere rather than map form.


Yeah, I thought that initially


I’ll double check


just to confirm


is the tx data supposed to be wrapped in a vector when it’s passed on to transact if it is a map?


i.e. (transact conn tx-data) or (transact conn [tx-data]), assuming tx-data is a map?


Using the example from:

(d/transact conn [{:db/id order-id
                   :order/lineItems [{:lineItem/product chocolate
                                      :lineItem/quantity 1}
                                     {:lineItem/product whisky
                                      :lineItem/quantity 2}]}])


right, so I can’t see what the issue is then


that’s my tx map, it gets wrapped in a vec before it goes to transact


@hjrnunes: and you’re sure you’re wrapping it in a vec and not converting it to one?


actually, I’m doing that exactly


@bkamphaus: perfect, got it working. Thank you sir!


Is it bad to include indexes on most schema attributes?


@currentoor: nope, in fact the overhead is fairly cheap. I’d probably turn on :avet indexes for anything that isn’t blobby, though we’ve improved performance for that specific problem (large string or binary values in :avet).


@bkamphaus: Oh ok awesome. I need to add them retroactively. Anything I need to be careful about?


nope, I would just review the docs on schema alteration to get example forms of the alteration transactions you need to submit.
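[Editor’s note] A sketch of such an alteration transaction, with an illustrative attribute name (the exact form varies by Datomic version; the :db.alter/_attribute entry marks it as an alteration rather than a new installation):

```clojure
;; Enable :db/index on an existing attribute (:person/email is hypothetical).
@(d/transact conn
   [{:db/id :person/email
     :db/index true
     :db.alter/_attribute :db.part/db}])

;; The new :avet coverage becomes available after the next reindex; one
;; can be triggered explicitly with (d/request-index conn).
```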


Cool, thanks!


Hey, maybe kind of a dumb question, but I've got my transactors deployed to AWS/DDB with IAM roles, and my deployed peer machines can connect to them just fine from the aws-peer-role roles, but now I can't figure out how to tell Amazon to treat my laptop as also being in that peer role during development.


What do people generally do to get this working, is there a way I can avoid setting up environment variables on every development laptop?


would Datomic be suitable for a real-time survey application? Say a lot of people cast votes on items and also see real-time updates of vote counts


my main concern is the write throughput. Any insight on this would be appreciated


@timgilbert: for running a transactor or peer on a laptop against ddb, as it’s a dev/testing scenario only, I just have my AWS user access keys in my environment. I think to use roles outside of AWS resources you’re still stuck with a user who must assume the role.


@settinghead: nothing about your use case intrinsically disqualifies Datomic. I guess the main question is what’s your best estimate of the throughput you’d need?


Is a sorted collection of squuids the same as if they were ordered by the time they were created?


Assuming they were created simply with (squuid)
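[Editor’s note] A sketch bearing on the question: d/squuid embeds the current time, at one-second resolution, in the UUID’s most significant bits, so sorting squuids approximates creation order, but squuids minted within the same second are not ordered relative to each other.

```clojure
(require '[datomic.api :as d])

;; Squuids carry (seconds since epoch) in their high bits.
(def ids (repeatedly 5 d/squuid))

;; Recover the embedded timestamp of a squuid:
(d/squuid-time-millis (first ids))

;; Sorting approximates creation order; ties within the same second
;; fall back to the random bits and may be reordered.
(sort ids)
```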