Oliver George 06:03:18

Hello. I'm trying to write a query which takes a list of filters and should only return when all filters return true. This is what I have. It's not right but perhaps on the right track.

Oliver George 06:03:46

Question is how to destructure the filter map and "and" the tests
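One way to sketch this (a guess at the shape of the problem: it assumes the filters arrive as a map of attribute to required value, and that every attribute ident and the `datomic.api` alias below are made up for illustration). Since clauses in a Datalog `:where` are implicitly AND-ed, emitting one data pattern per filter entry gives "only match when all filters hold":

```clojure
(require '[datomic.api :as d]) ; or datomic.client.api for Cloud

(defn filters->where
  "Turn a filter map like {:item/status :open, :item/owner \"oliver\"}
   into where clauses: [[?e :item/status :open] [?e :item/owner \"oliver\"]].
   Because :where clauses are AND-ed, an entity must satisfy every filter."
  [filters]
  (mapv (fn [[attr value]] ['?e attr value]) filters))

(defn find-matching
  "Run the generated query against db; returns a collection of entity ids."
  [db filters]
  (d/q {:find '[[?e ...]]
        :where (filters->where filters)}
       db))
```

Building the `:where` vector from data this way also avoids the double-evaluation pitfalls that can come with predicate-function clauses.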

Oliver George 09:03:11

Slowly getting there. I have a recursive step to expand out the filters now.

Oliver George 09:03:57

Last odd thing is that my filter? function is called twice as often as I would expect (so same inputs presented twice). I'd love to know why.


Modeling question: has there been anything written around the idea of potentially collapsing the (fully qualified) attributes on cardinality one, component refs into their parent refs?


A few things: updates to component trees become less complex, and, though it’s a little involved to explain, I think it also becomes a little less complicated to create strict specs with regard to required keys. Anecdotally, I’ve noticed cases internally where we’re constantly flattening nested component refs that we initially modeled that way because it matched a business concept. I wouldn’t say it’s a reason in itself, but you also drop the collecting attribute. In any case where you might refer to the component aggregate, you can select-keys.


By updates, I mean upserting novelty into a nested document. Though it’s not that bad with a tree; you just have to pull the matching :db/ids.


so, does that sound unfounded then?


I've wanted to do something similar to this before, as datomic results can get very deeply nested if you're traversing long paths in the graph


FWIW, there's no technical reason you couldn't put :user/name and :address/street in the same entity, though, and just sort of manually smoosh two different entities together
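As a sketch of that "smooshing" (the attribute idents and data here are invented, and this assumes the client API's `transact`/`pull` shapes; `:user/name` is assumed unique so it can serve as a lookup ref):

```clojure
(require '[datomic.client.api :as d])

;; Nothing stops one entity from carrying attributes from several
;; "namespaces" at once — namespaces are a naming convention, not a
;; structural constraint.
(d/transact conn {:tx-data [{:user/name      "Ada"
                             :address/street "1 Analytical Way"}]})

;; A pull then returns the flattened entity as one map, no nested
;; component ref to traverse:
(d/pull (d/db conn) '[:user/name :address/street] [:user/name "Ada"])
```

The trade-off is that you lose the ability to treat the address as its own entity (shared references, component retraction semantics), so it fits best when the nesting never earned its keep.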


I have a query against solo that returns about 3000 items and takes about 22 seconds. I was hoping to reduce that by using :limit, but it seems to take the same amount of time. Thoughts on speeding up the query? Likely I’ll just cache those on the server for now so the web-based client can get them quickly. Wondering in general how to approach this with Datomic Cloud.


Having to adjust from having frequently used queries automatically cached in the peer server!


@donmullen that should speed up on a warm cache


Hi people, we have a lot of entities that we want to get rid of (~1M per day from early January). In the schema of those entities we have noHistory true for all of the attributes. Yesterday I was playing with excision on my laptop, and Datomic was fully indexing for hours and hours with only 10k excised entities. The procedure was: excise, excise-sync, gc-storage... So today I was thinking: given noHistory, if we retract such an entity, will it show up in a backup? What is your strategy for dealing with old data?


I’m suddenly getting the permission error (`Forbidden to read keyfile at s3://....`) and can’t reproduce on another machine with the same credentials (it works!). What else could cause this?


AWS creds. That error indicates that you are running in a role / with credentials that don’t have the correct permissions @denik — see:


it’s possible you have your local AWS profile configured in one place but not the other


there is a hierarchy/order of precedence for the various credential sources (env creds, profile, etc.)