
Is it unwise to use :db/noHistory on a :db.unique/identity attribute that is meant to identify ephemeral, high-churn entities?
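(Concretely, the kind of attribute in question — a sketch, idents made up:)

```clojure
{:db/ident       :session/token
 :db/valueType   :db.type/string
 :db/cardinality :db.cardinality/one
 :db/unique      :db.unique/identity
 :db/noHistory   true}
```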


I'm passing as-of a Date from clj-time's to-date but I'm getting a casting error, something to do with datomic.db.IDb. Has anyone had this issue before?

class java.util.Date cannot be cast to class datomic.db.IDb (java.util.Date
   is in module java.base of loader 'bootstrap'; datomic.db.IDb is in unnamed
   module of loader 'app')


Are you passing the date as the first argument instead of the second?
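(For reference, d/as-of takes the database value first and the time basis second; passing the Date first produces a cast error like the one above. A minimal sketch:)

```clojure
;; correct: database value first, time basis second
(d/as-of (d/db conn) #inst "2019-10-22T08:50:00.000-00:00")

;; wrong: the Date cannot be cast to a db value, hence
;; "java.util.Date cannot be cast to ... IDb"
;; (d/as-of #inst "2019-10-22T08:50:00.000-00:00" (d/db conn))
```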


Nope, here's my code

(let [time (tc/to-date
            (t/from-time-zone (t/date-time 2019 10 22 16 50 0)
                              (t/time-zone-for-offset +8)))
      db-then (d/as-of (d/db conn) time)]
  (d/q '[:find ?doc
         :where [_ :db/doc ?doc]]
       db-then))

and db-then is a db...

(let [time (tc/to-date
            (t/from-time-zone (t/date-time 2019 10 22 16 50 0)
                              (t/time-zone-for-offset +8)))
      db-then (d/as-of (d/db conn) time)]
  (type db-then)) => datomic.client.impl.shared.Db


This made me laugh very hard. Thought it'd be of interest to the immutability fans out there 😂


Don't forget the FAQ section.


Just found out the author is in this slack...


Hi. I want to "play around" with datomic cloud but I'm having trouble completing the tutorial. The datomic cloud setup on AWS all seemed to work fine. I have datomic-access running and I can do the curl -X socks call from the tutorial and I get a successful response with s3-auth-path. But when I try (d/create-database client {:db-name "testion"}) I get:

Unable to find keyfile at
   . Make
   sure that your endpoint and db-name are correct.

Msr Tim15:10:59

you need permissions to access those s3 files

Msr Tim15:10:29

did you see that file in s3 ?


I'm new to AWS. I made an IAM user and gave him AmazonS3FullAccess permissions. Is that enough? How can I test access to those files?


I'm now the root user and still have the same problem. I'll try deleting and re-creating the CloudFormation stack; maybe that helps


We recently upgraded from the Solo to the Production topology. When using a lambda proxy, the request contains a requestContext with authorizer claims parsed from the oauth2 token in the request. When using the VPC Proxy, this information is missing. Is there a way to retrieve it?

Joe Lane17:10:36

@dmarjenburgh We ended up parsing the token (jwt in our case) in a pedestal interceptor to work around this missing piece in http-direct.


Yeah, figured that would be the thing to do. Thanks

Joe Lane17:10:17

If you need to see the contents of the request I suggest casting the request object and looking at it in cloudwatch (seemed to be the only way to debug it)


Did you use a library for parsing the jwt, or just base64decode it yourself?

Joe Lane18:10:52

Decode it myself
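(For anyone curious: a JWT is three base64url segments, header.payload.signature, so reading the claims yourself is a few lines. A sketch — it only decodes the payload, it does NOT verify the signature:)

```clojure
(require '[clojure.string :as str])
(import '(java.util Base64))

(defn jwt-claims-json
  "Returns the JWT payload segment as a JSON string.
  Does not verify the signature."
  [token]
  (let [[_header payload _sig] (str/split token #"\.")]
    (String. (.decode (Base64/getUrlDecoder) ^String payload) "UTF-8")))
```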


Anyone tried this? Is there a foot-gun lurking here?


I’m thinking of also using :db/isComponent for all attributes on these ephemeral entities so that I can retract them in one fell swoop.


What does that gain over retractEntity?


Ah yeah, good point.
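(For context: :db/retractEntity already cascades through :db/isComponent attributes, so no extra markup is needed just for cleanup. A sketch using the client API, with a made-up lookup ref:)

```clojure
;; retracts the entity, all its attribute values, and, recursively,
;; any entities reachable via :db/isComponent attributes --
;; one transaction, no manual bookkeeping
(d/transact conn {:tx-data [[:db/retractEntity [:session/token "abc-123"]]]})
```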


I have a feeling tho that the sage advice might be to not store this type of data in datomic. I’m attempting to avoid bringing in another storage mechanism.


why do you think there might be some special problem here?


Just looking ahead to see if this is a known bad idea. I’m gathering that it’s probably just fine tho.


a minor caveat is that noHistory is not a guarantee of no history ever, just that history will be dropped from indexes. so you may still see some history between the last indexing job and now


also I don’t know if history disappears from transaction logs


I see in the docs that the indexes are stored in S3. Would it be correct to say that :db/noHistory = :db/noS3? Or just that the datom will eventually not exist in S3?

> The effect of :db/noHistory happens in the background

Maybe that’s what this means. The datom is eventually scrubbed from S3 in the background..?


> also I don’t know if history disappears from transaction logs

Looking in the docs again, this would mean that it’s still stored somewhere at the end of the day, yeah? DDB in this case. And these datoms would still show up in d/tx-range.


I don’t know the ins-and-outs of cloud


for on-prem, tx log data is written to storage and kept in peer+transactor memory until the next indexing job kicks in. Reads transparently merge the last persisted index with the in-memory index derived from the tx log; but when the in-memory index is flushed to storage, the history of noHistory attributes is not written. I don’t know whether the transactor also takes the additional step of rewriting the tx log to remove attribute history, but it seems unlikely to me. For cloud, I don’t know the precise mechanics of where that in-memory log goes or what precisely happens during indexing


anyway, this is easy to test. If it matters to you it’s probably better than listening to me speculate


actually only on-prem is easy to test. on cloud there’s no d/request-index, so you will have to induce it some other way (probably via lots of writes).
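(An on-prem sketch of such a test — the idents are made up, and you’d wait for the indexing job to settle before the final query:)

```clojure
;; assumes :user/last-seen was installed with :db/noHistory true,
;; and :user/email is a :db.unique/identity attribute (upsert)
(d/transact conn [{:user/email "a@example.com" :user/last-seen (java.util.Date.)}])
(d/transact conn [{:user/email "a@example.com" :user/last-seen (java.util.Date.)}])

(d/request-index conn)  ; ask the transactor to start an indexing job

;; once indexing completes, history should retain only the latest value:
(d/q '[:find ?v ?tx ?op
       :where [?e :user/last-seen ?v ?tx ?op]]
     (d/history (d/db conn)))
```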


it will stay in history logs


Thanks for the info guys 🙏

Msr Tim21:10:16

how much AWS expertise does one need to run and maintain datomic? I finally set up a test production topology. It set up tons and tons of AWS things that I don't really grasp.

Msr Tim22:10:53

Is there a future possibility of a hosted version of datomic?


…you mean on-prem?


I think he meant a version where all you have to do is get credentials, download the client and you're good to go. This is what I thought cloud was going to be initially. Right now, you still have a lot of moving pieces with all the AWS stuff you have to set up yourself.

Msr Tim13:10:25

yeah. Exactly.

Msr Tim13:10:15

I don't feel confident at the moment that I can maintain the AWS setup on my own in a small team

Msr Tim13:10:04

maybe someday, if I get up to speed with AWS properly


Possibly on-prem has a lower support burden? It is less embedded into AWS


run a transactor, run a peer, use dynamo for storage

👍 4
Msr Tim16:10:22

ah.. maybe

Msr Tim16:10:36

but I would really prefer not running my own database