
I just noticed the documented limit on string sizes is 4096 characters. I transacted some data with more characters (>10k) and it stores and retrieves it just fine. It’s perfectly reasonable to set a limit on string sizes, but 4KB is often too small for our use case (the DynamoDB limit is 400KB). How do you best deal with larger text values?


I’ve seen others in this channel mention using an external blob store (e.g. S3) for “large” values, and storing only the key/identifier/slug of the value in Datomic. To retain immutability, the object in blob storage should never be modified in place; instead it should be copied on write, so Datomic’s history always points to valid blobs. Hope this helps.
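A minimal sketch of that copy-on-write pattern, assuming a Datomic connection `conn`, an S3-style `put-object!` helper, and the attribute `:doc/blob-key` (all names here are illustrative, not a real API):

```clojure
(require '[datomic.client.api :as d])

(defn update-large-text!
  "Write the new text to a fresh blob key (never overwriting the old blob)
   and transact only the key into Datomic. Old db values keep pointing at
   the old, still-valid blob."
  [conn put-object! entity-id new-text]
  (let [blob-key (str "docs/" entity-id "/" (java.util.UUID/randomUUID))]
    (put-object! blob-key new-text)          ;; new object; previous one untouched
    (d/transact conn {:tx-data [{:db/id        entity-id
                                 :doc/blob-key blob-key}]})))
```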


Thanks, I was thinking about that too, and combining it with CloudSearch for querying inside the docs.

Dustin Getz 16:04:22

I believe large blobs impact performance, which is the reason for the 4k limitation in Cloud.


Hello! Basic question: I want Datomic clients on a different machine than the peer server. Can I just start the peer server remotely, allow port 8998 through the firewall, and connect to it from my clients with the access credentials I set? Will that expose my access key and secret over the network? Will normal traffic be encrypted over the network? Or do I have to tunnel this myself if I want it encrypted? I'm working with the docs here: Thanks!


(if my question is stupid because of X reason, please do shout out; I'm figuring this out for the first time)


Slight update: I think I'm going the safe route: keeping the peer server behind the firewall and using an SSH tunnel for the connection. That way SSH stays the only means of access. I'm still not sure, though, so replies are welcome.
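For anyone landing here later, the tunnel can look roughly like this (`user` and `peer-host` are placeholders, and this assumes the peer server listens on 8998 on its own loopback):

```shell
# Forward local port 8998 to port 8998 on the peer server's machine.
# -N: no remote command, just the tunnel.
ssh -N -L 8998:localhost:8998 user@peer-host
```

Clients then connect to `localhost:8998`, and everything to `peer-host` travels inside the encrypted SSH channel.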


Does anyone have an opinion on best practices for access control in Datomic Cloud? In the past, with the peer model, I used (d/filter ...) in middleware to provide a filtered view of the database, so the risk of leaking data from a poorly written handler was low. There doesn't appear to be a straightforward solution to this problem in the client model.

👍 4

I did this by ensuring all queries/pulls/writes go through a decorator pipeline (I used interceptors, but plain fns would work) before they hit the client API. It works well as long as you ensure all client API calls are proxied by the pipeline.

👍 4

Unfortunately I can’t share the code because it includes proprietary design, but it’s not magical, just adding where conditions etc.
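A hedged sketch of the "adding where conditions" idea, not the poster's actual code. It assumes map-form queries and an attribute `:doc/org` linking each entity to its organization; `restrict-query` and `restricted-q` are made-up helper names:

```clojure
(require '[datomic.client.api :as d])

(defn restrict-query
  "Append an access-control clause so the query can only see
   entities belonging to the caller's org."
  [query-map]
  (-> query-map
      (update :in    (fnil conj '[$]) '?org)
      (update :where conj '[?e :doc/org ?org])))

(defn restricted-q
  "The only d/q entry point handlers are allowed to call."
  [db query-map caller-org & args]
  (apply d/q (restrict-query query-map) db caller-org args))
```

The point is that handlers never call `d/q` directly; the pipeline injects the clause, so a poorly written handler can't forget it.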


Pulls are trickier. In that case you have to check the results after they come back from the client API call.
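That post-check could look roughly like this; `:doc/org` and the caller shape are illustrative assumptions, since pull results can't be pre-filtered the way query maps can:

```clojure
(require '[datomic.client.api :as d])

(defn safe-pull
  "Pull the entity, then verify the caller is allowed to see it
   before returning anything."
  [db pattern eid caller]
  (let [result (d/pull db pattern eid)]
    (if (= (:doc/org result) (:org caller))
      result
      (throw (ex-info "access denied" {:eid eid})))))
```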