George Ciobanu04:10:56

is there a way (without using the excellent tx-function examples shared by @andrew.sinclair) to retract multiple values from a cardinality-many ref attribute? E.g. in (d/transact conn [[:db/retract [:app/id "app_2"] :children [:component/id "Component 1"]]]) I'd like to remove multiple children (components) at once.


Each retraction performed like this (with a list-form :db/retract) translates directly into a single datom, so you would need to create a :db/retract for each child you want to retract
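
A sketch using the entities from the question above — one :db/retract list form per child, all submitted in a single transaction ("Component 2" and "Component 3" are assumed ids for illustration):

```clojure
;; Each list form retracts exactly one datom, but any number of them
;; can go in one call to d/transact.
(d/transact conn
            [[:db/retract [:app/id "app_2"] :children [:component/id "Component 1"]]
             [:db/retract [:app/id "app_2"] :children [:component/id "Component 2"]]
             [:db/retract [:app/id "app_2"] :children [:component/id "Component 3"]]])
```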

George Ciobanu14:10:24

Got it. Thank you so much marshall

George Ciobanu04:10:11

Another question: similar to the example above, is it possible to remove a child's parent? I tried (d/transact conn [[:db/retract [:component/id "Component 1"] :_children [:app/id "app1"]]]) and I get an error: :cause ":db.error/not-an-entity Unable to resolve entity: :_children"


the underscore syntax is only valid in pull


in this case you would want to retract from the parent entity, so something like: [:db/retract [:app/id "app_1"] :app/children [:component/id "Component 1"]]


How to get all entities with a missing field? For example, all customers that do not have :customer/address


[(missing? $ ?customer :customer/address)] should do that I think.
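
Wrapped in a full query, that clause looks something like this sketch (:customer/name is an assumed attribute used only to bind ?customer; db is a database value):

```clojure
;; Find customers that have a name but no address.
(d/q '[:find ?customer
       :where
       [?customer :customer/name]
       [(missing? $ ?customer :customer/address)]]
     db)
```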


that works thank you 🙂


Is there a way to rename a Datomic Cloud application? I need to change my dev environment to prod and I put dev in the name.


Can I redeploy with a different name?


Do you mean the system name or the codedeploy application name?


I think you have to set up new stacks if you really want a new system name.


The system name = storage stack name and it’s everywhere.


Yes, I thought so...


also for the codedeploy application name you have to recreate the compute stack, you cannot alter this parameter in cloudformation (I tried it last week)


Hello! I’m wondering if anyone can help me with a query that looks for missing values and is timing out — I have the following function, which looks for user entities that have an id but are missing the name. My goal is to assemble those incomplete users so I can do a scan to fetch their missing information. But the query for getting the count times out. I’m sure there’s a better way to write the query? THANK YOU!!! (I’m running this via datomic proxy, if that matters…)

(defn count-uninitialized-users
  "Count users with missing names."
  []
  (let [conn (get-conn)]
    (ffirst
     (d/q '[:find (count ?e)
            :where
            [?e :user/id]
            [(missing? $ ?e :user/name)]]
          (d/db conn)))))

Execution error (ExceptionInfo) at datomic.client.api.async/ares (async.clj:58).
processing clause: [?e :user/id ?id], message: java.util.concurrent.TimeoutException: Query canceled: timeout elapsed

Mark Addleman15:10:45

I'm aware of three different strategies to deal with timeouts: First, there is a timeout option on the query object. Second, I have found that using the async api can alleviate the problem (I think async helps when the result set is very large, but I'm not certain). Third, switching to the index api rather than the query api is a last resort.


@genekim You can pass a :timeout to the query if you think it’s an issue of it just being a long-running job

marshall15:10:06

you’ll need to use the arity-1 version of the q function


I believe the default timeout is 60s
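
In the arg-map (arity-1) form, that looks something like this sketch — :timeout is in milliseconds, the query is the one from above, and db is the value from (d/db conn):

```clojure
;; The arity-1 q takes an arg-map, which is where :timeout goes.
(d/q {:query '[:find (count ?e)
               :where
               [?e :user/id]
               [(missing? $ ?e :user/name)]]
      :args [db]
      :timeout 120000}) ; raise from the 60s default
```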


Ooh!!! Promising, @marshall @mark340! I’m giving that a shot! Thx! I’ll keep you posted.


@marshall @mark340 Changing timeout worked! Thank you, all!!! Woot!

👍 4

@genekim Did you try swapping the clauses?


Yep!!! Alas, still same result.


ok, that would have been too easy. Not sure then


Is missing? something from Datomic?


@jeroenvandijk — the approximate count of entities is 115K with names, and 1MM without names…


ah i see it is, never mind


hmm ok i have no idea, sorry


@genekim make sure to pass db to your functions, so that you can assemble together many functions that operate on the same db basis. This is useful for things like reporting routines, or really anything where you need to take multiple looks at the database and ensure you're seeing the same thing. Plus, it has the benefit of centralizing the d/db calls and cleaning up the function bodies
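
Applied to the earlier count function, the pattern looks something like this sketch (get-conn and the attribute names come from the snippets above):

```clojure
;; Take the db value as an argument instead of calling (d/db conn)
;; inside the function body.
(defn count-uninitialized-users [db]
  (ffirst
   (d/q '[:find (count ?e)
          :where
          [?e :user/id]
          [(missing? $ ?e :user/name)]]
        db)))

;; Obtain one db value and thread it through every query, so all
;; reads see the same basis:
;; (let [db (d/db (get-conn))]
;;   (count-uninitialized-users db))
```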


Thank you!!! People kept suggesting that to me, but I can’t say I actually understood why until your comment. That’s awesome!!!


@ghadi Ohhhhh…. Got it… I’ll look in the music-brainz code samples to see how it should look. Thank you!

🌈 4

Is it possible to downgrade from production to solo?


Nvm, I answered my own question...


Actually, I didn't. Is it possible to downgrade from production to solo?


I am building a new production instance of Datomic Cloud and want to downgrade the dev instance to Solo. My attempt at just upgrading failed. Any ideas?

Joe Lane20:10:57

@hadi.pranoto You cannot downgrade from production to solo


Has anyone deployed an application on AWS Lambda that uses Datomic Cloud? I'm curious how Datomic Cloud would handle many short-lived connections.


We have some lambdas doing that in production. The connection overhead is far overshadowed by the Clojure runtime boot up time and the VPC ENI provisioning time.


AWS is rolling out a fix for the ENI problem but the Clojure startup time is still a barrier for any customer-facing use cases (we just use lambda for background processing right now)


Thanks for the info. Couple other questions. Are you AOT'ing your code? About how many lambdas are running in parallel?


Datomic itself seems to have no issues that we’ve seen so far.


Yup we are AOTing the code, however, bootstrapping the Clojure runtime on a lambda incurs a ~1.9s hit for cold starts. We’ve tested this in Java lambdas that only depend on Clojure and just import the clojure.core ns.


At most we’ve had ~8 lambdas in parallel hitting datomic.


That's totally fine for our application. Anything under 10s is probably ok actually. What is your actual startup time with all your production deps?


Hmm, we will have a significantly higher level of parallelism. Should be somewhat easy to test, I suppose.


Because of the VPC issue, it takes about 18s. However, we’ve tested in other availability zones that have the VPC fix and that goes down to ~8s.


Is the VPC issue only a problem when sources outside the VPC need to contact the Lambda?


It’s an issue when you need to run a lambda in a VPC (which you need to for Datomic).


The fix is likely only a few weeks away for us-east-1. You can follow it here


Gotcha, ok. Are you guys using deps.edn?


Another option that we’ve considered is just exposing datomic over API gateway using ions w/ http direct to expose a REST-like interface for datomic operations and securing it with an IAM authorizer.


Yup, we use deps for all our dependency management.


Interesting idea. What do you use to AOT?


We have an internal library that we’ve soft open-sourced (use at your own risk): . It’s pretty basic, just uses clojure.core/compile and juxt.pack


We use lambda for pretty much everything so it has some extras in there for making it easy to manage multiple lambda entrypoints in a service.


Nice, will write it down to take a look at when we're ready to investigate more. Why'd you guys write your own lib instead of using one of the existing AWS Lambda clj libs?


None of them worked quite the way we wanted. We use quite a bit of middleware internally and wanted an interface more conducive to the pattern established by ring. Additionally, we do quite a bit with cloudformation and we have a library to make connecting lambdas to that easier. We wanted better control of the output artifacts so we could line that up better.


Only other one I was aware of was


Yep, that was the one I'm familiar with.


Looking at the org, I see cdk-clj. We use Pulumi at our org for deployment. It sounds like cdk-clj leverages jsii to work with a TypeScript API from the JVM. Is that correct? I'd be very interested in exploring that with Pulumi.


It does. However, the library has to explicitly be built with a jsii configuration.


Pretty sure it was built specifically to make CDK cross-language compatible. I’m not aware of any other projects using jsii.


I'm not familiar with jsii. Does that mean the libraries need to be actually written in a jsii-compatible way? Or are there declarative spec files the library needs to have?


I believe so, although this is stretching my knowledge at this point since we just did it to target CDK. The source library has to be in typescript and has some restrictions:


Our implementation requires the jsii bundle manifests that get produced in the JVM target config. No idea how hard it would be to build an existing project with jsii.


Looks like Pulumi is in typescript so it might work.


Not sure. Pulumi supports a few different languages now but I don't think they use jsii.


Not a high priority for us but definitely super interesting stuff. I'll bring it up with the Pulumi team to see if they have any input. Thanks for all this info - been super useful.


You guys don't happen to use any native libraries in your Lambdas do you @U0BKC8NCU?


We don’t, no. I’ve dabbled with them on side projects though.


Ok. One of the services we're interested in running on Lambda needs some native libs. Seems like other people have used native libs on the jvm on Lambda.


You should be able to build the native lib on a docker image like and then package it into a lambda layer


If you have a pre-built binary that’s compatible, you can just put that on a lambda layer directly and ignore the docker step.

tyler23:10:13

is what you’d want to build on, not the ci image I posted above.


That seems like a better path than manually extracting the native libs from the classpath. Thanks! What do you guys use for logging in Lambda? I've used CloudWatch logs in the past and can hardly stand it after using Datadog logs for a while.


i.e. Do you guys ship the CW Lambda logs to somewhere else? Do you have some other log viewer?


We use cloudwatch logs for lambda logging but we’ve been moving to using traces instead with Amazon Xray and then just linking the logs from Xray.


We also post all the cloudwatch logs to an elasticsearch instance, although we haven’t really used that since we’ve gotten Xray set up properly.


Do you have some wrapper around the Xray api? We use opentracing right now and have a defn wrapper that can be used to instrument certain functions.


Yeah we just have a lightweight wrapper around the Xray java API. Nothing fancy. We are leveraging the annotation and metadata features pretty heavily, not sure how those interplay with opentracing. We’re pretty much all-in on AWS so we have tried to use their supplied libraries to get as much as we can out of their services.
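
A minimal sketch of such a wrapper, assuming the AWS X-Ray Java SDK's static AWSXRay recorder (with-subsegment is a hypothetical name, not their actual wrapper):

```clojure
(import '[com.amazonaws.xray AWSXRay])

;; Open a subsegment around body, always closing it, even on error.
(defmacro with-subsegment [segment-name & body]
  `(do (AWSXRay/beginSubsegment ~segment-name)
       (try
         ~@body
         (finally
           (AWSXRay/endSubsegment)))))

;; Usage:
;; (with-subsegment "query-users"
;;   (d/q {:query query :args [db]}))
```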


Specifically this. They will auto-instrument calls to all AWS services (which we use a lot of) and to anything using the apache http client.


We started using the cognitect aws-api so we probably will miss out on the auto-instrumentation 😞


We did too, although we are reverting back to the aws sdk. We like the interface of the aws-api lib by cognitect but it’s missing a lot of goodies. Luckily, the aws sdk v2 exposes a data class interface that allows the same aws-api payloads to be marshalled into request classes. We are working on an internal lib that is basically a 1:1 replacement for the cognitect aws-api library.


Oh very cool!