
Is toplevel :jvm-opts considered? I just played with it a bit; defining a property, e.g.,

{:jvm-opts ["-Dapp.config=config.edn"]}

is only effective when it is put under an alias.

Alex Miller (Clojure team) 02:05:51

no, that's not a feature


For that, if I want to enforce it:

:jvm-opts ["-XX:-OmitStackTraceInFastThrow"]

Where shall I put it? Resort to some Java environment variables?


@UGC0NEP4Y What's the execution context here? Are you concerned about this for dev/test? Or for running code "normally"?


When I run code, whether with clojure -X:dev or clojure -X:prod, or even when no -X is specified, I always want OmitStackTraceInFastThrow to be disabled.


OK, so you're using aliases for that already? You can add :jvm-opts to those aliases?


If there are no better options, I can add it. It's just duplicated work. I just discovered today that top-level :jvm-opts is ignored, and mainly wanted to confirm that here.


clj-kondo should warn on global :jvm-opts; feel free to upvote:


It doesn't have to be duplication of work; you can always have a ‘common’ alias or something that you include with your other aliases when running.
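As a sketch of that ‘common’ alias idea (the alias names and their contents here are illustrative, not from the original discussion), the deps.edn might look like:

```clojure
;; deps.edn - :common, :dev, and :prod are illustrative alias names
{:aliases
 {:common {:jvm-opts ["-XX:-OmitStackTraceInFastThrow"]}
  :dev    {:extra-paths ["dev"]}
  :prod   {}}}
```

Aliases compose on the command line, e.g. `clojure -X:common:dev` or `clojure -X:common:prod`, and the :jvm-opts from each included alias are concatenated, so the flag applies without repeating it in every alias.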

Mark Wardle 10:05:06

Is there a clever and simple way of getting the versions of a project's dependencies, irrespective of whether running as a jar, or using the clj/clojure command line tooling? It seems to me that the only way of doing this is to look for and parse the deps.edn file in the current directory, and use that, if it exists [presuming we are running from clj] and then fallback to looking at a resource in the classpath, which I would have to inject as part of a process? Neither approach will work properly if the project is used as a library and the owning project uses a later version of the dependency, which I guess I can accept. Or is there a better way? This is important because of reproducible research - I want to include versions of data and versions of software used in any analytics pipeline, so a live service needs to include metadata on the data and software versions used. Thank you.


> Or is there a better way?
Nope.

> This is important because of reproducible research - I want to include versions of data and versions of software used in any analytics pipeline, so a live service needs to include metadata on the data and software versions used.
I'm not sure I follow. If you're in control of how the relevant processes are started, simply always refer to deps.edn for all the versions. Alternatively, build an uberjar and use that as the source of truth - the versions don't matter at all if you have the binary that was used to produce the results (assuming the JVM and the overall environment are also the same - that could matter).

Mark Wardle 10:05:53

Thanks. Thought as much. I can manage it in the top-level project, but I have a few of those, so I was looking to let each dependency manage its own versioning and data. They can do the latter, but the top-level project will have to supplement that with the software versions used at the time. Thank you.


So you have projects that can be run by themselves but also could be a part of a larger project? And that project is also run as a single JVM process as opposed to running other code by shelling out?

Mark Wardle 10:05:17

Yes, that's right. As a library or as a standalone service. Usage depends on context - e.g. a live clinical system vs. a research pipeline. I'm using the same logic and data, but obviously composed slightly differently.


And what do you mean "reproducible" in this case - reproducible in the same context and same setup (library vs standalone) or reproducible across contexts and/or across setups?

Mark Wardle 11:05:30

Thanks. Reproducible in that, given the version strings (data and software), someone could come along and reproduce the same results because they could set up the same configuration. I think I will make overall versioning a function of the top-level project and either stuff something into the class path (for jars) or parse deps.edn for the clj CLI. It felt as if I might be missing a better approach, but it sounds like I'm not :)


It's still quite a bit vague, but:
• If a project is run from sources by itself, deps.edn will be the source of truth - you don't need to do anything here.
• If a project is run as a JAR, pom.xml (or the contents of the jar, if it's an uberjar) will be the source of truth - once again, no need to do anything to make it reproducible.
• If a project is part of a larger project, the two items above apply to the larger project, since it can potentially override some transitive dependencies. But you still don't need to do anything here.
The only way something can become broken when using deps.edn or a thin jar with pom.xml is when you yourself have replaced some files in ~/.m2. If you don't do that, all should be fine. If you're still worried, an uberjar is the way to go. Reproducibility is not something that only research is interested in; the whole programming community is interested in it, so it's pretty much a solved problem, as far as I can tell.

Mark Wardle 12:05:16

Ah, I see where I have been unclear. Running software needs to include metadata in its output that gives provenance to the materials produced. You can do that out of band when the tools are used directly, by invoking them and then including that additional information. Where it doesn't work is when, for example, the data are made available via an API. The requirement is to include those version strings inside the running software so that they can be included in any data output as part of that process.

Mark Wardle 12:05:12

So I will use deps.edn if it is there, and pom.xml as a fallback. It's akin to wanting to include the version string in, e.g., a command-line tool, but also the versions of some important dependencies and the database versions (snapshots) they are currently using. Thanks for your help.


Ahh, I see! :) In this case, I'd probably output everything I could - top-level deps.edn if it's found, any other deps.edn if there are local dependencies, top-level pom.xml (both deps.edn and pom.xml could exist, and might be out of sync with each other), the class path, and maybe even hashes of all the JARs on the class path (in case there's stuff like ~/my-lib.jar).

Alex Miller (Clojure team) 13:05:04

May be helpful for reporting the whole transitive dep list

Alex Miller (Clojure team) 13:05:35

Can also output edn if needed

Alex Miller (Clojure team) 13:05:33

The Maven dependency plugin may have something similar

👍 1
Mark Wardle 14:05:46

Thanks both. So I could probably output everything, including the results of -X:deps list, as part of my build and make it available to the code generating/processing the data. Thank you.


@U013CFKNP2R You could use tools.deps.alpha programmatically to get that data if you wanted to do it in-process with the code generation/processing...
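For the in-process route, here is a minimal sketch (namespace and function names are illustrative) that reads the declared versions straight from deps.edn with plain clojure.edn; tools.deps.alpha does the same more robustly and can also resolve the full transitive dependency set:

```clojure
(ns version-report
  (:require [clojure.edn :as edn]
            [clojure.java.io :as io]))

;; Returns the map of lib symbol -> coordinate declared in the
;; project's deps.edn, or nil when there is no deps.edn in the
;; working directory (e.g. when running from a jar).
(defn declared-deps []
  (let [f (io/file "deps.edn")]
    (when (.exists f)
      (:deps (edn/read-string (slurp f))))))
```

When running from a jar, you would fall back to reading pom.xml (or pom.properties) from the classpath, as discussed above.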

👍 1
Mark Wardle 17:05:32

Thanks. I'll do some investigation!

Alex Miller (Clojure team) 18:05:12

if you need any help on that, happy to do so - it's only a few lines of code

Mark Wardle 20:05:50

That's a kind offer, but I will look forward to delving in to something quite new to me, and it is very much specific to this work. 🙂


I have some common tests in a library that I’m trying to invoke from another project, is there a better way to do that than this:

(ns my-project.test.common
  (:require [clojure.test :refer :all]))

(deftest shared
  (run-tests 'my-lib.common.test))


I don't see why run-tests should be wrapped inside another deftest, but other than that... you might also want to set the exit code of the process based on failing tests.



(def test-results
  (t/run-tests 'your.test-a 'your.test-b))

(def failures-and-errors
  (let [{:keys [fail error]} test-results]
    (+ fail error)))

(System/exit failures-and-errors)

🙌 1

oh nice, hadn’t thought to check the babashka book out for tips! also, I came to #clojure to try and give you a break @U04V15CAJ 😂


And here I am, on my break ;P


you are superhuman! 🙇


Setting an exit status greater than 255 might have unintended consequences.


Ah, right - you should probably just return 1 or 0.
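Concretely, since POSIX exit statuses are truncated to their low 8 bits, a clamped variant might look like this (the test namespace name is a placeholder):

```clojure
(ns test-runner
  (:require [clojure.test :as t]))

;; run-tests returns a summary map with :fail and :error counts.
;; Exit statuses are truncated to 8 bits, so e.g. 256 failures would
;; wrap around to an exit code of 0 - clamp to 1 instead of summing.
(defn exit-code [{:keys [fail error]}]
  (if (pos? (+ fail error)) 1 0))

;; Usage (namespace name is a placeholder):
;; (System/exit (exit-code (t/run-tests 'your.test-ns)))
```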

👍 1

There was an issue a while back that was an eye opener for me:


If there are 256 failures we're all good. I love computers. 😉