Jacob Rosenzweig02:01:41

Couldn't find a migratus channel but this is a simple question: What does it mean to migrate/up vs migrate/down a migration? I'm a bit unfamiliar with how migrations work in general.


migrate up applies a change; migrate down undoes it


Be careful if you're migrating data, though, since you can't always undo that
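For anyone following along, a minimal migratus sketch of the up/down idea (the db spec, file names, and table are illustrative placeholders, not from this thread):

```clojure
;; Needs the migratus dependency on the classpath:
;; (require '[migratus.core :as migratus])

(def config
  {:store         :database
   :migration-dir "migrations"
   :db            {:dbtype "h2" :dbname "example"}})

;; Each migration is an up/down pair, e.g.:
;; resources/migrations/20230101120000-add-users.up.sql:
;;   CREATE TABLE users (id INT PRIMARY KEY, name VARCHAR(64));
;; resources/migrations/20230101120000-add-users.down.sql:
;;   DROP TABLE users;

;; (migratus/migrate config)   ; apply all pending "up" migrations
;; (migratus/rollback config)  ; undo the most recent migration ("down")
```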


@rosenjcb There's a #sql channel where folks ask general database/SQL questions, so migrations come up as a topic in there sometimes.

Jacob Rosenzweig02:01:59

Oh there is one. I completely forgot about that.


(there are also #honeysql and #hugsql channels which are the two most common "helper" libraries for working with SQL/JDBC)

Nom Nom Mousse13:01:07

Questions about exceptions in futures: I have tried setting

;; Assuming something like (require '[clojure.tools.logging :as log])
;; (the original require was elided)
(Thread/setDefaultUncaughtExceptionHandler
 (reify Thread$UncaughtExceptionHandler
   (uncaughtException [_ thread ex]
     (log/error ex "Uncaught exception on" (.getName thread)))))
as suggested, but threads still seem to die silently. One suggestion is that this is due to Thread/setDefaultUncaughtExceptionHandler not applying to completable futures. Is there a way to work around this? Or what other ways are there to ensure exceptions are thrown from futures?


> Or what other ways are there to ensure exceptions are thrown from futures?
As I mentioned there, in order to do that you have to call .get on that future (if it's a Java future - with Clojure's futures, you can just deref them), or some other method that throws when the future's body has thrown. Otherwise, futures act like "fire and forget".
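A small sketch of that fire-and-forget behavior (names here are illustrative):

```clojure
;; The exception is swallowed until the future is dereferenced.
(def f (future (throw (ex-info "boom" {}))))

(Thread/sleep 100) ;; nothing printed so far; the failure is invisible

;; Deref rethrows, wrapped in an ExecutionException:
(try
  @f
  (catch java.util.concurrent.ExecutionException e
    (println "caught:" (.getMessage (.getCause e)))))
;; prints: caught: boom
```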

Nom Nom Mousse15:01:43

Okay. Then this should work (note the .get in the final line). So my exceptions must disappear for some other reason (incorrect catching, perhaps):

(defn add-completion-dispatch [java-proc jobid job]
  ;; assuming .thenApply chaining here, so a CompletableFuture is
  ;; returned for the .get below
  (.thenApply (.onExit java-proc)
              (reify java.util.function.Function
                (apply [this p]
                  (reset! app-state/jobresult [jobid (.exitValue p) job])))))

(defn dispatch! [jobid]
  (swap! app-state/jobid->jobstate assoc jobid :in-progress)
  (let [job (@app-state/jobid->job jobid)
        _ (tap> {:tap-id :start-job :tap-data {:jobid jobid :job job}})
        ;; In some cases (like when running samtools index on a publishpath)
        ;; only the publishpath (i.e. what is normally a symlink) is created.
        ;; To know that such jobs actually created a file, we need to remove
        ;; the symlinks first
        _ (helpers/remove-symlinks job)

        _ (helpers/create-folders! (helpers/files-and-symlinks-to-be-created job))
        bb-process (->process job)
        jobfuture (add-completion-dispatch (:proc bb-process) jobid job)]
    (read-process bb-process :err jobid)
    (read-process bb-process :out jobid)
    (.get jobfuture)))


@U0232JK38BZ I'm not sure what exactly you are doing, but to catch exceptions in futures you should have an explicit try-catch block around the whole future's body. That's typical, for instance, if you want to log something - e.g. timbre has logged-future, which can be something like this:

(defmacro logged-future
  "Given a body, execute it in a try/catch and log any errors."
  [& body]
  (let [line (:line (meta &form))
        file *file*]
    `(future
       (try
         ~@body
         (catch Throwable t#
           (println t# "Unhandled exception at:"
                    ~file "line:" ~line
                    "on thread:"
                    (.getName (Thread/currentThread))))))))
I personally like this (my own) extra wrapper which includes the client's stacktrace (very useful in all sorts of situations!)
;;; Now we get some extra goodies by preserving also the client stacktrace
;;; See 
(defn logging-future+* [file line body]
  `(let [client-stack-trace# (Exception. "Client stack trace")]
     (future
       (try ~@body
            (catch Throwable e#
              (log/error e# "Unhandled exception at:"
                         ~file "line:" ~line
                         "on thread:"
                         (.getName (Thread/currentThread)))
              (log/error client-stack-trace# "client stack trace:"))))))
(defmacro logging-future+
  "Logs any error that occurs during the execution of given body in a `future`
  *including* the client stack trace at the time of submitting the future for execution."
  [& body]
  (logging-future+* *file*
                    (:line (meta &form))
                    body))
#_(logging-future+ (Thread/sleep 1000) (throw (Exception. "ERROR!")))
Check it out if you haven't seen it.

🙏 1

The above advice is not entirely correct because in the general case a future might not know how to handle its own error correctly. You can use try-catch within a future to then turn it into a result map, like {:result something} or {:exception something}. But then you'd have to deref the future to get the result - which would propagate the error anyway.
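A sketch of that result-map pattern (the function name is made up for illustration):

```clojure
(defn safe-future
  "Run thunk on a future, capturing success or failure as data."
  [thunk]
  (future
    (try
      {:result (thunk)}
      (catch Throwable t
        {:exception t}))))

@(safe-future (fn [] (+ 1 2)))
;; => {:result 3}

;; Deref never throws; the caller inspects the map instead:
(:exception @(safe-future (fn [] (throw (ex-info "boom" {})))))
;; => the ExceptionInfo that was thrown
```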

Nom Nom Mousse14:01:34

Also, is it possible to get your program to only use a single thread to simplify debugging?


Is there a Jira for this (applies to dorun too)? These docstrings share a well-intentioned but over-simplified, not-entirely-correct statement about lazy sequences to explain why doall/dorun are useful. I'm not the OP, but it's potentially a one-word fix ("do not" -> "might not") unless you want to get rid of the contextual info entirely. (This is an example of "no good deed goes unpunished" in docstring-writing.)


I’ve noticed how much early Clojure code (inside Clojure itself) used a lot of metadata. For example, on functions there is metadata for docstrings, versioning, pre/post conditions, etc. Have we shifted away from the practice, since there are other language features that seem to replace it? Or are there still times and places where metadata is uniquely useful?


I can think of a few, just based on my team's production code:
• annotating nodes using metadata in combination with zippers, to carry extra information about tree nodes without affecting the actual data structure
• we have a data validation library, and metadata is used to embed schemas in validation functions (a bit hard to explain without the actual code)
• you can use metadata and protocols, so in some cases you can avoid using records


Metadata is still where all the listed things go, we just have nicer syntax for it instead of having to manually add it to metadata


Early on in clojure.core that syntax doesn't exist yet (it's added via macros), so all that stuff has to be added to metadata manually.
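For example, defn's docstring and attr-map sugar still just populate var metadata (the names here are illustrative):

```clojure
;; The sugared form:
(defn greet
  "Says hello."
  {:added "1.0"}
  [name]
  (str "Hello, " name))

(:doc (meta #'greet))   ;; => "Says hello."
(:added (meta #'greet)) ;; => "1.0"

;; The long-hand equivalent, writing the metadata by hand:
(def ^{:doc "Says hello." :added "1.0"} greet2
  (fn [name] (str "Hello, " name)))
```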


Ok, @U0NCTKEV8 that makes sense.


Here's an interesting perspective on defining metadata separately from its source:

Andrew Lai19:01:41

Hi Clojure channel! I'm trying to modify the compile-time logging of the `amazonica` library and re-route the AWS SDK logs to Timbre, and I'm having some trouble - seeking help. My project is a Leiningen project. Normally, when compiling, I see logs emitted from the com.amazonaws.auth.profile.internal.BasicProfileConfigLoader namespace - I believe these logs are coming through via Apache Commons Logging. My goal is to route these logs to Timbre and apply an ns-filter to them to eliminate them.

Checking JVM options to make sure I can set Timbre compile-time behavior
My project uses Timbre. I tried modifying the Timbre compile-time behavior by setting :jvm-opts ["-Dtaoensso.timbre.min-level.edn=:fatal"] in my project.clj, just to prove that I am setting JVM options correctly to change compile-time logging. Updating the jvm-opts did not affect the Amazon logs (they were still emitted), which is consistent with the logs being output via Apache Commons Logging. However, the change does affect Timbre logging (I included a (timbre/log "HELLO") statement at the top level - when I add the jvm-opt, this message is suppressed), so I have confidence that the JVM opt is affecting Timbre logs.

Re-routing Apache Commons Logging to SLF4J
So far, I still expect logs to be emitted, because I haven't re-routed the AWS SDK logs from Apache Commons Logging. When I include the log4j-over-slf4j dependency and slf4j-nop, I see the offending logs are suppressed, as I'd expect. This seems to indicate that the logs are getting re-routed via SLF4J, and the no-op implementation is suppressing them.

;; my deps when re-routing apache commons logs to slf4j and using a no-op
[org.slf4j/log4j-over-slf4j "1.7.30"]
[org.slf4j/slf4j-nop "1.7.30"]
The problem - rerouting SLF4J to Timbre
This is where I've run into problems. When I swap out slf4j-nop for slf4j-timbre, I would expect the logs to be re-routed to SLF4J, and then for SLF4J to re-route them to Timbre. However, when compiling, I see the WARN-level logs appearing, regardless of how I set the jvm-opts in my project to change the minimum logging level. When changing the jvm-opts, I can still see that the JVM opt affects my (timbre/log "HELLO") statement. So as far as I can tell, I have successfully re-routed logging to Timbre, but the Timbre configuration is only being applied to some of my logs (the ones I wrote), not to the logs emitted from amazonica via the AWS SDK.
;; my deps when re-routing apache commons logs to slf4j and then SLF4J-> Timbre
[org.slf4j/log4j-over-slf4j "1.7.30"]
[com.fzakaria/slf4j-timbre "0.3.21"]
I’ve been troubleshooting this for a while, and I’m not sure how to continue. Does this train of thought make sense? Would you also expect the logs to be getting routed through SLF4J to Timbre at this point? Do you have any suggestions or pointers?


correct me if I'm wrong, but it looks like this is a slf4j-timbre question / issue - it's not really about clojure is it?

Andrew Lai19:01:28

That’s a good point - I spent a bit of time thinking about where to put the question. I looked for a #logging channel and didn’t see one. You’re right though, it is about interactions between specific libraries.

Andrew Lai19:01:45

Do you have suggestions about where I should put this kind of comment in the future?


I think it's even more specific than that - this is how slf4j-timbre is expected to be used, so it's either a question of not following the instructions for that library, or a bug in that project?

Andrew Lai19:01:53

So a project issue in slf4j-timbre would be a better forum?

Andrew Lai19:01:22

Given that it doesn’t seem to belong here - I’ll delete this thread so others don’t spend time on it and move it over there. Thanks for the pointer @U051SS2EU!


I think so. There are gray areas for sure, I'm not saying you can't post about an issue with a clojure library / that people won't be able to help. But it does look like a candidate for a bug report (after double checking the README and ensuring you are using the lib as it's expected to be used...)

Andrew Lai19:01:51

:thumbsup: , I’ve read and re-read the readme (it’s quite small) and the source. I’ll write it up for posting over in GH

Drew Verlee22:01:59

When I run this function from my command line, I expected a Clojure Cryogen project (file structure) to be created. Instead I get No function found on command line or in :exec-fn. The command: clj -Sdeps '{:deps {io.github.cryogen-project/cryogen {:git/tag "0.6.6" :git/sha "fcb2833"}}}' -Tnew :template org.cryogenweb/new :name scott-jenkins/blog. Is the "function" in question "new" in "-Tnew"?


create is missing.


clojure ... -Tnew create :template ...

Drew Verlee22:01:35

That did what I wanted. Thanks Sean, I'll make a note on the Cryogen site.


(also make sure you're actually using clj-new there, not deps-new -- since clj-new recommends -Tclj-new and deps-new recommends -Tnew)


Cryogen's site says

clojure -X:new :template cryogen :name


That is not the same as -Tnew

Drew Verlee22:01:28

which do you recommend?


What's the question? (I wrote both clj-new and deps-new so the above is my recommendation?)

Drew Verlee22:01:48

The Cryogen site says to use deps-new, as such: clojure -Sdeps '{:deps {io.github.cryogen-project/cryogen {:git/tag "0.6.6" :git/sha "fcb2833"}}}' -Tnew :template org.cryogenweb/new :name myname/myblog


Ah, they've switched to deps-new... good... so that's an error in their docs.

Drew Verlee22:01:22

great. i missed that. 🙂


I submitted a PR to fix it.

🎉 1

BTW, there's a #cryogen channel which is probably where this belongs.

👍 1

I’ve been surveying the strategies for versioning in the Clojure ecosystem (e.g. semver semantics are janky, which is why Clojure’s version can look “odd”). Is there a summary of the suggested approach? Examples:
• “growth” API like do -> v0.7.0
• “stable” API like Clojure -> 1.10.3
• “stable” API like next.jdbc -> 1.2.761
• “stable” API like ClojureScript -> 1.10.741 (although the latest is actually 1.11.4 - is it aligning to Clojure now?)
The above are libraries, so the next question would be whether the same schemes apply to a web app? My use of “growth” and “stable” is borrowed from - Thanks all!


I have really grown to like the major.minor.commit-count approach for libraries, but I need to figure out how to automate documentation updates for that approach (since updating docs will add at least one commit). I think it's applicable to an application as well as a library. I think there is also a good argument for major.minor.yyyymmdd for an application, since that gives an immediate indication of how recent a release was, which I think people are sometimes more concerned about with an application?


> major.minor.yyyymmdd + a timestamp? I am thinking that if there are n releases a day we would lose that granularity. :thinking_face:


yyyymmddhhmmss is fine too 🙂


I've been thinking of just having one number + another number in the lib name. Something like:
lib v1, lib v2, lib v3, lib v4, lib v5,
lib2 v1, lib2 v2, lib2 v3, ...


Or maybe a major+minor, but it'd be the same setup: lib v0.0, lib v0.1, lib v1.0, lib v1.1, lib2 v0.0, lib2 v1.0.


The downside is that if you make a backward-breaking change that only breaks, say, 10% of users, it still forces 100% of users to upgrade since the namespace has changed.


What's the benefit of a commit count?

Joshua Suskalo02:01:18

commits between releases is a reasonable way to see about how much work went into a release.


Hum, I guess possibly haha, if you're someone who does a lot of intermediate commits.


But still, what would it help you decide? Would you hesitate to upgrade if you see the count went up a lot?


I like that the commit count in the version shows an immediate monotonically increasing sequence. You can ignore the major/minor and just see progress -- and it does give some idea of the level of work -- even if different people handles their git commits differently...


So would you tell users to ignore that part? And the real releases, i.e., commits that are coherent, well tested, and working, are always a major.minor bump?


Or do you make a rule to never commit something in a broken/incomplete state?


My projects automatically release a (snapshot) version to Clojars on every commit that passes tests -- and when I cut a release on GH, it only goes to Clojars if the tests pass (but otherwise the release is automated).


I'm tempted to remove the snapshot stuff and just let full releases go to Clojars on every commit -- and automate some of the release notes / change log etc.


The only difference between releases on Clojars and using SHAs on GitHub for me at this point would be that Clojars releases passed CI tests.


I don't think master (or main or develop or whatever your default branch is) should ever be broken. Do the work on a branch. If tests pass, merge it to your default branch.


That's a fancy setup. I like it; it shows confidence in your test coverage. I like people to tag things with a git release to indicate a similar thing - I guess passing all tests in your case, or just generally, "OK, this one I tested properly and it should work across the board."


So would it be your main commit count? Or the commit count of your branch as well? I guess that depends on how you do your merge.


git rev-list --count HEAD -- so it depends on how you manage your branches to some degree (whether you are merging or rebasing).
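A rough sketch of deriving such a version string in Clojure (the major/minor constants and the use of clojure.java.shell are my own illustration, not from this thread):

```clojure
(require '[clojure.java.shell :as sh]
         '[clojure.string :as str])

;; Illustrative constants; bump these by hand when warranted.
(def major 1)
(def minor 2)

(defn commit-count
  "Shell out to git for the monotonically increasing commit count."
  []
  (-> (sh/sh "git" "rev-list" "--count" "HEAD")
      :out
      str/trim))

(defn version []
  (str major "." minor "." (commit-count)))
;; e.g. (version) might yield "1.2.761" in a repo with 761 commits
```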


I wonder, if the idea is to give an indicator of the scope of change, whether something like the count of changed loc would be another good metric. You could just add the count of changed loc since the last release.


I think it's important to have something that is monotonically increasing, to be honest.


If the last part -- count of changed loc -- is essentially random, it serves no useful purpose.


Since you'd have to update the minor number every time or be major.minor.patch.loc-change


I'm thinking count of changes till now + count of changes since previous. So it would be incrementing, though not always by the same increment, so you could not say how many released versions exist between two versions.


Is changed line count useful for Clojure tho' given that we're form-based, not line-based?


I mean, you add in a let and all the code below it is "changed" by being indented and a ) added to the "last" line?


Even if you really only changed two lines in the form?


character count 😛 It's true though, with loc, it could create a scenario where you changed something, yet you get the same count.


In any case, for me, as a user of a lib, I've never had to care about any of this except: Can I safely upgrade, which version has the feature/fix I want? And sometimes, which version works with this other lib that depends on it. It's neat to maybe get a feel of the scope of change, but I'm not sure what I'd do with that info.


You should always be able to safely upgrade. Breaking changes just shouldn't happen.


Breaking change would = new lib?


Ok, so wait, in your scheme, what is Major and Minor ?


What do you mean?


Like why would they ever increment?


They don't have to. I think 1.0.x means "stable" as opposed to "growth", going back to the OP, and then you might never bump them beyond that. You might be 1.0.x forever.


major and minor are subjective, so it's up to the library maintainer. I can't really imagine a major version bump in most of my libraries that would stay compatible, to be honest. But I've bumped the minor version a few times on some projects. Go look at their changelogs.


This convo has been helpful. For me, the confusing bit is looking at libs like tools.deps (and others) which increment the minor and commit-count (not the major, though). However, I'm not sure of the criteria used to decide to increment the minor. I know it must be intentional though. For example, based on the above convo:
• why go from v0.6.8 to v0.7.0? (the third spot here does not appear to be a commit count)
• why go from 0.11.935 to v0.12.985?


The latter is major.minor.commits and so that's indicating a level of change between commit 935 and 985 that is enough to warrant a minor version bump.


As for the former, I can't say: maybe there was a glitch in 0.6.9? Maybe there was enough change after 0.6.8 to warrant a minor version bump?


write-pom - TBUILD-23 - specify explicit output path with :target
Update to tools.namespace 1.2.0
Update to tools.deps.alpha 0.12.1090


So I'm guessing the deps changes were enough to warrant a minor version bump?


Interesting. I would def like to understand the process as they apply it to minor version bumps. Maybe Alex is taking suggestions for the maintainers next sit down video? 😉


@U6GNVEWQG I think it's a pretty subjective decision, TBH.


I see, so that's why I've been thinking of going with only 2 numbers. The first would be 0 or 1. 0 would indicate still exploring the API, making no commitment to backward compatibility even within the 0 major. 1 would mean stable: backward compatibility is guaranteed at least throughout the 1 major (though maybe that's it, since for a major breakage I would just release under a new name/namespace). And then the second number just needs to increment when changes are ready to go, which I guess could just be commit count, or honestly, just manually bumping it would work.


And I think I'd decide between a new name/namespace or going to major 2 based on how big the breakage is. If I felt it was small, and most people would not be broken, I would go to 2. If I felt it was a big breaking change that could break all users, I'd release under a new name.


I guess it's kind of similar to semver, except I don't see the reason to split minor and patch really.