# tools-deps
Hi. I am updating our CI images to use Clojure CLI 1.10.3.967, up from 1.10.3.855. After the update, the tests for a project failed by timing out after 10 minutes. The entirety of the test output was as follows.
I will be rolling back to the earlier version, but I am curious: is this something others have hit? Are there particular steps I should follow when upgrading from 855 to 967?
Downloading: com/datadoghq/dd-trace-api/0.86.0/dd-trace-api-0.86.0.pom from central
Downloading: org/clojure/clojure/maven-metadata.xml from clojars
Downloading: org/clojure/clojure/maven-metadata.xml from central
Downloading: org/clojure/clojure/maven-metadata.xml from datomic-cloud
I think this may be specific to the datomic-cloud repo; maybe I should move this to #datomic
no, I think there may have been changes in one of the datomic repos
we've bumped some aws/s3 deps in that version range, but I have not seen anything like what you describe
we did make some changes in the http-client config for s3 in that version range
All of these tests are run with -Sforce. I don't have a readily available deps.edn. I also don't think this is related to Datomic. In a test run with 943, the CLI output was just this.
cs-mvn is an s3 repo.
Downloading: org/clojure/clojure/maven-metadata.xml from cs-mvn
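(For context: an S3-backed repo like cs-mvn is declared under :mvn/repos in deps.edn. Below is a minimal sketch; the bucket and path are hypothetical, and only the :mvn/repos key and the s3:// URL scheme are the real convention.)

```clojure
;; deps.edn (sketch): declaring an S3-backed Maven repo like cs-mvn.
;; The bucket and path here are hypothetical; tools.deps fetches from
;; s3:// repo URLs using the AWS credentials available to the process.
{:mvn/repos
 {"cs-mvn" {:url "s3://example-bucket/maven/releases"}}}
```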
It seems like a deadlock, but that would be very speculative. I'm surprised the requests aren't timing out.
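(A general JVM technique for testing the deadlock theory, not something from this thread: dump every thread's stack while the process is hung, e.g. from a REPL attached to the stuck JVM.)

```clojure
;; Sketch: print each live thread's state and stack. Threads parked in
;; lock acquisition or Future.get are the deadlock suspects.
(doseq [[thread stack] (Thread/getAllStackTraces)]
  (println "===" (.getName ^Thread thread) (.getState ^Thread thread))
  (doseq [element stack]
    (println "  " (str element))))
```

Running `jstack <pid>` against the hung process gives the same view, and will flag any monitor deadlock the JVM can detect.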
If it helps, here's the command I'm running:
/home/circleci/clj-1.10.3.943/bin/clojure -Sforce -J-Dclojure.main.report=stderr -J-Xmx3800m -A:test:test-runner -M -m kaocha.runner --reporter kaocha.report/documentation --plugin profiling --plugin kaocha.plugin/junit-xml --junit-xml-file test-results/kaocha/results.xml
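(Aside: the kaocha flags in that command can equivalently live in a tests.edn file. A rough sketch using kaocha's documented config keys:)

```clojure
;; tests.edn (sketch): same reporter and plugins as the CLI flags above.
#kaocha/v1
{:reporter [kaocha.report/documentation]
 :plugins  [:kaocha.plugin/profiling
            :kaocha.plugin/junit-xml]
 :kaocha.plugin.junit-xml/target-file "test-results/kaocha/results.xml"}
```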
well, that's interesting. don't think it necessarily has anything to do with s3 based on that
it could indeed be a deadlock in the session locks. I'll have to think about this more. you might be able to bypass with
I know close to 0 about the maven lib, so take this with a grain of salt... if they are using a fixed-size thread pool to get s3 creds in parallel, that could easily result in a deadlock.
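(To make that failure mode concrete, here is a generic pool-starvation sketch; it is not maven-resolver's actual code. If tasks on a fixed-size pool block waiting on sub-tasks submitted to the same pool, such as a credential fetch, the pool wedges permanently:)

```clojure
(import '(java.util.concurrent Callable ExecutorService Executors))

;; Two worker threads, shared by "downloads" and their nested "cred fetches".
(def ^ExecutorService pool (Executors/newFixedThreadPool 2))

(defn download
  "Simulated download that blocks on a nested task run on the same pool."
  [n]
  (let [creds (.submit pool ^Callable (fn [] (str "creds-" n)))]
    (.get creds)))  ; occupies a pool thread while it waits

(comment
  ;; Both worker threads get stuck in .get while their nested cred-fetch
  ;; tasks sit queued behind them, so nothing ever completes: a deadlock.
  (doseq [n (range 2)]
    (.submit pool ^Callable #(download n))))
```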
there are several layers to this problem, but I don't think it has anything to do with s3
there have been changes in the session caching I'm doing, and in the underlying maven lib. I suspect my changes at the moment. :)
I'm probably not going to get to it today, but I will take a look soon
hey kenny, I have not succeeded in reproducing this or figuring it out, but I did spend some time looking at the various versions of the libs. I think I was using maven resolver libs (1.7.x) that require maven core libs (4.0 alpha+), so I've fallen back to the 1.6.x series there. they have been reworking the concurrency and locking parts of maven in 1.7.x; I don't see that directly implicated, but it might be related. if you wanted to try Clojure CLI 1.10.3.981, it's available.
I tried the work-it-backwards-by-commenting-things-out approach, and I cannot discern any pattern at all. I'll send you the smallest deps.edn I worked it back to. Any other change, such as commenting something out or moving local/root deps into this deps.edn, results in no hang. Let me know if there's any other info you're interested in.
I am pretty sure I understand the source of the problem, and I introduced it