This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2021-12-13
Channels
- # adventofcode (36)
- # aleph (1)
- # announcements (7)
- # aws (4)
- # babashka (14)
- # beginners (61)
- # calva (79)
- # cider (19)
- # clojure (48)
- # clojure-austin (1)
- # clojure-australia (2)
- # clojure-czech (2)
- # clojure-europe (46)
- # clojure-france (8)
- # clojure-nl (19)
- # clojure-uk (4)
- # clojuredesign-podcast (14)
- # core-logic (42)
- # data-science (3)
- # datalevin (8)
- # datomic (76)
- # events (1)
- # figwheel-main (9)
- # fulcro (6)
- # helix (1)
- # holy-lambda (1)
- # honeysql (2)
- # jobs (2)
- # jobs-discuss (20)
- # leiningen (5)
- # lsp (87)
- # minecraft (11)
- # nextjournal (4)
- # off-topic (17)
- # practicalli (1)
- # reagent (22)
- # reitit (8)
- # releases (3)
- # rum (2)
- # shadow-cljs (18)
- # sql (11)
- # tools-build (5)
- # tools-deps (9)
- # xtdb (20)
@audiolabs It seems that an artifact is now missing when trying to resolve deps for com.datomic/datomic-pro: com.datomic:memcache-asg-java-client:jar:1.1.0.31
cannot be found
Error building classpath. Could not find artifact com.datomic:memcache-asg-java-client:jar:1.1.0.30 in central ( )
with datomic-pro com.datomic/datomic-pro {:mvn/version "1.0.6316"}
and
Downloading: com/datomic/datomic-pro/1.0.6344/datomic-pro-1.0.6344.jar from
Error building classpath. Could not find artifact com.datomic:memcache-asg-java-client:jar:1.1.0.31 in central ( )
with uncached com.datomic/datomic-pro {:mvn/version "1.0.6344"}
@U1PCFQUR3 thank you, I will look into this.
@U1PCFQUR3 and @U49RJG1L0 memcache-asg requires your http://my.datomic.com creds. As of the latest release the jar is also included with the Datomic download. You can run bin/maven-install within the directory for https://docs.datomic.com/on-prem/changes.html#1.0.6344 to install to your local maven repository.
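For anyone following along, the install step described above is roughly the following sketch (the directory name assumes the 1.0.6344 On-Prem zip; adjust for whichever version you downloaded):

```shell
# Assumes you downloaded and unzipped the Datomic On-Prem distribution;
# the exact directory name depends on the version you fetched.
cd datomic-pro-1.0.6344
bin/maven-install   # installs the bundled jars into your local ~/.m2 repository
```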
Previously, we did not need to download and install the complete datomic db for the CI. We provide the credentials in our project’s CI.
Hm, a different error occurs:
λ clj -A:test
Downloading: org/clojure/clojure/maven-metadata.xml from cloud-maven
Downloading: com/datomic/memcache-asg-java-client/1.1.0.31/memcache-asg-java-client-1.1.0.31.pom from cloud-maven
Downloading: com/datomic/memcache-asg-java-client/1.1.0.31/memcache-asg-java-client-1.1.0.31.jar from cloud-maven
Error building classpath. Could not find artifact com.datomic:memcache-asg-java-client:jar:1.1.0.31 in central ( )
I built a minimal repo for this: https://github.com/n2o/datomic-minimal-bug
The error occurs on two dev machines, both set up with credentials for http://my.datomic.com. Everything has worked as expected for the whole year, but since today / after your update the error appears.
@U1PCFQUR3 do your credentials in your maven settings.xml match what's in http://my.datomic.com currently?
We've found the issue and are working on a fix.
@U1PCFQUR3 I was able to reproduce using your repro, and have deployed a fix.
can you please retest and let me know
@audiolabs I might be seeing a different issue with an internal server error
api_1 | Caused by: org.eclipse.aether.resolution.ArtifactResolutionException: Could not transfer artifact com.datomic:datomic-pro:pom:1.0.6269 from/to (): status code: 500, reason phrase: Internal Server Error (500)
Approximately what time did this occur?
Yeah, here is a gist of a repro with the full stack https://gist.github.com/mfikes/48a66956e39ab928f7fa289d7c42aae1
Are you still encountering this error?
Just tried again and getting 500s https://gist.github.com/mfikes/e9e77d75d9b49ddde420b17bf692b8db
@U04VDQDDY what are you running to encounter this? What do your deps look like?
We also have the usual stuff in our .m2/settings.xml
like
<settings>
  <servers>
    <server>
      <id></id>
      <username>${MYDATOMIC_USERNAME}</username>
      <password>${MYDATOMIC_PASSWORD}</password>
    </server>
  </servers>
</settings>
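For reference, a typical pairing of settings.xml and deps.edn looks like the sketch below. The repo id and URL are the conventional values from the Datomic On-Prem docs; treat them as placeholders if your setup differs. The key point is that the `<id>` in settings.xml must exactly match the repo key in `:mvn/repos`, otherwise the credentials are never sent.

```xml
<!-- ~/.m2/settings.xml — the id must match the :mvn/repos key -->
<settings>
  <servers>
    <server>
      <id>my.datomic.com</id>
      <username>${MYDATOMIC_USERNAME}</username>
      <password>${MYDATOMIC_PASSWORD}</password>
    </server>
  </servers>
</settings>
```

and in deps.edn:

```clojure
;; deps.edn — the repo key matches the server id above
{:mvn/repos {"my.datomic.com" {:url "https://my.datomic.com/repo"}}}
```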
Sorry, it took a little while to find it @audiolabs. We have a :deps map entry like
com.datomic/datomic-pro {:mvn/version "1.0.6269"}
We found an error in the logs that appears to match your exception; however, all other downloads for that version are proceeding correctly. I'm working through understanding the differences now.
This is successfully downloading the artifact for me from my desktop
{:deps {com.datomic/datomic-pro {:mvn/version "1.0.6269"}}
:mvn/repos {"" {:url ""}}}
But the error occurs in the bowels of our Docker setup... seeing if I can figure out more
FWIW the artifact downloaded successfully in Docker a few times over the weekend, and the internal server error ultimately came back, leaving us where we are now
We deployed this current version around 11am eastern Sunday. Did you have successful downloads after that point?
I want to say that from that time afterwards things were failing for me, but I'm not 100% sure.
It appears that this may be an http request, not https. We only support https now
it may be unrelated, but it will be an issue
However I'm adding more logging to be certain
We've identified an issue with HEAD requests on files in the maven repo. If you can turn off HEAD requests it should work. Meanwhile we're working toward a solution.
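To illustrate why this breaks resolution: Maven/Aether clients may issue a HEAD request before the GET, and if the repo returns a 500 on HEAD the download is aborted even though the GET itself would have succeeded. Nothing below is Datomic-specific; it is just a minimal local demonstration of the HEAD/GET contract (same status and headers, body only on GET):

```python
# Toy server illustrating the HEAD/GET contract a maven repo must honor:
# HEAD returns the same status and headers as GET, but no body.
import http.server
import threading
import urllib.request

class Handler(http.server.BaseHTTPRequestHandler):
    def _respond(self, include_body):
        data = b"fake-artifact-bytes"
        self.send_response(200)
        self.send_header("Content-Length", str(len(data)))
        self.end_headers()
        if include_body:
            self.wfile.write(data)

    def do_GET(self):
        self._respond(True)

    def do_HEAD(self):
        self._respond(False)

    def log_message(self, *args):
        pass  # keep the demo quiet

server = http.server.HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_port}/artifact.jar"

head_resp = urllib.request.urlopen(urllib.request.Request(url, method="HEAD"))
get_resp = urllib.request.urlopen(url)
head_body = head_resp.read()
get_body = get_resp.read()
print(head_resp.status, len(head_body))  # 200 0  — headers only
print(get_resp.status, len(get_body))    # 200 19 — full body
server.shutdown()
```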
Could not transfer artifact com.datomic:datomic-pro:jar:1.0.6269 from/to (): Failed to transfer file with status code 500
@U09R86PA4 can you confirm that this is a HEAD request on the file that's failing?
It may work now, or very shortly, as we deployed a fix for HEAD requests.
this is through lein, so I’m not sure what it’s retrieving. Now I get a different status code (204)
Could not transfer artifact com.datomic:datomic-pro:jar:1.0.6269 from/to (): Failed to transfer file with status code 204
@U04VDQDDY is it working for you?
@U09R86PA4 we're looking into Lein now. We've always returned 204 (which should be the correct status code). So there is another issue somewhere.
Confirmed that Vouch is indeed back up (saw our server make it through to runtime, and also got a confirm from another Vouch team member). Thanks for the fast response @audiolabs!
@U09R86PA4 we're unable to reproduce issues with lein. Could you start a new message/thread with information about your configuration?
@conan We do not use log4j. We do not have a dependency on log4j. We include a bridge, log4j-over-slf4j,
which is only included for customers who use log4j. Therefore there is no Datomic vulnerability here unless you introduced log4j in your app.
If you want to learn more the best resource is here: http://www.slf4j.org/log4shell.html
@U1QJACBUM but the datomic image with AMI id i-0ccb21ac99b06cf35 uses java-11-amazon-corretto and java-11-amazon-corretto-headless, and it looks like those packages are vulnerable
@U1QJACBUM the AMI image creation date is "CreationDate": "2021-09-22T14:33:04.000Z"
@U1QJACBUM I can only guess that these AMIs have to be rebuilt with patched versions of java-11-amazon-corretto and java-11-amazon-corretto-headless
Hi @aleh_atsman We spent some time looking into this because we had another customer report this problem this morning from the AWS scanner (Amazon Inspector), and after reviewing everything in detail we do not believe we have exposure here; the issue is with the scanner. We do not use Log4j or JNDI, but end-user applications might (such as via ions). If you are such a user, you should upgrade your Log4j version (or find an alternative logging solution such as https://docs.datomic.com/cloud/ions/ions-monitoring.html#overview). In general, I think the AWS scanner (Amazon Inspector) is not correct. It is contradicting the security bulletin that Amazon wrote. Details:
• Per [V5] of the Log4j security bulletin https://aws.amazon.com/security/security-bulletins/AWS-2021-006/ the Corretto JVMs do not have vulnerabilities related to Log4j. The latest Amazon Corretto, released October 19th, is not affected by CVE-2021-44228 since the Corretto distribution does not include Log4j. We recommend that customers update to the latest version of Log4j in all of their applications that use it, including direct dependencies, indirect dependencies, and shaded jars.
• The Corretto HotPatch, mentioned in https://aws.amazon.com/blogs/security/open-source-hotpatch-for-apache-log4j-vulnerability/ , at the top of the AWS security bulletin for Log4j (https://aws.amazon.com/security/security-bulletins/AWS-2021-006/), and found on https://github.com/corretto/hotpatch-for-apache-log4j2/ , modifies running JVMs to completely disable the use of JNDI. Again, we don't use JNDI. The lack of applying this patch does not implicitly make the Corretto JVMs vulnerable to the Log4j CVE. (It has however caused headaches and hours of lost sleep, see https://github.com/corretto/hotpatch-for-apache-log4j2/issues/43)
• None of the agents or daemons from AWS are written in Java
Because I imagine this question will come up with other customers I went ahead and created a thread with the answer to log4j here: https://forum.datomic.com/t/datomic-and-log4j-cve-2021-44228-no-vulnerability-in-datomic/2013
This includes both Datomic On-Prem and Cloud right?
for ions, is getting the database always fast (less than 100ms)? I'm wondering if I should call d/connect every few minutes or something, but that seems a bit cargo-culty 😅
It's just that my app might hit a timeout and a user gets "interaction failed" --- I'll just make those interactions handle things potentially taking a bit now
I had a bug where something timed out, and one of the things it does is transact to Datomic. Not sure if that was the thing that took long, or if there was something else wrong. 😅
Yeah, I'm trying to get it for each unit of work. The use case is a Discord bot where, say, every 20s some command runs.
Separately, we cache all calls to connect. Unclear if it is recommended or not: https://ask.datomic.com/index.php/569/should-you-cache-d-connect-calls. It doesn't seem to have any impact.
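The caching mentioned above can be as little as wrapping d/connect in memoize; a minimal sketch follows. The URI is a placeholder, and as the linked thread suggests it is unclear whether this adds anything, since connect appears to cache connections per URI internally anyway (which would explain why it "doesn't seem to have any impact").

```clojure
;; Hypothetical sketch: memoizing d/connect per URI.
;; The URI below is a placeholder; replace with your storage/db.
(require '[datomic.api :as d])

(def cached-connect
  (memoize d/connect))

(defn conn []
  (cached-connect "datomic:dev://localhost:4334/my-db"))
```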