#datomic
2021-12-13
wegi09:12:45

@audiolabs It seems that an artifact is now missing: when trying to resolve deps for com.datomic/datomic-pro, com.datomic:memcache-asg-java-client:jar:1.1.0.31 cannot be found

n2o10:12:25

Error building classpath. Could not find artifact com.datomic:memcache-asg-java-client:jar:1.1.0.30 in central ()
with datomic-pro com.datomic/datomic-pro {:mvn/version "1.0.6316"} and
Downloading: com/datomic/datomic-pro/1.0.6344/datomic-pro-1.0.6344.jar from 
Error building classpath. Could not find artifact com.datomic:memcache-asg-java-client:jar:1.1.0.31 in central ()
with uncached com.datomic/datomic-pro {:mvn/version "1.0.6344"}

Robert A. Randolph12:12:01

@U1PCFQUR3 thank you, I will look into this.

jaret12:12:57

@U1PCFQUR3 and @U49RJG1L0 memcache-asg requires your http://my.datomic.com creds. As of the latest release, the jar is also included with the Datomic download. You can run bin/maven-install within the download directory for https://docs.datomic.com/on-prem/changes.html#1.0.6344 to install it to your local Maven repository.

n2o12:12:28

Hm, yes, I thought about that, but in the past this was not necessary, which was more convenient 😄

n2o12:12:33

Previously, we did not need to download and install the complete Datomic distribution for CI; we provide the credentials in our project’s CI.

n2o12:12:55

Hm, a different error occurs:

λ clj -A:test
Downloading: org/clojure/clojure/maven-metadata.xml from cloud-maven
Downloading: com/datomic/memcache-asg-java-client/1.1.0.31/memcache-asg-java-client-1.1.0.31.pom from cloud-maven
Downloading: com/datomic/memcache-asg-java-client/1.1.0.31/memcache-asg-java-client-1.1.0.31.jar from cloud-maven
Error building classpath. Could not find artifact com.datomic:memcache-asg-java-client:jar:1.1.0.31 in central ()

n2o12:12:21

The error occurs on two dev machines, both set up with credentials for http://my.datomic.com. Everything worked as expected the whole year, but since today / after your update the error appears.

Robert A. Randolph12:12:43

ok thank you! I'll be investigating.

👍 1
Robert A. Randolph12:12:34

@U1PCFQUR3 do your credentials in your maven settings.xml match what's in http://my.datomic.com currently?

n2o12:12:49

Yes, I can send you my account name via PM if you want to check the logs or similar.

Robert A. Randolph13:12:16

We've found the issue and are working on a fix.

n2o13:12:26

Uuuuh great, thanks 👍

Robert A. Randolph16:12:56

@U1PCFQUR3 I was able to reproduce using your repro, and have deployed a fix.

🎉 1
Robert A. Randolph16:12:45

can you please retest and let me know

n2o16:12:39

Yes, it works 🥳 thanks

mfikes13:12:11

@audiolabs I might be seeing a different issue with an internal server error

api_1                     | Caused by: org.eclipse.aether.resolution.ArtifactResolutionException: Could not transfer artifact com.datomic:datomic-pro:pom:1.0.6269 from/to  (): status code: 500, reason phrase: Internal Server Error (500)

👍 2
Robert A. Randolph13:12:25

Approximately what time did this occur?

mfikes13:12:51

Within the last 10 minutes.

mfikes13:12:51

I'm trying again now to see if it is still happening...

mfikes13:12:04

Yeah, here is a gist of a repro with the full stack https://gist.github.com/mfikes/48a66956e39ab928f7fa289d7c42aae1

Robert A. Randolph16:12:16

Are you still encountering this error?

mfikes16:12:54

Yes, just heard about it from another team member at Vouch; will confirm again myself

Robert A. Randolph17:12:13

@U04VDQDDY what are you running to encounter this? What do your deps look like?

mfikes17:12:30

In :mvn/repos we have a map entry with "" {:url ""}}

mfikes17:12:39

Looking for other places where we actually refer to it...

mfikes17:12:45

We also have the usual stuff in our .m2/settings.xml like

<settings>
    <servers>
        <server>
            <id></id>
            <username>${MYDATOMIC_USERNAME}</username>
            <password>${MYDATOMIC_PASSWORD}</password>
        </server>
    </servers>
</settings>

mfikes17:12:39

Sorry, took a little while to find it @audiolabs . We have a :deps map entry like

com.datomic/datomic-pro                            {:mvn/version "1.0.6269"}

Robert A. Randolph17:12:44

We found an error in the logs that appears to match your exception; however, all other downloads for that version are proceeding correctly. I'm working through understanding the differences now.

mfikes17:12:58

There must be more to this... as I can't repro with a simple deps file

mfikes17:12:47

This is successfully downloading the artifact for me from my desktop

{:deps {com.datomic/datomic-pro {:mvn/version "1.0.6269"}}
 :mvn/repos {"" {:url ""}}}

mfikes17:12:03

But the error occurs in the bowels of our Docker setup... seeing if I can figure out more

mfikes17:12:56

FWIW the artifact downloaded successfully in Docker a few times over the weekend, but the internal server error ultimately came back, leaving us where we are now

Robert A. Randolph17:12:07

We deployed this current version around 11am eastern Sunday. Did you have successful downloads after that point?

mfikes17:12:37

I want to say that from that time afterwards things were failing for me, but I'm not 100% sure.

Robert A. Randolph17:12:27

It appears that this may be an http request, not https. We only support https now

Robert A. Randolph17:12:40

it may be unrelated, but it will be an issue

Robert A. Randolph17:12:40

However I'm adding more logging to be certain

Robert A. Randolph18:12:25

We've identified an issue with HEAD requests on files in the Maven repo. If you can turn off HEAD requests it should work. Meanwhile, we're working towards a solution.

favila18:12:05

+1 here, I’m also getting this

favila18:12:16

Could not transfer artifact com.datomic:datomic-pro:jar:1.0.6269 from/to  (): Failed to transfer file  with status code 500

favila18:12:26

on something that definitely worked before

Robert A. Randolph19:12:26

@U09R86PA4 can you confirm that this is a HEAD request on the file that's failing?

Robert A. Randolph19:12:41

It may work now, or very shortly, as we deployed a fix for HEAD requests.

favila19:12:57

this is through lein, so I’m not sure what it’s retrieving. Now I get a different status code (204)

favila19:12:02

Could not transfer artifact com.datomic:datomic-pro:jar:1.0.6269 from/to  (): Failed to transfer file  with status code 204

mfikes19:12:31

Ahh good find. HEAD is in the stacktrace

mfikes19:12:37

It appears so @audiolabs thanks... I think Vouch is back up again 🙂

👍 1
Robert A. Randolph19:12:52

@U09R86PA4 we're looking into Lein now. We've always returned 204 (which should be the correct status code). So there is another issue somewhere.

mfikes19:12:35

Confirmed that Vouch is indeed back up (saw our server make it through to runtime, and also got a confirm from another Vouch team member). Thanks for the fast response @audiolabs!

🎉 1
Robert A. Randolph19:12:35

@U09R86PA4 we're unable to reproduce issues with lein. Could you start a new message/thread with information about your configuration?

favila14:12:57

The issue eventually went away

favila14:12:03

I have a new issue though! yay

conan15:12:33

Is there any risk to Datomic from the log4j vulnerability?

3
jaret15:12:08

@conan We do not use log4j, and we do not have a dependency on log4j. We include a bridge, log4j-over-slf4j, which is only there for customers who use log4j. Therefore there is no Datomic vulnerability here unless you introduced log4j in your own app. If you want to learn more, the best resource is here: http://www.slf4j.org/log4shell.html

💯 2
Aleh Atsman15:12:38

@U1QJACBUM but the Datomic image with AMI id i-0ccb21ac99b06cf35 uses java-11-amazon-corretto and java-11-amazon-corretto-headless, and it looks like those packages are vulnerable

Aleh Atsman15:12:41

@U1QJACBUM the AMI creation date is "CreationDate": "2021-09-22T14:33:04.000Z"

Aleh Atsman15:12:29

@U1QJACBUM I can only guess that these AMIs have to be rebuilt with patched versions of java-11-amazon-corretto and java-11-amazon-corretto-headless

jaret17:12:20

Hi @aleh_atsman We spent some time looking into this because another customer reported this problem this morning from the AWS scanner (Amazon Inspector), and after reviewing everything in detail we do not believe we have exposure here; the issue is with the scanner. We do not use Log4j or JNDI, but end-user applications might (such as via ions). If you are such a user, you should upgrade your Log4j version (or find an alternative logging solution such as https://docs.datomic.com/cloud/ions/ions-monitoring.html#overview). In general, I think the AWS scanner (Amazon Inspector) is not correct; it is contradicting the security bulletin that Amazon wrote. Details:
• Per [V5] of the Log4j bulletin https://aws.amazon.com/security/security-bulletins/AWS-2021-006/, the Corretto JVMs do not have vulnerabilities related to Log4j. The latest Amazon Corretto, released October 19th, is not affected by CVE-2021-44228 since the Corretto distribution does not include Log4j. We recommend that customers update to the latest version of Log4j in all of their applications that use it, including direct dependencies, indirect dependencies, and shaded jars.
• The Corretto hotpatch, mentioned in https://aws.amazon.com/blogs/security/open-source-hotpatch-for-apache-log4j-vulnerability/, at the top of the AWS security bulletin for Log4j (https://aws.amazon.com/security/security-bulletins/AWS-2021-006/), and found on https://github.com/corretto/hotpatch-for-apache-log4j2/, modifies running JVMs to completely disable the use of JNDI. Again, we don't use JNDI. The lack of applying this patch does not implicitly make the Corretto JVMs vulnerable to the Log4j CVE. (It has, however, caused headaches and hours of lost sleep: https://github.com/corretto/hotpatch-for-apache-log4j2/issues/43)
• None of the agents or daemons from AWS are written in Java

jaret15:12:36

Because I imagine this question will come up with other customers, I went ahead and created a forum thread with the log4j answer here: https://forum.datomic.com/t/datomic-and-log4j-cve-2021-44228-no-vulnerability-in-datomic/2013

👍 8
Dimitar Uzunov21:12:52

This includes both Datomic On-Prem and Cloud, right?

jaret21:12:35

Yes.

datomic 2
☝️ 1
Benjamin18:12:00

For ions, is getting the database always fast (less than 100ms)? I'm wondering if I should call d/connect every few minutes or something, but that seems a bit cargo-culty 😅

Benjamin18:12:08

It's just that my app might hit a timeout and a user gets "interaction failed" --- I'll just make those interactions handle things potentially taking a bit now

kenny18:12:00

What problem are you trying to solve?

Benjamin18:12:17

I had a bug where something timed out, and one of the things it does is transact to Datomic. Not sure that was the thing that took long, or if there was something else wrong. 😅

Joe Lane18:12:30

@U02CV2P4J6S Once you have a connection you don't need to "refresh" it.

✔️ 1
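A minimal sketch of that pattern for an ion handler (the client config values below are placeholders, not from this thread): create the client and connection once, then take a fresh db value per unit of work.

(ns example.bot
  (:require [datomic.client.api :as d]))

;; placeholder config — substitute your own system/region/endpoint
(def client
  (d/client {:server-type :ion
             :region      "eu-central-1"
             :system      "my-system"
             :endpoint    "http://entry.my-system.eu-central-1.datomic.net:8182/"}))

;; connect once and reuse; no need to "refresh" the connection
(defonce conn (d/connect client {:db-name "my-db"}))

(defn handle-command
  "Run one unit of work against a fresh db value from the cached connection."
  [query]
  (d/q query (d/db conn)))
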
Benjamin18:12:48

Yeah, I'm trying to get it for each unit of work. The use case is a Discord bot where, say, every 20s some command runs.

kenny18:12:31

In general, all remote calls should be wrapped in a retry.
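
A small sketch of what that can look like (with-retry is a hypothetical helper, not a Datomic API; the retry count and backoff are arbitrary, and in practice you'd only retry transient anomalies):

(defn with-retry
  "Call f, retrying up to max-tries times with linear backoff.
  Hypothetical helper; in practice, inspect the exception and only
  retry transient anomalies such as :cognitect.anomalies/unavailable."
  [f & {:keys [max-tries sleep-ms] :or {max-tries 3 sleep-ms 500}}]
  (loop [attempt 1]
    (let [result (try
                   {:ok (f)}
                   (catch Exception e
                     (if (< attempt max-tries)
                       {:retry e}
                       (throw e))))]
      (if (contains? result :ok)
        (:ok result)
        (do (Thread/sleep (* attempt sleep-ms))
            (recur (inc attempt)))))))

;; usage, client-API style:
(with-retry #(d/transact conn {:tx-data tx-data}))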

kenny18:12:09

You should figure out which call is failing: transact or connect.

👍 1
Benjamin18:12:22

Ah, I wasn't doing it for the transactions yet; will add it

kenny18:12:21

Separately, we cache all calls to connect. Unclear if it is recommended or not: https://ask.datomic.com/index.php/569/should-you-cache-d-connect-calls. It doesn't seem to have any impact.

👍 1
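The caching kenny mentions can be as small as this (a sketch, assuming the client value from the earlier example; as the ask.datomic answer suggests, the client may already cache internally, so this is mostly belt and braces):

;; memoize connect so repeated lookups return the same connection value
(def cached-connect
  (memoize (fn [client db-name]
             (d/connect client {:db-name db-name}))))

(cached-connect client "my-db")   ; same args => cached connection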