#datomic
2019-11-28
yannvahalewyn 01:11:03

> org.eclipse.aether.resolution.ArtifactDescriptorException: Failed to read artifact descriptor for com.datomic:datomic-pro:jar:0.9.5981

Not sure what to look for to debug this. I followed the steps on my.datomic (~/.m2/settings.xml and :mvn/repos are set correctly). Can someone nudge me in the right direction?

yannvahalewyn02:11:55

If anyone found this by googling the error, verify you have this in settings.xml:

<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0
                              http://maven.apache.org/xsd/settings-1.0.0.xsd">
  <servers>
    <server>
      <id>my.datomic.com</id>
      <username>{email}</username>
      <password>{password}</password>
    </server>
  </servers>
</settings>
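
For reference, the matching deps.edn side looks roughly like the sketch below; the version is the one from the error above, the repo id must match the <id> in settings.xml, and the repo URL is the one documented on my.datomic.com.

;; deps.edn (sketch): pull datomic-pro from the my.datomic.com repo.
;; The "my.datomic.com" key must match the <id> in ~/.m2/settings.xml
;; so tools.deps can find the credentials configured there.
{:deps {com.datomic/datomic-pro {:mvn/version "0.9.5981"}}
 :mvn/repos {"my.datomic.com" {:url "https://my.datomic.com/repo"}}}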

yannvahalewyn 02:11:47

The issue was that I just copied over the settings.xml from my.datomic, but the example there is not a complete file, just the one server entry. It does link to the Maven docs, but I didn’t notice that. For younger devs like me with no Maven experience, it’s not super intuitive what is expected.

yannvahalewyn 02:11:23

It took me a full 40 minutes to figure this out, and who has time for that? 😄 Any plans to streamline onboarding a bit? A better, working example would be useful imo, especially one for pulling in the peer library.

yannvahalewyn 15:11:02

I noticed other devs of various levels of experience share my feelings about the onboarding. That seems like a shame, since big improvements could be made with a couple of simple steps. This is the first introduction to otherwise amazing software, and it may turn potential customers away early.

kardan 12:11:28

I was trying to split my CloudFormation (Datomic Cloud) stack to do an upgrade for the first time. When deleting the root stack I get an error for the nested compute stack: DELETE_FAILED, LambdaSecurityGroup resource <uuid> has a dependent object (Service: AmazonEC2; Status Code: 400; Error Code: DependencyViolation; Request ID: <uuid>). Does anyone have pointers on what to do or read up on?

jaret 16:11:44

In general, Datomic delete/upgrade will only delete/modify resources it created. If any of the resources it uses have been modified it will not delete that resource. Have you changed the security group or added any resources to the security group?

marshall 16:11:34

This is likely the lambda ENI deletion delay issue

marshall 16:11:00

@U051F5T93 after you’ve waited an hour or so, try deleting again

kardan 16:11:26

I’ll try again. I don’t think I created much more than what’s in the guides (but this was a while ago, so I might be wrong on this). I’ll need to be off to handle kids and stuff for a while, so I’ll check in later to see if it succeeds. Thanks for the pointers.

marshall 16:11:08

There is a recent change in how aws handles lambda enis that affects their deletion. The current solution from aws is "wait an hour and try again"

kardan 04:11:42

Tried twice (with a night’s sleep in between) and failed again. Could it be a problem that I created a web lambda before splitting the stack?

kardan 07:11:18

Hitting my connected API gateway with a browser it now responds with 500 Internal Server Error

kardan 07:11:49

(this is however not anything in production)

marshall 18:11:50

The lambda should be deleted, unless you created something manually out of band

marshall 18:11:28

You can delve into the error in the cloudformation stack and determine what specifically failed to delete

marshall 18:11:00

If it is a lambda ENI, that is caused by a recent change aws made to vpc resident lambdas

marshall 18:11:40

You may need to look in the vpc console or the list of security groups to determine what resources are still present

kardan 19:11:14

Ok, will dig in deeper

kardan 07:11:59

Deleted the lambda security group manually and then went on to delete everything. Will start over from scratch again. Thanks for the pointers.

bartuka 20:11:29

I was experiencing some issues between the async/datomic parts of my project and decided to perform a small experiment. I wrote a simple query that returns 5 entity ids and used go blocks to emit some parallel queries against my database [I'm on Datomic Cloud]. This is the whole code:

;; Requires needed to run this (config is a Datomic Cloud client config map):
(require '[datomic.client.api :as d]
         '[clojure.core.async :as async])

(defonce client (d/client config))
(defonce connection (d/connect client {:db-name "secland"}))

(defn query-wallet []
  ;; Synchronous (blocking) query against the current db value.
  (-> (d/q '[:find ?e
             :where
             [?e :wallet/name _]]
           (d/db connection))
      count
      println))

;; Fire off 9 "parallel" queries from go blocks.
(dotimes [_ 9] (async/go (query-wallet)))
If my dotimes is less than 8, it works fine and prints my results. However, with 9+ parallel queries, it hangs and nothing happens. From the terminal, the output of the ssh tunnel is only:
debug1: channel 3: free: direct-tcpip: listening port 8182 for  port 8182, connect from 127.0.0.1 port 45492 to 127.0.0.1 port 8182, nchannels 11
debug1: channel 10: free: direct-tcpip: listening port 8182 for  port 8182, connect from 127.0.0.1 port 45506 to 127.0.0.1 port 8182, nchannels 10
debug1: channel 6: free: direct-tcpip: listening port 8182 for  port 8182, connect from 127.0.0.1 port 45498 to 127.0.0.1 port 8182, nchannels 9
debug1: channel 7: free: direct-tcpip: listening port 8182 for  port 8182, connect from 127.0.0.1 port 45500 to 127.0.0.1 port 8182, nchannels 8
debug1: channel 8: free: direct-tcpip: listening port 8182 for  port 8182, connect from 127.0.0.1 port 45502 to 127.0.0.1 port 8182, nchannels 7
debug1: channel 2: free: direct-tcpip: listening port 8182 for  port 8182, connect from 127.0.0.1 port 45410 to 127.0.0.1 port 8182, nchannels 6
debug1: channel 4: free: direct-tcpip: listening port 8182 for  port 8182, connect from 127.0.0.1 port 45494 to 127.0.0.1 port 8182, nchannels 5
debug1: channel 5: free: direct-tcpip: listening port 8182 for  port 8182, connect from 127.0.0.1 port 45496 to 127.0.0.1 port 8182, nchannels 4
debug1: channel 9: free: direct-tcpip: listening port 8182 for  port 8182, connect from 127.0.0.1 port 45504 to 127.0.0.1 port 8182, nchannels 3
I would like to know more about this issue. Why 7 parallel processes? This query is super simple and fast. Is this a configuration issue?

Joe Lane 20:11:26

You're doing blocking IO inside of a go block. Never do this. In core.async there are 8 threads in the go-block threadpool; when you perform blocking IO in a go block you can deadlock that threadpool. If you are using the client API in a non-ion project, you can use the async part of the API ( https://docs.datomic.com/client-api/datomic.client.api.async.html ) to leverage core.async from the Datomic client. The async part of the client API DOES NOT WORK IN AN ION.

Joe Lane 20:11:03

You would have this problem with anything doing blocking IO in go blocks.
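
To make that concrete, here is a minimal sketch of the usual fix, assuming the query-wallet function from the snippet above: hand the blocking call to core.async/thread, which runs it on a separate, unbounded threadpool instead of the 8-thread go pool.

(require '[clojure.core.async :as async])

;; async/thread executes its body on core.async's unbounded thread pool,
;; so a blocking Datomic query here cannot starve the go-block threads.
;; It returns a channel that yields the body's return value.
(dotimes [_ 9]
  (async/thread (query-wallet)))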

bartuka 22:11:17

Alright!! thanks for the explanation. I will change the implementation

bartuka 23:11:32

I tried to implement this using the Datomic async library.

;; Requires needed to run this:
(require '[datomic.client.api.async :as d-async]
         '[clojure.core.async :as async])

(defonce client-async (d-async/client config))
(def ch-conn (d-async/connect client-async {:db-name "secland"}))
(def out-chan (async/chan 10))
(def times 9)

(dotimes [_ times]
  (async/go
    (->> (d-async/q {:query '[:find ?e
                              :where
                              [?e :wallet/name _]]
                     :args [(d-async/db (async/<! ch-conn))]})
         async/<!
         (async/>! out-chan))))

(dotimes [_ times]
  (println (count (async/<!! out-chan))))
But I still get the same error. When I change to a version using async/thread it works fine. So probably I am still doing blocking IO somewhere. Can you spot the error?
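
The working async/thread version isn't shown in the thread; a sketch of what it presumably looked like, reusing ch-conn, out-chan, and times from above: take the connection once, then do the blocking takes inside async/thread, whose unbounded pool is safe to block on.

;; Take the connection once, outside the loop.
(def conn (async/<!! ch-conn))

(dotimes [_ times]
  (async/thread
    ;; Blocking takes (<!!, >!!) are fine here: async/thread runs on
    ;; its own unbounded pool, not the 8-thread go-block pool.
    (->> (async/<!! (d-async/q {:query '[:find ?e
                                         :where
                                         [?e :wallet/name _]]
                                :args [(d-async/db conn)]}))
         (async/>!! out-chan))))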