#clojure
2020-05-01
didibus00:05:28

Are we talking Emacs? Otherwise I think you can use zprint, it's a total formatter, so it'll rewrite everything exactly as defined
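
A rough sketch of that (assuming zprint's zprint.core/zprint-str, which takes a form and returns the formatted string):

(require '[zprint.core :as zp])

;; zprint lays the whole form out according to its own rules,
;; regardless of how the source was originally formatted
(zp/zprint-str '(defn hello [name] (println "Hello," name)))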

Adam Helins07:05:08

Is there a particular reason CLJ does not have the #queue reader macro unlike CLJS?

andy.fingerhut08:05:34

Most likely that it is simply lower priority on the Clojure core team's list than other things.

andy.fingerhut08:05:15

You can define a reader tag of your own that achieves the desired effect.

andy.fingerhut08:05:50

Note: My "most likely" above is better stated as "My best guess is". I am not a part of the Clojure core dev team, nor can I read their minds.
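
For example, a minimal sketch of such a tag (hypothetical names; note that unqualified tags like #queue are reserved for Clojure itself, so a user-defined tag needs a namespace):

;; data_readers.clj at the root of the classpath
{my/queue my.readers/read-queue}

;; src/my/readers.clj
(ns my.readers)

(defn read-queue
  "Reader function: builds a PersistentQueue from a vector literal."
  [v]
  (into clojure.lang.PersistentQueue/EMPTY v))

;; usage: #my/queue [1 2 3]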

Adam Helins11:05:17

I guess so. The problem with defining your own reader is that things get messy when your code base gets mixed with another project.

Eamonn Sullivan10:05:46

Hi all, the output of a script I'm writing is a blob of JSON, but a blob that is meant for human editing/reading. I'd like to output it to a file pretty-printed. I'm using clojure/data.json, which has pprint, but that goes to *out*. What's the (I'm sure one-liner) idiomatic way of doing that? Right now I'm just doing (spit "filename.json" (json/write-str result)).

simongray11:05:01

Wrap your printing function with (with-out-str)

noisesmith15:05:11

if you are in clojure rather than cljs, you can use the second arg of pprint to supply a writer

noisesmith15:05:33

but often with-out-str is easier code-wise - just don't forget the writer option in other cases

Eamonn Sullivan15:05:38

Thanks. I didn't know about that second arg (or, rather, I saw it, but didn't know how to use it). I'll keep it in the toolbox, though. For this particular simple case with-out-str worked fine.
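
Both approaches, as a sketch (assuming clojure.data.json aliased as json, and the result value from above):

(require '[clojure.data.json :as json]
         '[clojure.java.io :as io])

;; capture *out* with with-out-str
(spit "filename.json" (with-out-str (json/pprint result)))

;; or bind *out* to a file writer for the duration
(with-open [w (io/writer "filename.json")]
  (binding [*out* w]
    (json/pprint result)))

;; the "second arg" noisesmith mentions is clojure.pprint/pprint's:
;; (clojure.pprint/pprint result w) writes to the writer w directly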

Eamonn Sullivan10:05:28

result is edn at that point.

otwieracz12:05:50

Hmmm. I’ve never done any “interactive” REPL utilities and I’ve got a question - how do I do (read-line) in the Cider REPL but with “local echo”, so I can see what I am typing?

noisesmith15:05:37

that sounds like an emacs question, as (read-line) in a standard repl or client does echo as expected

noisesmith15:05:52

the fix won't be in clojure, it will be in elisp

worlds-endless15:05:59

Hey all -- which talk was it where one of the Stus talked about how "repl-driven development" is for them never done with an actual REPL window?

worlds-endless15:05:32

I'm guessing it's Running with Scissors

dominicm15:05:35

@worlds-endless it was mentioned on the ClojureScript podcast I think, but I think https://www.youtube.com/watch?v=Qx0-pViyIDU is what you're after

John Maruska16:05:02

probably a longshot but I'm trying to use discord.clj (https://github.com/gizmo385/discord.clj), and for some reason I can't get it to honor mentions, so when I (bot/say "@MyUsername") it doesn't actually ping, just raw text. has anyone used the library that may be able to help me out?

tianshu17:05:49

is there a known bug when using a namespaced keyword in = in clojure.test? I can reproduce it with

(t/deftest test-a
  (t/testing ""
    (t/is (= ::s/invalid 1))))
s is an alias for clojure.spec.alpha. The error is: Call to clojure.core/let did not conform to spec.

seancorfield17:05:56

@doglooksgood The problem is that particular keyword -- it will cause macro-expansion to fail.

seancorfield17:05:07

(because Spec is checking the macro-expansion)

didibus17:05:05

How is that? Did clojure.test special case it?

seancorfield17:05:39

Nothing to do with clojure.test.

seancorfield17:05:38

user=> (let [x ::s/invalid] x)
Syntax error macroexpanding clojure.core/let at (REPL:7:1).
:clojure.spec.alpha/invalid - failed: any? at: [:bindings :init-expr] spec: :clojure.core.specs.alpha/bindings
user=> 

didibus17:05:13

So the compiler itself special cases it?

didibus17:05:41

The spec validation of the let macro

seancorfield17:05:43

No. It's just Spec checking in macro definitions.

seancorfield17:05:07

It can be very surprising when you first trip over it 😐

didibus17:05:07

Ya, it's a funny edge case

noisesmith17:05:38

isn't there a function in clojure.spec that already does #(= ::s/invalid %)

seancorfield17:05:45

You are better off using s/invalid? to test that

💯 8
seancorfield17:05:39

It's a weird special case -- you can run into it sometimes when you try to write your own Spec predicates and conformers and you want to return ::s/invalid yourself.
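
A small sketch of the difference (assuming clojure.spec.alpha aliased as s):

(require '[clojure.spec.alpha :as s])

;; writing the literal ::s/invalid into a form trips spec's macro checks:
;; (let [x ::s/invalid] x) ; Syntax error macroexpanding clojure.core/let
;; s/invalid? tests for the sentinel without embedding the literal:
(s/invalid? (s/conform int? "x")) ;=> true
(s/invalid? (s/conform int? 42))  ;=> false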

tianshu17:05:51

Yes, I can use s/invalid? for this case. Thank you!

tianshu17:05:09

I just ran into it when I wrote a function that returns ::s/invalid, so I created a variable called invalid to hold this value.

didibus17:05:06

So spec will validate the input passed to macros at macro-expansion time. But ::s/invalid is the sentinel spec looks for to know that a spec validation failed. So when validating the macro input, if the input contains that keyword, spec thinks the input is invalid and thus fails the spec validation for the macro

❤️ 4
didibus17:05:11

That's what's happening

tianshu17:05:03

makes sense, thanks!
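
The same confusion shows up outside macros too; a one-liner that illustrates the sentinel problem didibus describes:

(s/valid? any? ::s/invalid) ;=> false, even though (any? ::s/invalid) is true
;; conform returns the value itself, and that value *is* the failure sentinel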

didibus17:05:57

How does spec validation for macros get bootstrapped?

hiredman17:05:25

it happens at macro expansion time, not macro creation time

didibus17:05:12

So that has to be a special compiler step? Like there's no way I can attach myself to the macro-expansion in a similar way?

seancorfield17:05:50

@didibus not sure what you mean by "attach myself to the macro-expansion"?

tianshu17:05:26

We have a very old system that has been running for years, thanks to how stable Clojure is. And it has some hotfix patches applied through the REPL, but they are not committed to the repository. Is there a way to find out which functions were updated via the REPL?

noisesmith17:05:54

the repl input history...

noisesmith17:05:06

the source macro uses metadata plus files / jars

noisesmith17:05:42

you could dump the class files created by the functions and attempt to reverse engineer using a decompiler :D

noisesmith18:05:30

in all seriousness, there's no way to get the form back if your repl didn't store it in a readline history file or something

tianshu18:05:24

My first idea is that a function updated via the REPL may have incorrect meta compared to the original one. The line number could be wrong

noisesmith18:05:08

when you use defn in the repl no useful file / line number is attached to the metadata

bfabry18:05:15

^ this is an... amazing problem

noisesmith18:05:43

and there's no source of data that you can go look up (unless your repl stores it somewhere which is why I bring up the readline lib which creates logs on disk)

noisesmith18:05:29

the class file exists (it "is the function" per se), but you'd need to dump that, and you probably don't want to make the class file part of your repo - you want code you can debug / edit

noisesmith18:05:29

a decompiler will make weird looking code, but at least it's code :D

bfabry18:05:42

oooh here's an idea

user=> (defn foo [x]
(println x))
#'user/foo
user=> (ns-interns *ns*)
{foo #'user/foo}
user=> (meta (get (ns-interns *ns*) 'foo))
{:arglists ([x]), :line 1, :column 1, :file "NO_SOURCE_PATH", :name foo, :ns #object[clojure.lang.Namespace 0x2440022a "user"]}
user=> (:file (meta (get (ns-interns *ns*) 'foo)))
"NO_SOURCE_PATH"

bfabry18:05:02

that at least lets you iterate the ns's and find out which things have been modified

noisesmith18:05:36

oh - right, you can do (->> (all-ns) (mapcat (comp vals ns-publics)) (map meta) ...)

bfabry18:05:39

then yeah.. decompiler it is

noisesmith18:05:54

something like that, to get all vars and then filter by "NO_SOURCE_PATH" to find all the redefs

noisesmith18:05:04

scratch=> (->> (all-ns)
     (mapcat (comp vals ns-publics))
     (filter (comp #{"NO_SOURCE_PATH"} :file meta)))

(#'scratch/exercise-oea)

❤️ 4
👍 4
noisesmith18:05:50

so 🚢 -it - the function that finds all repl redefs is done :D
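
Packaged up, the approach above might look like this (note the "NO_SOURCE_PATH" marker is what a plain clojure.main REPL records; nREPL-based REPLs may record something else in :file):

(defn repl-redefs
  "Public vars whose metadata carries no real source file,
  i.e. likely (re)defined at the REPL."
  []
  (->> (all-ns)
       (mapcat (comp vals ns-publics))
       (filter (comp #{"NO_SOURCE_PATH"} :file meta))))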

noisesmith18:05:24

getting bytes for what changed might be harder... a debugger might help

tianshu18:05:24

A list of what has been changed is enough for my case.

tianshu18:05:10

Maybe we can find those patches in repl history.

Chris Lester18:05:38

Does anyone have (useful :)) opinions on distributed transaction coordination in Clojure microservices? Moving storage work out of a Java monolith and now have some new business rules that are forcing me into distributed writes on multiple data sources. Found Immutant.XA but so far haven't found much else (or alternatives using a stream processor ... which I suppose I could use to orchestrate commits and compensations myself).

ghadi18:05:46

IMHO distributed transactions are a myth

ghadi18:05:42

I would start by enumerating the specific operations in your domain model and what changes each entails

ghadi18:05:53

in a spreadsheet / table

ghadi18:05:25

deferring any implementation choices until you have a spreadsheet of ops

Chris Lester19:05:55

That is a good suggestion, and is what I'm doing to begin thinking about it. I'm not sure I agree that they are a myth (they are a thing), but distributed transactions are difficult to get right (on your own) and most people use eventual consistency now. However, the modeling exercise will take a few hours at most to do (today :)), and then I'm back to looking for information on the patterns people are using for this problem (in addition to those I'm familiar with). So I'd still appreciate it if people want to share their opinions.

ghadi19:05:10

you are presenting a false dichotomy between "distributed transactions" and eventual consistency

ghadi19:05:27

you might end up with information that half of your operations do not need database involvement at all, and instead belong on a queue

ghadi19:05:37

microservices + distributed transactions are a paradox

Chris Lester19:05:33

I tend to agree with the last statement (an opinion, but not easily provable), but the gist of this is feeling more like the definition of "not useful".

phronmophobic19:05:25

since you used to have a monolith, you could potentially create a single “transactor” whose only job is to group changes into meaningful transactions, while still having multiple microservices that feed the transactor. trying to make something consistent out of multiple dbs is always tricky

seancorfield19:05:58

I'll back @ghadi up here @activeghost: with better analysis, you may well find you don't actually need XA stuff at all and would be better off with a combination of regular TX, queues, self-correction, and eventual consistency. An apparent need for XA might also suggest an incorrect division across services (the rush to microservice architecture can often create more orchestration than is really needed because of artificial/arbitrary isolation of things).

phronmophobic19:05:29

with a single transactor, you at least have the opportunity to create something like transaction ids that can be used across the dbs. unfortunately, you’re basically building a database out of other databases at this point

Chris Lester19:05:27

I don't disagree with that @seancorfield; I don't really want to adopt the XA stuff, and I've read through enough of the research on distributed transactions in microservices to know it's likely a bad idea. I might be wrong, however, so I'm asking (and others likely have a lot of experience worth listening to). I just felt that arguing over a projected world view of mine is not useful.

Chris Lester19:05:46

It wasn't the rest of the content. @smith.adriane yes, that feels like it gets into a lot of complexity.

hiredman19:05:24

Yeah, cross-db transactions are doomed, but if you just have some hip microservice architecture all talking to the same database, a transactor-like microservice can work: you ask the service to create a tx, then pass the created tx around to different services to add their stuff, then you ask the transactor to commit. But things like reads in transactions get tricky

phronmophobic19:05:52

I wouldn’t say doomed, but it’s definitely not fun. unfortunately, it’s not always possible to migrate all your disparate dbs into a single unified db. it’s really common for companies to end up with multiple dbs that start out completely separate, but at some point need to be combined

hiredman19:05:28

It is already happening across companies

hiredman19:05:14

Like, it is already the case that I can't have a database transaction that encompasses calling the braintree api

hiredman19:05:00

So I use database-implementation-like techniques when using the braintree api (write-ahead log and recovery)
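
A rough sketch of that pattern (hypothetical helpers; log-intent!, mark-done!, and pending-intents stand in for real durable storage, e.g. a DB table):

;; write-ahead: persist the intent, perform the side effect, record completion
(defn with-wal [log-intent! mark-done! intent-id effect!]
  (log-intent! intent-id)
  (let [result (effect!)]
    (mark-done! intent-id result)
    result))

;; recovery at startup: anything logged but never completed
;; gets retried or compensated
(defn recover! [pending-intents retry-or-compensate!]
  (doseq [intent (pending-intents)]
    (retry-or-compensate! intent)))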

ghadi19:05:43

Collecting ops, semantics, side effects (emails generated, notifs, external APIs) will pay repeated dividends in understanding a system. It's hard to make a generic answer here. I hope I didn't come across as antagonistic @activeghost

Chris Lester19:05:46

It was read a tad bit antagonistically but probably because I've had long ranging arguments on FB recently around coronavirus 🙂

ghadi19:05:05

icing on the internet cake

ghadi19:05:08

coordination of all sorts is really expensive, and best avoided

hiredman19:05:31

If you can rewrite updates to compare and set type operations you can decouple reading and writing which can help
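
The same idea as Clojure's own CAS primitive, sketched with an atom (the SQL analogue would be an UPDATE guarded by an expected version column):

(def account (atom {:balance 100}))

(defn cas-update!
  "Read, compute, and only write if nothing changed in between; retry otherwise."
  [ref f]
  (loop []
    (let [old @ref]
      (or (compare-and-set! ref old (f old))
          (recur)))))

(cas-update! account #(update % :balance + 25))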

ghadi19:05:53

in that talk, he goes through the implementation of a set of services that do an online chess system

ghadi19:05:16

the game history service, the pairing service that matches users wanting to play, etc.

Chris Lester19:05:54

Yes, I wasn't handed the ability to solve the storage medium problem (well, not exactly, but I need a very good reason to ditch the new contracts we've signed). We're migrating out of MySql to Redis/S3 (again, also not my choice) and will need to write to all three sources on a write, and two of them across a read (potentially). Of course the monolith is still doing the metadata orchestration purely in MySql (this is just JSON content, but the DB is "immutable" in that all writes are new versions and we never delete). Thx, will watch that.

ghadi20:05:56

the sections from 8'00" - 19'55" in that video are relevant to service-to-service coordination (or lack thereof)

hiredman20:05:31

also neither redis nor s3 actually has transactions

Chris Lester20:05:20

No they don't, well Redis kind of sort of supports transactions for sets of REDIS commands but it's not the same.

potetm02:05:44

AFAICT they are proper transactions

hiredman20:05:08

I feel like you could build something, take a fairly traditional database architecture, but treat redis as memory and s3 as disk, but that is going to be a lot of work

Chris Lester20:05:13

Yes, with essentially replication over to mysql from S3. That's basically what the problem space currently looks like to me since the monolith can switch back to using SQL alone and expect the data to be consistent (which I'm not quite sure how to solve yet). Of course that could change once I have the modeling exercise done.

seancorfield20:05:24

@activeghost However it turns out, I'd be very interested to hear what solutions you try and what you end up with. I think it's a very interesting (hard) problem, especially given the particular combination of storage systems you're working with.

Chris Lester21:05:40

Will do. It is hard, and a great opportunity to do something interesting.

seancorfield21:05:11

Thank you! And "good luck" 🙂

Chris Lester08:05:47

I decided to reduce the problem for the micro-service down to a single storage medium and a cache (where I don't really care if filling it theoretically fails) ... breaking it up so I can avoid the distributed transaction. The monolith that is using it will handle the write fallback to the mysql db, and if the write through the Clojure storage proxy (to whatever backend is configured) fails, it can abort the transaction it is in. Between the two layers I can cover that without the complexity of attempting to merge that logic into a single service. S3 and Redis aren't transacted, so what was just written can change between write and read-back, but since the state/metadata is still managed in the rdbms, that will work. The read scenarios are still complex, but the write scenario is now simplified.

Chris Lester08:05:24

... so no exciting hard problem, but it gets to ship lol.

seancorfield08:05:55

Sounds like a good, pragmatic choice. Thank you for coming back with that report!

Johnny Hauser20:05:21

I'm wondering if I'm missing something about stateful transducers. You can't just drop dedupe into comp , right?

noisesmith20:05:04

dedupe itself creates a stateful transducer, you can take (dedupe) and throw it into comp just fine

noisesmith20:05:44

it's just that you need a new instance of the thing dedupe returns per pipeline (unless you want dedupe across the pipelines but that might not work for concurrency reasons anyway)

Johnny Hauser20:05:01

That's what I'm getting at.... it seems in conflict with the docs

noisesmith20:05:08

I wouldn't suggest reusing one (dedupe)

seancorfield20:05:08

user=> (into [] (comp (map inc) (dedupe) (filter even?)) [1 1 1 2 2 2 3 3 3 4 5 6 7 7 8 9 9])
[2 4 6 8 10]
user=> 
^ @johnnyhauser

Johnny Hauser20:05:24

The prev value will not be initialized until the transducing process starts (in a call to transduce for example). The stateful interactions are therefore contained within the context of the transducible process.

Johnny Hauser20:05:30

That doesn't seem to be true ^

Johnny Hauser20:05:57

the prev value is initialized in the expression (dedupe) , is it not?

didibus04:05:55

It is not; dedupe returns a transducer. A transducer returns a reducing function; it is only at the time the transducer is called with a reducing function that it will initialize prev and return a reducing function of its own which closes over prev

didibus04:05:03

(defn dedupe []
  (fn [xf]
    (let [prev (volatile! ::none)]
      (fn
        ([] (xf))
        ([result] (xf result))
        ([result input]
         (let [prior @prev]
           (vreset! prev input)
           (if (= prior input)
             result
             (xf result input))))))))

noisesmith20:05:00

the "transducible process" referenced there is the one owned by dedupe

didibus03:05:17

I think the transducible process is the one owned by "into"

didibus04:05:27

That's why in your later example, the state isn't maintained between the two calls to into. Dedupe returns a transducer, and it's into which will later call the transducer to get the reducing function out of it. It is that reducing function which closes over the state.

Johnny Hauser20:05:17

Any idea why the transducer approach is not one degree lazier so that this sort of thing isn't a problem?

noisesmith20:05:26

scratch=> (let [d (dedupe)] [(into [] (comp (filter even?) d) (range 10)) (into [] d (range 10))])
[[0 2 4 6 8] [0 1 2 3 4 5 6 7 8 9]]
so the new usage of d seems to "reset" the state

Johnny Hauser20:05:27

suppose transducers required an extra invocation, in other words

hiredman20:05:48

dedupe is not a transducer

hiredman20:05:55

it returns a transducer

hiredman20:05:15

like filter is not a transducer, (filter even?) returns a transducer

Johnny Hauser20:05:16

I understand, but I'm pondering why it isn't

hiredman20:05:24

because it has state

Johnny Hauser20:05:41

That's true, but making it not a transducer is not the only solution.

hiredman20:05:56

so when you call (dedupe) you get fresh state

noisesmith20:05:05

@hiredman but the state isn't initialized until the transducer uses it

noisesmith20:05:33

my demo above shows that - the second usage of d didn't know the dupes of the first usage

hiredman20:05:10

it is for backwards compatibility

Johnny Hauser20:05:24

hmm, there's definitely something I'm not grasping there.

noisesmith20:05:59

so the implementation of dedupe changed to be safer, but instead of changing the api (and breaking people's code) you keep the indirection

Johnny Hauser20:05:05

Isn't it initialized as soon as you pass another transducer into it?

hiredman20:05:05

backwards compat is not the right word

hiredman20:05:38

dedupe is that way because dedupe, the function on seqs, which is not a transducer, already exists

noisesmith20:05:02

oh that makes more sense, thanks

hiredman20:05:08

so making it a transducer either requires a new name, or keeping it a function and giving it a new arity

Johnny Hauser20:05:16

looking at the definition, it looks like you invoke dedupe and get a function waiting on the next thing, and then when you pass the next thing, it initializes the state

hiredman20:05:41

a transducer doesn't have another transducer passed to it

noisesmith20:05:45

@johnnyhauser (dedupe coll) exists already and isn't a transducer

hiredman20:05:07

a transducer is a function from reducing function to reducing function

noisesmith20:05:07

so we need a new arity to ask for the thing that transduces

hiredman21:05:54

the definition is something like

(defn dedupe ([] (fn [rf] ...)) ([coll] ...))

which you could unwrap like

(defn dupe [rf] ...)

but you lose the last arity there, which is not transducer related

Johnny Hauser21:05:05

(defn dedupe []
  (fn [xf] ;; takes the next one
    (let [prev (volatile! ::none)] ;; initializes the state
      (fn ;; transducer

didibus04:05:53

The [xf] is confusing here, and a mistake in my opinion. While it is called xf, the parameter being passed in is a reducing function rf, not a transducer.

didibus04:05:34

(defn dedupe []
  ;; this is the transducer xf returned by dedupe
  (fn [rf] ;; takes the next reducing function, not the next transducer
    (let [prev (volatile! ::none)] ;; initializes the state
      (fn ;; this is not a transducer, this is a reducing function rf

Johnny Hauser21:05:12

What am I not seeing?

noisesmith21:05:35

you can't use that dedupe on a list

hiredman21:05:41

you are not seeing the [coll] function arity which is not transducer related

Johnny Hauser21:05:49

but I'm not interested in it

Johnny Hauser21:05:55

Just interested in transducers

noisesmith21:05:56

but other people depend on it!

noisesmith21:05:10

you don't need that implementation for your new transducer

noisesmith21:05:13

clojure needed it

hiredman21:05:21

that is the reason dedupe is written like it is

Johnny Hauser21:05:37

Well, can we just talk in general and not about dedupe?

didibus04:05:55

I'm not sure you and hiredman were discussing the same thing. He is explaining why dedupe returns a transducer instead of just being the transducer itself. That's just because a transducer is a function of one argument, and dedupe already existed as a function of one argument which takes a coll. So you couldn't modify it to also be a function of one argument which takes a reducing function

didibus04:05:20

But neither has anything to do with how state is handled

hiredman21:05:51

you might look at cat

hiredman21:05:06

which is in the style you are asking about

Johnny Hauser21:05:09

I'm just curious about opinions on lazier transducers so that stateful transducers can compose and be reused.

didibus03:05:01

What do you mean by lazy?

didibus03:05:27

Transducers just let you create nested chains of reducing functions

didibus03:05:12

And then you have transducible processes, I think they are called, which basically make use of the transducer machinery to iterate through elements.

didibus03:05:35

The transducible context will initiate the push down of elements through the transducers. So it is in charge of when things actually happen.

didibus03:05:25

It helps to look at the fn signature of a transducer: `(fn [rf] (fn ([] ...) ([result] ...) ([result input] ...)))`

didibus03:05:25

So the outer fn is the transducer: it takes an rf, which is a reducing function, and returns an rf as well. The inner fn is a reducing function. A reducing function must have 3 arities: init, completion and step

didibus03:05:01

In theory your entire transducing chain could be manually implemented as a single reducing function. But the idea is: what if you have two reducing functions? Could you compose them together, like chaining them back to back? This is where the transducer comes in

didibus03:05:28

The transducer will return a reducing function which will internally call the rf that was passed into the transducer

didibus03:05:57

Now, because the transducer is like a constructor for reducing functions, if your reducing function needs state, you can create it when the transducer is called to construct the reducing function and have the rf close over it

didibus03:05:00

When you comp transducers, no rf has been constructed yet. You've just composed transducers together, and the comp of them is still waiting for the rf to be passed in.

didibus03:05:54

It's only when you call a transducible process like into, transduce, sequence, eduction, etc. that those will call the transducer to construct the reducing function, and they will be the ones to orchestrate the iteration and keep calling the same rf with the elements until they decide it is done.

didibus04:05:00

So if, say, you wanted what you're asking, you'd need your own transducible process, for example:

(defn conj-rf
  ([] [])
  ([acc] acc)
  ([acc e] (conj acc e)))

(def deduping-conj-rf ((dedupe) conj-rf))

(reduce deduping-conj-rf [] [1 2 2 3]) ;=> [1 2 3]
(reduce deduping-conj-rf [] [3 4 4 5]) ;=> [4 5]

didibus04:05:57

I hope this makes it clear. You see here conj-rf is a reducing function which just conjs elements onto a vector. dedupe returns a transducer. When we call the transducer returned by dedupe with our conj-rf, it returns a reducing function which will first dedupe elements and then conj them. This reducing function is stateful. You can then use it with reduce to reduce over things. If you re-use it over and over, the state is maintained.

didibus04:05:38

Now, in theory there are some rules a transducing context should follow, like encapsulating the reducing function so it can't be used across threads. That's just because, if you look at my example, deduping-conj-rf isn't thread safe. If you use it inside two reduce calls happening in different threads, it's unclear what would happen.

didibus04:05:46

And you're supposed to call the completion arity on the rf when you're done with the elements (if you are ever done), because some transducers assume this will be the case and can leverage it as a final step before they return the result.

didibus04:05:22

But if you're not writing a generic transducible process, you can bend these rules as you see fit
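
On the completion-arity point: partition-all's reducing function, for example, flushes its buffer in the completion step (runnable at the REPL):

(let [rf ((partition-all 2) conj)
      acc (reduce rf [] [1 2 3])]
  [acc (rf acc)])
;;=> [[[1 2]] [[1 2] [3]]] - the buffered [3] only appears after the completion call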

hiredman21:05:41

I don't think anyone really wants that

hiredman21:05:30

like, there is no strong motivation to make sweeping changes to support that, because the way they are now works well and people aren't constantly hitting that as a limitation

Johnny Hauser21:05:37

What would be the drawback?

hiredman21:05:41

I am not even sure it makes sense

Johnny Hauser21:05:50

It's the sense part I'm most interested in.

Johnny Hauser21:05:45

It makes sense to me. I wrote some stuff that works and I didn't run into any problems yet. But that doesn't mean it's right or good and such.

hiredman21:05:29

say you have an f that you use like (reduce f ...) in multiple different places in your code - is there some generic state that all those places should share?

noisesmith21:05:00

if f was memoized, kind of

noisesmith21:05:14

(where memoization is an innocuous kind of hidden state)

hiredman21:05:24

it is highly dependent on the nature of f

hiredman21:05:46

for a generic reusable thing like transducers, the answer is no

Johnny Hauser21:05:18

an example that I couldn't tolerate was: having created a scan transducer (that's like clojure's reductions?), if you wanted to scan add 0 and reuse that composition, it would not start from 0 on future invocations

hiredman21:05:23

for your own stuff, you can already write transducers that share state

hiredman21:05:21

scan would likely be what, if I recall, the docs call a reducible context, not a transducer

hiredman21:05:05

like transduce is not a transducer, it is a thing that knows how to apply a transducer to a reducing function and then use that to run a reduce

hiredman21:05:19

and the shared state is set up and managed, and exists, inside a call to transduce, where it uses the reducing function that resulted from applying a transducer
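
For reference, a minimal scan transducer sketch (hypothetical, not in core): because the state lives inside the (fn [rf] ...) closure, each transducible process gets a fresh accumulator even when the transducer value is reused:

(defn scan [f init]
  (fn [rf]
    (let [acc (volatile! init)]
      (fn
        ([] (rf))
        ([result] (rf result))
        ([result input]
         (rf result (vswap! acc f input)))))))

(def xf (scan + 0))
(into [] xf [1 2 3]) ;=> [1 3 6]
(into [] xf [1 2 3]) ;=> [1 3 6] again - each into starts from 0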

Drew Verlee21:05:28

when you run the clojure command line tool, if my deps contains a :mvn/repos key it will look for repos there when trying to get deps. Additionally, it will look in my ~/.m2/settings.xml file. Is there a reason it would look somewhere other than $HOME/.m2/settings.xml? Can I get debug information on why it can't find an artifact?

dominicm21:05:12

Maybe if $M2_HOME was set it would respect that.

hiredman21:05:45

the maven libraries don't, at least by default, look for an artifact in a particular repo; if they need to find an artifact they look through all the repos they know about for it. So really the only error for not finding an artifact is "couldn't find that artifact in any of the repos I know about"

hiredman21:05:07

so you can do something like: what artifact failed to fetch? are those the right artifact coords? what repos is it in? are any of those repos configured for me? is the repo correctly configured for me?

hiredman21:05:08

(the way you figure out what repo an artifact is in is by googling "maven $project-name")

Drew Verlee21:05:59

how do i list the repos it knows about? I added

:mvn/repos {""
            {:url ""}}

to my deps, so i assume it knows about datomic pro (the dep it can't find). But the only output i get is that it can't find it.

> so you can do something like: what artifact failed to fetch? are those the right artifact coords? what repos is it in? are any of those repos configured for me? is the repo correctly configured for me?

yea, i'm guessing if it's anything it's the latter. Specifically, the clj tool isn't finding the settings.xml file, which i have mounted at $HOME/.m2/settings.xml in my container. But it would be nice to be able to distinguish: we found the repo and the dep, but you're not authenticated.

hiredman21:05:46

if you aren't authenticated the repo isn't going to let you see anything, so you don't know if they have the dep

hiredman21:05:29

if you are in a container I would start by verifying which user you are and where your home directory is

Drew Verlee21:05:31

that's correct as far as i can tell. I echoed out $HOME, and also just set it to $HOME/.m2/settings.xml without the var, just to be clear.

Alex Miller (Clojure team)21:05:07

the my.datomic site should give you the creds you need in ~/.m2/settings.xml to access
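
As a sketch of what that pairing looks like (the repo URL is the documented my.datomic.com one; version elided):

;; deps.edn
{:deps {com.datomic/datomic-pro {:mvn/version "..."}}
 :mvn/repos {"my.datomic.com" {:url "https://my.datomic.com/repo"}}}

;; ~/.m2/settings.xml then needs a <server> entry whose <id> matches the
;; repo name above ("my.datomic.com"), carrying the username/password
;; from your my.datomic account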