This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2021-06-08
Channels
- # asami (15)
- # babashka (123)
- # beginners (174)
- # calva (4)
- # cider (6)
- # cljdoc (4)
- # cljs-dev (4)
- # cljsrn (18)
- # clojure (268)
- # clojure-australia (1)
- # clojure-europe (107)
- # clojure-ireland (5)
- # clojure-nl (2)
- # clojure-uk (18)
- # clojured (1)
- # clojurescript (21)
- # conjure (4)
- # cursive (38)
- # data-science (1)
- # datahike (6)
- # datomic (4)
- # events (1)
- # fulcro (9)
- # graalvm (16)
- # helix (6)
- # honeysql (4)
- # instaparse (3)
- # jobs (1)
- # observability (15)
- # pathom (7)
- # pedestal (15)
- # polylith (9)
- # practicalli (1)
- # re-frame (6)
- # remote-jobs (2)
- # specter (7)
- # sql (16)
- # tools-deps (1)
- # vim (5)
- # xtdb (1)
Are there important drawbacks to `slurp`ing a url? All slurped urls would be http, unauthenticated.
I'd prefer `slurp` over a lib since, under my constraints, the dep tree should be unaltered and there should be JDK < 11 compatibility
It comes with some weird constraints that are platform-specific to the way the default Java http client works
But besides that, if it works for your use case, I don't see why not.
Ye olde curl URL | sh
As long as you don't care about performance it's okay.
If you want to use it to download files, `open` and `copy` might be more convenient than slurping
@UK0810AQ2 I would like to discourage the normalization of this approach. Yes, I know several tools that do it. As a developer who works in security, it’s horrifying
@U051N6TTC oh, I did not mean to endorse this approach by way of reference. I find it pretty horrifying as well, which is why I always read the scripts I download before running them.
It was more in the vein of analogizing it to (eval (slurp url))
which I find equally horrifying
I had forgotten about `io/copy`, but I think that’s ideal. (I always used the `byte-array`/`loop`/`.read`/`.write` approach taught to me by C, and never addressed in Java). I love @U04V15CAJ’s tweet and would not have realized that it could be written along with a comment in 240 characters!
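The `io/copy` approach mentioned here can be sketched roughly like this (the function name is hypothetical):

```clojure
;; A minimal sketch of downloading a URL to a file without an HTTP
;; library: open the URL's input stream and io/copy it into the file.
(require '[clojure.java.io :as io])

(defn download!
  "Copy the contents of `url` (a string) into the file at `path`."
  [url path]
  (with-open [in (io/input-stream (io/as-url url))]
    (io/copy in (io/file path))))
```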
> I've used this in several tools to avoid a dependency on an http library
Thanks! Looks like a nice option. I see you've contributed to http://github.com/martinklepsch/clj-http-lite/ . I'd almost just go and choose it, but `slingshot/throw+` seems a bit of an odd thing to pull in
Slingshot's pretty cool. A little bit odd sometimes and it wasn't adequate for what I wanted it for, but it's small and not a lot to depend on.
@U45T93RA6 I have a fork of clj-http-lite without slingshot, because slingshot could not run with babashka at the time (and it also loads faster with less deps)
Hmm, I should think about bb support for farolero. Is there a way to either throw arbitrary objects or extend exception in it?
Right, one of the key features of farolero in the JVM is that the throwable that's used doesn't interact with blanket exception or ex-info catch blocks, because it extends java.lang.Error. I guess a bb script is likely to be small enough and with few enough libraries as to not make this a huge problem, but ideally I'd be able to throw something that wouldn't interact with catching ex-info.
@U5NCUG8NR Let's discuss elsewhere to not distract from the http discussion
(btw, I'm adding `java.util.Arrays/copyOfRange` to bb now and this was the only remaining blocker to run slingshot with it)
$ ./bb -e "(require '[slingshot.slingshot :as s]) (s/try+ (s/throw+ {:type ::foo}) (catch [:type ::foo] [] 1))"
1
Does anyone who’s more familiar with Clojure’s bug backlog than I am know if there are any bugs related to the Compiler interpreting vararg sequences provided to macros incorrectly?
this is intended behaviour 🙃
Specifically, this is the behaviour I’m seeing -
this is as minimal of a reproducing case as I can find so far. also, if there’s a better channel for stuff like this please let me know
coercing the sequence to a vector also works -
`No matching method GET found taking 1 args for class com.amazonaws.HttpMethod` seems telling. Are you sure this is not an interop problem (instead of a defmacro one)?
Seems to be that something in `analyzeSeq` in the Compiler is attempting to macroexpand the argument sequence
@U45T93RA6 if you try this with only 1 argument to the macro it’s fine as well
(defmacro my-macro
[& enums]
`(println ~enums))
=> #'user/my-macro
(my-macro HttpMethod/GET)
#object[com.amazonaws.HttpMethod 0x2253f919 GET]
=> nil
Where there is more than 1 argument, the compiler is interpreting the first reference in that sequence as a StaticMethodExpr
right. I ran into this because I was trying to pass the list to a function in my macro body
This ends up happening it seems -
(macroexpand-1 '(HttpMethod/GET HttpMethod/PATCH))
=> (. HttpMethod GET HttpMethod/PATCH)
The `println` is just the first function I thought of for this case, it could be anything
the real-life scenario is more like -
`(let [...
opts# (process-opts ~enums)
...
]
...)
I have to `~enums` in order to capture the sequence `& enums` in the input to the macro
Perhaps at this point you'd want to formulate the question again describing the end goal
From intuition, `(list 'quote x)` may help, but it's hard to tell without a more specific problem to solve
well, my end question is “does anyone know of a bug filed for this already”?
afaik `& args` is perfectly fine to pass to a macro, but it breaks when you pass multiple Enum references
there are multiple ways to work around it it would seem
> well, my end question is “does anyone know of a bug filed for this already”?
sounds like a "xy problem" to me, sorry
One thing worth pointing out is that HttpMethod/GET is not an object you can pass around, so it's not surprising that you can't embed it arbitrarily in macroexpansions
Your problem is that you are unquoting a sequence, which is printed as a list, which is evaluated as a function call. Most likely you wanted unquote splicing.
(defmacro my-macro
[& enums]
`(println ~@enums))
I don’t want unquote splicing here - I want to pass the sequence to the function I’m calling
unquote splicing results in the incorrect generated code
I see. Then you would need to do this: (println (list ~@enums))
Because again, you cannot print a sequence into code except as a function call.
let’s ignore the `println` for a moment
one second
Sure, what I'm saying isn't connected to the println. Any time you would like to produce a sequence of items based on varargs to a macro, you cannot use it directly in the resulting code because it will be treated as a function call. You have to wrap it in a call to `list`.
This is why coercing it to a vector works as intended. Vectors when evaluated return themselves, they are not evaluated as a function call.
ah I see
> because it will be treated as a function call Yeah, this is the behaviour I’m seeing, but it’s inconsistent
well, it appears to be
In what way does it appear inconsistent?
when I pass 1 item it expands OK. It probably will blow up at runtime is what you’re saying though?
The item won't implement IFn, or worse, it will implement IFn and it'll give you some result you don't expect.
Causing an exception somewhere that is hard to trace back to the macro.
right, that makes sense
> Any time you would like to produce a sequence of items based on varargs to a macro you cannot use it directly in the resulting code because it will be treated as a function call Why is this the case?
Well any time you evaluate something of the form `(a b c)` it'll be treated as a function call.
All sequences are printed as lists.
right - yeah I’m playing around with this a bit more right now
And while the printing isn't actually used in macroexpansion, it's a good analogy.
All sequences are treated as function calls when evaluated.
Well, they're treated as calls. Macro, builtin, function, or special operator. But a call nonetheless.
yeah I see. I don’t really play around with macros too often is what I’m learning here 🙂
Lol, it's all good. Macros take a bit to really grok.
`'~enums` also behaves as I’d expect it to, presumably because the wrapping `quote` preserves the “listiness” (data type?) vs. writing out the `(GET PATCH)` directly to be interpreted as an IFn
not sure how to accurately describe exactly what `'` is doing there that makes it work
hmm I may be incorrect about “behaves as I’d expect” there as well - it “looks right” printed in the REPL, but isn’t actually what I’d want
@U5NCUG8NR I guess the thing that’s a little “surprising” to me here is that wrapping the varargs seq in a `list` or a `seq` (e.g. `~(list enums)`) also throws this exception, whereas wrapping with `vec` doesn’t. I know that’s because it’s still getting expanded to a list which is then interpreted as a function call, and the `[]` syntax doesn’t have that ambiguity, but it just seems like the compiler could do something better about this…
So putting the quote there won't make it work quite like you expect, and the reason for this is that the contents of the sequence will never be evaluated; it'll just be the symbols and expressions you passed as arguments to the macro. If you want the arguments to be evaluated but included in the resulting source as a sequence, calling `list` is the way to do it.
So if you use the quote, you just get a sequence of symbols, not a sequence of HttpMethods
Also the reason that wrapping it in a `list` or `seq` inside the unquote (instead of outside) doesn't work is that it's the same thing, just a list of elements, that gets included in the source code, which means that it'll be evaluated as a call.
Lists and sequences have no ambiguity, they are always function calls.
> and the reason for this is that the contents of the sequence will never be evaluated, it’ll just be the symbols and expressions you passed as arguments to the macro.
Right, haha, I realized this after sending that message 😅
> Also the reason that wrapping it in a `list` or `seq` inside the unquote instead of outside is that it’s the same thing, just a list of elements, that gets included in the source code, which means that it’ll be evaluated as a call.
Right, it’s just surprising to have to treat this completely differently. `~(vec enums)` works the closest to how I would expect it to. But if I want to actually use a list (for some reason), I need to do `(list ~@enums)` (obviously these two functions take different arguments, you’d have to do the same thing for `vector`)
> Lists and sequences have no ambiguity, they are always function calls.
Definitely, to the compiler. But if you read code like this without knowing this background about macros, you might interpret the intent of the programmer in either manner. But yes, to the compiler it’s un-ambiguous
I’m not sure how you’d actually be able to get this to work with `seq`. Maybe you can’t
anyway - thanks for shepherding me on this journey, haha
Well `seq` is just a coercion function to turn collections into sequences, so I'm not sure how else you'd like it to work. Lists are collections that already implement the sequence abstraction, so there's no difference between `seq` and `list` really in this case.
Except that if you had no arguments you'd get nil vs an empty list.
And in that case you could just use (seq (list ~@enums))
> so there’s no difference between `seq` and `list` really in this case.
Right, the only difference is the argument shape they accept.
> And in that case you could just use `(seq (list ~@enums))`
Yeah, this just seems like something you shouldn’t have to do, but maybe there’s just no way around that given the syntax of the language
🤷 Usually functions aren't too picky about their arguments, so it probably won't often matter anyway.
yeah exactly. Just filing that tidbit away in my head to prefer vectors in macros for function arguments
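A small sketch pulling the thread above together (macro names are hypothetical): a vararg sequence unquoted directly lands in the expansion as a list form, which the compiler treats as a call, while `(list ~@args)` and `~(vec args)` both produce data:

```clojure
;; broken: ~args embeds the seq (a b ...) directly in the expansion,
;; so its first element is treated as the operator of a call.
(defmacro broken [& args] `(println ~args))

;; fixed: wrap in a call to `list`, so each element is evaluated and
;; the results are collected into a list at runtime.
(defmacro fixed [& args] `(println (list ~@args)))

;; vectors evaluate to themselves element-wise, so this also works.
(defmacro as-vec [& args] `(println ~(vec args)))
```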
Why isn't add-libs et al. part of the official (master branch) tools.deps.alpha distribution? https://github.com/clojure/tools.deps.alpha/blob/add-lib3/src/main/clojure/clojure/tools/deps/alpha/repl.clj#L75
@jumar Because the exact design and integration with Clojure is still being worked on.
Alex has said that “something like `add-libs`” may find itself directly in Clojure or t.d.a in a different form at some point.
I periodically ask him to bring the `add-libs` branch (which I use heavily — see my dot-clojure repo on GitHub) up-to-date w.r.t. master.
hi, any idea how I can change the clojure crash report path?
Full report at:
/tmp/clojure-8030624471491730958.edn
I am running inside a container and it restarts but there is not enough information to figure out why.
Also the files are not preserved in /tmp.
I would like to write the file to a specific location
try `--report stderr` or the Java property `-Dclojure.main.report=stderr`
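As full command lines, the two spellings might look like this (the namespace name is hypothetical):

```shell
# clojure.main's own init-opt
clojure -M --report stderr -m my.app

# or as a JVM system property via the CLI's -J flag
clojure -J-Dclojure.main.report=stderr -M -m my.app
```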
This causes a memory leak. To be precise, memory usage grows faster and it ends with the exception Error class: `java.lang.OutOfMemoryError`
(defmethod ig/init-key ::cli-planner [_ {:keys [supplier cache-key-prefix] :as opts}]
(declare-queue opts supplier cache-key-prefix)
(let [p (promise)
t (doto (Thread.
^Runnable (fn [] (shop-supplier! opts supplier cache-key-prefix p))
"cli-planner thread")
(.start))]
{:thread t :promise p}))
This does not:
(defmethod ig/init-key ::cli-planner [_ {:keys [supplier cache-key-prefix] :as opts}]
(declare-queue opts supplier cache-key-prefix)
(shop-supplier! opts supplier cache-key-prefix (promise)))
Why?
My first idea is that the `promise` outside the thread causes the garbage collector to not be able to free memory. But if that is a correct assumption, why, in detail? Or maybe because the `Thread` is returned? :thinking_face:
One thread should not be an issue, unless you are already right on the boundary of memory usage.
I’m a bit surprised at this. You’re throwing away the promise, so the JVM knows that it’s not going to be needed, but it will be needed for what `shop-supplier!` does to it. I think it may depend on how aggressively the JVM can optimize the code.
May I presume that using a `let` wrapper to return the promise (without using a thread) also runs out of memory?
That is what I am trying to figure out, but test take hours 🙂 I can check all possible options, but I hope someone here know the answer 😉
But memory behaves differently from the beginning. You can clearly see on the right part of the chart how memory behaves without the memory leak, with the fix.
that code is being called by Integrant, right? ... does that mean that the results of calling `ig/init-key` are getting put into a system map in RAM or something similar? Because the one that runs out of RAM returns the promise, while the one that doesn't, doesn't ... so the content of the promise might be being held onto by the surrounding code ... unless `shop-supplier!` returns the promise it's given??
The `promise` is only there to stop the thread. It has the value `::done`, for example. The memory consumption by the promise is not an issue.
but it affects the garbage collector somehow. I mean, I don't know if it is the `promise`, but that is the difference in the code.
Try opening it in visualvm. I think that since threads are GC roots if you hold a reference to a thread it won't be collected
This is a too-complex system. I can run it only in a specific environment to observe this memory usage.
in the version returning the promise, who consumes it and when do they stop referencing it?
It is just Integrant. The `promise` is used to:
(defmethod ig/halt-key! ::cli-planner [_ opts]
(log/info "stopping cli-planner thread:" (:thread opts))
(deliver (:promise opts) :done))
so not really in production.
well, anything dereferencing that promise will never return if that deliver is skipped
I am not sure, because I have to wait for results, but it looks like when the `promise` is outside the `thread` it affects the garbage collector inside the `thread`
there's no garbage collector inside a thread; a value can be garbage collected if it doesn't have a gc root. The two ways a promise will affect gc:
• a thread is a gc root, and one that is waiting for a promise deref won't exit if the promise is not delivered
• a promise can hold arbitrary data, and if the promise is held by a gc root, the data in the promise will be as well
I'm picking on the promise here because the handling of the promise seems to be the only interesting difference between your code snippets
In the first example would it be correct to say the promise is referenced by two threads? The initializing thread and the created thread
right
Not sure if I understand. It looks like the issue is that when the `promise` is outside the `thread`, then all data processing in the `thread` has a memory leak. This is not about the value in the `promise` at all. The `promise` can have for example the value `::done`, so it is very small, if it even appears.
is the promise always delivered to?
You can take a heap dump then analyze it with visualvm to see who holds references to the promise and the thread
because skipping delivery can make the thread hang
no, it is used only to stop processing in rare cases, so in general it is almost never delivered
@UK0810AQ2 holding a reference to a thread is irrelevant, you can literally ask the jvm to list all existing threads, they are gc roots and are only collected if they exit
@U0WL6FA77 at this point I don't think the information you have provided is sufficient to narrow down your problem
Right, and it isn't daemonized. Still, a heap dump should be a good place to start when hunting down memory leaks and runaway references, no?
I think this is not at all about delivering the promise. For me it looks like a construct with a `promise` outside a thread, which is never delivered, affects memory usage in the `thread`.
The thread is about processing data and this is what consumes memory.
You could try posting a (redacted) stacktrace here. There are various types of OOMs; perhaps by posting the specific type/cause something can come to mind
OOM errors are not scoped to a specific stack (though I guess in theory you could capture all active stack traces when the OOM happens?) it's really something you want heap profiling data for, as @UK0810AQ2 mentions. heap profiling with clojure is hard because the same classes are used everywhere and laziness can mess with the tool's idea of who owns something, but profiling does help
I mean technically you can grab the stack of the code that hit the OOM condition, but that's only sometimes the code that's leaking memory
and sometimes the issue isn't a leak, but rather you are trying to use an algorithm that consumes more memory than you provide the vm
another possible source of the problem you could look into (unlikely but possible) is that `Thread` doesn't propagate dynamic bindings from the caller's context and always uses the root bindings
you can replace `(doto (Thread. ^Runnable (fn [] code goes here)) (.start))` with `(future code goes here)` - it's simpler and reuses thread objects from a pool instead of creating new ones on every call, so it performs better too
and it propagates dynamic bindings from the call context, which is nearly always what you want
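A small sketch of the binding-conveyance difference (the var and function names are hypothetical): a raw `Thread` runs with the root binding, while `future` conveys the caller's binding:

```clojure
(def ^:dynamic *who* "root")

(defn demo []
  (binding [*who* "caller"]
    (let [p1 (promise)
          p2 (promise)]
      ;; raw Thread: body sees the root binding of *who*
      (.start (Thread. ^Runnable (fn [] (deliver p1 *who*))))
      ;; future: conveys the caller's dynamic bindings
      (future (deliver p2 *who*))
      [@p1 @p2])))

;; (demo) ;=> ["root" "caller"]
```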
OOMs are sometimes caused by stack consumption, which can be accurately correlated with the stack that threw the exception. Anyway, a stacktrace can be useful; there are other reasons also (e.g. sometimes it details the kind, like ran out of metaspace)
ha, I found everything was misleading. I am doing a last check, but I found `bound-fn` inside and outside this thread. It looks like this is the source of the out of memory. I don’t know why exactly it causes a memory leak, but I don’t want to have these functions in the code anyway.
What's the best way to write code that runs only locally but won't run in production?
A macro with a compile time check on some condition (e.g. environment variable, dynamic, var, whatever). This allows you to completely elide the non-production code in production.
That's what I'm looking for.
Is there a var that works like that in Clojure (e.g. `*assert*`) or should I set it manually in my build tool?
more fine grained control. turning off ALL assertions in production is maybe something you wouldn't want to do
I do a new class path root that won’t be on the prod class path. dev.nocommit.logging or whatever
I keep it gitignored so each person can have what they want there. Precommit hooks can look for nocommit and no forms will compile in CI with any of those namespaces
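The compile-time check suggested above might be sketched like this (the property name is hypothetical):

```clojure
(defmacro dev-only
  "Expand to `body` only when the (hypothetical) dev.local system
  property is set at *compile* time; otherwise expand to nil, so the
  code is completely elided from production builds."
  [& body]
  (when (System/getProperty "dev.local")
    `(do ~@body)))
```

Because the check runs at macroexpansion time, a production build compiled without the property contains no trace of the dev code.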
Suppose my application has two standalone tasks (for example, two http endpoints). Is it normal to call task.start() inside ig/init-key method?
hey, I think you have to ask more precisely. It is hard to understand your needs and probably this is why nobody answered.
Are there any examples of applying transducers to callback APIs? I know I could wrap the callback API with core.async; but I’d prefer to do something lower level.
if I take this question literally it sounds like you want to turn the callback API into a transducing context, so you could attach a transducer to it the way you would attach one to `into` or `chan` - is this actually what you are talking about?
yes precisely
Was thinking I’d just deconstruct the xforms, i.e. something like this:
(((map inc) conj) [] 1) ;;=> [2]
where `conj` would be the reducing function for adding something to my connection, and `[]` would be the connection
no, the callback is equivalent to the 2-arity… i.e. it’d need to be essentially for reducing an individual item
there are other callbacks for completing etc though which would need to call the completing arity
Pretty sure I can knock this together quite easily actually… but that’s me finishing for today 😩 One for tomorrow!
Just wondered if there were existing examples of this sort of thing
where someone has augmented a Java callback API and Java aggregating object with an arbitrary `xform`.
callbacks are one-shot, but a transducer transforms an operation that is probably not one-shot
I'm wondering what advantage transducers would have when attached to an API over comp
they have the disadvantage of being a lot more complex than comp
Yeah, this isn’t a one shot callback; it’s an evented parser, that emits an event (calling the callback/listener) on each “form” parsed.
Control passes to the parser and isn’t relinquished until the whole file is processed. I’d really like to load all the forms into a db (hence the connection object), with an optional transformation (hence the xform), but there’s an impedance mismatch due to the inversion of control… Similar I guess to why you often need to resort to using PipedInputStream when redirecting input to an output stream.
I’d really like to handle this without spawning an extra thread (which is what I’ve done in the past), and I see transducers as a solution to this, as the reducing-function can be passed into the parsers thread.
Hence I was thinking I’d write an `into-connection` function that would wrap all this up nicely.
is it just my functional brain hallucinating or does this sound like a monadic operation - lifting the procedural logic into an object that can be composed into the file processing pipeline?
in my opinion using a collection abstraction in the middle is still usually the most intuitive thing to work with. my hunch is this would go a lot smoother if you use a collection or queue as your glue, and once the whole pipeline works, a transducer can replace the collection conversion as a performance optimization (if that is in fact needed)
because otherwise my senior engineer side think this smells like it would become the kind of code one ambitious dev will make in a fugue of caffeine and long nights, and nobody will be able to maintain or understand afterward
Well reducers/transducers are similar to monads so it’s probably not surprising.
I was really asking about how to turn a callback based Java parser into a `clojure.lang.IReduceInit` such that it can take xforms and work with `reduce`/`transduce` etc. So I was asking about how to apply those existing abstractions to a new thing, rather than create something new that smells like a monad.
Anyway I’ve worked it out and have it pretty much working now.
right, and to be more concrete, my suggestion is that instead of implementing `IReduceInit` and making a transducing function as your first draft, it would go more smoothly if you do it with a collection or queue in the middle as a first draft, and only reach for that interface and transducers as a performance optimization if needed. acknowledged that this is opinionated advice and if what you have works and is maintainable then cheers 🍻
This already is the performance optimization work, after that queue implementation 🙂
though it’s not just about performance
oh I misunderstand then
what's the "not just about performance" aspect here?
all the other trade-offs of using reducers/transducers vs seqs: eagerness, resource control, etc
oh to me those aren't issues with seqs, they are issues with laziness (which is already a no go when talking to the outside world)
I think I understand what you are getting at
well seqs complect laziness, caching and sequences, so they are issues with seqs 🙂 Though to be fair you said “collection” and I said “seq” 🙂
Anyway the code is pretty simple
implementing IReduceInit isn’t hard
Essentially I have a callback based parser for a data format, and I’d like to “transduce” (load) it into a connection object with an optional xform
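A rough sketch of the idea under discussion, with a stand-in for the Java parser (all names are hypothetical; handling of `reduced` is omitted for brevity):

```clojure
;; `parse!` stands in for the callback-based Java parser: it invokes
;; `listener` once per parsed form, keeping control until done.
(defn parse! [input listener]
  (doseq [form input] (listener form)))

;; Wrap the parser as an IReduceInit so reduce/transduce can drive it,
;; with no intermediate collection, queue, or extra thread: the
;; reducing function is handed straight to the parser's callback.
(defn reducible-parse [input]
  (reify clojure.lang.IReduceInit
    (reduce [_ rf init]
      (let [acc (volatile! init)]
        (parse! input (fn [form] (vswap! acc rf form)))
        @acc))))
```

With that in place, `(transduce (map inc) conj [] (reducible-parse [1 2 3]))` yields `[2 3 4]`; an `into-connection` function could pass a connection-aware reducing function instead of `conj`.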
I'm getting a vector of maps with numbers from an API. I am filtering the maps for the numbers above a certain value. When it returns an empty sequence (ie no numbers above the value), I want to receive the number closest to the value. How can achieve this kind of fuzzy filtering?
this seems too simple for what you are asking, but I'd just do (or (seq (filter high-enough result)) [(closest result)])
I put the second conditional in a vector so that the result is always sequential, even though closest should clearly return a single item
in practice this sort of logic is a common reason that you have to change a `->>` body into a let block where bindings refer back to previous bindings
i have used a priority queue to reduce over a collection keeping the top N items. You could do something like this. Then you're left with either the collection you want, or the largest items if they haven't hit the threshold.
but the largest N you have seen so far might not be the largest N items by the time you reach the end, so you lose some stats about early occurrences of the largest N sometimes?
@noisesmith I understand you are suggesting to solve it algorithmically: if filter result is empty? then run closest. This I can do. The thing is, I have this kind of logic in many places for various vectors of maps. I was thinking about some cool functional pattern inside the mapping function : ), or wrapping in the middleware that will always return the closest result
something inside a mapping function as you suggest requires state management and rarely (never?) performs better than seq / or, and is always more complex / error prone
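The or/seq suggestion as a concrete sketch (the function name and the inlined "closest" helper are hypothetical):

```clojure
(defn filter-or-closest
  "All numbers in `xs` at or above `threshold`; if none qualify,
  a one-element vector holding the number closest to the threshold."
  [threshold xs]
  (or (seq (filter #(>= % threshold) xs))
      [(apply min-key #(Math/abs (- % threshold)) xs)]))
```

Wrapping the fallback in a vector keeps the return value sequential either way.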
Hi everyone, with `gen-class` why does the following `with-meta` not create the (Java) annotations on the generated class? (The commented reader-macro `^` does work, but I cannot create that from a macro. I'm trying to create `gen-class` expressions from a macro...)
(gen-class :name
;; ^{JsonAutoDetect {:getterVisibility JsonAutoDetect$Visibility/ANY
;; :isGetterVisibility JsonAutoDetect$Visibility/ANY
;; :fieldVisibility JsonAutoDetect$Visibility/NONE}}
(with-meta kafka_testdrive.messages.position
{com.fasterxml.jackson.annotation.JsonAutoDetect
{:isGetterVisibility
com.fasterxml.jackson.annotation.JsonAutoDetect$Visibility/ANY,
:getterVisibility
com.fasterxml.jackson.annotation.JsonAutoDetect$Visibility/ANY,
:fieldVisibility
com.fasterxml.jackson.annotation.JsonAutoDetect$Visibility/NONE}})
gen-class is a macro, it does its thing at compile time; with-meta is a function which does things at runtime. Same reason `(let (vector a 1) a)` doesn't work
That makes sense... is there another way I can get that `^{...}`, or the same effect, generated from within a macro?
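One sketch of a way around this (macro name hypothetical): since `^{...}` just attaches metadata at read time, a generating macro can call `with-meta` at macroexpansion time, so the emitted `gen-class` form already carries the metadata on its `:name` symbol:

```clojure
(defmacro gen-annotated-class
  "Emit a gen-class form whose :name symbol carries `annotations` as
  metadata, mimicking what the ^{...} reader macro does at read time.
  The with-meta call is unquoted, so it runs at macroexpansion time."
  [class-name annotations]
  `(gen-class :name ~(with-meta class-name annotations)))
```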
I see eastwood has a dependency on {:group org.clojars.brenton, :artifact google-diff-match-patch, :version 0.1}, but I cannot find any licensing information for this artifact. It doesn't seem to have a repo associated with it through Clojars. Does anyone know where I could source this information?
otoh eastwood is dev tooling, so licensing matters are more relative (i.e. you'll never bundle eastwood to a prod artifact, so one doesn't have to worry nearly as much)
Does anybody know about an article or some documentation on why `clojure.lang.BigInt` exists in addition to `java.math.BigInteger`, but no such thing was done for `java.math.BigDecimal`? I read some comments that it’s done to prevent autoboxing, but I’d like to read a little deeper into the reasoning.
yeah, that's it
Thanks!
yup
user=> (.hashCode (Long. 12345678901))
-539222985
user=> (.hashCode (java.math.BigInteger/valueOf 12345678901))
-539222925
user=> (.hashCode (bigint 12345678901))
-539222985
What’s the issue?
`BigInteger` has a different hash code than the equivalent `Long` value, whereas clojure `BigInt` is identical
I have a hazy memory of that being given as the explanation for why Clojure needed `BigInt` instead of just building on top of the Java type
isn't what you're showing the opposite of that?
(also fyi, Clojure doesn't use .hashCode; `hash` is the relevant function here)
at this point I don’t recall where I saw the hash code called out, maybe a very early Joy of Clojure edition? :thinking_face:
regarding the difference between Clojure's `hash` and Java's `hashCode`, see https://clojure.org/guides/equality#equality_and_hash
what's the incantation i'm looking for here: `clj -Dillegal-access=debug`? is it `-J-Dillegal-access=debug`?
and i'm getting illegal access warnings from clojure.data.xml, which i thought was free of them
`clojure.data.xml` or `clojure.xml`?
reading in a google group message i was expecting this to not be an issue with clojure.data.xml
Ah, reflective access, not reflection. My bad!
So, yeah, I am surprised. I would have expected reflective access to not be an issue with that contrib lib. @alexmiller?
i'll go post it on http://ask.clojure.org and not bother him
Is it perhaps due to an older transitive dependency or something?
remained the same. moving to `"0.2.0-alpha6"` from `"0.0.8"` actually changed the behavior of the program and it failed to find some things in the poms
Despite the name, you should use 0.2.0-alpha6
gonna have to debug why it failed to hit some info in the poms. but i'm switching now. thanks
ah, what was previously a `:tag groupId` is now `:tag :xmlns.http%3A%2F%`
the bump to "0.2.0-alpha6" does solve the illegal access though. unfortunately breaks all navigation into the xml. but i can patch that. thanks all
Because 0.0.8 didn’t handle XML namespaces but 0.2.0-alpha6 does?
most likely
Which also changed because 0.0.8's behavior was incorrect?
I think the whitespace thing broke us when we upgraded — but that was a long time ago I think?
i'm building a dep license concator and having to parse poms. haven't done that in a while. so had been a bit since i'd looked at xml at all, much less parse it
there's a flag or something for the whitespace thing
content type to skip or something
I believe that's an example