#clojure-dev
2021-02-16
souenzzo15:02:57

Is there a known issue about (name nil) => NPE? In the REPL it doesn't occur, but in my app, when I do (name nil), it throws with an empty stack trace.

noisesmith15:02:44

it happens in my REPL

(ins)user=> (name nil)
Execution error (NullPointerException) at user/eval144 (REPL:1).
null

souenzzo15:02:35

But (.printStackTrace *e) should show a stack for you. From my CloudWatch:

:cause nil
:via
[{:type clojure.lang.ExceptionInfo
  :message "java.lang.NullPointerException in Interceptor :my-cool-route - "
  :data {:execution-id 123, :stage :enter, :interceptor :my-cool-route, :exception-type :java.lang.NullPointerException, :exception #error {
         :cause nil
         :via
         [{:type java.lang.NullPointerException
           :message nil}]
         :trace
         []}}
  :at [io.pedestal.interceptor.chain$throwable__GT_ex_info invokeStatic "chain.clj" 35]}
 {:type java.lang.NullPointerException
  :message nil}]
:trace
Not sure if it's something with Pedestal's error handling

noisesmith15:02:47

right- is that the objection, that it doesn't print the whole trace?

souenzzo15:02:00

:trace [] ???

💡 3
bronsa15:02:08

it should always NPE

bronsa15:02:44

if the problem is the empty stack traces, it's because of

OmitStackTraceInFastThrow
which is enabled by default

bronsa15:02:59

you can set

-XX:-OmitStackTraceInFastThrow
to disable it
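
(A hedged sketch of wiring that flag into a deps.edn alias; the :full-traces alias name is purely illustrative:)

;; deps.edn -- illustrative alias name
{:aliases
 {:full-traces {:jvm-opts ["-XX:-OmitStackTraceInFastThrow"]}}}
;; start a REPL (or app) with it: clj -M:full-traces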

souenzzo15:02:25

should I use -XX:-OmitStackTraceInFastThrow in prod?

bronsa15:02:43

well, it will cost you a very marginal performance hit as you're disabling an optimisation

bronsa15:02:57

but depending on what you're deploying it may make no noticeable difference

noisesmith15:02:33

I've definitely lost a lot of time to hard-to-reproduce prod errors with elided stack traces

Alex Miller (Clojure team)15:02:14

many people do set this in prod

Alex Miller (Clojure team)15:02:57

seems like it should only be a cost if you are throwing a lot of exceptions (in which case you might want to know why that is :)

bronsa15:02:03

maybe using core.match in a hot loop :P

andy.fingerhut15:02:21

If some code decided to use JVM exceptions for normal expected case control flow, catching them internally, that could have a performance degradation, but hopefully there are very few libraries that use exceptions that way

✔️ 3
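
(A hedged sketch of the anti-pattern Andy describes, with hypothetical helper names, just to make the cost concrete:)

;; exceptions used for an expected "not found" case instead of a normal return
(defn lookup-throwing [m k]
  (or (get m k)
      (throw (ex-info "not found" {:k k}))))

(defn lookup-with-catch [m k]
  (try (lookup-throwing m k)
       (catch Exception _ ::missing)))

;; in a hot loop every miss pays for exception construction -- and, with
;; -XX:-OmitStackTraceInFastThrow, for filling in the stack trace as well
(time (dotimes [_ 100000] (lookup-with-catch {:a 1} :b)))
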
noisesmith16:02:42

at one point there were common JVM internals that worked that way, but I would be surprised if that wasn't optimized

mikerod16:02:20

I too recall finding some potentially hotter paths that I thought were going to use try/catch logic for control flow in core. I can't remember specifics now. I actually assume some of that likely hasn't changed.

mikerod16:02:54

Some class loader logic is based on catching ClassNotFoundException I believe - but perhaps that isn’t considered a “hot path”

dominicm16:02:09

I'm pretty sure a lot of this was optimized

dominicm16:02:15

they're nowhere near as expensive as they once were.

noisesmith16:02:02

@U0LK1552A I'm trying to think of a case where I'd load new classes in a loop as the most perf critical part of my program

noisesmith16:02:49

I guess if my program is meant to be short-lived, that would in fact be the case, but then why am I even using Clojure on the JVM...

seancorfield17:02:28

If your process is very long-lived, you can hit the stacktrace optimization fairly quickly even if you're only dealing with occasional exceptions. We have that optimization disabled in production for that reason -- and I disable it in dev too because my REPL tends to run for days so I hit that optimization in the REPL too (my main work REPL has "only" been running since last Thursday, but my current open source project REPL has been running for over two weeks at this point!).

👍 6
mikerod17:02:48

Yeah @U051SS2EU I think the general advice that it's probably fine to disable OmitStackTraceInFastThrow makes sense to me too. I think Alex basically said it, but it's probably one of those things that would make more sense left unoptimized until/unless you actually find some unavoidable issue. I find it weird that it's on by default. That said, all production apps I've run have always left the default on - and we've had a bit of pain before with these empty stacks in our logs. 💀

Alex Miller (Clojure team)17:02:08

it might make sense for Clojure CLI to set this when starting a repl

andy.fingerhut17:02:19

If Clojure CLI doesn't do that by default, then this certainly seems like a strong example motivating some kind of "alias that is enabled by default" in deps.edn, or something similar.

Alex Miller (Clojure team)18:02:17

well, could just be hardcoded, doesn't necessarily need that
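
(As a hedged aside, the CLI already lets a user opt in per invocation via its -J passthrough for JVM options -- shown here only as the existing knob, not a proposed default:)

clj -J-XX:-OmitStackTraceInFastThrow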

Alex Miller (Clojure team)18:02:11

Quote from release notes about this flag: "The compiler in the server VM now provides correct stack backtraces for all "cold" built-in exceptions. For performance purposes, when such an exception is thrown a few times, the method may be recompiled. After recompilation, the compiler may choose a faster tactic using preallocated exceptions that do not provide a stack trace. To disable completely the use of preallocated exceptions, use this new flag: -XX:-OmitStackTraceInFastThrow." which is kind of interesting.

Alex Miller (Clojure team)18:02:44

so first, they "bake in" prebuilt stack traces for built-in exceptions! and second, once a particular exception has been thrown enough, it's only on recompilation that the compiler chooses the faster omitted-stack throw.

Alex Miller (Clojure team)18:02:15

which implies a lot more nuanced implementation than what I pictured in my head
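
(A hedged sketch of how one might observe that recompilation behaviour from a REPL; whether and when the JIT switches to a preallocated exception depends on the VM and the run, so this is illustrative only:)

(defn npe-trace-depth []
  (try (name nil)
       (catch NullPointerException e
         (count (.getStackTrace e)))))

;; without -XX:-OmitStackTraceInFastThrow, after enough throws from the same
;; hot site the depth often drops to 0 once the method is recompiled
(loop [i 0]
  (cond
    (zero? (npe-trace-depth)) (println "stack trace elided after" i "throws")
    (< i 200000)              (recur (inc i))
    :else                     (println "still had a full trace after" i "throws")))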

mikerod16:02:43

It seems I’ve only ever seen NPE exceptions affected by the OmitStackTraceInFastThrow default

jumar19:02:44

There are about 5 exception types that are optimised this way. Another common one is ClassCastException

👍 3
mikerod17:02:02

Interesting to know. thanks!

mikerod16:02:55

which is interesting. I’d think it’d apply to more cases

mikerod16:02:02

In practice though, somehow I’ve never seen it

seancorfield18:02:18

Coming from @alexmiller’s comment in a thread on the 1.10.3-rc1 announcement: when I talk to tooling maintainers, they pretty much all say that prepl is too limiting for them to use -- because you can't interrupt evaluation and you can't "partially consume" a potentially infinite sequence.

hiredman18:02:13

I know you are just relaying stuff others have said, but isn't it wild to refer to a tool that bottoms out at calling eval as too limiting?

😄 3
seancorfield18:02:52

🙂 Well, only insofar as you lose control of things if the evaluation/printing "hangs" (runs too long, never completes). Personally, I find unrepl's "helpful" attempts to limit rendering of values more of a nuisance since it doesn't expand enough of a large returned result -- but at least it protects me from accidental infinite sequences or hung evaluation.

hiredman18:02:38

I think I had a discussion, maybe with ghadi, in this channel a while back about interruptible eval. It's kind of a rock and a hard place: users really want it, but all the underlying JVM methods to support it are marked with things like "deprecated, don't use, very bad"

hiredman18:02:06

it is possible to interrupt a prepl eval though, just open another prepl, find the thread, and call that hideous .stop method on it

hiredman18:02:25

and run away before anything finds out it was you

😄 3
hiredman18:02:14

so the usage pattern becomes: open a prepl, ask it for its thread id, then send it stuff to eval, and if you need to interrupt it you have the id
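
(A hedged sketch of that pattern -- illustrative only; Thread.stop is deprecated and unsafe, and the id is just whatever the first connection reported:)

;; on the "work" prepl, before sending anything long-running:
(.getId (Thread/currentThread))
;; => e.g. 23

;; later, from a second "control" prepl, find that thread and stop it:
(let [victim-id 23]
  (doseq [^Thread t (keys (Thread/getAllStackTraces))
          :when (= victim-id (.getId t))]
    (.stop t)))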

seancorfield18:02:23

That's an interesting idea... and I guess handling infinite printing just through the usual print length/depth stuff?

hiredman18:02:14

I dunno, there are some nREPL tickets that discussed that and showed issues with the length and depth settings; I think the solution there was just printing to a fixed-length buffer

hiredman18:02:57

which like, you can use a repl to define a function for that, and then launch a new repl server (in the same process) that serves repls with your new printing function
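
(A hedged sketch of what that could look like, using clojure.core.server/start-server with a hypothetical my.repl namespace and a print fn that bounds output:)

;; hypothetical namespace on the classpath, e.g. src/my/repl.clj
(ns my.repl
  (:require [clojure.core.server :as server]
            [clojure.main :as main]))

;; print at most ~4 KB of any value, on top of the usual dynamic vars
(defn bounded-print [v]
  (let [s (binding [*print-length* 100
                    *print-level*  10]
            (pr-str v))]
    (println (if (> (count s) 4096)
               (str (subs s 0 4096) "...")
               s))))

(defn bounded-repl []
  (main/repl :print bounded-print))

;; a second repl server in the same process, serving repls that print with it
(defn start! []
  (server/start-server {:name "bounded" :port 5556 :accept 'my.repl/bounded-repl}))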

hiredman18:02:55

I think to some degree this is tool writers seeing an existing fixed pattern for how to do this in nREPL and, not wanting to change anything, asking for the same pattern to be available in prepl

hiredman18:02:26

I, of course, am not the man in the arena, so I may be missing a lot

seancorfield18:02:32

I guess you could even set up a new connection for every evaluation as long as you closed out old connections as results came in or got killed...

hiredman21:02:06

https://gist.github.com/1789849d21be38310694dbf214d60d34 is an example; it sends @(promise) to a prepl and then kills that after a second

seancorfield21:02:30

That's not too bad. I can see building something in front of that which always keeps an execution socket process open and a control socket process that can be used to kill/restart the current execution process as a prepl proxy for it...

hiredman21:02:39

it is basically shifting the machinery from the server into the client

seancorfield18:02:28

I'd love to see more tooling based on just plain socket REPL or prepl but it seems that without those features, tooling maintainers are mostly going to stick with nREPL. Even in Chlorine/Clover -- which can connect to a plain socket REPL -- it side-loads https://github.com/Unrepl/unrepl so that it can provide both of those features, but that also adds a lot of complexity that I know Mauricio finds frustrating.

seancorfield18:02:38

I know Rich favors a simple, streaming REPL -- he's mentioned that several times, both in person and on the Clojure mailing list -- but it seems that controlling long-running evaluation/printing is an important feature for developers, which seems somewhat at odds with socket REPL/prepl. Has any thought been given to addressing that in a future release of Clojure? I think Alex has talked about making it easier to get to an editor-connected-REPL state so it sounds like something is in the hammock, at least?

thheller20:02:56

Is this really something that needs to be in "a future release of Clojure"? I mean this can be done perfectly fine in a library. IMHO prepl should have been a library too but oh well

seancorfield20:02:11

@thheller It was more a question of "is this primitive considered sufficient/complete and might it get future enhancements?" -- I only know of two community projects that used prepl: Reveal and Conjure, and the latter has migrated away from it because of these limitations. My understanding of it appearing in core was that there was an expectation more tooling would be built on top of it, but since that isn't happening I'm wondering if it needs to be enhanced in core for it to gain adoption.

Alex Miller (Clojure team)20:02:16

the benefit of both prepl and the socket server being "in the box" means you can rely on always having them available

Alex Miller (Clojure team)20:02:19

which means you can add some Java properties to your clojure program, without touching the program itself, and get access to it externally

Alex Miller (Clojure team)20:02:52

at any rate, that's why they are not libraries
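
(For reference, a minimal sketch of the "add some Java properties" route: any clojure.server.* system property starts a socket server at JVM boot; the server names, ports, and app entry point here are arbitrary:)

java -Dclojure.server.repl="{:port 5555 :accept clojure.core.server/repl}" \
     -Dclojure.server.prepl="{:port 5556 :accept clojure.core.server/io-prepl}" \
     -cp my-app.jar clojure.main -m my.app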

thheller20:02:51

makes sense given how small they are I guess

Alex Miller (Clojure team)20:02:58

I think it's a good long-range bet that prepl will continue to receive attention. Hard to say if it's a good short-to-medium-range bet.

seancorfield21:02:27

Thanks @alexmiller -- I hope that long range attention includes thinking about interruptible execution 🙂