2023-02-15
Channels
- # announcements (5)
- # babashka (56)
- # beginners (24)
- # biff (15)
- # calva (7)
- # clj-kondo (12)
- # cljsrn (8)
- # clojure (68)
- # clojure-denmark (1)
- # clojure-europe (55)
- # clojure-norway (4)
- # clojure-spec (9)
- # clojure-uk (2)
- # clojurescript (8)
- # cursive (11)
- # data-science (7)
- # datahike (1)
- # datomic (66)
- # emacs (12)
- # etaoin (3)
- # fulcro (10)
- # graphql (3)
- # hyperfiddle (97)
- # jobs (1)
- # kaocha (8)
- # lsp (3)
- # malli (15)
- # meander (1)
- # off-topic (3)
- # overtone (4)
- # polylith (7)
- # rdf (25)
- # re-frame (4)
- # reagent (14)
- # remote-jobs (1)
- # shadow-cljs (126)
- # sql (30)
- # vscode (3)
- # xtdb (8)
Hi, is it ok to use the wrap-reload middleware from ring in production or is this frowned upon?
That middleware takes a list of source directories to watch. Presumably in production you have a jar and thus no directories at all. In that case there are no directories to watch and it wouldn’t do very much
In our Compojure app, we have a dev environment which uses wrap-reload for practicality, and I was wondering if it presents any security issues once the project is uberjar'd
No, it's just watching a file system with nothing on it. Preferable to strip it out as you compose your middleware, though: it also requires the dep to track namespaces looking for changes, which will spin its wheels finding nothing. But ultimately it's not a security issue.
I think I will still use an alias in deps.edn to remove it from the middleware list when not in the dev environment
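A minimal sketch of conditionally applying the middleware only in dev (the DEV env var and handler name are illustrative; the deps.edn-alias approach above would instead keep this wrapping in a dev-only source path):
(require '[ring.middleware.reload :refer [wrap-reload]])

(defn base-handler [_request]
  {:status 200 :body "ok"})

;; only wrap with wrap-reload when a dev flag is set, so the production
;; uberjar never composes it into the middleware stack
(def app
  (cond-> base-handler
    (= "true" (System/getenv "DEV")) (wrap-reload {:dirs ["src"]})))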
I would go further and say that it's a smell in dev too: it suggests that the application is not repl friendly
Also, if it's really REPL friendly you should just be able to eval a form and your system's behavior changes. You only need wrap-reload if this isn't true, and then the only thing that can change is the version of your code that is saved, since it reloads from disk rather than using the vars defined by you in the REPL.
How would you work with middleware and handlers which take functions as arguments? Redefining them won't change the behavior because they were passed by value, unless you eta-expand them
@UK0810AQ2 You can pass anonymous functions that wrap them (or even just use a Var via #' most times).
I'm very opinionated about REPL-friendly development and these various reload libraries -- I discourage them whenever anyone mentions them (and I don't use them at all). I eval every top-level form as I edit it, including any ns form I change, and I use #' where necessary to support redefinition.
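A minimal sketch of that #' style with Ring's Jetty adapter (the port and handler name are just for illustration):
(require '[ring.adapter.jetty :as jetty])

(defn handler [_request]
  {:status 200 :headers {"Content-Type" "text/plain"} :body "hello"})

;; passing the var #'handler rather than its current value means every request
;; goes through the var, so re-evaluating handler in the REPL takes effect
;; immediately -- no restart, no reload library
(defonce server
  (jetty/run-jetty #'handler {:port 3000 :join? false}))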
Then I can start my app in the REPL and just leave it running while I work on it -- no restarts, no reloads, no breakage due to a "fancy" reload library breaking stuff (which does happen -- they're not a silver bullet).
I was working on one of our web apps yesterday and had it running via the REPL nearly all day while I was adding features and debugging stuff (with tap> -- I ❤️ tap>!!!).
Passing anonymous functions which wrap them, i.e. eta expansion, is usually what I do; it's more compilation-friendly when you use direct linking
Yeah, we don't use direct linking in dev/test but when we AOT compile for staging/production JAR builds we enable direct linking -- and then we live with the restrictions that choice places on REPLs in staging/production (we run nREPL and Portal servers in a couple of our processes in staging/production).
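For reference, a sketch of what that production build step might look like with tools.build (namespaces, paths, and the main namespace are assumptions, not their actual build; resource copying omitted):
(require '[clojure.tools.build.api :as b])

(def class-dir "target/classes")
(def basis (b/create-basis {:project "deps.edn"}))

(defn uber [_]
  ;; AOT-compile with direct linking enabled only for the production build
  (b/compile-clj {:basis basis
                  :class-dir class-dir
                  :src-dirs ["src"]
                  :compile-opts {:direct-linking true}})
  (b/uber {:basis basis
           :class-dir class-dir
           :uber-file "target/app.jar"
           :main 'example.main}))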
AOT made a huge difference to app startup -- reducing some apps' startup from around one minute down to 10-20 seconds -- but we didn't bother to check general runtime. We only added AOT compilation at all to get those startup times down, so our rolling deployments were faster/safer 🙂 [edited to clarify that it was primarily AOT that improved startup time]
(apparently we did a while back and random things broke. will have to experiment later)
Impossible to truly know the effect of volatile reads a priori. Have to measure for each case
I should restate my comment above: not all of that speed-up was due to direct linking, now that I think back to it. I'm not sure we actually measured independently of AOT but I think it made some difference to startup times. AOT made the "huge difference" but I'd need to trawl through JIRA to see when we added direct linking and what timings we did. I know we've gone back and forth on whether to keep direct linking (since it does make REPL-based debugging/patching harder).
I made an issue, and at the end of it was “make sure this tradeoff is worth it, because repl’ing against a prod jar now is weird”
Yeah, now that we have our CI/CD pipeline times down for most changes, simply making a change and redeploying is fast enough that we aren't as concerned about trying to patch something "live" -- and in some ways it's good to discourage modification of production via the REPL 🙂 (although direct linking does also hinder debugging occasionally, due to not being able to simply redef a fn to add tap> or logging or whatever).
we’d never repl to a live prod instance. but nice to grab the actual jar and run it locally
Having nREPL/Portal in a couple of production apps has been great for debugging: start the VPN then hit a hotkey to a) start an SSH tunnel, b) start a browser in VS Code connected to the remote Portal server, and c) start an nREPL connection to the remote server. Then add-tap on the Portal submitter and start eval'ing code from a local RCF to debug stuff and view the data nicely. Then remove-tap when you're done and shut it all down.
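The tap> side of that flow, roughly (tunnel and port details omitted; portal.api/submit is Portal's documented submit function):
(require '[portal.api :as p])

;; route tap> values to the (local or tunnelled) Portal UI
(add-tap #'p/submit)

;; anything tapped now shows up in Portal as inspectable data
(tap> {:request-id 42 :status :ok})

;; detach when you're done debugging
(remove-tap #'p/submit)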
I blogged about it and shared a version of the Joyride script I used to automate it.
https://corfield.org/blog/2022/12/18/calva-joyride-portal/ in case anyone wants to see how that particular sausage is made.
In case you want a view of how you would handle reloadable http servers without wrap-reload or tools.namespace refresh or similar utilities, I have an article which goes over reloadability that I feel could be useful to you: https://srasu.srht.site/var-evaluation.html
(also @U04V70XH6 is it fine for me to just share something like that in a case like this where it feels perfectly applicable? It feels weird because it seems like self-promotion, but simultaneously it's exactly what I'd want to say on the subject and it says it better than I could pulling it out of a hat, and it's not like I'm making ad revenue off it)
I just shared one of my blog posts in the thread so it seems eminently reasonable for you to share a relevant one of yours @U5NCUG8NR! 🙂
I remember reading that when you wrote it last year and thinking "Oh, good article..." until I got to the gnarly macro at the end 🤣 but you do warn people not to use it in general...
I just have this tiny macro lying around for such cases:
(defmacro $ [sym]
  `(fn
     ([x#] (~sym x#))
     ([x# y# z#] (~sym x# y# z#))))
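For context, a route using it might look like this (the route shape and foo are placeholders, matching the reitit example further down):
;; ($ foo) expands to roughly (fn ([x] (foo x)) ([x y z] (foo x y z))),
;; covering Ring's 1-arity (sync) and 3-arity (async) handler signatures
["/hello" {:get {:handler ($ foo)}}]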
Yeah, I absolutely hate that macro I wrote, and I hate that I saw someone on hacker news say they were going to use it. I'm considering redacting it from the article entirely, but I genuinely do think it has some value in short-form teaching, and so I don't want to remove it.
@UK0810AQ2 what kinds of code does that work for that #' wouldn't work for?
It doesn't introduce a volatile read when you use direct linking. It should get inlined by the JIT in production and give you a reloadable experience at dev time by way of (ab)using the compiler's value/reference semantics
Ah, that makes some sense. I'd be curious what kind of impact it would make on performance.
basically exercise a server with wrk with a trivial handler passed once by value and once as a var
@U5NCUG8NR
> I'm considering redacting it from the article entirely, but I genuinely do think it has some value in short-form teaching, and so I don't want to remove it.
At one time I actively stopped putting example code in blog posts because folks would copy it without understanding it and then complain to me that "your code doesn't work" when they were using it in a context that just wasn't applicable. These days, I'm really careful to explain code with caveats, but I still put a lot fewer examples in my blog posts than some people think I ought (#1 complaint is "you don't show (enough) code so I don't understand your post").
That's actually part of why I deliberately made my example webserver use real libraries but refer to non-existing code, so that it was clear that I wasn't trying to build something that could be copy/pasted.
The only downside is now people can't experiment with it in a repl without understanding what I was writing.
@UK0810AQ2 have you written about that macro? I’d love to read the thinking and explanation behind it
I should write something, but the tldr:
When you write a reitit handler, a specific route looks like ["path" {method {:handler foo}}]
When you write it like that, foo is always passed by value
If you pass a function which calls foo, the emitted instruction under indirect linking will include a var dereference, which leaves foo dynamic with respect to redefinition
The difference between passing foo as a var and this method is that, under direct linking, the emitted bytecode will be a call to foo's invokeStatic method, and that call in the surrounding function will probably be inlined, so there is zero performance overhead
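A small sketch of the two handler styles being compared (using the reitit.ring API; foo is a placeholder handler):
(require '[reitit.ring :as ring])

(defn foo [_request] {:status 200 :body "hi"})

;; passed by value: the router captures foo's current function object,
;; so redefining foo later does not affect the running app
(def app-by-value
  (ring/ring-handler
    (ring/router ["/hello" {:get {:handler foo}}])))

;; eta-expanded: the call to foo inside the wrapper is resolved at invocation
;; time -- through the var normally, or as an (inlinable) invokeStatic call
;; under direct linking
(def app-wrapped
  (ring/ring-handler
    (ring/router ["/hello" {:get {:handler (fn [req] (foo req))}}])))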
I need to look into how the inliner works more, because the max depth of 9 that it has by default, plus the fact that Clojure function calls usually end up being two layers deep with invoke and invokeStatic, make me feel like that's an important thing for me to be able to think about when writing Clojure code.
> When you write a reitit handler
And reitit doesn't allow Vars there, right?
@U5NCUG8NR we should explore the impact of modifying MaxInlineLevel for Clojure applications
@UK0810AQ2 everything I've ever heard about optimizing things on the JVM says that you should never modify MaxInlineLevel because there are other optimizations and the like which rely on it being set to the default. I still think it'll be worth testing, but having heard all that makes me dubious of the performance benefit it will have, and honestly maybe even of its correctness.
I haven't seen those caveats, do you happen to have any reference? I'd be happy to read it. WRT correctness I wouldn't worry, the C2 compiler is good enough to not make those mistakes. In any case, the defaults are correct for the typical java application. Are they correct for the typical Clojure application? Without searching the configuration space, like in https://www.youtube.com/watch?v=RG9Ne2tkRuQ, we're guessing. There are several changes I think a Clojure application might benefit from, including bigger Eden area, bigger TLAB, more aggressive inlining, etc, but I don't touch those flags because I have no idea and haven't tested it yet. It's certainly something I'm interested in knowing, though, so in case anyone reading this wants to fund performance research in Clojure, hmu 😸
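A deps.edn alias like this would be one way to run such an experiment (flag values are illustrative, not recommendations -- measure each change):
;; in deps.edn
{:aliases
 {:jit-experiment
  {:jvm-opts ["-XX:MaxInlineLevel=15" ;; bump from the default of 9 mentioned above
              "-Xmn1g"]}}}            ;; larger young generation (Eden), as mentioned above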
That makes sense. Unfortunately I don't have a reference to where I saw that before. I've been trying to keep references to things more recently, but that's a new habit for me.