
Unless you have something big worthy of mesos, I struggle to see the benefit of anything beyond git+maven. Uberjars feel very slow and cumbersome to me, can't see the value myself


how do you transfer your app + dependencies onto the target machine if not an uberjar?


you just run mvn on the target?


I see uberjars as a historic relic from the J2EE era


If the host has access to the internet or a maven repo proxy, I run lein run, or boot run.


I don't understand why dev and ops have to have such different environments. @jarohen convinced me of this at DB. I've never seen a strong argument for having two separate camps. It just creates silos and makes support more complex


DB=Deutsche Bank, sorry


I've become a devops skeptic. :)


@malcolmsparks: you just need to pick the good bits of any process. just adopt what makes sense in your environment.


that is my take on it.


malcolmsparks: I tend to like having a deployable artifact I can point to, but if you have a git tag + reproducible builds I guess it’s not a big difference


although I like knowing that my n nodes all use the code built from CI, rather than re-building each one


agreed that dev/ops environments should be very similar, I generally think of that as a devopsy principle


@malcolmsparks: that approach definitely works too... it does of course require you to give the repo to the client - which may or may not be an issue... and requires lein to be installed on the server... also if dependencies are pulled onto the server they might not be repeatable (and might include snapshots etc...) I quite like uberjars because they're a single artifact; they boot quicker than lein ..., require only one JVM, and I find that building uberjars can help catch some classes of errors... But you make a good point about keeping the environments more similar...


I think push based deployment models are also more secure


@rickmoynihan: I think the quicker boot is a +ve. When I'm rapidly iterating, the time it takes to build and distribute an uberjar feels overly burdensome. I'm happy to take a slight perf. hit on startup, but ymmv. But you don't need 2 JVMs for lein (by using lein trampoline), nor with boot (because of pods). Snapshots are often forbidden by the repo manager (not allowing a release that contains snapshot dependencies - does Clojars allow this?). Which errors does building uberjars catch? I don't have much experience with them to know. The one thing I /really/ don't like about uberjars is the way that LICENSE files and other artifacts don't survive the lossy munging involved. If a third-party lib chooses to ship its LICENSE in its distribution, who are we to remove it? Is that a sound legal basis for code distribution? It might 'work' from the code's point of view, but legally it's on dodgy ground, and whatever your views on that are, I don't think the uberjar process should be the 'standard' way to distribute software.


@malcolmsparks: Agreed on it not being the standard way; I don't think there should be a one-size-fits-all approach to this... I wasn't aware of the license stripping... but here I'm really talking about deploying to servers I own - so those issues aren't relevant for my normal use case. lein trampoline is possible of course - but it does contaminate the classpath, which may not be acceptable. I have our Jenkins set up to uberjar every project - simply because it forces the namespaces to be compiled from the main method... which is a path that isn't typically exercised by unit tests and lein test, so it can catch some simple syntax/compile errors - or code executing at compile time instead of runtime... e.g. things like (def foo (future ...)) which are stylistically bad. It's no substitute for tests of course - but it can cause builds to fail sooner. I suspect if you use lein run you'll get this benefit anyway - though lein run may not be something your CI executes. Like I said, I really don't think there's much difference between the approaches - I suspect the main reasons to choose one over the other come down to push vs pull


I like your argument about the extra testing achieved by lein uberjar - I hadn't thought of that. Might be a good idea to run it anyhow, even if you don't deploy the actual artefact


If there was a lein jar+repo which created a zip of everything including the maven repo subset, that would be great
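e.g. something in this shape - just a rough sketch of the bundle layout (the jar and repo contents are faked here for illustration; in reality you'd produce the repo subset with something like lein pom && mvn dependency:copy-dependencies -Dmdep.useRepositoryLayout=true):

```shell
# Fake an app jar and a maven-repo-layout subset of its dependencies,
# then bundle both into a single tarball you can ship to the target box.
mkdir -p bundle/repo/org/example/dep/1.0
touch bundle/app.jar bundle/repo/org/example/dep/1.0/dep-1.0.jar
tar -czf app-bundle.tar.gz -C bundle .
tar -tzf app-bundle.tar.gz    # lists app.jar plus the repo subset
```

On the target you'd untar it and point the JVM/maven at the local repo dir - no internet or repo proxy needed.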


in the case of things like (def foo (future ...)) you'll usually spot the problem by the compile not terminating... rather than a hard failure


it's really not a lot of extra testing mind - but I have found that it helps catch cases where you've pushed without running/building the app


I don’t really like the idea that each of my servers is building the software from scratch


it’s less of an issue in Javaland, but with other toolchains it means I need build dependencies on servers whose only job is to run the software


When I think about it logically, I can’t see why it should be a problem


but for some reason I still feel happier knowing I’m deploying something that was “built” on CI, and then distributed to nodes


regarding cons of using pull methods... they increase your surface area to potential attack vectors like this and others:
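one common mitigation if you do pull artifacts is to verify a checksum published out-of-band by CI before running anything - a minimal sketch (the artifact and the checksum source are stand-ins here, not from any real pipeline):

```shell
# Verify a pulled artifact against a checksum published separately (e.g.
# by CI alongside the release) before running it. app.jar is a stand-in.
printf 'fake jar bytes\n' > app.jar
EXPECTED=$(sha256sum app.jar | cut -d' ' -f1)   # in reality: fetched from CI, not computed locally
ACTUAL=$(sha256sum app.jar | cut -d' ' -f1)
if [ "$EXPECTED" = "$ACTUAL" ]; then
  echo "checksum OK"
else
  echo "checksum MISMATCH" >&2
  exit 1
fi
```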


If I'm developing quickly, and/or pushing out something for users to comment on, I want my deploy to be fast. By fast, I mean, ideally, sub-second. At least as fast as I can push a commit to a server.


I suppose that further downstream, nearer to 'proper' production, I'd value the features of uberjars more - as @glenjamin has made a good case for. I suppose it's about trade-offs - what I can't cope with is waiting for an uberjar to build and deploy on /every/ change. I've seen some development teams do that and it feels anti-lisp to me.


Lately, I've been working on improving edge with @dominicm. What I like about edge is that, thanks to #C053K90BR, any change you make, to any file (sass, cljs, html, edn...) causes the system to reload and the browser to display the result. And if you change a .clj, you're just a reset away from seeing the effect. (I'd someday like to build in a push-to-deploy receive-pack trigger to cause a remote system to reset itself)


That begins to feel like what we used to have back in the day (rapid application development) and that has been virtually killed off by 'continuous delivery' regimes. I know that's not the intent of CD (it's trying to increase agility, not kill it), but I've seen the pendulum swing that far on some projects.


So, tl;dr, sub-second deployments please :)


I don't think I'm disagreeing with anyone here, it's just we all make different trade-offs depending on our context. In mine, I want to bring back that 'conversational development' style with clients, whereby you can code 'over the phone' or in a meeting and get fast feedback


That's a compelling upside


One of my main arguments against docker or VM images for deployments is you'll never get down to the speed of just jars


And I do a bunch of work in Dev to see sub-second feedback - but have mostly settled for "a few minutes" from dev to prod


yeah docker is certainly slow to commit, push, build on ci, deploy to docker hub, and pull onto the box... that's for sure! Uberjars are a lot quicker - certainly not sub-second - but a lein uberjar && scp target/my-uber.jar production:/opt/app.jar && ssh production "sudo /etc/init.d/app restart" could certainly take less than a minute - a little longer if you go via a CI server
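spelled out as a script, it's something like this - the host, jar path and init script are made-up examples, and with DRY_RUN=1 (the default here) it just prints the commands instead of running them:

```shell
# Push-based deploy, split into steps. Host, paths and service name are
# illustrative. DRY_RUN=1 (the default) prints each command; DRY_RUN=0
# actually executes it.
run() {
  if [ "${DRY_RUN:-1}" = "1" ]; then
    echo "+ $*"
  else
    "$@"
  fi
}

run lein uberjar
run scp target/my-uber.jar production:/opt/app.jar
run ssh production "sudo /etc/init.d/app restart"
```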


malcolmsparks: but I do take your point - and on staging servers in the past I have had that conversational style of development by just using cider-connect to connect an nrepl over ssh to the remote host


I've found that very useful for developing and testing things in a production-like environment - with bigger data etc...