This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2020-04-03
Channels
- # announcements (4)
- # aws (13)
- # babashka (35)
- # beginners (162)
- # boot (8)
- # calva (5)
- # chlorine-clover (15)
- # cider (64)
- # clj-kondo (20)
- # cljs-dev (29)
- # clojars (6)
- # clojure (166)
- # clojure-europe (3)
- # clojure-finland (6)
- # clojure-france (8)
- # clojure-germany (3)
- # clojure-italy (3)
- # clojure-nl (7)
- # clojure-spec (49)
- # clojure-uk (83)
- # clojurescript (39)
- # clojurex (5)
- # core-typed (2)
- # cursive (3)
- # data-science (17)
- # datascript (3)
- # datomic (22)
- # exercism (5)
- # fulcro (3)
- # jobs-discuss (2)
- # joker (2)
- # kaocha (3)
- # malli (26)
- # off-topic (89)
- # pathom (10)
- # pedestal (14)
- # protorepl (14)
- # re-frame (23)
- # reitit (2)
- # shadow-cljs (27)
- # slack-help (10)
- # spacemacs (14)
- # tools-deps (10)
- # tree-sitter (3)
- # xtdb (19)
- # yada (2)
Is there a "Clojure function of the day" app somewhere? I just stumbled upon bounded-count and wondered how many more lesser-known functions there are. An app like that could help surface them.
There's a Twitterbot: https://twitter.com/rcfotd
This was a good talk that did that: https://www.youtube.com/watch?v=QI9Fc5TT87A
There's also a script for it: https://github.com/borkdude/babashka/#print-random-docstring
may be not quite what you're looking for, but someone did a graalvm native-image thing somewhat along those lines: https://github.com/tomekw/cotd
Using the great print-random, I added it to my Emacs init scratch buffer message:
(let ((clj-docstring (shell-command-to-string "docstring.clj")))
  (when clj-docstring
    (setq initial-scratch-message clj-docstring)))
"docstring.clj" is in my PATH. Also made an interactive
fn to call it and show the answer in a new buffer 😃
(defun bk/clj-random-docstring ()
  "Random doc-string into new buffer."
  (interactive)
  (let ((docstring (shell-command-to-string "docstring.clj"))
        (buffer-name "*Clojure Random Docs*"))
    (when (get-buffer buffer-name)
      (kill-buffer buffer-name))
    (get-buffer-create buffer-name)
    (with-current-buffer buffer-name (insert docstring))
    (switch-to-buffer-other-window buffer-name)
    (special-mode)))
Is there some reason NOT to deploy production apps with clojure -Sdeps "{:deps {my.comp/my-app {:git/url " ?
I can think of a lot of reasons why NOT to 😄
- Your app needs access to git (if the repo is private, the server needs git credentials)
- Startup is slower (clone, compile, and then there's the JVM warmup)
- You have no control over the environment (different CLI / JVM versions can give unpredictable results)
- You can't restrict the server's internet access if you deploy like this
- SHAs are not absolute: a rebase / amend can remove the SHA, and it's hard to tell whether that SHA was before or after some version
1 - run clj -Stree on the "build server" and store its .m2 and gitlibs; now the "production server" will run offline
2 - not an issue for me
3 - I control everything in EC2 or K8s
4 - (eval *1)
5 - I trust git SHAs as much as I trust Maven signatures. I can also add an untrusted Maven repo and use a bunch of untrusted jars.
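Point 1 can be sketched with stand-in directories; on a real build server you would run clj -Stree to force resolution and then rsync ~/.m2 and ~/.gitlibs to the production host. All paths, host layout, and the fake cache contents below are assumptions for illustration:

```shell
# Stand-ins for the build and production servers; on real machines
# `clj -Stree` would populate these caches and rsync would ship them.
mkdir -p buildserver/.m2/repository/org/clojure buildserver/.gitlibs prodserver

# Pretend dependency resolution has filled the Maven cache:
printf 'jar bytes\n' > buildserver/.m2/repository/org/clojure/clojure.jar

# Copy both caches; with them in place, the production server can
# start the app without ever touching the network to resolve deps.
cp -R buildserver/.m2 buildserver/.gitlibs prodserver/

ls prodserver/.m2/repository/org/clojure
```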
1 - well, then you're reinventing the uberjar / clone the repo and run the Clojure CLI 😄
presumably you stop the previous version and then run the new one, which involves downloading from git (if it's even reachable from prod) while the app is down. if you create a jar, the data movement happens while the previous version is still up
> if you create a jar the data movement happens while the previous version is still up
What does this mean?
if you do the ci server approach you have different versions ready to go; you aren't stopping the current version, hoping the new version builds correctly on every server, then starting it
Ah right
Kinda similar to loading a massive jar anyway, so most tools have a solution for loading your code first
I would prefer not to run dependency resolution and fetching on each node (avoid the possibility of getting different results)
But I think you can achieve a similar effect by using clojure on a ci server to resolve and fetch once, and push individual deps to servers (skip if already there) and then push a prebuilt classpath
I've even got a poc of that https://gist.github.com/hiredman/d68cafb6aa8cea563c7b77d54f522421
You get the benefits of avoiding a large single artifact and only push changes, while avoiding the pitfalls of resolving and fetching multiple times
Some of the Datomic build stuff is similar to that
(internal stuff)
but the example code is kind of all in one, and doesn't require you to copy everything out to the on disk layout and then rsync it
and using wildcards in the classpath precludes some useful things, like having a single directory containing all the dependencies for multiple different versions
yeah, I take that back, you lose all the best stuff if you just cobble it together via skinny + rsync + wildcards
e.g. you have some directory on your server /code and are running version 1, you want to be able to rsync the new stuff over /code and then switch to version 2, and then roll back to version 1 if needed
so /code is sort of like git's object directory, a pile of everything, and it is the classpath files that tell you which parts of it fit together for which versions
it looks like pack's skinny writes out the version from the deps.edn, which is likely good enough (in the poc I think I sha1'ed everything so versioned by file contents)
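The pile idea can be sketched in a few lines of shell. The jar names, the store helper, and sha1 naming are illustrative assumptions (the linked poc does the real thing):

```shell
# /code-style pile: every artifact is stored under a name derived
# from its sha1, and each "build" is just a small classpath file
# that points into the pile.
mkdir -p code

store() {  # copy a file into the pile, named by its content hash
  sum=$(sha1sum "$1" | cut -d' ' -f1)
  cp "$1" "code/$sum.jar"
  printf 'code/%s.jar\n' "$sum"
}

printf 'dep A bytes\n'  > depA.jar
printf 'app v1 bytes\n' > app.jar

# Version 1 is described entirely by this text file:
{ store depA.jar; store app.jar; } | paste -sd: - > classpath-v1

cat classpath-v1
```

A later version reuses unchanged entries in code/ for free, and rolling back is just launching with the old file, e.g. java -cp "$(cat classpath-v1)" my.main.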
Is there a success story channel?
@neo2551 nice idea. there are enough to warrant a channel
@hiredman outputting a classpath file or a jar with a classpath entry were ideas I had but nobody asked for yet :)
I think it is tricky to do that independent of whatever code actually does the deploy, because the way the classpath is built will depend on how things are deployed. using relative paths works until it doesn't, and you want to java -cp `cat classpath` your.main in some other working directory for some reason
and I guess all the cool kids are deploying docker images and don't care about this kind of thing
Pack also has a mode which generates a docker image and does exactly what we are talking about with the hardcoded classpath.
Isn't it a good idea to clone the GitHub repo on the remote server, build the jar file and deploy it? (Everything on the server) (Using an ssh shell script or #spire)
the way you would usually do that is to have something like Jenkins set up to monitor a git repo; when it sees changes it generates a build (often an uberjar) and then makes it available to whatever is next
the annoying thing with that is uberjars can be rather large, and convey a lot of redundant information for upgrades (most of your deps don't change at all between versions of your code)
so the idea is to always accumulate code (your code, dependencies, etc) and then a "build" is just a small text file that points into this accumulation of code to pull together all the right versions of dependencies
so a deployment can be just pushing whatever new code needs to be accumulated on a server, and a text file describing how to build a version out of all that accumulated code
On the local machine during development we have cached versions of dependencies on the classpath, which are downloaded only on the initial run of the application. So we want similar conditions on the remote server, right?
Having dependencies co-located with the place the application is built.
but I want to do all the resolution and downloading once for all servers to ensure they get the same thing
another solution to the same problem (making deployments smaller and artifacts less redundant) that I have played with is using binary diffs
if I have version 1 deployed and I want to deploy version 2, I take the uberjar for version 1 and diff it with version 2, and send the diff to the servers and have them apply it
I didn't really get far enough with that to get a feel for how much redundant shipping around of dependencies it removes
The way you are saying it, we'll have to sync the folder of dependencies on all servers and make sure each of them is the same.
I don't understand the idea of the diff
So you send the diff of the dependencies of A and B to the servers, because the dependencies of A are already there. Right?
I don't quite understand the idea of diffing uberjars
just like you can diff two text files and get a patch that transforms one into the other, there are tools to do that for arbitrary binary files
After that you reconcile the diffed dependencies between all servers and the current build. Right?
the uberjar contains all the deps, so by diffing uberjars the diff includes dependency changes
So you rsync or send these changes to all servers.
so you do something like diff version1.jar version2.jar
and the output is a patch file, hopefully smaller than version2.jar, which can be used to turn version1.jar into version2.jar
you ship that patch file to all the servers, and apply patchfile version1.jar > version2.jar
then stop version1.jar
and start version2.jar
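That sequence can be demoed end to end with plain text files standing in for the jars; real uberjars are binary, so you'd reach for a binary diff tool (e.g. bsdiff/bspatch, a tool choice the thread doesn't name) rather than diff/patch:

```shell
# Two "versions" of an app; most content is shared between them.
printf 'dep-a 1.0\ndep-b 2.0\napp v1\n' > version1.jar
printf 'dep-a 1.0\ndep-b 2.0\napp v2\n' > version2.jar

# Build a patch that turns version1 into version2 (diff exits
# non-zero when the files differ, which is expected here).
diff version1.jar version2.jar > upgrade.patch || true

# On each server: apply the patch to reconstruct version2 exactly.
patch -o version2-rebuilt.jar version1.jar upgrade.patch

cmp version2-rebuilt.jar version2.jar && echo "rebuilt matches version2"
```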
the binary diff idea is an entirely different approach from what I discussed earlier in the channel; it is just intended to solve a similar set of issues
This is a great idea.
most of the binary diff algorithms are tuned for executables rather than jar files though, so it's hard to say how effective they will be there
There should be Java-specific solutions already in existence
> if you create a jar the data movement happens while the previous version is still up
When upgrading a running application, while stopping version 1 and starting version 2, how do you manage the time in between? Or do you simply drop requests during that window?
that is highly app and deployment specific, but however you answer it the same answer should work for uberjars, accumulations and classpaths, or binary diffs. some answers may even help with some of the downsides of deploying right from git
Running the new version on a different port than the old one, pointing traffic (Nginx) at the new port, and shutting down the old version once the new one is in service. That may be one solution.
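The overlap can be simulated with stand-in processes (sleep plays the role of the app here; the ports and the nginx reload step are assumptions left as comments):

```shell
# Toy version of the port-swap: both versions run at once, traffic
# is repointed, and only then does version 1 stop.
sleep 30 & v1=$!      # version 1, already serving (say, port 8081)
sleep 30 & v2=$!      # version 2 started alongside it (port 8082)

# ...health-check v2, repoint the nginx upstream at 8082, then:
kill "$v1"            # version 1 goes away only after the switch

kill -0 "$v2" 2>/dev/null && echo "v2 still serving"
kill "$v2"            # cleanup for the demo
wait 2>/dev/null
```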
https://www.youtube.com/watch?v=thpzXjmYyGk&feature=youtu.be is the Rich Hickey video that got me thinking beyond uberjars (and similar things) for deployments