2016-03-17
# boot
@seancorfield: is it normal that the output of boot-expectations includes this exception info?
Ran 1 tests containing 1 assertions in 20 msecs
1 failures, 0 errors.
clojure.lang.ExceptionInfo: Some tests failed or errored
data: {:test 1,
:pass 0,
:fail 1,
:error 0,
:run-time 20,
:ignored-expectations 0,
:type :summary}
clojure.core/ex-info/invokeStatic core.clj: 4617
clojure.core/ex-info core.clj: 4617
seancorfield.boot-expectations/eval151/fn/fn/fn boot_expectations.clj: 65
...
also it takes ~3s to rerun the tests when i only have 1 sample check (expect nil? nil)
adding the :size 4 option to the pod-pool decreased subsequent runs to ~1.8s, which is still too high for a fluid TDD flow, plus doesn't make sense.
what takes so much time when i have a warmed up pod?
hmm... 2 pods seem to be initialized after every run of the expectations task
(i can see the requires happening twice if i put a (println "requiring into test pod:" r) before (pod/require-in fresh-pod r) in seancorfield.boot-expectations/init)
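For reference, a minimal sketch of how a prewarmed pod pool is wired up (the names and the test entry point below are illustrative, not the actual boot-expectations source):
```
(require '[boot.core :as core]
         '[boot.pod  :as pod])

;; :size keeps N pods warmed in reserve; :init runs once per freshly created
;; pod, so the expensive requires happen ahead of the actual test run.
(def pods
  (pod/pod-pool
    (core/get-env)
    :size 4
    :init (fn [fresh-pod]
            (pod/require-in fresh-pod 'expectations))))

;; each run should only pay for evaluation in an already-warm pod; seeing the
;; :init requires fire on every run means the pool is being rebuilt each time
(pod/with-eval-in (pods)
  (expectations/run-all-tests))  ; hypothetical entry point
```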
Hi everyone, we have been using boot to help build our Docker images with some success, but I would like to know a better way and use boot more correctly. The goal is to build Docker images where the rapidly changing jars are on one layer and the rest of the dependencies are on another layer. The boot file lists the target jar as the only dependency, and then I use the uber task with :as-jars. Finally (in another command) I do some magic to go through the jars that the uber task found and copy different sets into the Docker build context, so when we run docker build it puts things on the correct layers. This approach has serious potential because I can make a Docker image out of any jar in Maven just by pointing to it and knowing the launch command I need to use....
I feel like this is a good direction but very hacky, and I get warnings and such with the latest boot.
Is there a better approach? In this case the filesystem abstraction works against me because I need the files on disk, else I need to do something else; this is the primary reason I split it into two different commands.
Really I would like everything, including building the container (and finding native dependencies and such), done in one place in a library we can just include and use.
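(For illustration, a rough sketch of that jar-partitioning step, assuming uber ran with :as-jars and dumped everything into target/; the jar names are placeholders:)
```
(require '[clojure.java.io :as io])

(def app-jars #{"myservice.jar"})  ; hypothetical: the rapidly changing jars

(defn stage-for-docker
  "Split the jars in jar-dir into deps/ and app/ under out-dir, so the
  Dockerfile can COPY the slow-moving deps layer before the app layer."
  [jar-dir out-dir]
  (doseq [^java.io.File f (.listFiles (io/file jar-dir))
          :when (.endsWith (.getName f) ".jar")]
    (let [layer (if (app-jars (.getName f)) "app" "deps")
          dest  (io/file out-dir layer (.getName f))]
      (io/make-parents dest)
      (io/copy f dest))))

(stage-for-docker "target" "docker")
```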
why do you need the files outside of the fileset? (they are still on disk, even if inside the fileset)
@chrisn: what's the point of separating the rapidly changing jars from the rest of the deps? docker handles updates very efficiently anyway, no? it does a binary diff and all that, iirc.
If, for instance, you have an uberjar, you can look forward to a very large upload every time you build a docker image.
@dm3: I specifically need to do things like unzip the files into other files and such for native dependencies.
Yes, that was the only way I could figure out to get boot to place all the dependencies into one directory.
and also there is a BOOT_LOCAL_REPO env var where you can specify where the local maven repo should be
you just run something which pulls down all the dependencies and you will have them under the folder you specified in BOOT_LOCAL_REPO
you would just need to prime the local maven repo and it wouldn't go out to the network then
but that's definitely an option; the docker build would get boot to gather the dependencies into the docker image and then off you go.
Although then everything would be on one layer and you have the original problem again.
okay, i still don't understand the original problem then. you said your problem is that you would need to upload a full uber jar every time you change something small in it
K. Now, let's say I have one 75K jar I am working on and 100MB of dependency jars, which isn't unlikely.
What I want is the dependencies on one layer and the target jar (or set of jars) on another layer. Furthermore, I want native dependencies placed into a particular directory so that an LD_PRELOAD command can work.
So first you need to, given the set of jars, separate rapidly changing jars from the rest and pull out native dependencies.
so @juhoteperi was doing something around this problem domain.
iirc, he was running boot show -d for example to get all the deps (which unfortunately didn't include the pod-only deps).
that could be your dependency layer.
I feel like the main problem is that I am not using the fileset after the uber task but exiting and restarting boot.
It seems like if I would do the processing on the fileset produced after the uber task then I could do everything on one task except build the docker image.
I currently use java.file.listFiles or something like that and I could do an analogous operation on the current fileset.
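A hedged sketch of what the fileset-based equivalent of listFiles could look like (the task name is made up):
```
(require '[boot.core :as core :refer [deftask with-pre-wrap]])

(deftask inspect-jars
  "Stand-in for the Dockerfile staging step: walk the jars in the fileset."
  []
  (with-pre-wrap fileset
    (doseq [tf (core/by-ext [".jar"] (core/output-files fileset))]
      ;; tmp-file returns the actual java.io.File backing the fileset entry
      (println (core/tmp-path tf) "->" (.getPath (core/tmp-file tf))))
    fileset))
```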
@chrisn: but have you looked into the uber task yet? it's just effectively 32 LoC: https://github.com/boot-clj/boot/blob/master/boot/core/src/boot/task/built_in.clj#L424-L512
that way you can just use the sift :move task to move the jar files into the output fileset, which you can then dump into a directory using the (target) task, e.g. (target :dir #{"somewhere"})
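Roughly, putting that together in build.boot (the :move regex and the target directory are assumptions for illustration):
```
(deftask docker-files []
  (comp
    (uber :as-jars true)                            ; gather deps as whole jar files
    (sift :move {#"^(.*\.jar)$" "docker/deps/$1"})  ; relocate them within the fileset
    (target :dir #{"docker"})))                     ; write the fileset to disk
```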
@seancorfield: my main question above has been solved; the expectations task is fast now. see my PR https://github.com/seancorfield/boot-expectations/pull/11
@micha: would it make sense to have boot.notify accept a msg instead of just a number?
that way for example boot-expectations could say "2 failures and 1 error", or something like that.
@onetom: Yeah, we’ve had several discussions about the error handling / exiting strategy in Boot. I identified four types of exit that might be needed and we only have two right now (success and exception).
I saw the PR notice, haven’t yet looked at it … will try to get to that today …
@onetom: I'm no longer running show -d to retrieve the deps; instead I just run boot run-tests or whatever in the CircleCI dependencies step. this way even the dynamic deps from tasks will be cached by CircleCI
For now it just lets me do a form of literate programming. But I hope to get a ton of options in there eventually.
Yeah, there was some work in that direction but it was never finished/merged
I saw that, calling out to jruby directly, but asciidoctor has a pretty good java library (that uses jruby for you!)
Hi. Is it possible to programmatically load an arbitrary build.boot file and run get-env in that file?
Right but I need to get a key off the boot env through get-env. How would I run that in the context of the arbitrary build.boot?
Yes but how does get-env know which build.boot I am referencing? I am already running a boot project. It's like I need to switch the ns but this needs to be done programmatically, not in the repl.
Hmm.. This idea may not work then. Here is some context on what I am trying to do. I am writing a local build server (similar to lambdacd but using boot tasks). Any time a change is detected in a monitored git repo, the build server will clone the project and execute a boot task in that project (e.g. by running boot build on the command line in that directory). Rather than having the task be hard coded in my builder, I would like it to be optionally specified in the boot env for that project. But I need a way to get that value from that project's boot env.
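A minimal sketch of the build-server side under that design, with the config lookup left abstract (the function name is made up):
```
(require '[clojure.java.shell :as sh])

(defn run-project-build
  "Run the project's build task in the cloned directory, defaulting to
  the conventional boot build when no task was specified."
  [project-dir task-name]
  (let [task (or task-name "build")
        {:keys [exit err]} (sh/sh "boot" task :dir project-dir)]
    (when-not (zero? exit)
      (println "build failed:" err))
    exit))
```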
So then if I require boot.core in my code, run load-file on an arbitrary build.boot, and call get-env, how will it know which build.boot file to use, assuming that multiple previous projects' build.boot files have already been loaded?
why not have your thing just call a specially named task that people would implement in their build.boot?
I thought about that too but then you get the problem of overriding. For example, at my company we have a project that has a whole bunch of default boot tasks that are nice to have in every project. In that project we have a boot task called build which just does boot pom build-jar, if I remember right. Now if I have a new project that requires that boot task project in its build.boot, I can't make another task named build without using replace-task!. Just a little bit of annoying boilerplate.
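The boilerplate in question, sketched (the shared library name is hypothetical):
```
(require '[acme.boot-tasks :refer [build]])  ; hypothetical shared task library

;; a plain (deftask build ...) would clash with the required task,
;; so the project has to wrap it instead:
(replace-task! [b build]
  (fn [& args]
    (comp (pom) (apply b args))))
```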
you have the same problem with the env anyway, since you need to have a specially named key
you can just name the special task whatever you would have named the env key
Right, but then in that distinctive task you would need to call the build task in every project. Seems more effective to just have a specific task set if needed.
So if the build-task is not set in the boot env for the cloned project then the builder will just run the default boot build
Maybe it is better to explicitly force the user to specify how he/she wants that project to be built rather than making any assumptions about the project
you're going to be building projects where people add a marker for you, to help you know how to build them, right?
So then maybe having an edn file would be better. The reason why the specially named task wouldn't work best is that you may need to override the task. For example, say the specially named task was called auto-build. Also say that you are in a similar situation where you have a project with a collection of frequently used boot tasks. In that collection of frequently used boot tasks is one called build. 80% of projects are built by simply using the build task. So in order for the specially named task to work I would need to call the build task in the auto-build task, which would need to be done in 80% of my projects. That seems like a lot of boilerplate. I was thinking why not have the default be to run boot build; however, if the project has specified a different task to build the project then use that task instead.
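A sketch of that edn-file alternative (the file name and key are assumptions):
```
(require '[clojure.edn :as edn]
         '[clojure.java.io :as io])

(defn build-task-for
  "Read the project's build task from an optional config file,
  defaulting to the conventional build task."
  [project-dir]
  (let [f (io/file project-dir "auto-build.edn")]  ; hypothetical filename
    (or (when (.exists f)
          (:build-task (edn/read-string (slurp f))))
        "build")))
```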
I guess it depends on your env. All that stuff is already done for me because we use docker.
It seems like your use case for shell scripts is more for a deploy than building though, true?
and the system should put all the context my program will need in the environment or command line arguments
and it supports even the most complex requirements, because the shell script can do anything
That is true but I think the use case for my project may be different from the one you are thinking of. Though I can see it heading toward that in the future. My project is essentially trying to solve the problem of missing dependencies when pulling a new project from github. For example, my company has lots of private repos that we don't want on clojars/maven/etc and we don't want to maintain a build server. So instead we want a local continuous delivery pipeline to be always running on each of our computers. This will pull and build all the deps from our private repos so that when you actually open a project that uses 20 different private libraries you won't spend 20 mins pulling all the projects from github and building each one individually.
We did that at my previous company and it sort of worked. I distinctly remember consistently getting a stacktrace printed when a project did not exist in S3, which was annoying. Anyways, I did think of doing that, but there are a couple of reasons I wanted to avoid it. One is that the developer still has to remember to deploy it to S3. By using my project we eliminate that problem because the developer only has to remember to push his code. Secondly, we are very protective of IP, so the fewer places IP exists, the better.
So essentially my project eliminates the need for the developer to run a deploy task every time he wants the lib available to other developers. In fact, if a dev forgot to run the deploy task, I imagine you would find yourself in a very similar situation where you would need to pull and manually build possibly several projects.
if we're making a non-compatible change we have some more involved procedures we use anyway, which makes the issue of forgetting something a moot point