This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2016-10-12
Channels
- # beginners (34)
- # boot (210)
- # cider (16)
- # cljs-dev (65)
- # cljsrn (3)
- # clojars (2)
- # clojure (107)
- # clojure-austin (8)
- # clojure-berlin (10)
- # clojure-brasil (1)
- # clojure-canada (1)
- # clojure-dev (1)
- # clojure-fr (1)
- # clojure-italy (22)
- # clojure-new-zealand (12)
- # clojure-nl (28)
- # clojure-russia (13)
- # clojure-spec (25)
- # clojure-uk (10)
- # clojurescript (109)
- # cursive (18)
- # datomic (44)
- # defnpodcast (1)
- # dirac (4)
- # emacs (2)
- # funcool (1)
- # hoplon (16)
- # jobs (14)
- # lambdaisland (23)
- # leiningen (2)
- # luminus (3)
- # off-topic (7)
- # om (58)
- # onyx (16)
- # proton (6)
- # re-frame (42)
- # reagent (55)
- # ring-swagger (5)
- # untangled (47)
- # vim (9)
seems to me that i’m running into an issue where the environment is loaded twice
http://pastie.org/private/7tkkhiuw8sc7buchkhmwaa shows what i see after invoking boot test
— you can clearly see it is invoking all the logging initializations twice, etc
it seems as if there are two instances running, one being my application, and another being the instance boot is running in
@lmergen boot-test runs your application in a pod (an isolated runtime of sorts) which may cause startup outputs to be printed twice
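A rough sketch of what that means in practice. boot.pod is boot's real pod API; the namespace my.app.core is a made-up placeholder, and this is only an illustration of why load-time side effects run again inside a pod:

```clojure
;; Inside a build.boot, where boot.pod and boot.core are on the classpath.
(require '[boot.pod :as pod]
         '[boot.core :as core])

;; A pod is an isolated Clojure runtime with its own copy of the dependencies.
(def my-pod
  (pod/make-pod (core/get-env)))

;; Namespaces required inside the pod are loaded fresh there, so any
;; top-level side effects (e.g. logging initialization) run a second time.
(pod/with-eval-in my-pod
  (require 'my.app.core))
```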
@danielsz sounds cool, but I’m not sure I understand the differences/implications
can you elaborate?
Hey, been trying to get a Docker container setup for boot but I’m new to both clojurescript and docker! Does anyone have any pointers to get started - there are a few clojurescript docker containers but they don’t seem well explained or maintained, and none for boot
@balupton could you outline what you expect the container to handle specifically?
installation of java, clojure, clojurescript - then attachment of a clojurescript project’s directory to the docker container - such that the project’s team doesn’t need to all maintain or share their own vmware virtual machines
@balupton so you're looking for a development setup if I understand that right?
that is correct - although the GUI editor and everything would of course happen on the host machine outside of the docker container
I don't quite see why you'd want to do that tbh. In any case have you used Boot to compile ClojureScript before?
okay I just found https://github.com/adzerk-oss/boot-clj-docker-image and https://github.com/kozily/docker-cljs/blob/master/Dockerfile - seems the trick is to search github for docker images rather than searching docker’s hub for docker images
as I think perhaps I should try and get a basic setup going before attempting to get a project onto it
yeah, try understanding how boot works and then make it fit into a container. As Java + Boot/Leiningen are the only dependencies for Clojure/Script development I don't see a big benefit in putting it into a container for development but I don't know your situation 🙂
@martinklepsch aha, i think that pod issue might indeed be the case. is it correct that not every task is executed in such a pod?
anyway, in my case, the AWS SDK is being loaded twice, which causes it to register for javax metrics twice, which throws an error the second time
@lmergen pods are a boot feature that tasks may or may not use
so yes, not every task uses pods
@anmonteiro Sure, thanks for asking. The difference between standard Lisp mode and tools.namespace mode is that standard Lisp mode doesn't pretend to "unload" code. It recompiles the namespaces in your project, with new code overwriting old code. Tools.namespace uses remove-ns to "remove" old definitions first, which is good in theory but not in practice, because all remove-ns does is an unmapping, with old definitions continuing to linger in memory. If you're interested in this subject, I wrote a blog post that goes into detail, and I gave a talk about this at ICFP recently. With standard Lisp mode, your REPL behaves like all Lisp and Scheme REPLs, where you "recompile until it breaks". Does that make sense?
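The remove-ns behavior being described can be sketched at a REPL. The namespace scratch.demo here is hypothetical, and intern is used just to stand in for recompilation:

```clojure
;; Evaluated from the user namespace.
(create-ns 'scratch.demo)
(intern 'scratch.demo 'answer 41)   ; first "compile"
(intern 'scratch.demo 'answer 42)   ; recompiling overwrites the same Var in place

;; tools.namespace-style unload calls remove-ns, which only unmaps the names:
(def old-var (ns-resolve 'scratch.demo 'answer))
(remove-ns 'scratch.demo)
(find-ns 'scratch.demo)   ; => nil, the namespace mapping is gone
@old-var                  ; => 42, the old Var still lives wherever it is referenced
```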
@danielsz oh alright, I get it yeah
I’ve read your post and the c.t.n. README so it makes sense
anmonteiro Cool. I switched to Lisp mode, I prefer it. It's less troublesome and you don't need to restart as often as with tools.namespace. You should try it.
I will!
Hmph, I wonder if you are being hit by some c.t.n bug if you often need to restart the REPL (when letting c.t.n remove namespaces)
I haven't had any problems since I started using ctn fork with tns-45 fix
@juhoteperi No, sorry I didn't mean restarting the REPL, I meant restarting the application. With tools.namespace, you need to restart all threads every time "remove-ns" is called, because old Vars remain trapped in threads. With standard Lisp mode this doesn't happen, so you can just recompile and continue working.
Aha. Anyway, I call reset quite rarely, like when I have edited several files. Often I just require the changed ns from the editor.
Yes, that is the standard way of doing things in Lisp REPLs, you just recompile the forms you changed.
What would be the danger of old defs lingering in memory? At least it removes vars from ns-publics
and that is important in our projects
@juhoteperi Good question, and you've put the finger on the benefits of using remove-ns, because it is indeed useful to clear the Vars.
The blog post seems to answer this
is it possible to have a piece of code executed in a library without requiring anything?
@juhoteperi Yes, no real danger, it's a tradeoff.
@danielsz well, it would be for something like https://github.com/borkdude/boot.bundle.edn - when loading it, it could merge some data in a map. It could also be done on require, but I just wondered if such a thing is at all possible without it
@borkdude can you elaborate on what you want?
@alandipert I dismissed the idea already, but it was something like this: in build.boot:
(set-env!
:dependencies '[[boot-bundle "0.1.0-SNAPSHOT" :scope "test"]
[nl.michielborkent/boot.bundle.edn
"0.1.0-SNAPSHOT"
:scope "test"]])
then the second library automatically merges something into a map defined in the first.
Morning boot friends. Having finished writing my aws-cli-wagon to take an end run around dependency hell, I tried using bootlaces to push it to clojars. Unfortunately, it fails:
~/src/aws-cli-wagon[master]$ boot push-release -f target/aws-cli-wagon.jar
Could not sign target/aws-cli-wagon.jar
gpg: can't open `target/aws-cli-wagon.jar': No such file or directory
gpg: signing failed: No such file or directory
The file does exist 😕
And I can install via boot install -f target/aws-cli-wagon.jar
how about boot build-jar push-release?
@borkdude i think you might be able to do what you might want by abusing export-tasks meta on a clj namespace
The build-jar task doesn’t do the aot I need
(deftask build
[]
(comp (aot) (pom) (jar) (target)))
I suppose I could replace target with push-release in a separate bespoke push-my-release task?
maybe boot aot build-jar push-release?
i'm surprised to (re)discover that bootlaces needs target at all
i guess it's been a couple years and target has been deprecated since we made it
I don’t know that it does. I’m surprised bootlaces’ push-release doesn’t work with a specified path
yeah, me too
Or at least that it’s inconsistent with install
Oh, is target deprecated?
err not the target task, but the default behavior of writing to target always
the target task is the new and good way to do it
cool, that was my impression
i guess the push task must be doing something magical
since push-release sends it a nil file arg
https://github.com/boot-clj/boot/blob/master/boot/core/src/boot/task/built_in.clj#L914-L916
Okay, boot aot build-jar target gives me the jar file I want
then in the task body it does some sniffing by .jar extension if file is nil
when you do boot aot build-jar show -f are there jars in the fileset?
but boot aot build-jar push-release fails ha ha
clojure.lang.ExceptionInfo: Assert failed: project repo is not clean
(or (not ensure-clean) clean?)
it refuses to do it if git is in dirty state
~/src/aws-cli-wagon[master]$ boot aot build-jar show -f
Compiling 1/1 sparkfund.maven.wagon.aws-cli...
Writing pom.xml and pom.properties...
Writing aws-cli-wagon.jar...
Installing aws-cli-wagon.jar...
├── leiningen
│ └── wagons.clj
├── META-INF
│ └── maven
│ └── sparkfund
│ └── aws-cli-wagon
│ ├── pom.properties
│ └── pom.xml
├── aws-cli-wagon.jar
└── sparkfund
└── maven
└── wagon
├── aws_cli$_closeConnection.class
├── aws_cli$_get$fn__75.class
├── aws_cli$_get.class
├── aws_cli$_getIfNewer.class
├── aws_cli$_init.class
├── aws_cli$_openConnectionInternal.class
├── aws_cli$_put.class
├── aws_cli$fn__66.class
├── aws_cli$loading__5340__auto____64.class
├── aws_cli.class
├── aws_cli.clj
└── aws_cli__init.class
ha ha c’mon boot, you’re not the boss of my git state 😛
well boot isn't, but this bootlaces thing is a bit overbearing
cool, boot aot build-jar push-release does the trick
With a clean git
And now I have a private s3 wagon that works with boot and amazon sts credentials
excellent!
I am using boot to build what is more or less a multimodule build, a bunch of subprojects together with a single build system. I have a boot task which tests a given subproject in isolation, meaning it doesn't load the other subprojects' stuff into the environment. I would now like to write a test-all-subprojects boot task, which would run each subproject's tests in isolation. It seems like pods would be the way to do this, but...
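A hedged sketch of what such a task could look like, assuming each subproject exposes a hypothetical test-runner namespace with a run-all function (the subproject names here are placeholders, and boot.pod/make-pod, with-eval-in, and destroy-pod are boot's real pod API):

```clojure
(require '[boot.pod :as pod]
         '[boot.core :as core :refer [deftask]])

(def subprojects '[alpha bravo])   ; placeholder subproject names

(deftask test-all-subprojects
  "Run each subproject's tests in its own isolated pod."
  []
  (core/with-pre-wrap fileset
    (doseq [sub subprojects]
      (let [p (pod/make-pod (core/get-env))]   ; fresh, isolated runtime
        (pod/with-eval-in p
          ;; require and run only this subproject's tests;
          ;; ~ unquotes the caller-side value into the pod
          (require (symbol (str ~(name sub) ".test-runner")))
          ((resolve (symbol (str ~(name sub) ".test-runner") "run-all"))))
        (pod/destroy-pod p)))                  ; release the pod's resources
    fileset))
```

Destroying each pod after its run is what keeps one subproject's loaded namespaces from leaking into the next, at the cost of reloading dependencies every time.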
@hiredman have you seen this https://github.com/micha/multi-module-build/blob/master/build.boot ?
2. The existing testing framework's boot task already uses a pod to run the tests, which gets its environment with get-env
that example has dependencies between the modules, which you may or may not have in your own project
one to build alpha, one to build bravo, and the doit task to coordinate launching the other two
alpha is built and its artifacts are emitted to a directory that is on the classpath for bravo's build
and @richiardiandrea is the resident parallel boot build expert 🙂
I wrote some code that scans the subprojects using tools.namespace and figures out the dependencies between them
@micha: is there a way to get the root project directory without passing it around, when using multi-boot?
@flyboarder how do you mean "root project directory"?
The directory containing build.boot
I have a user who reported an error on one of my tasks seems that path is different when they use multi-boot
I'll have to dig more I think
Ok, it's probably not the path itself then
Maybe some inferring going on
Cool thanks!
my normal process is to boot repl. watch repl is not a good idea, because it would keep restarting, right?
It seems to be idiomatic boot to wrap project var names with +, er, pauldrons. Why is that? Just for visual distinctiveness or is there some magic of which I’m as yet unaware?
@donaldball no magic, just silliness
@donaldball I think it's a convention @micha came up with and many just adopted without much thought
it was a thing in lisps: for example http://docs.lfe.io/style-guide/5.html#global-variables-and-constants
@donaldball Earmuffs for * and, er, pauldrons for +? Really? Interesting. Please share the etymology for that word if you can.
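For anyone skimming, the two conventions being compared look like this (the names are illustrative; only the earmuffs have actual semantics, via ^:dynamic, while the pauldrons are purely a visual marker):

```clojure
;; "earmuffs": marks a var intended for dynamic rebinding with `binding`
(def ^:dynamic *db-conn* nil)

;; "pauldrons": a style-only marker some boot projects use for constants
(def +default-port+ 3000)
```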
so I have runboot working pretty nice, except it seems like runBoot returns before clean up actions have been run. I haven't teased out exactly what is happening but when I use runboot to run tests for two subprojects (call them subprojects A and B) I get clean test runs for A and B, but an exception in middle about not being able to load the namespace that subproject A loads as part of initializing the pod where tests are run.
(Pauldrons are the shoulder parts of some kinds of armor)
the test framework uses a pool of pods and registers a clean action to shutdown the pool
so it looks sort of like the tests finish running, the refresh starts, then runboot exits, then a new runboot starts, which changes the environment, which somehow affects the refresh of that other pod, which causes it to throw an error in the background
@hiredman that makes sense to me and to be honest I don't even know if it's worth handling it...there was a discussion here with @micha about tests in a pod and how slow it can be to load the symbols in a fresh env every time...
it seems like async tasks started while executing boot.App/runBoot should finish or be stopped before runBoot returns, no?
i think one difference between boot-in-boot and boot is that the JVM will wait for certain things before exiting, but boot-in-boot isn't asking the JVM to exit
it breaks the abstraction, because I just can't go calling tasks willy nilly in runboot then
yes, there is an explicit future somewhere there which is kind of "unexpected"
and they do, but it seems like some pod related stuff is left outstanding, specifically, I think, the pod being built as a result of the refresh called by the testing framework's task is still being built after runBoot returns, and somehow (this seems really weird to me, but I am not familiar with boot internals) it is still trying to init with the requires setup from the previous tests, but doesn't have the environment for those requires to work
maybe somewhere the pooled pods are recalculating their environment / classpath when refreshed instead of re-using the previous environment in the pool
@donaldball Yes, but I was wondering how the term became associated with +. Is this your invention, or have you seen this somewhere?
Oh, my own invention
@donaldball Ah, well done!
so @micha you are saying it is not worth working on the line number feature because of cat -n, right? 🙂
You never know what they do until someone points out that your monstrosity is a one-liner in bash
that is why it might make sense to include them I would say, but I am asking first of course ...it is 5 lines of code more or less
i don’t suppose anyone knows why boot would completely ignore the src directory. it just started happening today. i can literally move all the files out of it and the application still runs. i feel like i’m going crazy...
@sekao there is a lot of caching going on in ~/.boot/, this is not entirely surprising, even though I never happened to try that 😄
thanks @richiardiandrea you pointed me in the right direction. i just remembered that earlier today I started pulling the deployed version of my project in my profile.boot. so when i started developing again, the deployed version was still being pulled and used 😂
np probz, yeah profile.boot sometimes gets in the way indeed 😄
@richiardiandrea i would prefer not to add features that are so natural to do separately in the shell
Ok @micha, fair enough 😉