This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2015-12-25
Channels
Our big ol’ Ant script no longer uses Leiningen anywhere… just Boot… which has simplified a lot of things (although simulating our homegrown "with-browser" plugin for WebDriver testing was… interesting!).
@seancorfield: are you running on windows?
Our CFML applications now use Boot instead of Leiningen to figure out their classpath. The only thing left now is our test suite which still relies on the old Leiningen classpath logic in the CFML/Clojure bridge.
Mac OS X for DEV, Linux for CI onward.
I only use Windows for casual use in the evenings (I have a Dell XPS 12 Convertible — tablet / ultrabook — which is great for sitting in front of the TV and dabbling or sitting in bed with).
Did something specific change on Windows? Or do you need something tested there?
Yes. Fast Ring Insider builds
Been on Windows 10 since February...
Boot has been behaving well on that laptop, to be honest.
The only weirdness is that boot repl hangs in Git Bash (but works just fine in the regular Command prompt).
All the other Boot stuff seems to work fine in Git Bash, except for the REPL.
Any more thoughts on the lein template stuff?
Do you think a boot new task would be worth having?
Seems like leveraging all the existing lein template ecosystem would be better?
Next week I’ll take a look at a couple of lein-templates I’ve written and see what those might look like in the Boot world… Being able to run existing lein-templates or something similar (boot-templates) seems worth exploring…
I know.
But I bet if Boot could run existing lein-templates, it would encourage template maintainers to add build.boot files to those templates...
so boot new something myproject would download and run the something/lein-template project and the result could easily have both project.clj and build.boot (how much of build.boot could be autogenerated from project.clj?)
i can imagine maybe it would be uncool if people made lein-templates that don't work with leiningen, or aren't maintained for lein
Right, I would expect boot new something myproject to look for something/boot-template first, and fall back to the Leiningen template if that can’t be found.
So it would know how to run existing lein templates but would primarily know about new Boot templates (so we’d have to figure out what that code should look like).
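A very rough sketch of what that lookup might look like (everything here is hypothetical: the task name, the "RELEASE" version trick, and the elided generation step), just to make the fallback idea concrete:

(require '[boot.core :refer [deftask get-env with-pass-thru]]
         '[boot.pod :as pod])

;; Hypothetical sketch: prefer a Boot template, fall back to the Leiningen one.
;; Actual template generation is elided; this only shows the resolution order.
(deftask new-project
  [t template NAME str "template name, e.g. \"something\""]
  (let [try-resolve (fn [artifact]
                      (try (pod/resolve-dependencies
                             (assoc (get-env) :dependencies [[(symbol artifact) "RELEASE"]]))
                           artifact
                           (catch Exception _ nil)))]
    (with-pass-thru _
      (if-let [tmpl (or (try-resolve (str template "/boot-template"))
                        (try-resolve (str template "/lein-template")))]
        (println "would generate the project from" tmpl)
        (println "no template found for" template)))))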
I’m curious enough to have a go at a prototype regardless
Oh, and while I think about it, I found a case where boot-test doesn’t compose nicely because it sets up the pod dependencies outside the with… middleware call. I’ll send a PR for it once I have a simple repro test.
I ran into that today and had to modify how I was setting up my testing context. TL;DR: if you do (comp (with-pass-thru fs (set-env! … testing dependencies …)) (test)) then the tests don’t see the dependencies all the time.
Right, but I shouldn’t have to do that.
If I’m composing "anonymous" tasks that isn’t so easy.
Because (comp (sometask) (someothertask) (with-pass-thru fs (set-env! …)) (test)) should "Just Work" without worrying about lifting the set-env! call.
Then perhaps what you want is a with-… macro that supports that better so anonymous task composition is cleaner?
I think the (do … identity) thing is really ugly and non-intuitive (remember: you work with Boot all the time — I’m coming from the p.o.v. of a relatively new user).
That’s still ugly.
Don’t know yet. Will give it some thought.
I got caught out because the comp pipeline I used for boot-expectations didn’t work for boot-test and it took me a while to figure out why...
Well, then you can’t compose tasks where you don’t want the set-env! to affect earlier parts of the composed pipeline...
one of the reasons it's good to keep set-env! out of the pipeline is that any step of the pipeline shouldn't see the effects of the steps after it
Hmm… OK… I can see that being a valid reason…
like a task can aggressively cache things, because it can be sure that the state can only change if the fileset is different
but if the state includes the mutable portion from tasks after it, it can no longer safely do that
I still think this is a big gotcha for folks new to Boot. It held me up for a couple of hours this morning because I couldn’t see why (comp (testing) (expectations)) worked but (comp (testing) (test)) did not — essentially — and it was due to boot-expectations creating the pod inside the with-… macro and boot-test creating it outside.
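To make that concrete, a minimal sketch of the two shapes being compared (task names and pod bodies are made up; only where the env is read differs):

(require '[boot.core :refer [deftask with-pre-wrap get-env]]
         '[boot.pod :as pod])

;; Sketch of the boot-test shape: the env is read in the constructor,
;; which runs before a later (with-pass-thru _ (set-env! ...)) ever fires.
(deftask test-like []
  (let [worker (pod/make-pod (get-env))]      ; env captured at stage 1
    (with-pre-wrap fileset
      ;; ... run tests in worker ...
      fileset)))

;; Sketch of the boot-expectations shape: the env is read inside the
;; handler, so it sees set-env! calls made earlier in the same run.
(deftask expectations-like []
  (with-pre-wrap fileset
    (let [worker (pod/make-pod (get-env))]    ; env captured at run time
      ;; ... run tests in worker ...
      fileset)))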
And there’s nothing in the docs to explain this subtlety… at least not that I could find.
it should be noted here maybe: https://github.com/boot-clj/boot/wiki/Tasks#task-anatomy
Well, that does mention "Local state" but it doesn’t call out where set-env! should or should not be called.
And you get subtly different behavior trying to compose tasks depending on when set-env! is called (unfortunately).
Essentially you’re saying set-env! should never be called inside a with-… macro I think?
Well, in (comp (task-a) (task-b) (task-c)) you actually run "[4] Local state" first, then you compose, then you run.
so three stages.
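Roughly, for a single task (a sketch of the anatomy described on that wiki page):

(require '[boot.core :refer [deftask]])

(deftask task-a []
  ;; stage 1: constructor / "[4] Local state" -- runs for every task in the
  ;; pipeline as soon as the (comp ...) form is evaluated
  (let [state (atom nil)]
    ;; stage 2: middleware -- runs when comp wires the handlers together
    (fn [next-handler]
      ;; stage 3: the handler -- runs once per build (many times under watch);
      ;; a set-env! made here is only visible to code that reads the env later
      (fn [fileset]
        (next-handler fileset)))))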
For users who are used to writing their own transducers, perhaps that is an appropriate analogy 😉
This is important for the docs, since it determines whether a task later in the pipeline might succeed or fail: https://clojurians.slack.com/archives/boot/p1451012348006025
Actually, no. Look again at where the state is created...
…inside the fn
Bottom line: drawing a parallel with transducers is not going to be very helpful for a lot of Clojure users.
I think the docs should at least be clear that (with-… (set-env! …) …) is a bad idea
and going back to boot-expectations, do you think this code should change? https://github.com/seancorfield/boot-expectations/blob/master/src/seancorfield/boot_expectations.clj#L35-L44 (to move the with-pass-thru below the creation of the pods)
And then boot-expectations would have the same "restriction" as boot-test that any preceding set-env! would need to occur in "stage 1" of the pipeline (before composition)...
aren't you creating a new pod pool each iteration if the watch task for instance is running the pipeline multiple times?
Apparently so. That wasn’t clear from the docs (or from our discussions when I was writing the task and getting feedback here)
https://github.com/seancorfield/boot-expectations/issues/8 <— I’ll fix it, and then update my build.boot at work to "do the right thing" with set-env! …
It’s heavily borrowed from boot-test TBH.
Sure. Turnabout is fair play!
And then I must go feed the cats!
i'm using the jtds driver with windows sql server, and in sql server date types are stored in UTC always (there's no timezone info whatsoever in the database)
when i perform a query that has a column of type TIMESTAMP i get a java.sql.Timestamp object
Sounds like this section of the docs might help? http://clojure-doc.org/articles/ecosystem/java_jdbc/using_sql.html#protocol-extensions-for-transforming-values
java.jdbc itself doesn’t do anything with the values that come back from the driver (as far as I know)
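For reference, the shape of the extension that doc section describes, sketched for java.sql.Timestamp (assumes clojure.java.jdbc 0.3.x+ and clj-time on the classpath; whether you want to do this at all is a separate question):

(require '[clojure.java.jdbc :as jdbc]
         '[clj-time.coerce :as tc])

;; Sketch: convert java.sql.Timestamp values as they are read from the
;; ResultSet, so queries return Joda DateTimes (UTC-based, derived from the
;; timestamp's millis) instead of raw Timestamps.
(extend-protocol jdbc/IResultSetReadColumn
  java.sql.Timestamp
  (result-set-read-column [v _rsmeta _idx]
    (tc/from-sql-time v)))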
i have bad feels about adding my own transformations to something as important as dates
We use MySQL at work and have all our servers ntp sync’d and set to UTC so everything — app, db, O/S — all know it’s UTC time.
As far as I know, the only way to reliably use UTC end-to-end is to have everything running in UTC… ?
I don’t think you’re correct. Joda Time is UTC, java.util.Date is local time by definition.
That’s been my experience over the years, working with dates in Java and databases.
Well, I’m PST and java.util.Date locally matches my own time. On servers on the East Coast, I get EST, unless the server is running on UTC, in which case java.util.Date is UTC.
What TZ is your client? What TZ is your DB? What TZ is the server on which the DB is running?
MySQL, for example, can operate in a different TZ to the server it’s on, which makes life very complicated.
But your client is timezone aware.
i should get some object representing the same absolute coordinate in time as what i inserted in the database
Google shows a lot of questions like yours about timezones so this is clearly a common confusion people have.
What data type are you actually using in the database?
http://stackoverflow.com/questions/409286/should-i-use-field-datetime-or-timestamp/602038#602038 for example
"In MySQL 5 and above, TIMESTAMP values are converted from the current time zone to UTC for storage, and converted back from UTC to the current time zone for retrieval"
I know. I think you’ll find it’s the same.
When I Google this I see similar comments for Oracle, Postgres, etc
All I can say is: this is not an issue with clojure.java.jdbc itself.
This is what everyone has to deal with using date/time in Java with JDBC
Have you tried the MS SQL Server JDBC driver?
java.jdbc supports it (I test with both drivers)
You just have to manually d/l it and localrepo install it.
See https://github.com/clojure/java.jdbc/blob/master/src/test/clojure/clojure/java/test_jdbc.clj#L83-L91 and https://github.com/clojure/java.jdbc/blob/master/src/test/clojure/clojure/java/test_jdbc.clj#L37-L56 for setup.
I have a Windows XP VM running SQL Server Express and hit it via both drivers from my host Mac when testing java.jdbc.
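The db-specs for the two drivers look roughly like this (host, database, and credentials below are made up, and the exact URL parameters depend on the driver version, so check the linked tests):

;; jTDS driver
(def jtds-spec
  {:subprotocol "jtds:sqlserver"
   :subname     "//127.0.0.1:1433/exampledb"
   :user        "sa"
   :password    "secret"})

;; Microsoft's own driver
(def mssql-spec
  {:subprotocol "sqlserver"
   :subname     "//127.0.0.1:1433;databaseName=exampledb"
   :user        "sa"
   :password    "secret"})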
Note that I don’t specifically test a timestamp column in any DB in those tests (such a test would only verify that putting a particular value into a column and retrieving it yielded the same value).
Anyways, must go feed cats.
boot-expectations 1.0.2 released with that pod caching issue fixed.
Now I’m updating our build.boot at work to account for your feedback, Micha
Haven’t tried that part yet. I’ve shaved over a minute off our local build/test script just by using Boot instead of Leiningen (reducing the number of JVM startups!).
Our Ant script is already 100 lines shorter too
Not having to deal with building, installing, and calling two custom Leiningen plugins helps there.
Last part of the first pass done: Leiningen is no longer needed for any aspect of our build process (it’s still invoked in a few shell scripts — they’re tomorrow’s priority).
One thing the discussion about templates made me think about is - how about considering rails-like generators in scope for that? It could be a great boon for a rails-killer we may hopefully get someday.
Also another question I have - I've made a task to sync dependencies with an EDN file when it changes. This of course has the downside of not working when existing dependencies conflict with newly introduced ones (mainly for transitive ones). As far as I understand, pods could be a solution to that, since they introduce classpath separation. Any pointers on how I should approach writing a task which would run the rest of the pipeline inside a pod with appropriate dependencies, which is torn down and rebuilt if they have changed? I'm not sure how to approach that so I don't break things like watch and whatnot.
I mean, I think I sorta see how to do it - do a control task that evaluates the next handler inside the pod, but that would let me use the task only once in the pipeline and not, for example, have different dependencies for parts of the pipeline.
Now, I'm not sure if that's something that would be ever needed, but would that be possible to implement somehow?
So doing it like this doesn't seem to work:
(deftask with-dependencies
[]
(let [dependencies (atom (read-dependencies!))
current-pod (atom (make-pod-with-dependencies @dependencies))]
(fn [next-handler]
(fn [fileset]
(let [new-dependencies (read-dependencies!)]
(when (not= @dependencies new-dependencies)
(util/info "Dependencies changed, updating...\n")
(reset! dependencies new-dependencies)
(pod/destroy-pod @current-pod)
(reset! current-pod (make-pod-with-dependencies new-dependencies))))
(pod/with-eval-in @current-pod
(~next-handler ~fileset))))))
I guess I need to read up some more on that. Hmm, I think I'm probably hitting this restriction here > The expr and result must be able to be printed by pr-str and then read by read-string. So what would be a way to pass the fileset into the pod? Or at least evaluate a boot pipeline inside a pod?
@micha: well, the goal is for dependencies to be updated automatically - the dependencies.edn is just a vector of project dependencies and each time it changes I want to update the dependencies without restarting boot.
So far I'm just using set-env to update the dependencies and it mostly works, until something conflicts.
And I was wondering if there would be a way to isolate that inside a pod I can tear down and create again with new dependencies.
the way you have it set up you add dependencies to the project by adding them to a dependencies.edn file instead of calling set-env! right?
Yes, I remember you saying that. But I thought pods are isolated so that issue could be sidestepped that way?
so to paraphrase: you want to be able to snapshot the classloader state and restore to the snapshot state later if bad dependencies were added etc
Basically I'm just trying to figure out how to run boot if I want to be able to isolate the project dependencies so I can update everything without killing boot.
And the most sensible way seems to be able to run all the tasks in a boot instantiated inside the boot.
the goal of that was to demonstrate how to build multiple separate modules in a project when the modules depend on each other
@jaen the nature of dependency conflicts makes it hard to handle them in a reversible way
the bad jar contains class definitions for some classes, but you need a different version of those classes
but it's transitive, so usually the instances of those classes are in other code you don't necessarily have visibility into
so even if you could remove the jar from the classpath and replace it with the correct version
to make sure you fix the dependency problem you need to rebuild any objects that refer to the bad objects
I understand the problem this poses if I do it outside of a pod for example - I can contaminate boot's main classloader
Well, I mean those I get yellow warnings for when using set-env to inject new dependencies.
yeah so the reason for those warnings is because traditionally you'd declare all dependencies in an xml file or project.clj
the maven machinery can work out dependency conflicts only if it computes them all at once
like if it can see all the dependencies it can work out which transitive deps to choose
that new dependency could have a transitive dep that would have changed which version of another already loaded transitive dep was chosen originally
Yeah, that's the reason I started looking into pods. I'm just wondering if there's a way to tell if adding a new dependency produced a conflict that would require unloading a jar
At least, it doesn't seem to give me yellow warnings each time I try this, just sometimes.
the only issue with the fileset is the java.io.File objects, which you can serialize and deserialize yourself
Because if you look at that example code I've given I basically just want to call next-handler inside the pod I construct. Not sure if your fix would make that possible or not though.
next-handler might be callable from the pod, but you'd need to use the java runnable interface
because if you pass a clojure function to another pod the receiver won't see it as a clojure function
although i discovered that arg can be a clojure object.. if it was previously returned from the same pod
right
the problem with passing clojure objects is that protocol interfaces are constructed dynamically in each clojure runtime
Is that how boot-shim.clj should look?
(defmethod print-method java.io.File [file ^java.io.Writer w]
  (.write w (str "#test/file " (pr-str (.getAbsolutePath file)))))

(alter-var-root #'clojure.core/default-data-readers
                (fn [current-readers]
                  (merge current-readers {'test/file (fn [path] (java.io.File. path))})))
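With that shim loaded, the round trip can be checked like this (path is made up):

;; pr-str uses the print-method above; read-string uses the 'test/file reader.
(let [f (java.io.File. "/tmp/example.txt")]
  (read-string (pr-str f)))
;;=> a java.io.File for the same absolute path (prints as #test/file "/tmp/example.txt")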
I can now shuttle the fileset into a pod, but I can't do the same for the next-handler.
Let's figure something out for this then.
It's hacky but I created a namespace that just does
(ns boot-hack)
(defn invoke-hack [fun]
(boot.util/info "INVOKE HACK\n")
(.run fun))
and I .invoke it on the pod passing the fileset. I just need to figure out why a dependencies change doesn't trigger, but it's a step forward for sure.
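Presumably the calling side looks something like this (a sketch; it assumes the boot-hack namespace is on the pod's classpath, and next-handler/fileset/current-pod are the bindings from the earlier with-dependencies snippet). The trick is that Clojure fns implement java.lang.Runnable, so the pod can call .run on a function it can't treat as one of its own IFns:

(pod/with-eval-in @current-pod
  (require 'boot-hack))

;; Pass a thunk that closes over next-handler and fileset; the pod side
;; only ever sees it as a Runnable and calls .run on it.
(.invoke @current-pod "boot-hack/invoke-hack"
         (fn [] (next-handler fileset)))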
But it seems I'm not taking something into account still, since I'm getting
java.util.concurrent.ExecutionException: java.nio.file.NoSuchFileException: target/public/assets/javascripts/application.out/goog/deps.js
java.nio.file.NoSuchFileException: target/public/assets/javascripts/application.out/goog/deps.js
file: "target/public/assets/javascripts/application.out/goog/deps.js"
when I change dependencies. Maybe somehow I close over a stale fileset.
Well, I've never gotten that error without my pod shenanigans, so it's probably somewhat related; I get that when compiling cljs files.
@micha: Btw. I had an idea related to separating files into multiple artifacts. It might be useful to have the path of the original directory as a field of TmpFile?
Like in less4clj I have three resource-paths and files from each are put into separate jars
Currently I use regex against path inside fileset
But it would be even simpler to say that files from this resource source should be put into this jar
Yeah something like that
I'm still not sure what would be best for task-created files?
like if sift had a mode where instead of just removing files, it would stash them on a stack that you could pop
I think it would be useful for these kind of operations to provide functions which wrap "middlewares"
like the with-files I wrote
(add-tag (cljs) :cljs) that would add :cljs to the meta on all the files produced by the cljs task, which it returns?
Yeah hmm, wouldn't be too useful to tag all files in the fileset after cljs. I think it should tag files created by the cljs task.
I guess that's not as trivial to implement.
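One way it might be done is to diff the fileset around the wrapped task and tag only what it added or changed; a sketch, assuming boot.core's fileset-diff, ls, tmp-path, and add-meta behave as documented:

(require '[boot.core :as boot])

;; Sketch: wrap a task, tagging only the files it added or changed.
(defn add-tag [task tag]
  (fn [next-handler]
    (let [before (atom nil)
          inner  (task (fn [after]
                         (let [changed  (boot/fileset-diff @before after)
                               meta-map (into {}
                                              (for [tf (boot/ls changed)]
                                                [(boot/tmp-path tf) {tag true}]))]
                           (next-handler (boot/add-meta after meta-map)))))]
      (fn [fileset]
        (reset! before fileset)
        (inner fileset)))))

;; e.g. (comp (watch) (add-tag (cljs) :cljs) (target))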
seems like we really want to stash the files somewhere and handle each stash separately?
Yeah I guess this needs more thought
The with-files I wrote works quite well. Each artifact can be created inside one with-files context and doesn't see files from others.
In my case I'm not even interested in filtering task created files, only files from resource-paths
Maybe. I don't yet see what a complete build task would look like with this.
I think some "utility" tasks wrapping other tasks (and creating closures or "fileset context") would be useful instead of adding more and more tasks to "base level" with comp
Yeah I think I'm trying to describe a tree
the target task has a --no-clean option that tells it not to delete all the things in the target before emitting
so you could have total control over how you partition the filesets with simple operations
I mean if you have independent resource dirs that you can process in parallel to get independent outputs - why do we care about the classpath?
so if you have independent resource dirs and you add them in parallel they'll all be on the classpath for all processes
you can run pipelines in parallel using the multi-module pattern, like this: https://github.com/micha/multi-module-build/blob/master/build.boot#L70-L71
the problem with processing sections of a single pipeline in parallel is that when you call commit! on a fileset it replaces the current things on the classpath with the things in the fileset
the code itself is reentrant, there are semaphores and locks to prevent corrupting the filesystem
and the underlying filesystem wouldn't correspond to the fileset object being passed to the next task if another thread called commit! on its own different fileset
i think a lot of the need for parallelization in builds is mostly eliminated by caching in tasks
I was looking at https://github.com/ndmitchell/shake previously
because usually the order in which tasks should run is fairly obvious to the programmer
but if you need to make a system that can do it in the general case it gets really hard
if you know that a task doesn't mutate the environment and is just a pure function that takes fileset -> fileset
@micha tasks which do that may not want to be run in parallel tho
speak for example should only run when a build finishes, even tho its actions can be run in parallel to others
i think it would make more sense to create a parallelizing task
which takes the following tasks and runs them in parallel pods
or something like that
or if tasks could somehow be more context aware
like a pod being told which other pods were started in parallel and the task can sort out blocking? I haven't read much pod code so I'm not sure how it currently works
yeah i was poking around in there
semaphores was an interesting read
interesting
im a bit confused as to how checkout is adding deps to the fileset
could it maybe be the checkout task causing hoplon files to not be seen?
im just poking around, but the hoplon boot task looks fine
ah the repo is kinda large and broken 😛
well when the project builds the first time it finds all the .hl files in my deps, but after the checkout task it doesn't extract or compile them; show -f has the .hl files in the fileset before and after checkout
are the new deps extracted to the same location as the previous jar?
but stopping the build and restarting it builds with the new files the first go, before ever needing to do a checkout
that would be a useful addition to the show task, an option where it can print like -f but show only changed files
i just removed the checkout task
im using the dev-osx directly after watch
I had the lounge.* snapshots in it
like don't have them in both deps and checkout?
ah ok, trying
is there already a task that reads deps from an edn that gets watched?
Just pushed a new implementation for the junit reporter PR using the new target task: https://github.com/adzerk-oss/boot-test/pull/11. target is used to sync the junit reports before throwing an exception in case of test errors/failures. Includes a refactoring in the test task so the test summary is saved as fileset metadata, so it can be potentially used by downstream tasks (someone suggested this idea here, I don't remember who but thanks!)
yeah that's working! thanks micha! i should have tried that sooner 😕
@flyboarder: one thing you can try is making your own version of the checkout task that removes the checkout dep from the env :dependencies in its constructor phase
i thought it was bad for tasks to modify the deps unless in a pod
what the modification will achieve is it will prevent downstream tasks from creating pods that have the checkout dep in the dependencies
basically the place in the checkout task where it adds the dependencies of the checkout dep to the project
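One reading of that suggestion, completely untested: strip the checkout artifacts from the env in the constructor phase and then defer to the stock checkout task.

(require '[boot.core :refer [deftask set-env!]]
         '[boot.task.built-in :refer [checkout]])

(deftask my-checkout
  "Sketch: like checkout, but removes the checkout deps from the project
  :dependencies so pods created by downstream tasks never see them."
  [d dependencies ID:VER [[sym str]] "the checkout dependencies"]
  (let [checkout-syms (set (map first dependencies))]
    ;; constructor phase: runs before the rest of the pipeline is built
    (set-env! :dependencies
              (fn [deps] (vec (remove (comp checkout-syms first) deps))))
    (checkout :dependencies dependencies)))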
ok i'll fiddle some more
@micha nice to hear! what do you think about moving the run-tests stuff to a separate PR first? I'd like to rewrite the commits anyways
I thought about a throw-on-test-failures task or something like that, but wasn't convinced to do it
because you want to get the junit reports when there are test failures or errors, but we throw an exception that prevents the target from being synced
if commits could be organized to be friendly to git bisect and revert that would be a bonus
Sweet. Yes, I think it opens the door for more options for downstream tasks working on the test summary... I remember someone suggested to do that, not sure if it was Martin Klepsch
I even wanted to separate the junit stuff from the run-tests task, but then we need a way to set the clojure.test/report for the run-tests pod... so the pod would need to be started before I guess...
like {:notification/status :warning, :notification/source 'boot-test, :notification/message "Some tests failed, some passed."}
the idea of the meta on the file being that we could establish some general types of notifications that could be handled by the notification task
the info that you're putting in the file could be extracted by that task if it wants to
perhaps actually attaching notification meta to the namespace file would be better, even
i guess the real thing we want to do for now is just add metadata about the notification to the fileset instead of throwing an exception
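So in the handler, instead of throwing, something roughly like this (summary, next-handler, and fileset are stand-ins for whatever the test task actually binds; the :notification/* keys are just the shape floated above):

;; Rough sketch: attach the summary as metadata on the fileset instead of
;; throwing, and let a downstream notification task decide what to do.
(if (zero? (+ (:fail summary 0) (:error summary 0)))
  (next-handler fileset)
  (next-handler
    (vary-meta fileset assoc
               :notification/status  :warning
               :notification/source  'boot-test
               :notification/message (str (:fail summary) " failures, "
                                          (:error summary) " errors."))))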
so, this notification mechanism is something new, or is it already something in boot?
one thing to note: the process should exit with error status (not 0) to signal test failures... that's what the exception does. Are we going to still do that?
yeah if you mark run-tests as experimental then we can change it as we figure out how to do this
Just to be clear, are you talking about leaving the notification stuff out of the scope of this PR, just mark the task as experimental as we continue discussing the notification mechanism?
Or to try and do the new notification mechanism, but also leave it as experimental because it is... experimental?
I'll play a bit with your notification idea, will let you know as soon as I have something, and feel free to chime in in any way
by the way, how do I mark the task as experimental? ^:experimental, or in the docstring?
as long as people know that it might change so they don't get too invested in it if they aren't into experiments
Hmmm... I'm now thinking that maybe we should not mark the run-tests task as experimental, but the notification mechanism. We could add an option to use the new notification mechanism, and that option would be experimental. So by default it would work as it works now: throw exception and do not sync on failure. But it would sync when junit is enabled, and it would not throw when the new notification option is enabled
@micha: https://github.com/flyboarder/boot/blob/master/boot/core/src/boot/task/built_in.clj#L105-L105
if im correct, that line would result in a deps list that has the old dep and the new ones, correct?
ok i think i can work this out 😛
boot-expectations 1.0.3 released — adds --requires to preload namespaces into the testing pod and --shutdown to run specific functions inside the pod when testing is over (we needed this at World Singles).