#boot
2015-12-25
seancorfield02:12:25

Our big ol’ Ant script no longer uses Leiningen anywhere… just Boot… which has simplified a lot of things (although simulating our homegrown "with-browser" plugin for WebDriver testing was… interesting!).

micha02:12:56

@seancorfield: are you running on windows?

seancorfield02:12:11

Our CFML applications now use Boot instead of Leiningen to figure out their classpath. The only thing left now is our test suite which still relies on the old Leiningen classpath logic in the CFML/Clojure bridge.

seancorfield02:12:25

Mac OS X for DEV, Linux for CI onward.

micha02:12:44

ah ok, cool

seancorfield02:12:12

I only use Windows for casual use in the evenings (I have a Dell XPS 12 Convertible — tablet / ultrabook — which is great for sitting in front of the TV and dabbling or sitting in bed with).

seancorfield02:12:26

Did something specific change on Windows? Or do you need something tested there?

micha02:12:45

i'm just trying to get an idea of where boot is at with windows support

micha02:12:56

windows 10?

seancorfield02:12:08

Yes. Fast Ring Insider builds 🙂

seancorfield02:12:24

Been on Windows 10 since February...

seancorfield02:12:42

Boot has been behaving well on that laptop, to be honest.

micha02:12:53

i think the issues with boot on windows were mostly fixed by changes in windows 10

seancorfield02:12:14

The only weirdness is that boot repl hangs in Git Bash (but works just fine in the regular Command prompt).

seancorfield02:12:47

All the other Boot stuff seems to work fine in Git Bash, except for the REPL.

micha02:12:57

whew! good news

micha02:12:14

i'll be able to look into the repl thing now

seancorfield02:12:59

Any more thoughts on the lein template stuff?

seancorfield02:12:12

Do you think a boot new task would be worth having?

micha02:12:43

we had a few ideas bouncing around

micha02:12:53

one of them would be to include the template with the project

micha02:12:05

like for boot-expectations for example

micha02:12:22

you could make a task in that namespace that generates stub tests

micha02:12:27

then a user could do like

seancorfield02:12:49

Seems like leveraging all the existing lein template ecosystem would be better?

micha02:12:53

boot -d boot-expectations new --arg something

micha02:12:19

the new task in your project could use any underlying template machinery

micha02:12:06

or like for hoplon for example

micha02:12:42

we have a boot task that runs the compiler

micha02:12:51

hoplon/boot-hoplon in maven

micha02:12:01

we could make a task named new in that namespace

micha02:12:13

that scaffolds up a hoplon app
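A minimal sketch of what such a task might look like (hoplon/boot-hoplon is the coordinate mentioned above; the file names and contents here are made up for illustration):

(ns hoplon.boot-hoplon
  (:require [boot.core :as core]
            [clojure.java.io :as io]))

(core/deftask new
  "Scaffold a new hoplon app in the current directory."
  [n name NAME str "the name of the new project"]
  (core/with-pass-thru _
    (doseq [[path content] {"build.boot"        ";; generated build.boot stub\n"
                            "src/index.cljs.hl" "(page \"index.html\")\n"}]
      (let [f (io/file name path)]
        (io/make-parents f)
        (spit f content)))))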

seancorfield02:12:28

Next week I’ll take a look at a couple of lein-templates I’ve written and see what those might look like in the Boot world… Being able to run existing lein-templates or something similar (boot-templates) seems worth exploring…

micha02:12:36

boot -d hoplon/boot-hoplon new --name my-project

micha02:12:52

you wouldn't need a build.boot or anything to do that

micha02:12:15

lein encodes the info about how to find the template in the group-id

micha02:12:19

and the artifact-id

micha02:12:29

but with boot you can just specify the dependency directly

micha02:12:38

which i think is nicer

seancorfield02:12:58

But I bet if Boot could run existing lein-templates, it would encourage template maintainers to add build.boot files to those templates...

micha02:12:12

i bet that can be done

micha02:12:21

generate a project map

micha02:12:26

and call the entry point

micha02:12:41

which is well defined for templates anyway, because that's what the lein task does

micha02:12:27

yeah i like that idea

seancorfield02:12:39

so boot new something myproject would download and run the something/lein-template project and the result could easily have both project.clj and build.boot

micha02:12:12

yep i believe so

seancorfield02:12:42

(how much of build.boot could be autogenerated from project.clj?)

micha02:12:45

i guess it would be polite to ask what they think about this in #C0AB48493

micha02:12:36

i can imagine maybe it would be uncool if people made lein-templates that don't work with leiningen, or aren't maintained for lein

micha02:12:47

confusing leiningen users

seancorfield02:12:51

Right, I would expect boot new something myproject to look for something/boot-template first, and fall back to the Leiningen template if that can’t be found.

seancorfield02:12:40

So it would know how to run existing lein templates but would primarily know about new Boot templates (so we’d have to figure out what that code should look like).

seancorfield02:12:16

I’m curious enough to have a go at a prototype regardless 🙂

micha02:12:35

sweet, i can migrate the hoplon lein-template to the new scheme

seancorfield02:12:43

Oh, and while I think about it, I found a case where boot-test doesn’t compose nicely because it sets up the pod dependencies outside the with… middleware call. I’ll send a PR for it once I have a simple repro test.

micha02:12:41

great, thanks

seancorfield02:12:51

I ran into that today and had to modify how I was setting up my testing context. TL;DR: if you do (comp (with-pass-thru fs (set-env! … testing dependencies …)) (test)) then the tests don’t see the dependencies all the time.

micha02:12:18

one thing you might want to try there, is doing the set-env! before the with-pass-thru

micha02:12:27

in the body of the task definition

seancorfield02:12:36

Right, but I shouldn’t have to do that.

micha02:12:51

that's part of the "lifecycle"

seancorfield02:12:55

If I’m composing "anonymous" tasks that isn’t so easy.

micha02:12:33

a task like that is similar to a lein profile

micha02:12:41

the general pattern for those is:

seancorfield02:12:59

Because (comp (sometask) (someothertask) (with-pass-thru fs (set-env! …)) (test)) should "Just Work" without worrying about lifting the set-env! call.

micha02:12:02

(deftask with-foo []
  (set-env! ...)
  identity)
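(Used like this, the set-env! fires as soon as (with-foo) is evaluated during composition, before the constructor of any task that follows it:)

(comp (with-foo) (test))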

micha02:12:20

you can just wrap in (do ...

micha02:12:35

it's useful to have a more nuanced lifecycle

micha02:12:56

like it's useful to be able to make the pods before the pipeline starts

micha02:12:57

generally i think it's better to mutate global state outside the pipeline

micha02:12:10

like set-env! especially

seancorfield02:12:36

Then perhaps what you want is a with-… macro that supports that better so anonymous task composition is cleaner?

micha02:12:58

(comp (task1) (task2) (do (set-env! ...) (task3)))

seancorfield02:12:18

I think the (do … identity) thing is really ugly and non-intuitive (remember: you work with Boot all the time — I’m coming from the p.o.v. of a relatively new user).

micha02:12:41

you can do like this if you prefer:

micha02:12:03

(comp (task1) (task2) (do (set-env! ...) (with-pass-thru _)))

micha02:12:12

that should work, too

seancorfield02:12:15

That’s still ugly.

micha02:12:46

what would you suggest for macro?

seancorfield02:12:56

Don’t know yet. Will give it some thought.

micha02:12:25

i guess the clearest thing would be to lift the set-env! out of the comp entirely

seancorfield02:12:34

I got caught out because the comp pipeline I used for boot-expectations didn’t work for boot-test and it took me a while to figure out why...

seancorfield02:12:20

Well, then you can’t compose tasks where you don’t want the set-env! to affect earlier parts of the composed pipeline...

micha02:12:43

one of the reasons it's good to keep set-env! out of the pipeline is that any step of the pipeline shouldn't see the effects of the steps after it

seancorfield02:12:14

Hmm… OK… I can see that being a valid reason…

micha02:12:28

that gives you the benefit of all the functional immutability stuff

micha02:12:13

like a task can aggressively cache things, because it can be sure that the state can only change if the fileset is different

micha02:12:28

i.e. given two filesets, if they're the same it can use cached artifacts

micha02:12:48

but if the state includes the mutable portion from tasks after it, it can no longer safely do that

seancorfield02:12:50

I still think this is a big gotcha for folks new to Boot. It held me up for a couple of hours this morning because I couldn’t see why (comp (testing) (expectations)) worked but (comp (testing) (test)) did not — essentially — and it was due to boot-expectations creating the pod inside the with-… macro and boot-test creating it outside.

seancorfield02:12:13

And there’s nothing in the docs to explain this subtlety… at least not that I could find.

micha02:12:54

a diagram would be helpful perhaps

seancorfield02:12:06

Well, that does mention "Local state" but it doesn’t call out where set-env! should or should not be called.

seancorfield02:12:49

And you get subtly different behavior trying to compose tasks depending on when set-env! is called (unfortunately).

micha02:12:49

yeah i'll add a note there

micha02:12:11

yeah the pipeline is built in two stages

micha02:12:17

it's first composed, and then it's run

micha02:12:27

very similar to stateful transducers really

seancorfield02:12:33

Essentially you’re saying set-env! should never be called inside a with-… macro I think?

micha02:12:36

same structure and whatnot

micha02:12:08

well you might do it, but it might not be seen by other tasks in their constructor

micha02:12:14

which is where they'll make pods etc

seancorfield02:12:15

Well, in (comp (task-a) (task-b) (task-c)) you actually run "[4] Local state" first, then you compose, then you run.

seancorfield02:12:20

so three stages.
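Roughly, annotated on a task skeleton (a sketch assuming boot.pod is aliased as pod; do-work-in and some/dep are made-up placeholders):

(deftask example []
  ;; stage 1: runs once, when (example) is evaluated -- before comp;
  ;; this is where set-env! and pod creation belong
  (set-env! :dependencies #(conj % '[some/dep "1.0.0"]))
  (let [pod (pod/make-pod (get-env))]
    ;; stage 2 is the composition itself: (comp (example) ...)
    (with-pre-wrap fileset
      ;; stage 3: runs every time the pipeline runs (e.g. per watch cycle)
      (do-work-in pod fileset)
      fileset)))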

micha02:12:38

same as transducers, no?

seancorfield03:12:11

For users who are used to writing their own transducers, perhaps that is an appropriate analogy 😉

micha03:12:27

i haven't actually used them for real yet

seancorfield03:12:45

This is important for the docs, since it determines whether a task later in the pipeline might succeed or fail: https://clojurians.slack.com/archives/boot/p1451012348006025

micha03:12:51

the top defn could be changed to deftask and you have a task

seancorfield03:12:11

Actually, no. Look again at where the state is created...

seancorfield03:12:16

inside the fn

micha03:12:24

well in that slide yes

seancorfield03:12:12

Bottom line: drawing a parallel with transducers is not going to be very helpful for a lot of Clojure users.

micha03:12:14

that would work with tasks, too

micha03:12:41

it would just run them in reverse

micha03:12:46

i mean reverse order

seancorfield03:12:50

I think the docs should at least be clear that (with-… (set-env! …) …) is a bad idea 🙂

micha03:12:21

that task anatomy section can have an example

seancorfield03:12:56

(to move the with-pass-thru below the creation of the pods)

seancorfield03:12:47

And then boot-expectations would have the same "restriction" as boot-test that any preceding set-env! would need to occur in "stage 1" of the pipeline (before composition)...

micha03:12:48

i think so

micha03:12:10

aren't you creating a new pod pool each iteration if the watch task for instance is running the pipeline multiple times?

seancorfield03:12:09

Apparently so. That wasn’t clear from the docs (or from our discussions when I was writing the task and getting feedback here) 🙂

seancorfield03:12:31

https://github.com/seancorfield/boot-expectations/issues/8 <— I’ll fix it, and then update my build.boot at work to "do the right thing" with set-env!
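For reference, the shape of the fix, roughly the pattern boot-test uses: build the pod pool once in the task body (before composition), then take a fresh pod per pipeline run with :refresh. A sketch assuming boot.pod is aliased as pod; the dependency vector is a placeholder:

(deftask my-tests []
  (let [pods (pod/pod-pool (update-in (get-env) [:dependencies]
                                      into '[[placeholder/test-lib "1.0.0"]]))]
    (with-pre-wrap fileset
      (let [worker (pods :refresh)]   ; recycles old pods between runs
        (pod/with-eval-in worker
          (println "run the tests here"))
        fileset))))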

micha03:12:54

that's some pretty lean code, looks very nice

seancorfield03:12:14

It’s heavily borrowed from boot-test TBH.

micha03:12:28

oh i ran into an issue with clojure.jdbc on friday, can i ask a quick question?

seancorfield03:12:36

Sure. Turnabout is fair play!

micha03:12:40

awesome thanks!

seancorfield03:12:46

And then I must go feed the cats!

micha03:12:24

i'm using the jtds driver with windows sql server, and in sql server date types are stored in UTC always (there's no timezone info whatsoever in the database)

micha03:12:49

when i perform a query that has a column of type TIMESTAMP i get a java.sql.Timestamp object

micha03:12:06

but it's not GMT

micha03:12:41

if i do (.getTime the-timestamp) i get the number of millisec since epoch

micha03:12:10

i could transform the values, yes

seancorfield03:12:19

java.jdbc itself doesn’t do anything with the values that come back from the driver (as far as I know)

micha03:12:22

but it seems scary to me, because the driver should know

micha03:12:44

since this is part of the definition of the datatype in sql server

micha03:12:08

i have bad feels about adding my own transformations to something as important as dates

seancorfield03:12:19

We use MySQL at work and have all our servers ntp sync’d and set to UTC so everything — app, db, O/S — all know it’s UTC time.

seancorfield03:12:48

As far as I know, the only way to reliably use UTC end-to-end is to have everything running in UTC… ?

micha03:12:12

like java.util.Date is by definition UTC

micha03:12:20

and all the things that subclass it

micha03:12:55

there is the deprecated timezone offset stuff

seancorfield03:12:01

I don’t think you’re correct. Joda Time is UTC, java.util.Date is local time by definition.

micha03:12:29

it's a very strange thing, because i'm in EST

seancorfield03:12:31

That’s been my experience over the years, working with dates in Java and databases.

micha03:12:03

if i have a date in the database of 1970-01-01 00:00:00

seancorfield03:12:18

Well, I’m PST and java.util.Date locally matches my own time. On servers on the East Coast, I get EST, unless the server is running on UTC, in which case java.util.Date is UTC.

micha03:12:35

it will come back as "1969-12-31 19:00:00 GMT"

micha03:12:37

it applies the transformation to GMT on a date that's already GMT

seancorfield03:12:07

What TZ is your client? What TZ is your DB? What TZ is the server on which the DB is running?

micha03:12:25

it bothers me that this should matter

seancorfield03:12:35

MySQL, for example, can operate in a different TZ to the server it’s on, which makes life very complicated.

micha03:12:35

when i store a date in the db it's just data

micha03:12:59

an offset in nanosec from an arbitrary absolute coordinate

micha03:12:03

the epoch

micha03:12:13

local time zones etc. shouldn't have anything to do with that

micha03:12:24

like "how far is it from ny to la?"

micha03:12:38

"well where are you now?"

seancorfield03:12:58

But your client is timezone aware.

micha03:12:22

sure, but the data in the database is absolute, not relative to my client in any way

micha03:12:53

so if i make a query from any client i should get the same result

micha03:12:23

i should get some object representing the same absolute coordinate in time as what i inserted in the database

seancorfield03:12:01

Google shows a lot of questions like yours about timezones so this is clearly a common confusion people have.

seancorfield03:12:11

What data type are you actually using in the database?

micha03:12:16

TIMESTAMP

micha03:12:28

which is defined as nanosec since epoch

seancorfield03:12:43

"In MySQL 5 and above, TIMESTAMP values are converted from the current time zone to UTC for storage, and converted back from UTC to the current time zone for retrieval"

micha03:12:12

oh this is the ms sql server

seancorfield03:12:22

I know. I think you’ll find it’s the same.

seancorfield03:12:42

When I Google this I see similar comments for Oracle, Postgres, etc

micha03:12:34

the type has no timezone info associated with it

seancorfield03:12:52

All I can say is: this is not an issue with clojure.java.jdbc itself.

seancorfield03:12:15

This is what everyone has to deal with using date/time in Java with JDBC 🙂
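(One common workaround, for what it's worth: run everything with -Duser.timezone=UTC, or use JDBC's ResultSet.getTimestamp(col, Calendar) overload, which tells the driver which zone to interpret zoneless column values in. A minimal sketch:)

(import '(java.util Calendar TimeZone))

(defn utc-timestamp
  "Read a TIMESTAMP column of a java.sql.ResultSet as a UTC instant."
  [^java.sql.ResultSet rs col]
  (.getTimestamp rs (int col)
                 (Calendar/getInstance (TimeZone/getTimeZone "UTC"))))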

micha03:12:19

yeah i think the jtds driver is bonkers here

seancorfield03:12:37

Have you tried the MS SQL Server JDBC driver?

micha03:12:48

i haven't

seancorfield03:12:50

java.jdbc supports it (I test with both drivers)

micha03:12:57

oh really?

micha03:12:05

i had a hard time getting it to work

seancorfield03:12:10

You just have to manually d/l it and localrepo install it.

micha03:12:18

is there an example in the tests for java.jdbc?

micha03:12:51

i bet the MS driver will do the right thing

micha03:12:57

i hope, anyway

micha03:12:00

amazing, thank you

seancorfield03:12:14

I have a Windows XP VM running SQL Server Express and hit it via both drivers from my host Mac when testing java.jdbc.

seancorfield03:12:58

Note that I don’t specifically test a timestamp column in any DB in those tests (such a test would only verify that putting a particular value into a column and retrieving it yielded the same value).

seancorfield03:12:05

Anyways, must go feed cats.

micha03:12:39

thanks for the tips

seancorfield04:12:09

boot-expectations 1.0.2 released with that pod caching issue fixed.

seancorfield04:12:26

Now I’m updating our build.boot at work to account for your feedback Micha 🙂

micha04:12:41

does it seem any snappier?

micha04:12:58

when doing incremental build/test cycles

seancorfield05:12:06

Haven’t tried that part yet. I’ve shaved over a minute off our local build/test script just by using Boot instead of Leiningen (reducing the number of JVM startups!).

seancorfield05:12:40

Our Ant script is already 100 lines shorter too 🙂

micha05:12:21

removing lines of ant script is satisfying

seancorfield05:12:24

Not having to deal with building, installing, and calling two custom Leiningen plugins helps there.

seancorfield06:12:29

Last part of the first pass done: Leiningen is no longer needed for any aspect of our build process (it’s still invoked in a few shell scripts — they’re tomorrow’s priority).

jaen10:12:45

One thing the discussion about templates made me think about is - how about considering rails-like generators in scope for that? It could be a great boon for a rails-killer we may hopefully get someday.

jaen10:12:42

Also another question I have - I've made a task to sync dependencies with an EDN file when it changes. This of course has the downside of not working when existing dependencies conflict with newly introduced ones (mainly transitive ones). As far as I understand, pods could be a solution to that, since they introduce classpath separation. Any pointers on how I should approach writing a task which would run the rest of the pipeline inside a pod with appropriate dependencies, one that is torn down and rebuilt if they have changed? I'm not sure how to approach that so I don't break things like watch and whatnot.

jaen10:12:54

I mean, I think I sorta see how to do it - do a control task that evaluates the next handler inside the pod, but that would let me use the task only once in the pipeline and not, for example, have different dependencies for parts of the pipeline.

jaen10:12:30

Now, I'm not sure if that's something that would be ever needed, but would that be possible to implement somehow?

jaen11:12:00

So doing it like this doesn't seem to work:

(deftask with-dependencies
  []

  (let [dependencies (atom (read-dependencies!))
        current-pod  (atom (make-pod-with-dependencies @dependencies))]
    (fn [next-handler]
      (fn [fileset]
        (let [new-dependencies (read-dependencies!)]
          (when (not= @dependencies new-dependencies)
            (util/info "Dependencies changed, updating...\n")
            (reset! dependencies new-dependencies)
            (pod/destroy-pod @current-pod)
            (reset! current-pod (make-pod-with-dependencies new-dependencies))))
        (pod/with-eval-in @current-pod
          (~next-handler ~fileset))))))
I guess I need to read up some more on that.

jaen11:12:06

Hmm, I think I'm probably hitting this restriction here > The expr and result must be able to be printed by pr-str and then read by read-string. So what would be a way to pass the fileset into the pod? Or at least evaluate a boot pipeline inside a pod?

micha16:12:20

@jaen: can you explain the dependency EDN file please?

micha16:12:42

like the high level goal?

jaen16:12:28

@micha: well, the goal is for dependencies to be updated automatically - the dependencies.edn is just a vector of project dependencies, and each time it changes I want to update the dependencies without restarting boot.

jaen16:12:49

So far I'm just using set-env! to update the dependencies and it mostly works, until something conflicts.

jaen16:12:48

And I was wondering if there would be a way to isolate that inside a pod I can tear down and create again with new dependencies.

jaen16:12:52

So conflicts would be avoided.

micha16:12:41

you can run boot in boot

micha16:12:47

that might be what you want

micha16:12:08

i don't fully understand the issue with conflicts

micha16:12:54

the way you have it set up you add dependencies to the project by adding them to a dependencies.edn file instead of calling set-env! right?

jaen16:12:06

But sometimes it can so happen that a dependency will pull a transitive dependency

jaen16:12:18

That conflicts with a dependency already pulled by something else

jaen16:12:23

And I thought pods would avoid that.

micha16:12:29

yeah there is no way to remove jars from a classloader

micha16:12:37

also you don't have access to boot.core namespace in pods

jaen16:12:38

Yes, I remember you saying that. But I thought pods are isolated so that issue could be sidestepped that way?

micha16:12:54

yeah that's true

micha16:12:06

if you load the dependencies into a pod you can destroy the pod later

micha16:12:14

that destroys the classloader and everything

jaen16:12:40

Yeah, so that's what I'm trying to achieve.

jaen16:12:48

So I can build a pod from the dependencies in the file

jaen16:12:50

Run tasks in that

jaen16:12:58

And if dependencies change, tear it down and rebuild

jaen16:12:02

So no conflicts arise.

micha16:12:04

so to paraphrase: you want to be able to snapshot the classloader state and restore to the snapshot state later if bad dependencies were added etc

micha16:12:15

you can use pods if you don't need anything from boot.core namespace

micha16:12:41

but boot.core contains singleton stateful things

micha16:12:53

so it can't be easily imported into a pod

jaen16:12:07

Oh, I see.

jaen16:12:34

So that basically makes what I want to do impossible?

jaen16:12:39

Because from what I can tell

jaen16:12:46

You can't send things over to a pod

micha16:12:47

i don't want to say impossible

jaen16:12:50

If it's not serialisable

jaen16:12:02

And you seem to say you can't really run boot inside a pod

jaen16:12:11

Because if something needs boot.core then you're toast.

micha16:12:24

well you can run the whole boot in a pod

micha16:12:37

haha this is confusing

jaen16:12:31

Yeah, a bit : D

micha16:12:45

this is an example of running boot inside boot

jaen16:12:47

Basically I'm just trying to figure out how to run boot, if I would want to be able to isolate the project dependencies so I can update everything without killing boot.

micha16:12:03

so maybe that's a good solution, the runboot thing

jaen16:12:09

And the most sensible way seems to be able to run all the tasks in a boot instantiated inside the boot.

jaen16:12:14

Ok, let me look at that.

micha16:12:41

the goal of that demo was slightly different from what you want to do

micha17:12:07

the goal of that was to demonstrate how to build multiple separate modules in a project when the modules depend on each other

micha17:12:32

the bravo subproject there depends on the alpha project

micha17:12:48

with that build.boot you can build them both in parallel

micha17:12:31

@jaen the nature of dependency conflicts makes it hard to handle them in a reversible way

micha17:12:08

usually the problem is when you get an incompatible version of some transitive dep

micha17:12:40

the bad jar contains class definitions for some classes, but you need a different version of those classes

micha17:12:25

but it's transitive, so usually the instances of those classes are in other code you don't necessarily have visibility into

micha17:12:40

so even if you could remove the jar from the classpath and replace it with the correct version

micha17:12:54

there would still be instances of the old classes out there

micha17:12:10

so you'd still see the broken behavior

micha17:12:15

does that make sense?

micha17:12:59

to make sure you fix the dependency problem you need to rebuild any objects that refer to the bad objects

jaen17:12:20

Yeah, hence why I want to run all the tasks inside a pod

jaen17:12:23

And tear it down

jaen17:12:36

So the conflict remains contained to the classloader inside the pod.

jaen17:12:08

I understand the problem this poses if I do it outside of a pod for example - I can contaminate the boot's main classloader

jaen17:12:24

And the conflict could only be resolved by restarting it.

micha17:12:27

how do you do your development, via repl?

jaen17:12:58

Well, I use component with system.boot, so this all gets reloaded on each code change.

jaen17:12:21

So I don't mind if that makes global state go away

jaen17:12:28

Since by application structure I avoid it.

micha17:12:12

if you have a repl you'd need to kill it and start a new repl in the new pod, no?

micha17:12:19

and then connect to that from your repl client

jaen17:12:38

It would appear so, yes.

jaen17:12:48

Okay, I can see how that could be problematic.

micha17:12:48

boot has things for that

micha17:12:05

the launch-nrepl function can be used to launch nREPL server in any pod

micha17:12:09

so that's fine

micha17:12:26

you'd need to somehow have the nREPL client connect to the new server though

jaen17:12:29

But I would have to reconnect, right? I assume that's what you were getting at?

micha17:12:32

seems doable though

jaen17:12:59

But dependencies being updated should be rare enough to not be a hassle.

jaen17:12:07

Is there any way for boot

micha17:12:13

maybe nREPL middleware that routes expressions to the right pod

jaen17:12:19

To tell me whether a dependency conflict occurred or not?

micha17:12:27

not really

micha17:12:35

i mean there are dependency conflicts all the time

micha17:12:44

if you do boot -p in any project you'll see them

jaen17:12:59

Well, I mean those I get yellow warnings for when using set-env to inject new dependencies.

jaen17:12:11

I could avoid tearing down the pod

jaen17:12:15

Unless I know it's necessary.

micha17:12:48

yeah so the reason for those warnings is because traditionally you'd declare all dependencies in an xml file or project.clj

micha17:12:54

a static map of deps

micha17:12:21

the maven machinery can work out dependency conflicts only if it computes them all at once

micha17:12:41

like if it can see all the dependencies it can work out which transitive deps to choose

micha17:12:55

but if you resolve dependencies and then later add a new dependency

micha17:12:25

that new dependency could have a transitive dep that would have changed which version of another already loaded transitive dep was chosen originally

jaen17:12:35

Then you can't do anything, because you can't unload the original choice, right.

jaen17:12:00

So there's no callback for that or anything?

micha17:12:06

however if you create a new pod with those deps it does the right thing

micha17:12:18

because you provide all deps in the pod constructor

micha17:12:44

so when you add a dependency it would make sense to make a new pod

micha17:12:49

and you'd avoid that issue

micha17:12:21

but then you wouldn't be able to have a repl in that pod easily

micha17:12:29

i mean you'd have the reconnecting issue

jaen17:12:37

Yeah, that's the reason I started looking into pods. I'm just wondering if there's a way to tell if adding a new dependency produced a conflict that would require unloading a jar

jaen17:12:40

And hence a new pod

jaen17:12:53

Because not all set-env calls updating the deps result in conflicts, right?

jaen17:12:13

At least, it doesn't seem to give me yellow warnings each time I try this, just sometimes.

jaen17:12:34

Anyway, I'll look into those links you gave me and see how it works out

micha17:12:03

i think if you want to do it you'd want to make a new pod whenever you add a dependency

micha17:12:10

i'm working on a solution to the pod serialization problem today

jaen17:12:23

Oh, that would be cool if possible

micha17:12:26

but i don't know if that's really going to help in your case

jaen17:12:45

Well, if it would mean I could pass a fileset object into a pod, it just might.

micha17:12:48

you can make your own serialization in the meantime

jaen17:12:54

Unless I'm misunderstanding pods.

micha17:12:13

the only issue with the fileset is the java.io.File objects, which you can serialize and deserialize yourself

micha17:12:00

i think you can use boot-shim.clj to install data readers for the tmpdir record types

jaen17:12:02

Because if you look at that example code I've given I basically just want to call next-handler inside the pod I construct. Not sure if your fix would make that possible or not though.

micha17:12:10

then pr-str/read-string would work on them

micha17:12:24

you can add to the clojure default data readers there perhaps

micha17:12:30

and they'll be present in all pods

micha17:12:39

just for testing could be a fast way to try it

micha17:12:59

you can do anything in there, including add things to clojure core, whatever

micha17:12:25

next-handler might be callable from the pod, but you'd need to use the java runnable interface

micha17:12:55

because if you pass a clojure function to another pod the receiver won't see it as a clojure function

micha17:12:04

it will see it as a java runnable (i think)

micha17:12:05

you can also use the low-level pod methods

micha17:12:16

(.invoke mypod "some.ns/f" arg arg)

micha17:12:56

where arg can be any java object

micha17:12:17

the some.ns/f function will see the args as java objects

micha17:12:27

no pr-str/read-string will be done there

micha17:12:54

this is pretty much the most complicated part of boot lol

alandipert17:12:58

although i discovered that arg can be a clojure object.. if it was previously returned from the same pod

jaen17:12:20

I guess that's due to it being created by the same runtime instance?

micha17:12:34

the problem with passing clojure objects is that protocol interfaces are constructed dynamically in each clojure runtime

micha17:12:47

so like IFn is different in pod a and pod b

jaen17:12:59

And it's not enough to be the same runtime jar, has to be the same instance of runtime?

jaen17:12:06

You've just answered that, gotcha.

micha17:12:20

but i think you can use the Runnable interface

micha17:12:27

all functions in clojure implement that

micha17:12:35

and that's loaded from the system classloader

micha17:12:39

so that's the same in all pods

micha17:12:57

i guess that only works with 0-arg functions though

micha17:12:26

and it doesn't return anything

micha17:12:33

so pretty much useless

jaen17:12:10

Is that how boot-shim.clj should look?

(defmethod print-method java.io.File [file ^java.io.Writer w]
  (.write w (str "#test/file " (pr-str (.getAbsolutePath file)))))

(alter-var-root #'clojure.core/default-data-readers
  (fn [current-readers]
    (merge current-readers {'test/file (fn [path] (java.io.File. path))})))

micha17:12:12

looks good to me

jaen18:12:54

Then it doesn't seem to help, I must be doing something else wrong. Let's see...

jaen18:12:29

Ah, you need to define both print-method and print-dup it seems.
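i.e. something like this alongside the print-method above:

(defmethod print-dup java.io.File [file ^java.io.Writer w]
  (print-method file w))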

jaen18:12:22

I can now shuttle the fileset into a pod, but I can't do the same for the next-handler. Let's figure something out for this then.

jaen18:12:22

Hah, I think I've got it.

jaen18:12:58

It's hacky but I created a namespace that just does

(ns boot-hack
  (:require [boot.util]))

(defn invoke-hack [fun]
  (boot.util/info "INVOKE HACK\n")
  (.run fun))
and I .invoke it on the pod passing the fileset.
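One way the call site might look, assuming current-pod and next-handler from the earlier with-dependencies sketch (Clojure fns implement java.lang.Runnable, which is what invoke-hack's .run relies on):

(.invoke @current-pod "boot-hack/invoke-hack"
         (fn [] (next-handler fileset)))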

jaen18:12:01

It kinda works

jaen18:12:33

I just need to figure out why a dependencies change doesn't trigger an update, but it's a step forward for sure.

jaen18:12:14

Welp, was editing the file in target, not in the resources.

jaen18:12:45

But it seems I'm not taking something into account still, since I'm getting

java.util.concurrent.ExecutionException: java.nio.file.NoSuchFileException: target/public/assets/javascripts/application.out/goog/deps.js
      java.nio.file.NoSuchFileException: target/public/assets/javascripts/application.out/goog/deps.js
    file: "target/public/assets/javascripts/application.out/goog/deps.js"
when I change dependencies. Maybe somehow I close over a stale fileset.

jaen18:12:54

But all in all, that's some progress.

micha18:12:17

the fileset shouldn't contain references to files in the target dir

micha18:12:21

so that's weird

jaen18:12:54

Well, I've never gotten that error without my pod shenanigans, so it's probably somewhat related; I get that when compiling cljs files.

micha18:12:36

interesting yeah

juhoteperi18:12:11

@micha: Btw. I had an idea related to separating files into multiple artifacts. It might be useful to have the path of the original directory as a field of TmpFile?

micha18:12:49

original?

juhoteperi18:12:19

Like in less4clj I have three resource-paths and files from each are put into separate jars

juhoteperi18:12:39

Currently I use regex against path inside fileset

micha18:12:43

basically tagging files in the fileset

juhoteperi18:12:56

But would be even simpler to say that files from this resource-source should be put into this jar

juhoteperi18:12:59

Yeah something like that

micha18:12:02

i was thinking about this too

micha18:12:10

i wonder if we want some kind of stack

juhoteperi18:12:12

I'm still not sure what would be best for task-created files?

micha18:12:13

like git stash

micha18:12:39

like if sift had a mode where instead of just removing files, it would stash them on a stack that you could pop

juhoteperi18:12:03

I think it would be useful for these kinds of operations to provide functions which wrap "middlewares"

juhoteperi18:12:08

like the with-files I wrote

juhoteperi18:12:33

(add-tag (cljs) :cljs)

micha18:12:03

that would add :cljs to the meta on all the files produced by the cljs task, which it returns?

juhoteperi18:12:49

Yeah hmm, wouldn't be too useful to tag all files in fileset after cljs. I think it should tag files created by cljs task.

juhoteperi18:12:05

I guess that's not as trivial to implement.
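A sketch of one way the wrapper could tag only what the wrapped task added, by diffing the fileset before and after it runs (assuming boot.core is referred; add-meta takes a map of path -> metadata):

(defn add-tag [task tag]
  (let [before (atom nil)]
    (comp (with-pre-wrap fs (reset! before fs) fs)
          task
          (with-pre-wrap fs
            (let [added (ls (fileset-diff @before fs))]
              (add-meta fs (zipmap (map tmp-path added)
                                   (repeat {tag true}))))))))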

micha18:12:19

it can be done already, by composing with sift

micha18:12:37

sift can do all the set operations by meta

micha18:12:46

so you can tag all the files before the cljs task with one tag

micha18:12:51

then after with another

micha19:12:00

and sift the ones that have only the after

micha19:12:14

i forget exactly, but it can be done on the command line

micha19:12:36

but tagging is maybe not the ultimate solution?

micha19:12:54

seems like we really want to stash the files somewhere and handle each stash separately?

juhoteperi19:12:01

Yeah I guess this needs more thought

micha19:12:26

i think stashes can be like registers where you can put a fileset

juhoteperi19:12:39

With-files I wrote works quite well. Each artifact can be created inside one with-file context and doesn't see files from others.

micha19:12:03

can i see an example build.boot with this?

micha19:12:04

ah i see yes

juhoteperi19:12:15

In my case I'm not even interested in filtering task created files, only files from resource-paths

micha19:12:37

right, but i can see wanting to do the other too

micha19:12:28

we can really do anything with the filesets because they're immutable

micha19:12:50

so there is a lot of flexibility to design something that works well

micha19:12:13

lots of ways we can partition and organize them

micha19:12:28

what about registers instead of stacks?

micha19:12:10

like (sift :include #{#"^foo"} :save :a)

micha19:12:54

then later you could do (sift :load :a)

micha19:12:18

or instead of :a maybe a var or atom

micha19:12:25

that was very un-lcdi of me lol

juhoteperi19:12:52

Maybe. I don't yet see what a complete build task would look with this.

micha19:12:06

i got to thinking in stacks because the problem is very similar to forth

juhoteperi19:12:09

I think some "utility" tasks wrapping other tasks (and creating closures or "fileset context") would be useful instead of adding more and more tasks to "base level" with comp

micha19:12:19

we want to make a tree from a pipeline

micha19:12:34

like we have this pipeline, but we want to fork and branch

micha19:12:39

and emit multiple things

micha19:12:41

at the end

juhoteperi19:12:43

Yeah I think I'm trying to describe a tree

micha19:12:56

this is exactly the problem faced by forth

micha19:12:05

which they solve with stacks

micha19:12:19

then you can process the tree sequentially

micha19:12:32

by rearranging the things on the stacks

micha19:12:53

but i think registers might be better and easier to use

micha19:12:22

well maybe not as good, but easier to use

micha19:12:46

like this for example:

micha19:12:54

boot stash -i sadf -- proc1 proc2 target -- pop -- proc3 proc4 target

micha19:12:15

that would create a fileset and stash it

micha19:12:31

then run a pipeline on the stuff remaining in the non-stashed fileset

micha19:12:36

and emit to target

micha19:12:50

then you pop the stashed fileset and run another pipeline on that

micha19:12:55

and emit to target

micha19:12:39

the target task has a --no-clean option that tells it not to delete all the things in the target before emitting

micha19:12:59

the -- in the example is just to visually separate the pipelines

micha19:12:09

ignored by boot basically

micha19:12:20

maybe a stack is the best thing actually

micha19:12:41

you just order your subpipelines according to the stack order

micha19:12:32

i could also imagine stack operations, like union, difference, intersection, etc

micha19:12:44

that would operate on the top two items on the stack

micha19:12:49

to merge filesets, etc

micha19:12:33

so you could have total control over how you partition the filesets with simple operations

micha19:12:02

then you could build any of the more specialized things on top of that

dm319:12:56

you would also probably want to run independent operations in parallel

dm319:12:41

not sure how well that works with stacks

micha19:12:19

i don't think you'd want to do that

micha19:12:28

because the classpath in the jvm is a singleton

micha19:12:40

so you can't really run things that affect the classpath in parallel

micha19:12:46

nondeterministic

dm319:12:09

I mean if you have independent resource dirs that you can process in parallel to get independent outputs - why do we care about the classpath?

dm319:12:23

(I didn't read the fileset code :))

micha19:12:07

well the jvm has only one classpath

micha19:12:27

so if you have independent resource dirs and you add them in parallel they'll all be on the classpath for all processes

micha19:12:39

unless you do some things with pods

micha20:12:47

you can run pipelines in parallel using the multi-module pattern, like this: https://github.com/micha/multi-module-build/blob/master/build.boot#L70-L71

micha20:12:02

the problem with processing sections of a single pipeline in parallel is that when you call commit! on a fileset it replaces the current things on the classpath with the things in the fileset

micha20:12:21

so concurrent commits are possible, but you'd get nondeterministic results

micha20:12:46

the code itself is reentrant, there are semaphores and locks to prevent corrupting the filesystem

micha20:12:17

one thread would be seeing the results of commit! in the other thread

micha20:12:24

which would be bad

micha20:12:52

because they'd be clobbering each other

micha20:12:26

and the underlying filesystem wouldn't correspond to the fileset object being passed to the next task if another thread called commit! on its own different fileset

micha20:12:54

that would cause exceptions when trying to read from files etc

micha20:12:14

if the files no longer are in the filesystem

micha20:12:23

i think a lot of the need for parallelization in builds is mostly eliminated by caching in tasks

micha20:12:50

with the immutable fileset tasks can compute diffs easily and not do unnecessary work
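e.g. the usual skeleton for that, using fileset-diff (a sketch assuming boot.core is referred; the println stands in for real work):

(deftask compile-changed []
  (let [prev (atom nil)]
    (with-pre-wrap fileset
      (doseq [f (ls (fileset-diff @prev fileset))]
        (println "changed:" (tmp-path f)))
      (reset! prev fileset)
      fileset)))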

dm320:12:13

which parallelizes automatically

dm320:12:41

was trying to see if that would work for boot

micha20:12:18

shake relies on building a dependency graph, right?

dm320:12:35

it builds the dependency graph dynamically though

micha20:12:49

but you need to guide it

micha20:12:59

like with annotations

dm320:12:37

yeah, but then I was thinking you could annotate boot tasks

dm320:12:43

if you wanted to

micha20:12:54

the reason why we didn't go down that road was simplicity

dm320:12:00

and infer task results from the fileset changes

micha20:12:22

yeah, that's the usual build tool approach

micha20:12:43

but i think there is value in allowing the user to order the tasks

micha20:12:59

instead of devising a dependency graph via inversion of control

micha20:12:21

because usually the order in which tasks should run is fairly obvious to the programmer

micha20:12:27

for the specific task they're doing

micha20:12:41

but making a dependency graph that will be generally correct is hard

micha20:12:01

like in any specific instance you can see it, the order is obvious

micha20:12:13

but if you need to make a system that can do it in the general case it gets really hard

dm320:12:26

shake builds are also ordered explicitly

micha20:12:27

and you need to introduce coupling between tasks so you can annotate

micha20:12:07

ah so it performs a sort of JIT optimization?

micha20:12:14

learning as it goes?

dm320:12:37

wait, maybe I'm wrong. The build steps are monadic

micha20:12:50

i wonder about boot tasks

micha20:12:59

i imagine there is a monad for what they are

micha20:12:03

with a name

micha20:12:41

(defn what-am-i [f]
  (fn [continuation]
    (fn [arg]
      (continuation (f arg)))))

micha20:12:20

knowing the name of it might unearth something interesting in the literature

micha20:12:48

is the clojure compiler reentrant?

micha20:12:04

like can you compile namespaces for AOT in parallel?

micha20:12:20

i guess in pods you can for sure

micha20:12:32

oh, maybe not actually

micha20:12:52

pods wouldn't produce compatible bytecode

micha20:12:08

aot is really the only thing in boot that could be slow

micha20:12:14

i mean in a build

dm320:12:24

boot could be used to run gcc and other stuff which is slow

micha20:12:43

you could make your gcc task compile things in parallel

micha20:12:58

that would be pretty straightforward, using send-off or future

dm320:12:03

of course

dm320:12:19

but with shake that's out of the box

dm320:12:38

I'm not solving a problem that I have right now 🙂

micha20:12:43

i suspect that shake doesn't provide the simplicity and flexibility of boot though

micha20:12:56

like with boot you can noodle around in the repl all day

dm320:12:21

of course, I'm just thinking if we could add that to boot in some way

micha20:12:43

JIT type optimizations might be a possible approach

micha20:12:52

like what we do for the uber task

micha20:12:02

but externally

micha20:12:23

detecting immutability basically and leveraging that to avoid doing work

micha20:12:20

if you know that a task doesn't mutate the environment and is just a pure function that takes fileset -> fileset

micha20:12:28

perhaps something can be done there to parallelize

flyboarder20:12:10

@micha tasks which do that may not want to be run in parallel tho

flyboarder20:12:24

speak for example should only run when a build finishes, even tho its actions can be run in parallel to others

micha20:12:02

interesting

flyboarder20:12:49

i think it would make more sense to create a parallelizing task

flyboarder20:12:30

which takes the following tasks and runs them in parallel pods

flyboarder20:12:48

or something like that

flyboarder20:12:54

or if tasks could somehow be more context aware

flyboarder20:12:12

like a pod being told which other pods were started in parallel and the task can sort out blocking? I haven't read much pod code so im not sure how it currently works

micha20:12:15

i like the explicit approach

micha20:12:31

the multi-module-build example has something like that

micha20:12:49

it uses semaphores

flyboarder20:12:13

yeah i was poking around in there

flyboarder20:12:32

semaphores was an interesting read

micha20:12:01

the semaphore isn't the ideal concurrency thing there, exactly

micha20:12:12

but it's close enough to work

micha20:12:03

i wanted something more like a priority queue of semaphores or something

micha20:12:07

like named checkpoints

micha20:12:23

where you could wait on a specific checkpoint to be reached

micha20:12:39

i couldn't find anything that did that exactly

micha20:12:47

maybe phasers, but i can't understand them at all

flyboarder20:12:23

im a bit confused as to how checkout is adding deps to the fileset

micha20:12:39

the checkout task?

micha20:12:01

the checkout task takes a dependency maven coord as argument

micha20:12:16

and when it runs in the pipeline it finds the jar in your local maven repo

micha20:12:31

actually before it runs in the pipeline, at construct time

micha20:12:38

it finds the jar in local maven

micha20:12:47

and adds the directory that jar is in to the :source-paths

micha20:12:57

so now it's being watched for changes

micha20:12:16

now, when the checkout task runs in the pipeline it unzips that jar into a temp dir

micha20:12:23

and it adds that tempdir to the fileset

micha20:12:38

so now the files in the jar are in your build classpath

micha20:12:50

as if you had them in your project source paths

micha20:12:06

that's why you want to put the checkout task after the watch task

micha20:12:20

it needs to unzip each time and add to fileset

flyboarder20:12:52

could it maybe be the checkout task causing hoplon files to not be seen?

micha20:12:15

i can take a look at that right now if you want

micha20:12:34

is there a repo handy?

flyboarder20:12:34

im just poking around, but the hoplon boot task looks fine

micha20:12:52

yeah it might be that hoplon only extracts the deps once

flyboarder20:12:02

ah the repo is kinda large and broken 😛

micha20:12:04

it assumes the jar is immutable

micha20:12:09

i think that's the problem

micha20:12:32

that shouldn't matter

micha20:12:40

because the .hl files should be on the classpath now

micha20:12:44

directly as files

micha20:12:18

i forgot, did you see the .hl files with show -f?

micha20:12:29

i think they were in there ok?

flyboarder20:12:44

well when the project builds the first time it finds all the .hl in my deps, but after the checkout task it doesnt extract or compile them; show -f has the .hl files in the fileset before and after checkout

flyboarder21:12:11

are the new deps extracted to the same location as the previous jar?

micha21:12:31

yeah, a temp dir

micha21:12:52

maybe the .hl files aren't being updated in the jar?

micha21:12:02

if the .hl files are the same then they won't be recompiled

flyboarder21:12:50

but stopping the build and restarting it builds with the new files the first go, before ever needing to do a checkout

micha21:12:15

that would be a useful addition to the show task, an option where it can print like -f but show only changed files

micha21:12:50

what order are you calling your tasks in?

micha21:12:54

maybe there is an issue there?

flyboarder21:12:07

i just removed the checkout task

flyboarder21:12:25

im using the dev-osx

micha21:12:31

where was the checkout task before?

micha21:12:52

right after watch?

flyboarder21:12:54

directly after watch

micha21:12:31

and which dependency was the checkout dep?

flyboarder21:12:57

I had the lounge.* snapshots in it

micha21:12:57

ah i think i know what the issue is

micha21:12:11

try removing those from your :dependencies

micha21:12:22

and add the checkout back

flyboarder21:12:32

like dont have them in both deps and checkout?

micha21:12:00

i think maybe hoplon is overwriting the checkout files with ones from the jar

micha21:12:05

i mean cached ones

micha21:12:12

that it got from the jar originally

micha21:12:27

i think this can be fixed in boot-hoplon

flyboarder21:12:44

ah ok, trying

flyboarder21:12:40

is there already a task that reads deps from an edn that gets watched?

nberger21:12:01

Just pushed a new implementation for the junit reporter PR using the new target task: https://github.com/adzerk-oss/boot-test/pull/11. target is used to sync the junit reports before throwing an exception in case of test errors/failures. Includes a refactoring in the test task so the test summary is saved as fileset metadata, so it can be potentially used by downstream tasks (someone suggested this idea here, I don't remember who but thanks!)

micha21:12:43

@nberger: awesome! while you're here i'll have a look

flyboarder21:12:11

yeah thats working! thanks micha! i should have tried that sooner 😕

nberger21:12:15

cool @micha, thanks! I'll be here for a while 🙂

micha21:12:53

@flyboarder: one thing you can try is making your own version of the checkout task that removes the checkout dep from the env :dependencies in its constructor phase

micha21:12:18

like before the with-pre-wrap or whatever, do a set-env! to remove the dep

micha21:12:29

that might fix everything so you don't need to remove the dep

micha21:12:39

if that works then we can fix the checkout task

flyboarder21:12:41

i thought it was bad for tasks to modify the deps unless in a pod

micha21:12:57

well in this case it's okay i think

micha21:12:06

because it's sort of mocking things

micha21:12:27

so you are going to need to do some unsafe mutations to fake the environment

micha21:12:55

you won't be using checkouts when you're building for production or creating jars etc

micha21:12:05

so i think it's ok to mess with :dependencies there

micha21:12:28

what the modification will achieve is it will prevent downstream tasks from creating pods that have the checkout dep in the dependencies

micha21:12:18

basically the place in the checkout task where it adds the dependencies of the checkout dep to the project

micha21:12:26

you can also remove the checkout dep itself

micha21:12:46

because that jar will be provided as files

micha21:12:28

@nberger: i like this!

flyboarder21:12:43

ok ill fiddle some more

nberger21:12:50

@micha nice to hear! what do you think about moving the run-tests stuff to a separate PR first? I'd like to rewrite the commits anyways 🙂

micha21:12:03

the difference between run-tests and test is whether or not it throws on failure?

nberger21:12:26

yeah... well, with the junit stuff it's also whether it syncs to target or not

nberger21:12:11

I thought about a throw-on-test-failures task or something like that, but wasn't convinced to do it

micha21:12:36

why does it need to sync to the target?

micha21:12:53

wouldn't it just add the results file to the fileset?

micha21:12:01

or is that in case it throws?

nberger21:12:10

because you want to get the junit reports when there are test failures or errors, but we throw an exception that prevents the target from being synced

micha21:12:39

interesting!

nberger21:12:03

yeah... the introduction of the target task is making this possible! 🙂

micha21:12:11

re: separating into multiple PRs i think whatever you prefer would be fine

micha21:12:05

if commits could be organized to be friendly to git bisect and revert that would be a bonus 🙂

micha21:12:16

not that they're not already

micha21:12:21

just in general

nberger21:12:48

Yes, I'll squash the commits into fewer commits (2 or 3 I guess).

micha21:12:50

i really like what you did with the edn result file and the fileset meta

micha21:12:59

this could be a powerful tool

nberger21:12:00

Sweet. Yes, I think it opens the door for more options for downstream tasks working on the test summary... I remember someone suggested to do that, not sure if it was Martin Klepsch

micha21:12:12

i think maybe the metadata to add to the file could be something more general

micha21:12:22

like maybe a more general notification type metadata

nberger21:12:45

I even wanted to separate the junit stuff from the run-tests task, but then we need a way to set the clojure.test/report for the run-tests pod... so the pod would need to be started before I guess...

micha21:12:36

like {:notification/status :warning, :notification/source 'boot-test, :notification/message "Some tests failed, some passed."}

micha21:12:11

the idea of the meta on the file being that we could establish some general types of notifications that could be handled by the notification task

nberger21:12:27

interesting

micha21:12:37

the info that you're putting in the file could be extracted by that task if it wants to

micha21:12:47

by reading the file

micha21:12:34

perhaps actually attaching notification meta to the namespace file would be better, even

micha21:12:59

like if you test 5 namespaces, and you get errors in one of them

micha21:12:12

you could attach that as metadata to the namespace the failures were in

nberger21:12:38

Hummm, yeah, that might be better

micha21:12:43

i guess it's not straightforward to find the namespace file

micha21:12:50

maybe you're testing something in a jar

micha21:12:05

that would be strange though

micha21:12:30

you don't usually run tests on namespaces in jars

nberger21:12:04

yeah, but it would be nice to support it I guess

micha21:12:46

i guess the real thing we want to do for now is just add metadata about the notification to the fileset instead of throwing an exception

micha21:12:05

the other stuff can be done later

micha21:12:32

perhaps adding metadata to the fileset itself

micha21:12:41

instead of on a file in the fileset

nberger21:12:45

so, this notification mechanism is something new, or is it already something in boot?

micha21:12:53

no it doesn't exist yet

micha21:12:00

but we need to figure something out

micha21:12:18

this could be a good testbed

nberger21:12:51

one thing to note: the process should exit with error status (not 0) to signal test failures... that's what the exception does. Are we going to still do that?

micha21:12:58

maybe mark the run-tests task as alpha ans subject to change

micha21:12:08

sure we can do that

micha21:12:21

we can make a notifier task that looks for the meta

micha21:12:29

and throws an exception if necessary

micha21:12:31

or whatever
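a sketch of that notifier, under the (still hypothetical) convention that the notification map is attached as metadata on the fileset itself:

(deftask notify-or-throw []
  (with-pre-wrap fileset
    (let [{:keys [notification/status notification/message]} (meta fileset)]
      (when (= status :error)
        (throw (ex-info (or message "build failed") {})))
      fileset)))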

micha21:12:44

yeah if you mark run-tests as experimental then we can change it as we figure out how to do this

nberger21:12:20

Just to be clear, are you talking about leaving the notification stuff out of the scope of this PR, just mark the task as experimental as we continue discussing the notification mechanism?

micha21:12:52

the latter

nberger21:12:55

Or to try and do the new notification mechanism, but also leave it as experimental because it is... experimental? 🙂

micha21:12:18

we need to start with something concrete

micha21:12:22

and iterate on that

nberger21:12:25

sounds good

micha21:12:33

awesome! this is great

nberger21:12:11

I'll play a bit with your notification idea, will let you know as soon as I have something, and feel free to chime in in any way 🙂

micha21:12:40

cool, you can add as many experimental tasks to boot-test as you want 🙂

micha21:12:26

i think a lot of people use it so we'll probably get good feedback

nberger21:12:14

by the way, how do I mark the task as experimental? ^:experimental, or in the docstring?

micha21:12:44

maybe both?

nberger21:12:08

maybe, why not? 🙂

micha21:12:30

as long as people know that it might change so they don't get too invested in it if they aren't into experiments

micha21:12:21

i dunno if printing a warning would be overkill there or not

nberger22:12:38

Hmmm... I'm now thinking that maybe we should not mark the run-tests task as experimental, but the notification mechanism. We could add an option to use the new notification mechanism, and that option would be experimental. So by default it would work as it works now: throw exception and do not sync on failure. But it would sync when junit is enabled, and it would not throw when the new notification option is enabled

micha22:12:36

that's a good plan

nberger22:12:53

Cool. I'll work on it and we'll keep discussing it once there's some code to look at simple_smile

micha22:12:03

awesome, thanks!

nberger22:12:10

thank you!

flyboarder22:12:56

if im correct that line would result in a deps list that has the old dep and the new ones, correct?

flyboarder22:12:04

ok i think i can work this out 😛

micha22:12:16

probably want something like (core/set-env! :dependencies #(...))

micha22:12:25

where the anon fn removes and adds

micha22:12:07

that variant of set-env! works like swap! on an atom

micha22:12:43

given a function f it replaces :dependencies with (f (get-env :dependencies))
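so, concretely, the removal might look something like this (lounge/ui is a stand-in coordinate; the task can conj replacements onto the result as needed):

(set-env! :dependencies
          (fn [deps]
            (vec (remove #(= 'lounge/ui (first %)) deps))))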

seancorfield23:12:49

boot-expectations 1.0.3 released — adds --requires to preload namespaces into the testing pod and --shutdown to run specific functions inside the pod when testing is over (we needed this at World Singles).