#cljs-dev
2017-07-11
mfikes02:07:51

A fallout of the recent aget work is that it will be very easy to add type inference checks for runtime function invokes (I think currently most of that kind of stuff is done for macros).

cljs.user=> (defn foo [^boolean x] (bit-count x))
WARNING: cljs.core/bit-count is being passed [boolean] at line 1 <cljs repl>
#'cljs.user/foo
cljs.user=> (bit-count "abc")
WARNING: cljs.core/bit-count is being passed [string] in file <cljs repl>
0
^ Trivial to add

mfikes02:07:39

Here’s how, by adding a symbol and a validator to a map of existing validators, along with the warning type to emit:

{,,,
  'cljs.core/bit-count
   {:valid?       #(every? numeric-type? %)
    :warning-type :generic-type-error}}

anmonteiro04:07:46

expanded on my JavaScript modules blog post to account for previous feedback: https://github.com/clojure/clojurescript-site/pull/104

anmonteiro04:07:52

^ looking for final feedback

Roman Liutikov09:07:25

@martinklepsch @dnolen I’d also refer to a recipe project which demonstrates how one can use code splitting in a web app for example with routing

Roman Liutikov09:07:44

I think small recipe projects are a good thing in general

bronsa10:07:05

@dnolen just released tools.reader 1.0.2 with set duplicate checks fixed and error message changes in, if you want to give it a go in cljs

dnolen10:07:44

@roman01la routing is just not in scope for talking about code splitting on the website

dnolen10:07:59

Webpack doesn’t bother with that bit either

dnolen10:07:59

@mfikes cool that you got that working 🙂 think cljs.spec here 😄

Roman Liutikov10:07:39

@dnolen Yes, sure. I’m just thinking about recipe projects for different compiler features more or less in a context of web apps. Maybe will find time to work on this.

Oliver George23:07:41

Quick thought in passing: what about getting a guest post from someone like the re-frame guys showing how it could be used (or replace an existing technique).

Oliver George04:07:38

@U051MTYAB I have no authority at all related to this but does it sound like something you'd be interested in contributing?

dnolen11:07:33

@bronsa worked great! thanks much

rauh11:07:50

What do I have to do to make the tests run again? I'm getting a ClassNotFoundException: com.google.javascript.jscomp.Es6RewriteModules when running ./scripts/test on master

juhoteperi12:07:06

@rauh ./script/bootstrap should download new Closure

rauh12:07:06

@juhoteperi Thanks! That worked.

dnolen12:07:48

@mfikes hrm yeah your aget stuff has some interesting implications for cljs.spec

dnolen12:07:28

@mfikes I reviewed the patch, let’s clean it up a bit more then we can look at the double warning issue

mfikes12:07:31

Are you referring to the general ability to warn when we see things being wrong statically?

dnolen12:07:41

I may even just apply your thing, and then apply the fix when I find it

mfikes12:07:03

Cool. The double warning does very much appear to be related to the last bit of macroexpansion.

dnolen12:07:19

@mfikes yes but it can be a bit fancier for everything except really higher order stuff like map

mfikes12:07:27

(Via the stacks attached in the ticket)

dnolen12:07:43

since we have fspecs, this can basically be super-charged tag checking

mfikes12:07:29

Right. I think Colin was dreaming of doing the same essentially in the static analysis Cursive does. Perhaps he needs to add type inference. But the ClojureScript compiler is half way there 🙂

mfikes12:07:38

I’ll start putting together a patch that addresses the last two comments you have in the ticket @dnolen in case the double analysis ends up being treated as a different issue. (Hopefully a perf gain if we find it.)

mfikes12:07:19

I’ve slapped analyzed on lots of different things around there, but can’t seem to nail it.

dnolen12:07:25

@mfikes the only reason I’m somewhat skeptical about macros -> double-warning is why it doesn’t affect other things. Do you have a hypothesis about that?

dnolen12:07:17

I don’t suppose @ambrosebs has looked at fancier type checking based on specs? 🙂

mfikes12:07:02

Perhaps the reason we haven’t seen it before is that + turns into js* and the warnings are on js*. Here we have a new pass that looks and sees checked-aget twice.

dnolen12:07:07

I guess I don’t understand how it sees it twice?

mfikes12:07:11

At least with the stack diff attached in the ticket we can see the extra bit is right at the last line of analyze-seq

mfikes12:07:49

Right. A mystery to me too.

dnolen12:07:37

ah so I think I know

dnolen12:07:49

earlier in the stacktrace - analyze let binding is the clue

dnolen12:07:56

that’s from invoke optimization

dnolen12:07:08

so probably the double warning is coming from there

dnolen12:07:18

@mfikes and you’re not respecting analyzed in your new pass

dnolen12:07:31

yeah that’s exactly it

dnolen12:07:17

@mfikes though actually the problem may be in the optimization itself

dnolen12:07:29

@mfikes your pass really should work w/o checking this

dnolen12:07:44

I think line 3027 in analyzer.cljc is wrong

dnolen12:07:43

actually nope

dnolen12:07:07

ok what happens is that parse-invoke analyzes the fn expression once - so you get your first warning

dnolen12:07:29

if it can be optimized then it rewrites the form and analyzes that

dnolen12:07:34

so you get your second warning

dnolen12:07:46

analyze* always applies all passes afterward

dnolen12:07:28

which means those passes should be affected by analyzed

dnolen12:07:55

so yes - pass writers should never have to check for it when emitting a warning

dnolen12:07:05

the issue is in parse-invoke*

dnolen12:07:57

@mfikes ^ so we need to apply analyzed to the whole seq not the symbol
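The fix sketched here relies on marking forms so that warning passes skip ASTs that were already analyzed once. A toy JavaScript model of that idea (illustrative only; the real compiler marks Clojure forms via its analyzed helper):

```javascript
// Toy model of the "analyzed" marker: a pass that emits warnings skips
// any form already marked, so re-analyzing a rewritten invoke form
// (the optimization path) does not produce a duplicate warning.
const ANALYZED = Symbol("analyzed");

function markAnalyzed(form) {
  form[ANALYZED] = true;
  return form;
}

function isAnalyzed(form) {
  return form[ANALYZED] === true;
}

function runWarningPass(form, warnings) {
  if (isAnalyzed(form)) return; // second analysis: suppressed
  warnings.push("type warning for " + form.name);
}
```

Marking the whole seq (rather than just the symbol) corresponds to marking `form` itself here, so every pass that later revisits the rewritten form sees the marker.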

mfikes12:07:29

OK. I can test that and include it in the patch revision I’m cooking up if you’d like

dnolen12:07:19

yes please, thanks 🙂

dnolen13:07:22

@mfikes if we get this wrapped up, maybe we should go with your post on Thursday? The post itself I think is in a really good state except date / var names.

mfikes13:07:36

@dnolen I’ve got the revised patch apart from the bit above, which is giving me a little grief when starting the REPL

mfikes13:07:20

Yeah, there is a bit in the post that refers to “in the future” adding this stuff. That small bit could be revised / expanded a little.

dnolen13:07:49

well because my snippet is wrong

mfikes13:07:54

With respect to applying analyzed to the whole seq, the REPL spews bad JS

dnolen13:07:29

snippet is wrong, some bad unquoting, fixed now I think

mfikes13:07:49

Yeah, some nasty stuff that I haven’t sorted yet 🙂

dnolen13:07:18

was unquoting the f-sym f bit

dnolen13:07:01

you probably don’t need the splicing unquote either

mfikes13:07:19

@dnolen The REPL works again with the update… but I still get two warnings. Arg.

dnolen13:07:55

@mfikes target directory is empty right?

mfikes13:07:24

let me try that kind of stuff to be sure. I was just removing .cljs_node_repl/

dnolen13:07:59

@mfikes if for some reason you cannot make this work, lets get a patch with my suggested change and I can take a look

mfikes13:07:43

I can’t make it work. I’m starting to wonder if your patch is good “in spirit”, but perhaps in this case my code is going down the other branch.

mfikes13:07:14

I have my other stuff running unit tests now, and I’ll attach a revised patch without the double warning issue stuff.

dnolen13:07:49

@mfikes no worries, I suspect it’ll end up being a pretty simple issue - I’m still pretty confident that this has to do w/ invoke optimization based on the stack trace - but perhaps we’re missing a detail somewhere

rauh13:07:29

Should the args get an analyzed tag?:

bindings (cond-> []
           bind-args? (into (interleave arg-syms args))
           bind-f-expr? (conj f-sym (analyzed f)))

dnolen13:07:12

@rauh in this case it should not matter

dnolen13:07:24

the parent form prevents checking of any child forms

mfikes13:07:48

Cool—patch is now in https://dev.clojure.org/jira/browse/CLJS-2198 so anyone can mess with it

dnolen13:07:51

this is why I want to move analyzed to the seq, not its elements

mfikes13:07:11

If we solve the double-warning issue, we can revise the tests that are actually checking for this warning to go back to looking for a sequence containing one warning. You can see the patch revises it to look at the first warning.

mfikes14:07:44

The new aget warnings trigger for core.async FWIW. Here is one

----  Compiler Warning on   target/ios/cljs/core/async.cljs   line:248  column:6  ----

  cljs.core/aset, arguments must be an array, followed by numeric indices, followed by a value, got [objects number number] instead
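At runtime, those warnings correspond to checked variants of the array operations. A rough JavaScript sketch of what a checked array read could do (illustrative names only, not the actual cljs.core implementation):

```javascript
// Hypothetical sketch of a checked array access helper: validate that the
// target is an array and the index is an in-bounds number before doing the
// raw indexed lookup, matching the shape described by the warning text.
function checkedAget(array, idx) {
  if (!Array.isArray(array)) {
    throw new Error("aget target is not an array: " + String(array));
  }
  if (typeof idx !== "number" || idx < 0 || idx >= array.length) {
    throw new Error("Invalid array index: " + String(idx));
  }
  return array[idx];
}
```

Invalid usages that the static pass cannot see (e.g. an object passed at runtime) would then fail loudly instead of silently returning `undefined`.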

mfikes14:07:51

One thing I need to figure out: Using this stuff on a real project, for some reason the warnings don’t appear when loading/compiling stuff in libs until I do a require :reload, and that’s when they trigger. One troubling thing is that the checked-aget aliasing doesn’t seem to appear in the JS for libs when I would expect it to. So, probably some real-world corner cases to sort through.

dnolen14:07:19

@mfikes for the former thing, you mean you don’t see checks emitted until you reload a namespace?

dnolen14:07:44

the later thing definitely needs to be sorted out

mfikes15:07:13

TL;DR: it seems to be a problem with the initial compile (I’m using downstream tooling). What I see occur: I set :warnings true. Then lein clean, lein figwheel. My code gets compiled by Figwheel when it starts up, but no warnings appear. If I look at the JavaScript in my out directory, I see the {:invalid-array-access-warning-enabled true} build-affecting option recorded, but I don’t see checked-aget in the JavaScript. If I then go in the REPL and use :reload on a namespace, I then see warnings appear in the REPL console, and if I go back and look at the JavaScript in out, the unchecked return (({"foo": (1)})["foo"]) code snippets are replaced with code using checked_aget: return cljs.core.checked_aget.call(null,({"foo": (1)}),"foo");

dnolen15:07:59

@mfikes ok, sounds like a bug?

mfikes15:07:41

The only thing I’m suspecting is *unchecked-array-access* leaking to other namespaces, but I have no reason to believe that this is really happening. Yes, a bug.

dnolen15:07:44

@mfikes are you binding it to make sure it only holds for file scope?

dnolen15:07:00

if you don’t a set! will affect the root binding

mfikes15:07:49

Everywhere *unchecked-if* had a binding, I added one for *unchecked-array-access* (otherwise you are right, it throws if you try to set!)

dnolen15:07:51

i.e. you should not bind this in cljs.closure/build

dnolen15:07:21

I would ignore how *unchecked-if* is handled for now (we might need to fix it)

dnolen15:07:35

the more important thing is that it must be bound at file scopes only

dnolen15:07:52

and you have to do it consistently

dnolen15:07:16

the reason other dyn vars don’t have this problem?

dnolen15:07:20

we rarely use set!

anmonteiro15:07:55

@dnolen thanks for the feedback on my draft. I might not have time to work on the node module indexing until Friday/weekend

mfikes15:07:57

Another high level observation about the feature: If you enable :invalid-array-access, it is nice that you get the static analysis checks, but the runtime checks then essentially make it so you can’t run your app if there are enough in your way. (In other words, instead of it acting like a lint, you quickly find you have to turn it back off in order to continue developing.)

anmonteiro15:07:59

Just setting expectations

dnolen15:07:39

@anmonteiro understood, I might give it a poke - probably the only thing I need to understand, as it’s not obvious from module_deps.js: how do you find react-dom/server? you said it cannot be reached from just indexing top-level - sounds like you need to go one node_modules down or something?

anmonteiro15:07:11

So you always need to go through the project's node_modules. react-dom will be provided by 1. the package.json's main in the react-dom folder, or 2. the index file in that folder in the absence of a main entry

juhoteperi15:07:48

@dnolen module_deps.js is passed a JS file with e.g. require("react-dom"); require("react"); and it will resolve the dependencies of these requires

juhoteperi15:07:04

currently missing JS modules are included in this JS file

anmonteiro15:07:04

react-dom/server is the server.js file in that directory OR the server/index.js file

anmonteiro15:07:24

@juhoteperi yeah that's how we do it now but I think David was asking about the new approach

juhoteperi15:07:50

If we need to implement this ourselves, we probably could just implement the whole indexing in Clojure

juhoteperi15:07:03

I do have one version of such code somewhere

dnolen15:07:07

@mfikes that is a very good point actually! we probably need a knob for that so people can transition

mfikes15:07:36

Yeah, we are probably complecting warnings with checked code emission anyway 🙂

anmonteiro15:07:38

@juhoteperi that's a good point actually

juhoteperi15:07:10

Would make me feel much better about npm-deps also; using node modules wouldn't be as tied to having Cljs call the npm binary then

anmonteiro15:07:13

We were only doing it in JS to get the transitive closure of dependencies from a set of requires

anmonteiro15:07:45

But that would work for Node

anmonteiro15:07:12

For Closure I think we don't want to blindly pass every file

dnolen15:07:40

yes we need separate indexing (for ourselves)

juhoteperi15:07:42

No but we can look at the Cljs dependency graph to see which modules need to be passed into Closure

dnolen15:07:45

from stuff Closure needs to look at

dnolen15:07:00

though my impression is that Closure actually supports that too

dnolen15:07:12

(it can prune for you if you set a flag)

dnolen15:07:43

in any case I’m fine with indexing ourselves - it’s orthogonal to finding the deps

dnolen15:07:55

the only reason to avoid it is that someone’s already done that work

dnolen15:07:59

and sorted out the bugs

dnolen15:07:35

@mfikes updating ticket

anmonteiro15:07:53

I'll try to find time before the end of the week

anmonteiro15:07:09

This is exciting

dnolen15:07:53

@mfikes updated the ticket

dnolen15:07:27

seems way better - the only tricky part is the warning - but I described a simple plan - you may have better ideas

mfikes15:07:35

@dnolen The meaning of the third bullet is not clear to me: “add checked-aget', checked-aset' which allocate an Error to get the raw stacktrace”

dnolen15:07:03

@mfikes printing that something went wrong without knowing the bad caller isn’t helping anyone 🙂

mfikes15:07:39

Ahh, the :pre is no good

dnolen15:07:58

@mfikes it’s fine actually

dnolen15:07:13

checked-aget checked-aget' are 2 different checkers

dnolen15:07:29

you need them to be separate to not run afoul of AOT issues

mfikes15:07:17

Ahh, cool. Yeah, I figured something like that might be needed to actually implement it all.

rauh18:07:46

@mfikes How would I trigger the double warning when all I have is master? I tried adding something to a test but didn't get any warnings

mfikes18:07:53

@rauh You’d have to:
1. Apply the latest patch in https://dev.clojure.org/jira/browse/CLJS-2198
2. Either set :warnings true or :warnings {:invalid-array-access true}
3. Try evaluating (aget #js {:foo 1} "foo")

rauh18:07:32

Is there any way to do this on the cljs repo? I mean without having to build the jar?

dnolen18:07:00

./script/repl

mfikes18:07:12

Or perhaps even easier, script/noderepljs, but first make a minor tweak to that script right before the last ) to throw in :warnings true

rauh18:07:56

Oh I didn't know about these. Still a noob when it comes to much of the codebase 🙂 That worked!

mfikes18:07:11

I’m secretly hoping that every macroexpansion results in a form that is analyzed twice, and if we find it, another huge compiler perf win.

anmonteiro18:07:01

^ this would mean so much for bootstrap

mfikes18:07:26

Oh. My guess is the burden of :read-cond :allow :features might make CLJC slow.

darwin19:07:46

@mfikes having trouble running planck tests inside canary’s travis job: https://travis-ci.org/cljs-oss/canary/builds/252479615 not sure what is going wrong; I suspect the travis machine runs out of memory or something like that. It works for me on my local machine (it runs inside docker for mac)

darwin19:07:53

or directly

mfikes19:07:56

@darwin Right. Planck builds themselves have been timing out on Linux https://travis-ci.org/mfikes/planck/builds

darwin19:07:57

I don’t want you to investigate it, just FYI in case you knew what could be the issue

mfikes19:07:52

I checked, and it builds fine on Ubuntu locally for me in a VM, so I think Travis is having some sort of issue with Ubuntu that just cropped up over the past few hours

darwin19:07:33

ok, good to know. I don’t see it as a canary issue at this moment; ideally I want all tests to run externally on their own CI anyway, this was just a test of whether embedding the whole test suite was possible

darwin19:07:21

@mfikes now it completed: https://travis-ci.org/cljs-oss/canary/builds/252541373#L1192 just having issue with port 123 inside the docker container, same as on my local machine (inside mac docker)

mfikes19:07:49

By the way, the latest ClojureScript commit on master led to Planck failing—a canary run would have caught that. I caught it on my own and fixed the Planck issues.

dnolen19:07:44

@mfikes tools.reader bump?

anmonteiro19:07:21

Lumo is failing too

anmonteiro19:07:25

but we know the fix

mfikes19:07:30

@dnolen Yes, it was that Planck resorts to looking at the strings in the exceptions thrown by tools.reader. A fragile approach, but necessary.

dnolen19:07:59

oh yeah I don’t know that we can do anything about tests like that, we have to do the same thing

mfikes19:07:58

Right. tools.reader is actually throwing rich ex-info objects, but the data doesn’t have the stuff that Planck needed to react appropriately. All cool. In this case, the breakage is clearly Planck’s fault

mfikes19:07:31

Actually, I blamed Russ Olsen and he was happy about it https://twitter.com/russolsen/status/884863183256784896

bronsa20:07:28

@mfikes if there's any data missing from the ex-info that tools.reader is throwing that you'd find useful, let me know

mfikes20:07:37

@bronsa If you are curious, Planck does a read and then attempts to deduce if the thrown exception is due to an EOF condition

bronsa20:07:29

so a :type :eof in ex-data would help?

mfikes20:07:53

Yes. I haven’t looked closely, but I think if you do a read-string on something that results in one of the EOF conditions, as opposed to trying to read a malformed number, like 34f, then something exactly like you propose above @bronsa would be sufficient for Planck (and Lumo) to avoid depending on the string message

mfikes20:07:19

@bronsa The use case for a REPL is the user types [1 2 and hits return. You want to attempt to read that, and when it throws, you want to react differently than if the user had typed 34f and hit return. Currently they both have the same ex-data, but the first is an EOF (where the REPL will just go to a new line and let the user type more input), and for the second, you need to display to the user that they typed an incorrect form.
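With a machine-readable marker like the proposed `:type :eof` ex-data, the REPL loop described here could stop matching on message strings. A toy JavaScript model (the reader and field names are invented for illustration; Planck's actual logic lives in ClojureScript):

```javascript
// Toy REPL step: try to read the buffered input. An EOF-kind error means
// "incomplete form, keep accumulating input"; any other reader error is
// surfaced to the user as a malformed form.
function replStep(reader, buffer) {
  try {
    return {action: "eval", form: reader(buffer)};
  } catch (e) {
    if (e.type === "eof") {
      // e.g. the user typed "[1 2" and hit return
      return {action: "read-more"};
    }
    // e.g. the user typed the malformed number "34f"
    return {action: "report", message: e.message};
  }
}
```

The key design point is that the dispatch keys off structured data on the exception, not off the "Unexpected EOF" prefix of its message.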

bronsa20:07:53

yes that makes sense

mfikes20:07:56

Reading "" and "[1 2" both result in exceptions that are fortuitously prefixed with the string "Unexpected EOF", with different trailing suffixes

mfikes20:07:10

@darwin For the Canary stuff, I’m wondering how you get projects to build with a specific version of ClojureScript (while at the same time not changing those projects’ .travis.yml files)

darwin20:07:16

@mfikes for projects with travis CI, I’d like to trigger a travis build, passing a special env var, like you did

darwin20:07:36

for other types of projects it will depend on their setup

mfikes20:07:54

OK, that answers it. I suppose the projects that participate would need to support some way of specifying the version

darwin20:07:01

definitely

darwin20:07:27

I’m wondering if we should specify just a version, with the clojurescript jar downloaded from maven, or maybe a full jar url?

mfikes20:07:32

The Planck hack in project.clj feels a little gross, but it is expedient and easy

mfikes20:07:04

Ahh, right. Was wondering about that: How do projects know where to get the JAR.

darwin20:07:24

wanted to ask you guys what would be preferred; also we should bounce the idea off the boot guys to see what their preferred way is

mfikes20:07:25

Perhaps a similar project.clj hack around :repos

darwin20:07:58

canary as a first step given a git url (or sha) will build the jar and then has to put it somewhere so that the projects could consume it conveniently

darwin20:07:26

if we could do this override on maven-layer, not touching project.clj or boot files would be best IMO

mfikes20:07:33

I guess if the ClojureScript JAR is in a local maven repo, all maven-based projects would pick it up regardless

darwin20:07:08

but we would have to know what version of the clojurescript jar the project.clj wants to consume, if we wanted to replace it under them

mfikes20:07:02

I was thinking: Shove a specific version into the local repo, and tell the oss project to use that version

darwin20:07:09

ok, that sounds fine

darwin20:07:28

I think we could provide a shell script which they could put into their .travis.yml, in before_install section, and that would do the deploy of that jar

mfikes20:07:42

Not arguing that this is the only way… just one that could work

darwin20:07:32

I was thinking about some “magical” solution just through maven, but that is probably not a good idea anyways, sometimes projects want to test different clojurescript versions and have complex deps

darwin20:07:36

that would confuse them

mfikes20:07:09

The easiest: deploy every snapshot to Clojars.
2nd easiest: deploy to a local maven repo (somehow).
3rd: deploy to a network repo that Canary “owns”, and tell projects to use that repo.

darwin20:07:38

…participating projects will have to adjust their project.clj or boot configs

mfikes20:07:36

I wonder if you can (for lein at least) use a local profile to override a dep. Hmm.

darwin20:07:37

quick idea: we could use github releases feature and host the jars from there

darwin20:07:03

I like the idea that everything is just in one repo, so no additional setup of S3 or some other machinery

darwin20:07:46

or a dirty option would be to have another branch and commit jars there using some naming scheme, so we get hosting that way

darwin20:07:58

and archiving

juhoteperi20:07:06

I think reading the version from the env (like in the Planck project.clj) is a good way.

juhoteperi20:07:22

Committing jars to the repo is a bad way. Cloning the repo would get VERY SLOW fast.

darwin20:07:57

indeed, env var is easy

darwin20:07:40

git is pretty smart these days, git clone --branch ... won’t bring unneeded stuff

juhoteperi20:07:33

Github releases would work, but each release needs to have a tag; that is probably not a problem though.

darwin20:07:00

for github release we would need to interact with the github api, which is more work ATM

darwin20:07:12

and not that portable, if we wanted move off github

juhoteperi20:07:19

Hmph, even when cloning can be told not to fetch everything, committing large files directly to git is not a good idea. But if we want to do that, we could use Git-LFS, though I'm not sure how much storage Github offers for free.

darwin20:07:49

let’s do the first iteration via git, and we will see, we can scrap it every month or so, and if it will be problem we can find a better solution

juhoteperi20:07:58

Hmm, okay. If each release were committed to its own branch and the branches removed at some point, I think the files would be removed from the index during GC.

darwin20:07:31

the idea is to commit those into a separate branch using some well-defined naming convention, so we can reach them via raw github urls

darwin20:07:00

I’m thinking about using clojurescript repo SHAs to identify them

darwin20:07:57

btw. I have some other private repos with lots of binary data, not using git lfs and it hasn’t been a problem, except for initial clone

mfikes20:07:07

@darwin In the Planck run (https://travis-ci.org/cljs-oss/canary/builds/252541373#L1192), would something like Planck be the only thing inside that job/VM? Is that VM set up by Planck’s .travis.yml file?

anmonteiro20:07:17

@dnolen hrm… turns out I have most of the node module indexing done

anmonteiro20:07:37

I’ll clean it up and attach a patch this evening

darwin20:07:23

@mfikes canary runs a bunch of scripts in parallel inside a docker container: https://github.com/cljs-oss/canary/blob/master/docker/Dockerfile

mfikes20:07:01

Oh, I see, you have some Planck specific things in there.

darwin20:07:36

but I used planck just as a complex example of embedding tests inside canary; that won’t be the recommended way: https://github.com/cljs-oss/canary/blob/master/runner/src/cljs_oss/projects/planck.sh#L5-L6

mfikes20:07:38

I guess Planck and Lumo are outliers in that they are native

darwin20:07:16

canary should just launch a battery of relatively lightweight scripts/functions, which will trigger the CI work somewhere else

darwin20:07:19

and collect results

anmonteiro20:07:27

btw re: canary I remembered Node.js has this https://www.npmjs.com/package/citgm

anmonteiro20:07:11

> The Node.js project uses citgm to smoketest our releases and controversial changes.

darwin20:07:15

@anmonteiro cool, someone had the same ideas 😉

darwin20:07:50

didn’t see this one, btw. the name is subject to change, ideas welcome, I just had to name it somehow

anmonteiro20:07:10

I don’t really care about the name, as long as it does what we want

darwin20:07:11

I was considering using lumo or planck for building the runner tool, but I had some code snippets for clojure already in some other projects, so I took the clojure path

anmonteiro20:07:22

@darwin thank you for the work you’re putting into doing this

mfikes20:07:09

I wonder roughly how many projects we would get to participate? I suspect if 20–50 large mainstream ones participated that would be sufficient to meet the overall goal of catching subtle regressions.

anmonteiro20:07:54

more could also become unmaintainable

darwin20:07:49

the goal here is to set it up and forget, we will see how this will play out in practice

darwin20:07:19

also the idea of triggering the build directly on participating project repos will hopefully buy the attention of the project maintainers, because they will be gently notified that their builds are failing (not by us, but by travis ;)

mfikes20:07:29

In my experience with Planck, I think it breaks once a month or so, half the time a bug in Planck, half the time something good it caught in ClojureScript

mfikes20:07:33

Oh, that is interesting. So, using Planck as an example, I’d somehow give a Travis token to cljs-oss that it could use to trigger mfikes/planck?

mfikes20:07:44

That might have some good properties, in that each project maintains a single copy of its own stuff.

mfikes20:07:50

The mental model for a participant: Today, Travis builds whenever I push to my project. If I participate, it can also build my project whenever ClojureScript has a commit.

darwin20:07:39

hmm, I don’t 100% follow, I was thinking that to trigger mfikes/planck build via API you would have to give me an auth-token and this token has nothing to do with cljs-oss, it just grants permission to trigger builds to anyone who has it

darwin20:07:12

so my idea was to store those tokens in .travis.yml as encrypted env vars, for canary scripts to use

mfikes20:07:48

Yes. They are probably security-sensitive things.

darwin20:07:13

we could also set them in travis web-ui

darwin20:07:39

but I’m not sure who can see that admin panel, that is kinda blurry in my mind

darwin20:07:51

also who “owns” canary’s travis

mfikes20:07:04

By the way, when you use the API to trigger a build, you can, in the message, put a “fake” commit message. That way, if you look at these builds https://travis-ci.org/cljs-oss/planck/builds you can see that some of them are triggered builds for specific ClojureScript versions, while others are conventional builds that result from commits to the project itself.

darwin20:07:33

good point, I’d like to generate meaningful commit messages there

mfikes21:07:22

In concrete terms body='{"request": {"branch":"master", "message": "cljs-oss ClojureScript 1.9.671", "config": {"env": {"CLOJURESCRIPT_VERSION": "1.9.671"}}}}'
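For reference, that request body can be built programmatically so the ClojureScript version is spliced in once; a small sketch (the helper name is made up):

```javascript
// Build the Travis API trigger-build body shown above for a given
// ClojureScript version, so the version string appears in both the
// commit message and the build's env in one place.
function travisTriggerBody(cljsVersion) {
  return JSON.stringify({
    request: {
      branch: "master",
      message: "cljs-oss ClojureScript " + cljsVersion,
      config: {env: {CLOJURESCRIPT_VERSION: cljsVersion}}
    }
  });
}
```

The resulting string is what would be sent as the POST body when triggering a build via the Travis API.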

anmonteiro21:07:04

so indexing a node_modules installation in plain Clojure: https://dev.clojure.org/jira/browse/CLJS-2211 cc @juhoteperi @dnolen

dnolen21:07:30

whoa awesome!

anmonteiro21:07:18

we never go down a node_modules/module/node_modules thing

anmonteiro21:07:36

I don’t think it makes sense to do it

dnolen21:07:43

@anmonteiro and that works for react-dom/server?

juhoteperi21:07:46

The js files have provides with and without .js, is the .js extension used in Node requires?

anmonteiro21:07:50

@dnolen yeah, check tests

dnolen21:07:57

@mfikes sadly macros are not a perf bottleneck 😉

anmonteiro21:07:23

@dnolen specifically:

(closure/maybe-install-node-deps! {:npm-deps {:react "15.6.1"
                                              :react-dom "15.6.1"}})
(let [modules (closure/index-node-modules-dir)]
  (is (true? (some (fn [module]
                     (= module {:module-type :commonjs
                                :file (.getAbsolutePath (io/file "node_modules/react/react.js"))
                                :provides ["react"]}))
                   modules)))
  (is (true? (some (fn [module]
                     (= module {:module-type :commonjs
                                :file (.getAbsolutePath (io/file "node_modules/react/lib/React.js"))
                                :provides ["react/lib/React.js" "react/lib/React"]}))
                   modules)))
  (is (true? (some (fn [module]
                     (= module {:module-type :commonjs
                                :file (.getAbsolutePath (io/file "node_modules/react-dom/server.js"))
                                :provides ["react-dom/server.js" "react-dom/server"]}))
                   modules))))

anmonteiro21:07:26

small snippet

juhoteperi21:07:33

Does Closure module processing work correctly if the dependencies are not indexed?

dnolen21:07:55

@mfikes the issue is quite simple - we analyze the macro expansion, but we also analyzed the macro form - so the checking passes run twice on the same AST

anmonteiro21:07:01

@juhoteperi yeah, you can require a file with an extension in Node

anmonteiro21:07:15

require('react-dom/server.js') works so I added that to the provides too

dnolen21:07:34

@mfikes all you need to do is call analyzed on the return value of check-invoke-arg-types, and not check if the ast is already analyzed?

anmonteiro21:07:13

I think that function covers all the use cases I could think of

anmonteiro21:07:25

if you think something is not handled, do let me know 🙂

juhoteperi21:07:16

Next part would be probably to call this index in handle-js-modules and look at js-sources to see what is required by Cljs sources

dnolen21:07:12

@anmonteiro applying & testing

anmonteiro21:07:24

this patch is just adding a function that’s not used yet

dnolen21:07:30

yeah I know

anmonteiro21:07:31

next steps are of course applying it 🙂

anmonteiro21:07:43

@juhoteperi we still need to handle 1 case

anmonteiro21:07:59

we need to keep using module_deps.js when targeting the browser

anmonteiro21:07:11

because its resolve function respects the browser setting in package.json

anmonteiro21:07:26

that’s the only caveat I can think of

dnolen21:07:37

@anmonteiro but I think taking the next step to do the right thing automatically for Node.js target will be trivial after this

anmonteiro21:07:40

but we can definitely use this one when target is node

juhoteperi21:07:52

This should be used always

dnolen21:07:05

yes, we should always index node_modules

juhoteperi21:07:10

We can't simplify the dependency logic if we need to use module_deps.js

anmonteiro21:07:16

@dnolen also feel free to tweak function naming

dnolen21:07:31

@juhoteperi you still need module_deps.js though

dnolen21:07:50

but we’re only doing top-level …

dnolen21:07:56

but actually it does matter

anmonteiro21:07:02

what do you mean?

dnolen21:07:02

we don’t want those other things to become orphans

dnolen21:07:20

so you still need module_deps.js for client builds so Closure doesn’t get noise

dnolen21:07:31

unless we figure out what the dep flag is for Closure to drop that stuff

anmonteiro21:07:34

is that a problem?

dnolen21:07:45

DEPENDENCY_MODE (or something like that) is a thing in Closure

anmonteiro21:07:56

yeah it’s the same thing I linked

anmonteiro21:07:00

dependency mode STRICT

dnolen21:07:08

oh hah …

juhoteperi21:07:24

Supporting package.json browser field should not be hard?

dnolen21:07:57

@anmonteiro applied and pushed to master

juhoteperi21:07:24

This doesn't work with module-processing because Closure needs to know about all the dependencies (fbjs etc.)

dnolen21:07:00

@juhoteperi yes I think you need both things for sure

dnolen21:07:10

for browser case module_deps.js finds everything

dnolen21:07:17

in Node.js case, require sorts it out for you

dnolen21:07:46

for ClojureScript :require resolution, top level is good enough

anmonteiro21:07:53

I don’t understand why you’re saying this doesn’t work

anmonteiro21:07:19

is it because it doesn’t index things of the form node_modules/xxx/node_modules?

juhoteperi21:07:37

Yes, module processing needs to know about those

anmonteiro21:07:50

those are fairly uncommon though

anmonteiro21:07:02

that only happens when you have conflicting dependency versions

anmonteiro21:07:19

there will still be one dependency of the same name at the top level

anmonteiro21:07:25

just a different version

juhoteperi21:07:32

Hmh, maybe if I had new npm

juhoteperi21:07:42

With old one the top-level only has the deps that are on my package.json

dnolen21:07:52

what are we talking about here - if you use both kinds of indexing this will always work? 🙂

dnolen21:07:07

what just landed doesn’t supplant how we did it before

dnolen21:07:36

what @anmonteiro just did is about one thing and one thing only

anmonteiro21:07:37

@dnolen I don’t know if --dependency_mode=STRICT helps our use case anyway

dnolen21:07:40

fixing :require

anmonteiro21:07:53

> --dependency_mode=STRICT only keeps files that goog.provide a symbol.

anmonteiro21:07:59

from the Closure wiki

anmonteiro22:07:17

but we wanna filter noise out before processing them

anmonteiro22:07:38

we won’t have any goog.provides if the modules are CommonJS

dnolen22:07:48

so we have a thing to do that filtering and it’s good enough for now as far as I’m concerned

dnolen22:07:10

so next step would just be getting feature parity w/ master while killing that yucky missing-js-modules thing 🙂

dnolen22:07:03

@mfikes these issues aside, I have to say this array checking is pretty cool

dnolen22:07:17

@anmonteiro naming stuff on your patch is fine - I would consider all this work internal details for now

juhoteperi22:07:20

https://github.com/Deraen/clojurescript/commit/f9f01552bc2a143fe5ccdbeb381dfbc55fe11494 This should replace missing-modules with looking at this "top-level index" and then using module-deps.js to build the full list of files for Closure

anmonteiro22:07:00

@juhoteperi doesn’t seem right to delete that line concerning ups-foreign-libs

anmonteiro22:07:09

deps.cljs files may have :npm-deps

anmonteiro22:07:41

or is that handled by handle-js-modules?

juhoteperi22:07:44

Those deps will be installed by maybe-install-node-deps! and after they are installed, they will be included in the index

anmonteiro22:07:02

you’re right

anmonteiro22:07:52

your patch looks OK to me

anmonteiro22:07:04

but that’s just replacing :missing-js-modules

dnolen22:07:15

yeah it’s OK to get master going

anmonteiro22:07:29

only missing the Node.js js/require emission now I suppose!

dnolen22:07:38

but the node_modules index should probably be pushed into js-deps/js-dependency-index

dnolen22:07:11

that’s really what’s needed to js/require to work

anmonteiro22:07:33

yeah, so we can resolve them in cljs.analyzer/resolve later

dnolen22:07:55

@juhoteperi but if you want to supply this patch so we can test master and kick this around - that’s cool too

dnolen22:07:25

whatever we do next won’t be all that different from this, just maybe slightly different order

dnolen22:07:03

I’m happy to take on the next step, unless someone else is really excited about it

juhoteperi22:07:45

Go ahead, I should be going to sleep anyway

juhoteperi22:07:21

I hope to update my js-globals patch tomorrow, I already wrote some explanation for that but need to work on it a bit more

dnolen22:07:17

2212 passed tests, applied to master

juhoteperi22:07:56

:npm-deps is now only used to automatically install deps, so using Node packages will work even if the packages are installed manually

juhoteperi22:07:16

Which is cool

dnolen22:07:43

way cool 😎

dnolen22:07:32

last big thing

juhoteperi22:07:47

Do you plan to directly emit require call when requiring Node package?

juhoteperi22:07:07

How will that work with :refer etc?

dnolen22:07:16

and we assign it to a name

dnolen22:07:27

we don’t need to do anything special

juhoteperi22:07:35

Maybe :js-global could work the same way, instead of creating wrapper Closure modules

juhoteperi22:07:52

instead of require call, just assign window.Global to a name

dnolen22:07:36

oh yeah, that’s a really good idea - should have thought about that before 😛

dnolen22:07:47

you don’t need :js-global for this

juhoteperi22:07:47

And to a question from some days ago: yes, I think we can get quite far with a single :js-global name, all Node packages will only export a single object or function.

dnolen22:07:54

we don’t need anything

juhoteperi22:07:11

There will be JS libs that export multiple globals, but not those that are also built for Node

dnolen22:07:13

I see you need to know what to assign to what name

dnolen22:07:29

we know the left-hand-side

dnolen22:07:33

but not the right-hand side

dnolen22:07:57

no what I mean is this

juhoteperi22:07:57

or what do you mean

dnolen22:07:05

we don’t need :js-global for Node.js target

dnolen22:07:10

because we can make up a local var

dnolen22:07:18

and assign the result of require

dnolen22:07:33

we know the left and right hand side

dnolen22:07:50

but for foreign libs we don’t know what they export

dnolen22:07:58

and we can’t assign anything

juhoteperi22:07:13

How is this different? require returns an object or function, window.Global is an object or function

dnolen22:07:41

but in Node.js this won’t be global

dnolen22:07:49

var localReact = require("react"); is how it’s going to work

juhoteperi22:07:14

Yeah and then code using (react/fooBar) is going to be compiled into localReact.fooBar()

dnolen22:07:23

but foreign lib case is different

dnolen22:07:41

so OK I (finally) get the need for :js-global but need to think about it some more

juhoteperi22:07:50

I think it should work

juhoteperi22:07:04

This is quite similar to what I did with Closure modules

dnolen22:07:31

what will work?

juhoteperi22:07:42

module$react = window.React and Cljs code using (:require [react :as react]) was compiled into module$react.fooBar()

juhoteperi22:07:01

var localReact should work the same?

dnolen22:07:07

:js-global - let’s call it something a bit clearer - :global-exports?

dnolen22:07:41

I wonder if this is sufficient

juhoteperi22:07:16

Works for CommonJS but I'm not sure about ES6

dnolen22:07:16

{:global-exports '{cljsjs.react React cljsjs.react/dom-server ReactDOMServer}}

anmonteiro22:07:18

:global-name?

dnolen22:07:36

we need to be able to map a provide to a specific global name

juhoteperi22:07:01

Is there benefit in doing this in options instead of foreign-lib map?

dnolen22:07:19

it would be in foreign-lib map

dnolen22:07:23

not suggesting compiler option for this
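A hypothetical foreign-lib entry along these lines might look as follows; the exact shape was still being discussed at this point, the bundle filename is assumed, and the `:global-exports` keys simply mirror dnolen's inline snippet:

```clojure
;; Hypothetical sketch only - the final shape of :global-exports was not
;; settled in this discussion. The :file path is an assumption.
{:foreign-libs
 [{:file "react-bundle.min.js"
   :provides ["cljsjs.react" "cljsjs.react.dom-server"]
   ;; map each provided namespace to the global the bundle exposes
   :global-exports '{cljsjs.react            React
                     cljsjs.react.dom-server ReactDOMServer}}]}
```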

juhoteperi22:07:51

Okay, but then it can be just a single name

juhoteperi22:07:05

I presume we are going to rename Cljsjs package provides so that the names match Node names

anmonteiro22:07:13

I agree we don’t need a map, since we can make one with provides

dnolen22:07:35

I think we should do it since we don’t know how people will bundle their stuff

dnolen22:07:48

this at least opens the door for people to bundle a huge crazy thing and give it sensible exports

juhoteperi22:07:58

Hmm, single file that exports multiple things?

juhoteperi22:07:06

Yeah, I guess that is possible with webpack and such

dnolen22:07:11

yes because they built something themselves with webpack

dnolen22:07:19

and this means they can now get sensible requires

juhoteperi22:07:29

Okay, I see the need now

dnolen22:07:18

ok new blocker

dnolen22:07:21

but I think a pretty easy one

dnolen22:07:33

doesn’t really depend on anything else

juhoteperi22:07:32

If you implement the Node require side, I can work on this, unless someone else has time do it first

dnolen22:07:59

alright gotta run

darwin23:07:32

@mfikes new progress: canary successfully triggered an external build of the cljs-devtools project: https://travis-ci.org/cljs-oss/canary/builds/252617181 https://travis-ci.org/binaryage/cljs-devtools/builds/252618407 The project script[1] uses a token stored in an encrypted env var[2]. It turned out that there will be more work needed on writing a decent Travis API client library: unfortunately the response from the trigger request does not give me back a build-id I could immediately use for generating a report page with status badges and build page urls. I will have to poll the Travis API and wait for builds to start to get the list of build-ids actually triggered by the request (a single commit/request can in theory trigger multiple builds in a matrix, possibly after a while when they get processed from the queue). [1] https://github.com/cljs-oss/canary/blob/master/runner/src/cljs_oss/projects/binaryage.clj#L5 [2] https://github.com/cljs-oss/canary/blob/jobs/.travis.yml#L6

mfikes23:07:57

@darwin Cool! Curious, how is the token decrypted?

darwin23:07:42

I would guess they hold a private key and their cli tool encrypts it with their public key

mfikes23:07:59

Ahh. Good enough.

darwin23:07:59

the tricky part is not to leak those env vars to logs

darwin23:07:54

was looking for some Travis v3 client API library, but nothing exists for Java or Clojure - it is too new