This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2017-07-11
Channels
- # aws (15)
- # beginners (55)
- # boot (116)
- # bristol-clojurians (2)
- # cider (4)
- # cljs-dev (439)
- # cljsrn (14)
- # clojure (135)
- # clojure-argentina (3)
- # clojure-czech (4)
- # clojure-italy (60)
- # clojure-russia (1)
- # clojure-spec (48)
- # clojure-uk (42)
- # clojurescript (170)
- # cloverage (11)
- # core-async (19)
- # cursive (13)
- # datomic (48)
- # emacs (2)
- # graphql (3)
- # hoplon (8)
- # jobs (1)
- # jobs-discuss (5)
- # klipse (11)
- # luminus (5)
- # lumo (5)
- # mount (48)
- # off-topic (96)
- # om (17)
- # onyx (14)
- # parinfer (30)
- # protorepl (1)
- # re-frame (90)
- # reagent (2)
- # remote-jobs (1)
- # spacemacs (12)
- # specter (20)
- # uncomplicate (1)
- # untangled (65)
- # vim (2)
- # yada (8)
A fallout of the recent aget work is that it will be very easy to add type inference checks for runtime function invokes (I think currently most of that kind of stuff is done for macros).
cljs.user=> (defn foo [^boolean x] (bit-count x))
WARNING: cljs.core/bit-count is being passed [boolean] at line 1 <cljs repl>
#'cljs.user/foo
cljs.user=> (bit-count "abc")
WARNING: cljs.core/bit-count is being passed [string] in file <cljs repl>
0
^ Trivial to add. Here’s how, by adding a symbol and a validator to a map of existing validators, along with the warning type to emit:
{,,,
'cljs.core/bit-count
{:valid? #(every? numeric-type? %)
:warning-type :generic-type-error}}
expanded on my JavaScript modules blog post to account for previous feedback: https://github.com/clojure/clojurescript-site/pull/104
^ looking for final feedback
@martinklepsch @dnolen I’d also refer to a recipe project which demonstrates how one can use code splitting in a web app for example with routing
I think small recipe projects are a good thing in general
@dnolen just released tools.reader 1.0.2 with set duplicate checks fixed and error message changes in, if you want to give it a go in cljs
@roman01la routing is just not in scope for talking about code splitting on the website
@anmonteiro checking it out
@dnolen Yes, sure. I’m just thinking about recipe projects for different compiler features more or less in a context of web apps. Maybe will find time to work on this.
Quick thought in passing: what about getting a guest post from someone like the re-frame guys showing how it could be used (or replace an existing technique).
@U051MTYAB I have no authority at all related to this but does it sound like something you'd be interested in contributing?
What do I have to do to make the tests run again? I'm getting a ClassNotFoundException: com.google.javascript.jscomp.Es6RewriteModules when running ./scripts/test on master
@rauh ./script/bootstrap should download new Closure
@juhoteperi Thanks! That worked.
@mfikes I reviewed the patch, let’s clean it up a bit more then we can look at the double warning issue
Are you referring to the general ability to warn when we see things being wrong statically?
Cool. The double warning does very much appear to be related to the last bit of macroexpansion.
@mfikes yes but it can be a bit fancier for everything except really higher order stuff like map
Right. I think Colin was dreaming of doing the same essentially in the static analysis Cursive does. Perhaps he needs to add type inference. But the ClojureScript compiler is half way there 🙂
I’ll start putting together a patch that addresses the last two comments you have in the ticket @dnolen in case the double analysis ends up being treated as a different issue. (Hopefully a perf gain if we find it.)
I’ve slapped analyzed on lots of different things around there, but can’t seem to nail it.
@mfikes the only reason I’m somewhat skeptical about macros -> double-warning is why it doesn’t affect other things. Do you have a hypothesis about that?
I don’t suppose @ambrosebs has looked at fancier type checking based on specs? 🙂
Perhaps the only reason we haven’t seen it before is that + turns into js* and the warnings are on js*. Here we have a new pass that looks and sees checked-aget twice.
At least with the stack diff attached in the ticket we can see the extra bit is right at the last line of analyze-seq
ok what happens is that parse-invoke analyzes the fn expression once - so you get your first warning
@mfikes if we get this wrapped up, maybe we should go with your post on Thursday? The post itself I think is in a really good state except date / var names.
@dnolen I’ve got the revised patch apart from the bit above, which is giving me a little grief when starting the REPL
Yeah, there is a bit in the post that refers to “in the future” adding this stuff. That small bit could be revised / expanded a little.
@mfikes if for some reason you cannot make this work, lets get a patch with my suggested change and I can take a look
I can’t make it work. I’m starting to wonder if your patch is probably good “in spirit”, but perhaps in this case my code is going down the other branch.
I have my other stuff running unit tests now, and I’ll attach a revised patch without the double warning issue stuff.
@mfikes no worries, I suspect it’ll end up being a pretty simple issue - I’m still pretty confident that this has to do w/ invoke optimization based on the stack trace - but perhaps we’re missing a detail somewhere
Should the args get an analyzed tag?:
bindings (cond-> []
           bind-args? (into (interleave arg-syms args))
           bind-f-expr? (conj f-sym (analyzed f)))
Cool—patch is now in https://dev.clojure.org/jira/browse/CLJS-2198 so anyone can mess with it
If we solve the double-warning issue, we can revise the tests that are actually checking for this warning to go back to looking for a sequence containing one warning. You can see the patch revises it to look at the first warning.
The new aget warnings trigger for core.async FWIW. Here is one
---- Compiler Warning on target/ios/cljs/core/async.cljs line:248 column:6 ----
cljs.core/aset, arguments must be an array, followed by numeric indices, followed by a value, got [objects number number] instead
One thing I need to figure out: Using this stuff on a real project, for some reason it appears to be failing when loading/compiling stuff in libs until I do a require :reload, and that’s when the triggers occur. One troubling thing is that the checked-aget aliasing doesn’t seem to appear in the JS for libs when I would expect it to. So, probably some real-world corner cases to sort through.
@mfikes for the former thing, you mean you don’t see checks emitted until you reload a namespace?
TL;DR seems to be a problem with initial compile (I’m using downstream tooling)
What I see occur: I set :warnings true. Then lein clean, lein figwheel. My code gets compiled by Figwheel when it starts up, but no warnings appear. If I look at the JavaScript in my out directory, I see the {:invalid-array-access-warning-enabled true} build-affecting option recorded, but I don’t see checked-aget in the JavaScript. If I then go in the REPL and use :reload on a namespace, I then see warnings appear in the REPL console, and if I go back and look at the JavaScript in out, the unchecked return (({"foo": (1)})["foo"]) code snippets are replaced with code using checked_aget: return cljs.core.checked_aget.call(null,({"foo": (1)}),"foo");
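For intuition, a runtime-checked array access along these lines can be sketched in plain JavaScript. This is illustrative only: the function name and error messages here are invented, and the real checked-aget is defined in cljs.core.

```javascript
// Hypothetical sketch of a runtime-checked array access, in the spirit of
// cljs.core/checked-aget (name and error text are illustrative only).
function checkedAget(array, idx) {
  if (!Array.isArray(array)) {
    throw new Error("aget: expected an array, got " + typeof array);
  }
  if (typeof idx !== "number" || idx < 0 || idx >= array.length) {
    throw new Error("aget: index " + idx + " out of bounds");
  }
  return array[idx];
}

console.log(checkedAget([1, 2, 3], 1)); // → 2
```

The unchecked form compiles to a bare indexing expression, so the check above is the runtime cost being discussed.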
The only thing I’m suspecting is *unchecked-array-access* leaking to other namespaces, but I have no reason to believe that this is really happening.
Yes, a bug.
Everywhere *unchecked-if* had a binding, I added one for *unchecked-array-access* (otherwise you are right, it throws if you try to set!)
@dnolen thanks for the feedback on my draft. I might not have time to work on the node module indexing until Friday/weekend
Another high level observation about the feature: If you enable :invalid-array-access, it is nice that you get the static analysis checks, but the runtime checks then essentially make it so you can’t run your app if there are enough in your way. (In other words, instead of it acting like a lint, you quickly find you have to turn it back off in order to continue developing.)
Just setting expectations
@anmonteiro understood, I might give it a poke - probably the only thing I need to understand, as it’s not obvious from module_dep.js: how do you find react/dom-server? you said it cannot be reached from just indexing top-level - sounds like you need to go one node_modules down or something?
So you always need to go through the project's node_modules. react-dom will be provided by 1. the package.json's main in the react-dom folder 2. or the index file in that folder in the absence of a main entry
dnolen module_dep.js is passed a JS file with e.g. require("react-dom"); require("react"); and it will resolve the dependencies of these requires. Currently missing JS modules are included in this JS file.
react-dom/server is the server.js file in that directory OR the server/index.js file
@juhoteperi yeah that's how we do it now but I think David was asking about the new approach
If we need to implement this ourselves, we probably could just implement the whole indexing in Clojure
I do have one version of such code somewhere
@mfikes that is a very good point actually! we probably need a knob for that so people can transition
@juhoteperi that's a good point actually
Would make me feel much better about npm-deps also; using node modules wouldn't be as tied to having Cljs call the npm binary then
We were only doing it in JS to get the transitive closure of dependencies from a set of requires
But that would work for Node
For Closure I think we don't want to blindly pass every file
No, but we can look at the Cljs dependency graph to see which modules need to be passed into Closure
I'll try to find time before the end of the week
This is exciting
seems way better - the only tricky part is the warning - but I described a simple plan - you may have better ideas
@dnolen The meaning of the third bullet is not clear to me
* add checked-aget', checked-aset' which allocate an Error to get the raw stacktrace
@mfikes printing that something went wrong without knowing the bad caller isn’t helping anyone 🙂
Ahh, cool. Yeah, I figured something like that might be needed to actually implement it all.
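For reference, the Error-allocation trick works because a freshly constructed Error captures the current stack, which includes the caller of the check. A small sketch, where checkedAgetPrime is a hypothetical name standing in for the proposed checked-aget':

```javascript
// Sketch of the trick hinted at above: allocate an Error solely to capture
// the raw stack trace, so the runtime check can report its caller.
// (checkedAgetPrime is a hypothetical stand-in for checked-aget'.)
function checkedAgetPrime(array, idx) {
  if (!Array.isArray(array)) {
    const e = new Error();
    // e.stack contains the call site of whoever invoked checkedAgetPrime,
    // which is what makes the warning actionable for the user.
    console.warn("Invalid array access, called from:\n" + e.stack);
  }
  return array[idx];
}
```

The cost is the Error allocation on the failing path only, so well-behaved code pays nothing extra.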
@mfikes How would I trigger the double warning when all I have is master? I tried adding something to a test but didn't get any warnings
@rauh You’d have to
1. Apply the latest patch in https://dev.clojure.org/jira/browse/CLJS-2198
2. Either set :warnings true or :warnings {:invalid-array-access true}
3. Try evaluating (aget #js {:foo 1} "foo")
Or perhaps even easier, script/noderepljs, but first make a minor tweak to that script right before the last ) to throw in :warnings true
Oh I didn't know about these. Still a noob when it comes to much of the codebase 🙂 That worked!
I’m secretly hoping that every macroexpansion results in a form that is analyzed twice, and if we find it, another huge compiler perf win.
^ this would mean so much for bootstrap
@mfikes which would solve https://dev.clojure.org/jira/browse/CLJS-1913
@mfikes having trouble running planck tests inside canary’s travis job: https://travis-ci.org/cljs-oss/canary/builds/252479615 not sure what is going wrong, I suspect the travis machine runs out of memory or something like that; it works for me on my local machine (it runs inside docker for mac)
the actual script is here: https://github.com/cljs-oss/canary/blob/master/runner/src/cljs_oss/projects/planck.sh
@darwin Right. Planck builds themselves have been timing out on Linux https://travis-ci.org/mfikes/planck/builds
I checked, and it builds fine on Ubuntu locally for me in a VM, so I think Travis is having some sort of issue with Ubuntu that just cropped up over the past few hours
ok, good to know. I don’t see it as a canary issue at this moment; ideally I want all tests to run externally on their own CI anyway. This was just a test of whether embedding a whole test suite was possible
@mfikes now it completed: https://travis-ci.org/cljs-oss/canary/builds/252541373#L1192 just having issue with port 123 inside the docker container, same as on my local machine (inside mac docker)
By the way, the latest ClojureScript commit on master led to Planck failing; a canary run would have caught that. I caught it on my own and fixed the Planck issues.
Lumo is failing too
but we know the fix
@dnolen Yes, it was that Planck resorts to looking at the strings in the exceptions thrown by tools.reader. A fragile approach, but necessary.
oh yeah I don’t know that we can do anything about tests like that, we have to do the same thing
Right. tools.reader is actually throwing rich ex-info objects, but the data doesn’t have the stuff that Planck needed to react appropriately. All cool. In this case, the breakage is clearly Planck’s fault
Actually, I blamed Russ Olsen and he was happy about it https://twitter.com/russolsen/status/884863183256784896
@mfikes if there's any data missing from the ex-info that tools.reader is throwing that you'd find useful, let me know
@bronsa If you are curious, Planck does a read and then attempts to deduce if the thrown exception is due to an EOF condition
Yes. I haven’t looked closely, but I think if you do a read-string on something that results in one of the EOF conditions, as opposed to trying to read a malformed number, like 34f, then something exactly like you propose above @bronsa would be sufficient for Planck (and Lumo) to avoid depending on the string message
@bronsa The use case for a REPL is the user types [1 2 and hits return. You want to attempt to read that, and when it throws, you want to react differently than if the user had typed 34f and hit return. Currently they both have the same ex-data, but the first is an EOF (where the REPL will just go to a new line and let the user type more input), and for the second, you need to display to the user that they typed an incorrect form.
Reading ""
and "[1 2"
both result in exceptions that are fortuitously prefixed with the string "Unexpected EOF"
, with different trailing suffixes
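The fragile message-prefix classification being described can be sketched like this, with a stubbed reader standing in for tools.reader. The exact error messages below are invented for illustration; only the prefix-check pattern is the point:

```javascript
// Stub standing in for tools.reader's read; the real exceptions come from
// cljs.tools.reader and their messages are not a stable API.
function fakeRead(src) {
  if (src === "[1 2") throw new Error("Unexpected EOF while reading vector");
  if (src === "34f") throw new Error("Invalid number format [34f]");
  return src;
}

// A REPL wants EOF ("keep accepting input on a new line") distinguished
// from a genuinely malformed form ("report the error to the user").
function classify(src) {
  try {
    fakeRead(src);
    return "ok";
  } catch (e) {
    return e.message.startsWith("Unexpected EOF") ? "eof" : "malformed";
  }
}
```

Richer ex-data on the thrown exception, as proposed above, would let the REPL branch on data instead of a string prefix.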
@darwin For the Canary stuff, I’m wondering how you get projects to build with a specific version of ClojureScript (while at the same time not changing those projects’ .travis.yml files)
@mfikes for projects with travis CI, I’d like to trigger the travis build, passing a special env var, like you did
OK, that answers it. I suppose the projects that participate would need to support some way of specifying the version
I’m wondering if we should specify just a version, and the clojurescript jar will be downloaded from maven, or maybe a full jar url?
wanted to ask you guys what would be preferred, also we should bounce the idea off the boot guys, what is their preferred way
canary as a first step given a git url (or sha) will build the jar and then has to put it somewhere so that the projects could consume it conveniently
if we could do this override on maven-layer, not touching project.clj or boot files would be best IMO
I guess if the ClojureScript JAR is in a local maven repo, all maven-based projects would pick it up regardless
but we would have to know what version of the clojurescript jar project.clj wants to consume, if we wanted to replace it under them
I was thinking: Shove a specific version into the local repo, and tell the oss project to use that version
I think we could provide a shell script which they could put into their .travis.yml, in the before_install section, and that would do the deploy of that jar
I was thinking about some “magical” solution just through maven, but that is probably not a good idea anyways, sometimes projects want to test different clojurescript versions and have complex deps
The easiest: Deploy every snapshot to Clojars 2nd easiest: Deploy to local maven repo (somehow) 3rd: Deploy to a network repo that Canary “owns”, and tell projects to use that repo
I wonder if you can (for lein at least) go use a local profile to override a dep. Hmm.
I like the idea that everything is just in one repo, so no additional setup of S3 or some other machinery
or a dirty way would be to have another branch and commit jars there using some naming scheme, so we get hosting that way
I think reading the version from env (like in the Planck project.clj) is a good way.
Committing jars to the repo is a bad way. Cloning the repo would get VERY SLOW fast.
Github releases would work, but each release needs to have a tag, though that is probably not a problem.
for github release we would need to interact with the github api, which is more work ATM
Hmph, even when cloning can be told not to fetch everything, committing large files directly to git is not a good idea. But if we want to do that, we could use Git-LFS, though I'm not sure how much storage Github offers for free.
let’s do the first iteration via git, and we will see, we can scrap it every month or so, and if it will be problem we can find a better solution
Hmm, okay, if each release were committed to its own branch and the branches removed at some point, I think the files would be removed from the index during GC.
the idea is to commit those into a separate branch using some well-defined naming convention, so we can reach them via raw github urls
btw. I have some other private repos with lots of binary data, not using git lfs, and it hasn’t been a problem, except for the initial clone
@darwin In the Planck run (https://travis-ci.org/cljs-oss/canary/builds/252541373#L1192), would something like Planck be the only thing inside that job/VM? Is that VM set up by Planck’s .travis.yml file?
@dnolen hrm… turns out I have most of the node module indexing done
I’ll clean it up and attach a patch this evening
@mfikes canary runs a bunch of scripts in parallel inside a docker container: https://github.com/cljs-oss/canary/blob/master/docker/Dockerfile
I had to amend that docker env to include planck deps: https://github.com/cljs-oss/canary/blob/master/docker/Dockerfile#L28-L29
but I used planck just as a complex example of embedding tests inside canary, that won’t be the recommended way: https://github.com/cljs-oss/canary/blob/master/runner/src/cljs_oss/projects/planck.sh#L5-L6
canary should just launch a battery of relatively lightweight scripts/functions, which will trigger the CI work somewhere else
btw re: canary I remembered Node.js has this https://www.npmjs.com/package/citgm
> The Node.js project uses citgm to smoketest our releases and controversial changes.
@anmonteiro cool, someone had the same ideas 😉
didn’t see this one, btw. the name is subject to change, ideas welcome, I just had to name it somehow
I don’t really care about the name, as long as it does what we want
I was considering using lumo or planck for building the runner tool, but I had some code snippets for clojure already in some other projects, so I took the clojure path
@darwin thank you for the work you’re putting into doing this
I wonder roughly how many projects we would get to participate? I suspect if 20–50 large mainstream ones participated that would be sufficient to meet the overall goal of catching subtle regressions.
agreed
more could also become unmaintainable
the goal here is to set it up and forget, we will see how this will play out in practice
also the idea of triggering the build directly on participating project repos will hopefully buy the attention of the project maintainers, because they will be gently notified that their builds were failing (not by us, but by travis ;)
In my experience with Planck, I think it breaks once a month or so, half the time a bug in Planck, half the time something good it caught in ClojureScript
Oh, that is interesting. So, using Planck as an example, I’d somehow give a Travis token to cljs-oss that it could use to trigger mfikes/planck?
That might have some good properties, in that each project maintains a single copy of its own stuff.
The mental model for a participant: Today, Travis builds whenever I push to my project. If I participate, it can also build my project whenever ClojureScript has a commit.
hmm, I don’t 100% follow, I was thinking that to trigger mfikes/planck build via API you would have to give me an auth-token and this token has nothing to do with cljs-oss, it just grants permission to trigger builds to anyone who has it
so my idea was to store those tokens in .travis.yml as encrypted env vars, for canary scripts to use
By the way, when you use the API to trigger a build, you can, in the message, put a “fake” commit message. That way, if you look at these builds https://travis-ci.org/cljs-oss/planck/builds you can see that some of them are triggered builds for specific ClojureScript versions, while others are conventional builds that result from commits to the project itself.
In concrete terms body='{"request": {"branch":"master", "message": "cljs-oss ClojureScript 1.9.671", "config": {"env": {"CLOJURESCRIPT_VERSION": "1.9.671"}}}}'
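That request body can be built programmatically; a small sketch (endpoint, headers, and token handling are omitted here, and the helper name is made up):

```javascript
// Build the Travis trigger-request body shown above for a given
// ClojureScript version. The resulting JSON string would be POSTed to the
// Travis API with the usual auth headers (not shown).
function buildTriggerBody(version) {
  return JSON.stringify({
    request: {
      branch: "master",
      message: "cljs-oss ClojureScript " + version,
      config: { env: { CLOJURESCRIPT_VERSION: version } }
    }
  });
}

console.log(buildTriggerBody("1.9.671"));
```

The custom message is what makes triggered builds distinguishable from ordinary push builds in the build list.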
so indexing a node_modules installation in plain Clojure: https://dev.clojure.org/jira/browse/CLJS-2211
cc @juhoteperi @dnolen
Nice!
we never go down a node_modules/module/node_modules thing
I don’t think it makes sense to do it
@anmonteiro and that works for react/dom-server?
The js files have provides with and without .js; is the .js extension used in Node requires?
@dnolen yeah, check tests
@dnolen specifically:
(closure/maybe-install-node-deps! {:npm-deps {:react "15.6.1"
                                              :react-dom "15.6.1"}})
(let [modules (closure/index-node-modules-dir)]
  (is (true? (some (fn [module]
                     (= module {:module-type :commonjs
                                :file (.getAbsolutePath (io/file "node_modules/react/react.js"))
                                :provides ["react"]}))
                   modules)))
  (is (true? (some (fn [module]
                     (= module {:module-type :commonjs
                                :file (.getAbsolutePath (io/file "node_modules/react/lib/React.js"))
                                :provides ["react/lib/React.js" "react/lib/React"]}))
                   modules)))
  (is (true? (some (fn [module]
                     (= module {:module-type :commonjs
                                :file (.getAbsolutePath (io/file "node_modules/react-dom/server.js"))
                                :provides ["react-dom/server.js" "react-dom/server"]}))
                   modules))))
small snippet
Does Closure module processing work correctly if the dependencies are not indexed?
@mfikes the issue is quite simple - we analyze the macro expansion, but we analyzed the macro - so the checking pass runs twice on the same AST
@juhoteperi yeah you can require a file with an extension in Node; require('react-dom/server.js') works, so I added that to the provides too
@mfikes all you need to do is call analyzed on the return value of check-invoke-arg-types, and not check if the ast is already analyzed?
I think that function covers all the use cases I could think of
if you think something is not handled, do let me know 🙂
Next part would probably be to call this index in handle-js-modules and look at js-sources to see what is required by Cljs sources
@anmonteiro applying & testing
this patch is just adding a function that’s not used yet
next steps are of course applying it 🙂
@juhoteperi we still need to handle 1 case
we need to keep using module_deps.js when targeting the browser
because its resolve function respects the browser setting in package.json
that’s the only caveat I can think of
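For context, a minimal sketch of what honoring the browser field means when picking a package entry point. This only covers the simple string form; the real field also supports per-file remapping and false to stub a module out:

```javascript
// Pick a package entry point, preferring package.json's "browser" field
// (string form only) when building for the browser, as module_deps.js does.
function entryPoint(pkgJson, target) {
  if (target === "browser" && typeof pkgJson.browser === "string") {
    return pkgJson.browser;
  }
  return pkgJson.main || "index.js";
}

// Hypothetical package.json contents for illustration.
const pkg = { main: "lib/node.js", browser: "lib/browser.js" };
console.log(entryPoint(pkg, "browser")); // → lib/browser.js
console.log(entryPoint(pkg, "node"));    // → lib/node.js
```

A plain-Clojure indexer that ignores this field would pick lib/node.js for both targets, which is the caveat being raised.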
@anmonteiro but I think taking the next step to do the right thing automatically for Node.js target will be trivial after this
but we can definitely use this one when target is node
This should be used always
We can't simplify the dependency logic if we need to use module_deps.js
@dnolen also feel free to tweak function naming
@juhoteperi you still need module_deps.js though
what do you mean?
I think so
is that a problem?
https://groups.google.com/d/msg/closure-compiler-discuss/fr4UFQYM4dk/cc3Dj7VPGwAJ @anmonteiro question I asked in January
yeah it’s the same thing I linked
dependency mode STRICT
Supporting package.json browser field should not be hard?
@anmonteiro applied and pushed to master
This doesn't work with module-processing because Closure needs to know about all the dependencies (fbjs etc.)
@juhoteperi yes I think you need both things for sure
I don’t understand why you’re saying this doesn’t work
is it because it doesn’t index things of the form node_modules/xxx/node_modules?
Yes, module processing needs to know about those
those are fairly uncommon though
that only happens when you have conflicting dependency versions
there will still be one dependency of the same name at the top level
just a different version
Hmh, maybe if I had a newer npm
With the old one the top-level only has the deps that are in my package.json
what are we talking about here - if you use both kinds of indexing this will always work? 🙂
what @anmonteiro just did is about one thing and one thing only
@dnolen I don’t know if --dependency_mode=STRICT helps our use case anyway
> --dependency_mode=STRICT only keeps files that goog.provide a symbol.
from the Closure wiki
but we wanna filter noise out before processing them
we won’t have any goog.provides if modules are commonJS
so we have a thing to do that filtering and it’s good enough for now as far as I’m concerned
agreed
so next step would just be getting feature parity w/ master while killing that yucky missing-js-modules thing 🙂
@anmonteiro naming stuff on your patch is fine - I would consider all this work internal details for now
https://github.com/Deraen/clojurescript/commit/f9f01552bc2a143fe5ccdbeb381dfbc55fe11494 This should replace missing-modules with looking at this "top-level index" and then using module-deps.js to build full list of files for Closure
@juhoteperi doesn’t seem right to delete that line concerning ups-foreign-libs
deps.cljs files may have :npm-deps
or is that handled by handle-js-modules?
Those deps will be installed by maybe-install-node-deps! and after they are installed, they will be included in the index
you’re right
your patch looks OK to me
but that’s just replacing :missing-js-modules
only missing the Node.js js/require emission now I suppose!
yeah, so we can resolve them in cljs.analyzer/resolve later
@juhoteperi but if you want to supply this patch so we can test master and kick this around - that’s cool too
whatever we do next won’t be all that different from this, just maybe slightly different order
@juhoteperi thanks!
Go ahead, I should be going to sleep anyway
I hope to update my js-globals patch tomorrow; I already wrote some explanation for that but need to work on it a bit more
:npm-deps is now only used to automatically install deps, so using Node packages will work even if the packages are installed manually
Which is cool
Do you plan to directly emit a require call when requiring a Node package?
How will that work with :refer etc?
Maybe :js-global could work the same way, instead of creating wrapper Closure modules
instead of a require call, just assign window.Global to a name
And to a question some days ago: Yes, I think we can get quite far with a single :js-global name; all Node packages will only export a single object or function.
There will be JS libs that export multiple globals, but not those that are also built for Node
for :refer?
or what do you mean
How is this different? require returns an object or function, window.Global is an object or function
Yeah and then code using (react/fooBar) is going to be compiled into localReact.fooBar()
I think it should work
This is quite similar to what I did with Closure modules
module$react = window.React and Cljs code using (:require [react :as react]) was compiled into module$react.fooBar()
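A tiny sketch of that global-assignment idea, using a stand-in object for window (module$react follows the Closure-style module naming from the example; the React stub here is fabricated):

```javascript
// Instead of emitting a require call, the compiled namespace alias is
// simply bound to a page global.
const fakeWindow = { React: { fooBar: () => 42 } }; // stands in for window

// What the compiler could emit for (:require [react :as react]) under a
// hypothetical :js-global mapping of react -> "React":
const module$react = fakeWindow.React;

console.log(module$react.fooBar()); // → 42
```

Calls through the alias then resolve to the global's members, exactly like the module$react.fooBar() compilation described above.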
var localReact should work the same?
Works for CommonJS but I'm not sure about ES6
:global-name?
Is there benefit in doing this in options instead of foreign-lib map?
Okay, but then it can be just a single name
https://github.com/cljsjs/packages/commit/abe28006c7eb6f37e7eab2716fe464e7ff89c699 this is what I tested my code with
I presume we are going to rename Cljsjs package provides so that the names match Node names
I agree we don’t need a map, since we can make one with provides
this at least opens the door for people to bundle a huge crazy thing and give it sensible exports
Hmm, a single file that exports multiple things?
Yeah, I guess that is possible with webpack and such
Okay, I see the need now
If you implement the Node require side, I can work on this, unless someone else has time to do it first
@mfikes new progress: canary successfully triggered an external build of the cljs-devtools project: https://travis-ci.org/cljs-oss/canary/builds/252617181 https://travis-ci.org/binaryage/cljs-devtools/builds/252618407 The project script[1] uses a token stored in an encrypted env var[2]. It turned out that more work will be needed on writing a decent travis API client library; unfortunately the response from the trigger request does not give me back a build-id I could immediately use for generating a report page with status badges and build page urls. I will have to poll the travis API to wait for builds to start and give me the list of build-ids actually triggered from the request (a single commit/request can in theory trigger multiple builds in a matrix, possibly after a while when they get processed from the queue). [1] https://github.com/cljs-oss/canary/blob/master/runner/src/cljs_oss/projects/binaryage.clj#L5 [2] https://github.com/cljs-oss/canary/blob/jobs/.travis.yml#L6
I would guess they hold a private key and their cli tool encrypts it with their public key
they do a good job on their part: https://travis-ci.org/cljs-oss/canary/builds/252617181#L509
we have to be careful in our code: https://github.com/cljs-oss/canary/blob/master/runner/src/cljs_oss/tools/support.clj#L59