This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2023-11-15
Channels
- # aleph (2)
- # babashka (35)
- # beginners (31)
- # biff (6)
- # calva (6)
- # cider (5)
- # clojure (61)
- # clojure-android (1)
- # clojure-dev (12)
- # clojure-europe (22)
- # clojure-norway (7)
- # clojure-uk (4)
- # clojurescript (19)
- # datomic (5)
- # events (3)
- # fulcro (15)
- # graalvm (41)
- # guix (2)
- # honeysql (2)
- # hoplon (8)
- # hyperfiddle (10)
- # jobs (1)
- # off-topic (29)
- # overtone (5)
- # podcasts-discuss (1)
- # remote-jobs (1)
- # sci (30)
- # shadow-cljs (186)
- # specter (2)
- # squint (22)
hello, following https://shadow-cljs.github.io/docs/UsersGuide.html#target-node-test , how can I execute a specific test or test file / suite?
don't know what you mean. wherever you run node the-output.js
it should show the println
If I switch :runtime :node to :runtime :browser in my ESM project (since I want to import the .js into an index.html file for testing), shadow begins to complain about:
Module Entry "cljs.pprint" was moved out of module ":cljs.pprint".
It was moved to ":compiler" and used by #{:cljs.pprint :squint_tests :node.nrepl_server :compiler}.
I have moved cljs.pprint to its own module since I don't want to have it in the main .js module
the error simply means that cljs.pprint is required in multiple modules and the configured module hierarchy doesn't work
yes, it likely is because of the devtools injected into the browser. it does a bunch more stuff than the node one (e.g. the hud)
FWIW :runtime :node basically also just disabled this, since that doesn't exist still 😛
:browser actually exists, so that just enabled it. when I eventually get to actually implementing :runtime :node it'll break as well, since the devtools themselves require cljs.pprint
I'm still dealing with the root problem though, I'm getting some kind of warning about "fs" in the browser which I cannot copy paste. I'm pretty sure this part of my code doesn't use "fs"
I pushed a branch here:
https://github.com/squint-cljs/squint/tree/module-in-browser
Run bb dev (or the equivalent shadow command for this bb task)
And then host a web server in the root: http-server --dir . on port 8090
And then visit localhost:8090 (or change the port in index.html)
(defn watch-squint []
(fs/create-dirs ".work")
(fs/delete-tree ".shadow-cljs/builds/squint/dev/ana/squint")
(spit ".work/config-merge.edn" (shadow-extra-test-config))
(bump-core-vars)
(shell "npx shadow-cljs --aliases :dev --config-merge .work/config-merge.edn watch squint"))
{:deps {:aliases [:cljs]}
:dev-http {3001 "."}
:builds
{:squint
{:js-options {;; don't bundle any npm libs
:js-provider :import}
:compiler-options {:infer-externs :auto}
:target :esm
:runtime :browser
:output-dir "lib"
:devtools {:enabled false}
:modules
{:compiler {:exports
{compileString squint.compiler/compile-string}}
:compiler.sci {:depends-on #{:compiler :compiler.node}
:init-fn squint.compiler.sci/init}
:compiler.node {:depends-on #{:compiler}
:exports
{compileFile squint.compiler.node/compile-file-js
compileString squint.compiler.node/compile-string-js}}
:cljs.pprint {:entries [cljs.pprint]
:depends-on #{:compiler}}
:node.nrepl_server {:depends-on #{:compiler.node :cljs.pprint}
:exports {startServer squint.repl.nrepl-server/start-server}}
:cli {:depends-on #{:compiler :compiler.node}
:init-fn squint.internal.cli/init}}
:build-hooks [(shadow.cljs.build-report/hook
{:output-to "report.html"})]}}}
what might be relevant here and what I don't have because of no bb is this (bump-core-vars) thing
bump-core-vars checks squint/core.js for what exports are defined there and dumps it in an edn file
@U05224H0W Sure enough, I got it working when I made a fresh clone. In my regular checkout, no matter what I do: delete .shadow-cljs, .work, the "fs" problem keeps coming back. Like, wtf :)
I'll just swap the fresh clone with my regular dev folder checkout and be done with it
crap, no, I made a mistake, my index.html was loading my non-local version. problem also persists with the fresh checkout
maybe it's due to this:
$ rg 'from "fs"'
cljs-runtime/shadow.esm.esm_import$fs.js
3:import * as esm_import$fs from "fs";
commenting out:
// import "./cljs-runtime/shadow.esm.esm_import$fs.js";
// SHADOW_ENV.setLoaded("shadow.esm.esm_import$fs.js");
// import "./cljs-runtime/shadow.esm.esm_import$path.js";
// SHADOW_ENV.setLoaded("shadow.esm.esm_import$path.js");
in lib/compiler.js makes the thing work, but no idea why those are in there
I suspect this is due to the test module also being compiled, and fs + path (which are used in there) maybe being pushed to the (most common) compiler module
it is moved to compiler because :compiler.node and :squint_tests require it, and their only common other dependency is :compiler
you should be able to add a :node {:entries [] :depends-on #{:compiler}} module and have the node things depend on that to catch all the node things
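(For concreteness, a rough sketch of how that collection module might slot into the :modules map shown earlier; the adjusted :depends-on sets are my reading of "have the node things depend on that", not a tested config, and the remaining modules are omitted. The test module added via config-merge would need the same treatment.)
:modules
{:compiler {:exports {compileString squint.compiler/compile-string}}
 ;; empty collection module; node-only deps like "fs"/"path" can be moved here
 :node {:entries []
        :depends-on #{:compiler}}
 :compiler.node {:depends-on #{:compiler :node}
                 :exports {compileFile squint.compiler.node/compile-file-js
                           compileString squint.compiler.node/compile-string-js}}
 :node.nrepl_server {:depends-on #{:compiler.node :cljs.pprint}
                     :exports {startServer squint.repl.nrepl-server/start-server}}}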
it isn't necessary for shadow to move these deps anywhere since they are built-in to node right?
unfortunately I still have to work around the closure compiler and hide any traces of ESM 😛
just use :entries []. that should be enough. entries is allowed to be empty in this case, yes
In a similar project, I ran into the same "fs" issue. I tried to introduce a :node module, but this gives me:
Module Entry "clojure.string" was moved out of module ":clojure.string".
It was moved to ":cljs.core" and used by #{:cljs.core :clojure.string :compiler :cli}.
Not sure how to deal with that.
This is the shadow-cljs.edn (in the branch playground).
https://github.com/squint-cljs/cherry/blob/playground/shadow-cljs.edn#L36
npx shadow-cljs watch cherry will give a repro (no bb scripts involved to make it easier)
you are trying to force clojure.string
into a separate module, but the module :depends-on structure doesn't allow it
in the other build the :node module was used as a collection spot where the namespace "mover" can put stuff
is there something I can do to still have the clojure.string module but not make this complain in watch? it's only a watch problem
this is being added https://github.com/thheller/shadow-cljs/blob/master/src/main/shadow/cljs/devtools/client/console.cljs for watch via preloads
I mean, would it make more sense to get an output like :npm-module would produce? i.e. a .js file per CLJS ns?
but what if there was a mode for :esm where you didn't need to specify :exports and instead just gave a list of namespaces you want ESM-ified, i.e. as in :entries [cljs.core clojure.string clojure.walk ...], and you get an ESM cljs.core.js, clojure.string, etc.
I mean :npm-module has the rule of needing ^:export, which you can't add to those namespaces above
not sure how useful that all would be in the end, but to me that is what you are going for there with that config setup
that's true, I'm exporting core, string, walk, etc. because cherry is going to import that as its standard library via import (ESM). Exporting all public vars from a namespace is something that could be handy, but not sure how many more I'll add to this project
now I'm back to the "fs" problem with cherry too btw. inserting the node module as I did with squint doesn't help, I'm seeing:
import "./cljs-runtime/shadow.esm.esm_import$fs.js";
SHADOW_ENV.setLoaded("shadow.esm.esm_import$fs.js");
in lib/compiler.js
I don't know how to explain it further. any JS require is basically a namespace in how shadow-cljs is designed
every namespace can only be in one module, so your namespace require structure decides what goes into which module
introducing the extra :node module just gives the compiler a place to put stuff that still satisfies the dependency graph
the :node name here has no special meaning to the compiler, it is just an extra node in the graph
so in cherry you either need to make whatever module also depend on :node, or change your require structure
That’s what I figured too, all clear on that. The compiler module has no fs require and also no transitive fs require. Still I get this issue. I understand that node isn’t a special name, just an extra place for moving stuff to
that's the wrong way of looking at it: the compiler module doesn't need an fs require. the compiler module is the first common dependency module it can be moved to, thus it ends up there
Can I lock fs to the node module so I can debug what other modules must depend on node? (AFAIK I already have this currently, just not the locking down of fs).
sure, just :entries ["fs"], at least I think that should work. never actually tried 😛
I think I tried that yesterday but I’ll try in an hour or so. Thank you for your patience :-)
if you have a repro I'm happy to take a look, maybe I can think of a better error description. that error message has always been less than helpful :P
alright, I have :node {:entries ["fs"] :depends-on #{:compiler :cljs.core}} which isn't complained about, but I also still get the "fs" error message in the browser console 😆 I'll try to make a repro
This is what node.js looks like:
import "./compiler.js";
import "./cljs.core.js";
import "./cljs-runtime/shadow.module.node.prepend.js";
SHADOW_ENV.setLoaded("shadow.module.node.prepend.js");
import "./cljs-runtime/shadow.module.node.append.js";
SHADOW_ENV.setLoaded("shadow.module.node.append.js");
ah, when I change "fs" to fs I get:
Module Entry "fs" was moved out of module ":node".
It was moved to ":compiler" and used by #{:cherry.tests :node :cli}.
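(Reading that error together with the earlier advice, a minimal sketch of the implied fix, assuming the module names from the error message (:cherry.tests and :cli); the rest is illustrative and not copied from cherry's actual config:)
;; untested sketch: both modules that pull in "fs" depend on :node,
;; so the namespace mover can leave "fs" there instead of promoting it to :compiler
:node         {:entries ["fs"]
               :depends-on #{:compiler}}
:cli          {:depends-on #{:compiler :node}}
:cherry.tests {:depends-on #{:compiler :node}}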
Now we're talking
We're moving our front-end app deployment to GitHub Actions, and as such I'm refactoring how it all works. Presently we inject a few environment variables with some babashka tasks before we call npx shadow-cljs release :target, but I'm wondering if there's a mechanism for passing parameters into the build and then injecting them via hooks, to simplify our tooling. Is there a facility for passing params into a release and getting at them with compile hooks to handle the injection before the compile step? Thanks in advance!
You can do this with a macro possibly, or perhaps goog-define will work too? Or do you change the shadow config using an env var?
> but you should consider using --config-merge instead.
Gotcha, that's what I also use in combination with bb
https://shadow-cljs.github.io/docs/UsersGuide.html#_release_specific_vs_development_configuration
many options to do pretty much anything, without more specific details of what you actually need I cannot say what would be best
build hooks are definitely NOT recommended. I regret adding them since they are used for so much stuff they shouldn't be used for.
trying to configure a few endpoint URLs at build time, and we were using a very convoluted way of doing it involving babashka, S3, dynamically building the index file.. all sorts of stuff.. trying to get it way simpler to make the whole building-in-a-container thing a bit less complicated 🙂
anything like endpoint URLs is what I call runtime configuration, which in general I keep out of the build config entirely
and instead often load it via html, or have the runtime fetch some json/edn/transit or whatever on startup
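(As an illustration of the "fetch it on startup" variant, a minimal ClojureScript sketch; the /config.edn path, namespace, and function names are made up for the example:)
(ns app.config
  (:require [cljs.reader :as reader]))

(defonce settings (atom nil))

(defn load-config!
  "Fetch runtime configuration (e.g. endpoint URLs) before starting the app."
  [on-ready]
  (-> (js/fetch "/config.edn")
      (.then (fn [resp] (.text resp)))
      (.then (fn [text]
               (reset! settings (reader/read-string text))
               (on-ready @settings)))))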
Our deployment story is several single-tenant instances of the front-end app, and we don't want to deal with the client changing which tenant it's pointed at.. but I mean your approach makes sense
well if the client wants to change something you are not stopping them via some build constants 😛
env.clj
(defmacro env [s]
(System/getenv s))
app.cljs
(:require-macros [env :refer [env]])
(js/fetch (env "CLIENT_URL"))
this is a bad idea, do not do this. caching is not aware of this, thus it will not recognize that something may need recompilation and will give you stale cached output
I agree with Thomas, but if you're not able to do it at runtime, that's also an option.
Yeah, this is true about caching, but you can disable that for specific namespaces or you can bust stuff in .shadow-cljs related to that thing ;)
yeah just don't. if you want environment variables use what is provided. otherwise your build output will be unreliable and annoying to debug.
all the links I linked above. yes, for environment variables goog-define + shadow/env
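(A small sketch of that combination; the env var and namespace/def names are illustrative, and the linked guide is the authoritative reference for the #shadow/env syntax:)
;; shadow-cljs.edn, inside the build config
:compiler-options
{:closure-defines {app.config/CLIENT-URL #shadow/env "CLIENT_URL"}}

;; src/app/config.cljs
(ns app.config)

(goog-define CLIENT-URL "http://localhost:3000") ;; default, overridden at build time via :closure-defines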
there is no such thing as a "macro usage cache". there is a namespace cache; based on a few things the compiler decides to either use the cache (not expand macros at all) or recompile (actually expand macros)
but given how many people run CI systems caches are often wiped completely after a build, so that point may be irrelevant
btw FWIW I also injected these configs into the HTML when I worked on an app that needed this kind of config
so many things to consider… this is all great food-for-thought.. we already leverage some env stuff in the index file directly.. which is at least easy to sanity check with view-source.. and getting them there via goog-define seems like a good way to go.. thank you both for all your insights.. my job would be no fun at all without bb and shadow-cljs 🙂
Hey folks. I've added an nginx reverse proxy in front of shadow's local file serving, as well as some local BE services, to handle self-signed TLS for me. Everything's peachy except for shadow's websocket connection, which is failing on startup. Based on my combing the docs, I'm not seeing anything regarding special handling of secure websockets through a reverse proxy, but perhaps some of you have done a similar thing and have worked around this. Error message included in 🧵. My understanding of reverse proxying here is that shadow shouldn't have to care if TLS is being used on the front end of the proxy, since that's gone by the time it gets to shadow. Perhaps that's different with websockets, though. To be fair, the proxying of files from shadow is working well, so this is indeed websocket specific and I'm likely missing something.
I suppose it's worth verifying that shadow connects its websocket to the same port as the file serving, but on the /api/ws path, correct?
In case it's useful, I'm proxying using something like this:
# Front-end.
server {
listen 8000 ssl;
http2 on;
include /etc/nginx/snippets/ssl-params.conf;
include /etc/nginx/snippets/self-signed.conf;
location / {
proxy_pass ;
proxy_http_version 1.1;
proxy_set_header Host ;
proxy_redirect ;
proxy_redirect ;
}
location /api/ws {
proxy_pass ;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
}
access_log on;
error_log /var/log/nginx/error.log debug;
}
you can set :devtools {:use-document-host false} in your build config to disable that and make it connect to localhost instead
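(In context, that setting would sit under the build's :devtools map, e.g. the build id and target here are placeholders:)
:builds
{:app {:target :browser
       :devtools {:use-document-host false}}}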
Hey! Thanks so much for the response. I was thinking that the screenshot was the actual error. Just an exception in attempt-connect!. There is no other info around that.
I'll explore with/without proxy configurations to ensure everything still works when it's set to false.
Again, thanks for your time.
see https://shadow-cljs.github.io/docs/UsersGuide.html#proxy-support btw, if you really want to proxy the websocket over your nginx
I originally looked at that, but I don't think I interpreted it correctly. Now knowing the issue, I suppose you're saying that if I set :devtools-url then I don't need to set :use-document-host?
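(For reference, the proxy-support section boils down to pointing :devtools-url at the address the proxy exposes; an untested sketch for this nginx setup, with an arbitrary path:)
:devtools {:devtools-url "https://localhost:8000/shadow-cljs"}
nginx would then need a location forwarding that path (with the same websocket upgrade headers as the /api/ws block above) to the shadow-cljs HTTP server, which listens on port 9630 by default.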