2019-10-18
In Chrome DevTools this can be measured by comparing two heap snapshots, where the first doesn't contain anything and the second includes the value of interest assigned to a global variable so it doesn't get GCed
Comparing the two snapshots shows the allocation diff
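roughly like this (a minimal sketch; the helper name is made up):
// run in the console between the two snapshots; keeping the value on
// globalThis makes it reachable, so the second snapshot retains it and
// the comparison view shows its cost
globalThis.__retained = makeValueOfInterest(); // hypothetical helper producing the value of interest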
It can also be measured in node: https://github.com/roman01la/js-memory-usage
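a rough sketch of the node approach (not the exact code from that repo; run with node --expose-gc so global.gc() is available):
function measureRetained(build) {
  global.gc();
  const before = process.memoryUsage().heapUsed;
  const value = build();        // allocate the structure of interest
  global.gc();
  const after = process.memoryUsage().heapUsed;
  console.log(`~${after - before} bytes retained`);
  return value;                 // keep a reference so it isn't collected
}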
there are some caveats to this approach though
depending on the data structure and the way it's operated on, it can be subject to either optimizations or deoptimizations
like this
You mean different runtimes right? That’s true. I imagine numbers can be different between browser JS VMs as well
in that case, using the same runtime but changing a for loop slightly reduced memory usage by 30 to 70%
it didn't affect node 10 but did affect node 12
this was the change
for (let prop in obj) {
-> for (let prop of Object.keys(obj)) {
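i.e. roughly this, as a self-contained sketch (illustrative, not the actual webpack code):
// same work, different iteration style; the Object.keys form only walks own keys
function sumForIn(obj) {
  let total = 0;
  for (let prop in obj) total += obj[prop];
  return total;
}
function sumForOfKeys(obj) {
  let total = 0;
  for (let prop of Object.keys(obj)) total += obj[prop];
  return total;
}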
hehe, reminded me of this talk https://www.youtube.com/watch?v=r-TLSBdHe1A
I haven't seen it, but this sounds like it's just up my alley 😄
there’s a funny moment where he describes how having a shorter user name on his machine caused GCC to produce 100x faster code 😄
you know what, that doesn't surprise me at all
webpack had a similar problem around two years ago
it's similar insofar as names and paths would be concatenated indefinitely
so the longer your file paths were, the quicker you'd hit the memory limit
This is awesome. Thank you both! What I need this for is to measure the relative size as a data structure grows, so I think Roman's suggestions should suffice.
I have a toJS function exposed here to make working with CLJS values easier from JS. It also converts MetaFns to normal functions so they are easier to work with:
https://github.com/borkdude/sci/blob/e895c4524a1a43568e4919315364978ea61df607/src/sci/impl/js.cljs#L36
Does it make sense to add that part to clj->js proper?
(defprotocol IEncodeJS
(-clj->js [x] "Recursively transforms clj values to JavaScript")
(-key->js [x] "Transforms map keys to valid JavaScript keys. Arbitrary keys are
encoded to their string representation via (pr-str x)"))
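for context, from the JS side the usage looks something like this (assuming the evalString / toJS exports from the linked file; just a sketch, not the exact API):
const { evalString, toJS } = require('@borkdude/sci');
// evaluate a program string, then convert the CLJS result to plain JS data
const result = toJS(evalString('{:a 1 :b [1 2 3]}'));
// result is something like { a: 1, b: [1, 2, 3] }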
@thheller I'm interested in the bundle size visualisation to see how much of the final bundle several parts of sci occupy. I vaguely remember you had something for this in shadow-cljs. Any tutorial on how to set this up just for the visualization? I have almost 0 experience with shadow.
@U04V15CAJ you can generate source maps and inspect with any bundler inspector from npm
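e.g. something like npx source-map-explorer on an advanced build with source maps enabled, or shadow-cljs' own report via shadow-cljs run shadow.cljs.build-report <build-id> report.html (both from memory, so double-check the docs)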
thanks! @thheller already guided me to a solution with shadow-cljs which works quite nicely
do you build for the browser? the build reports currently only work for browser builds
if the library is written "correctly", all the code would otherwise just be removed as dead code
Right. Also a lot of the core functions are pulled in by the lib but might already be used by the app anyway, so it wouldn't be 100% attributable to the lib
Just to get going for now, I'm getting:
Can't find 'shadow.cljs.devtools.cli' as .class or .clj for lein run: please check the spelling.
I've set :lein true in shadow-cljs.edn. Should I add a dep to project.clj?
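(if I remember the shadow-cljs docs right, the :lein integration expects thheller/shadow-cljs itself in the project.clj :dependencies, which would explain that error)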
acceptable size though. so probably best to set up an actual example with the common uses
yeah, that was the point of def clojure-core: to hold on to all those functions, because you don't know up front which ones the user is going to use in their program string.
When I compile sci to an npm library (advanced compiled JS) I end up with ~350KB unzipped
is it meant for npm consumption by js clients?
@UJVKWJTGE That's certainly one of the goals: https://www.npmjs.com/package/@borkdude/sci
You can see an example of it here: https://github.com/borkdude/sci-birch
350kb is a fair bit but for nodejs consumers it's not the worst thing in the world
especially if it's a single lib
typescript, for instance, is 7mb in a single file
and TS ships several versions of the compiler, to cater for different consumers
not quite
in your case you want to interpret cljs
but there are plenty of libs that want to interpret ts
like ts-node
or that require TS to be compiled with, like Angular
TS is a total of 47mb on a clean install
@alexmiller hey I built a Google Closure Library artifact today - could we get a release?
Sure, do I just need to press the button on the build box?
that dimly rings a bell
so you did the build, I just need to do the sonatype part?
I see two staging repos, one for google-closure-library-third-party and one for google-closure-library. I assume these both need to be released?
released both, I assume it will take a bit to show up
google-closure-library 0.0-20191016-6ae1f72f and org/clojure/google-closure-library-third-party 0.0-20191016-6ae1f72f are out there now
the third party jar is ~50% the size of the last release. not sure if that's weird
@alexmiller yeah both need to be released