#shadow-cljs
2023-09-21
Pragyan Tripathi 03:09:01

Shadow-CLJS: 2.25.5, macOS Ventura. Recently I have started facing issues during compilation. I ran the watch command with the --verbose flag. Here’s the console output:

<- Cache write: com/vadelabs/studio/client/pages/workspace/plans.cljs (9598 ms)
<- Compile CLJS: com/vadelabs/studio/client/pages/project/playground.cljs (15550 ms)
-> Cache write: com/vadelabs/studio/client/pages/project/playground.cljs
<- Compile CLJS: com/vadelabs/studio/client/pages/project.cljs (16470 ms)
-> Cache write: com/vadelabs/studio/client/pages/project.cljs
<- Cache write: com/vadelabs/studio/client/pages/workspace/billing.cljs (31370 ms)
-> Cache read: com/vadelabs/studio/client.cljs
<- Compile CLJS: com/vadelabs/studio/client/ui.cljs (112930 ms)
<- Cache write: com/vadelabs/studio/client/pages/workspace/setting.cljs (126621 ms)
<- Cache write: com/vadelabs/studio/client/components/card.cljs (183913 ms)
<- Cache read: com/vadelabs/studio/client.cljs (389796 ms)
-> Compile CLJS: com/vadelabs/studio/client.cljs
<- Cache write: com/vadelabs/studio/client/pages/workspace/profile.cljs (511586 ms)
<- Cache write: com/vadelabs/studio/client/pages/workspace/templates.cljs (679704 ms)
These files were compiling pretty fast around a month ago. Wondering what could be the possible reason for the cache writes taking so much time?

thheller 06:09:03

yikes. your disk dying or just (close to) full?

thheller 06:09:44

compile also takes super long. this must be something hardware related. never seen times like those.

Pragyan Tripathi 06:09:44

Disk is fine:

Filesystem       Size   Used  Avail Capacity  iused      ifree %iused  Mounted on
/dev/disk1s1s1  932Gi  8.5Gi  470Gi     2%   356050 4291076982    0%   /
I am trying to trace back the changes and see if I can get to the root cause of this.

thheller 06:09:12

I definitely made no changes that would 100,000x regular compile times 😛

thheller 06:09:27

at least I would have noticed something 😛

thheller 06:09:01

could be your OS doing weird stuff on disk access?

thheller 06:09:02

maybe check what else your system is doing while compiling?

thheller 06:09:14

I have my doubts that it's anything within shadow-cljs itself

👍 1
Pragyan Tripathi 06:09:43

I am just wondering where to look when debugging these kinds of issues. For sure it’s my environment… Whenever I run shadow-cljs, the Java process CPU usage spikes to the 200-300% range and takes up around 3GB of RAM.

thheller 06:09:44

jvisualvm is useful for inspecting the JVM and getting a thread dump

👍 1
thheller 06:09:55

or just the jstack CLI tool

thheller 06:09:42

it could still be the disk though. check the system information for SMART errors or the like

👍 1
thheller 06:09:34

I guess it could be something if you added some kind of profiler/inspection tool that inspects every var and does some logging or whatever?

thheller 06:09:39

no clue really

thheller 06:09:25

the disk in my Mac mini died a couple of months ago; it got seriously slow like this too before completely dying

Pragyan Tripathi 06:09:51

Let me try a different Mac to validate this hypothesis. Thanks a lot.

Pragyan Tripathi 09:09:54

Phew! I think I was able to resolve it. So here’s the stupid thing I was doing that led to this: I had written a side-effecting macro that updates a global atom whenever it is called. Based on the configuration stored in the atom, I generate CSS styles. I also wrote a snippet that watches the atom and rewrites the CSS file whenever the watch sees a change. Things were fine while usage of the macro was low, but as we started using it more and more, the compile time kept increasing. I don’t understand atom watchers well enough to explain this behavior, but we are back to the previous compilation times now that I have removed the atom-watch logic and moved it into shadow-cljs build hooks.
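For anyone hitting something similar, here is a minimal sketch of the kind of pattern described above; every name in it (my.styles, styles*, defstyle, the CSS output path) is hypothetical, not the actual project code. The key point is that each macro expansion swap!s the atom at compile time, and the watch rewrites the entire CSS file on every change, so a build with many call sites pays many synchronous file writes:

;; styles.clj -- illustrative sketch only; all names are made up
(ns my.styles
  (:require [clojure.java.io :as io]))

;; global registry of style configs, filled in as a side effect of macroexpansion
(defonce styles* (atom {}))

;; fires on every swap! and rewrites the whole CSS file each time
(add-watch styles* ::write-css
  (fn [_key _ref _old styles]
    (spit (io/file "resources/public/css/generated.css")
          (apply str (vals styles)))))

(defmacro defstyle
  "Registers a style string at compile time. Each expansion swap!s the atom,
  which triggers the watch and another full CSS write while compiling."
  [id css]
  (swap! styles* assoc id css)
  `(def ~id ~(name id)))

With hundreds of defstyle call sites, the watch fires inside the compiler threads and a single watch build can end up doing hundreds of blocking file writes, which lines up with the write pressure mentioned at the end of the thread.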

thheller 09:09:17

yikes, ok good to know

bherrmann 00:09:33

Interesting thing to keep in mind: if compile times are changing, look for a macro that is misbehaving, since macros run at compile time.

thheller 08:09:06

yeah, that wasn't my first thought, since macros don't run when writing the cache. I guess they created so much write pressure on the disk that it just couldn't keep up
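For completeness, here is a rough sketch of the build-hook approach Pragyan describes, reusing the same hypothetical names as the sketch above (with the add-watch removed). shadow-cljs build hooks receive the build state and can be pinned to a stage such as :flush, so the CSS is written once per build rather than once per macro expansion:

;; build_hooks.clj -- sketch only; namespaces and paths are assumptions
(ns my.build-hooks
  (:require [my.styles :as styles]))

(defn write-css
  "Runs once per build after output is flushed; writes the accumulated styles."
  {:shadow.build/stage :flush}
  [build-state]
  (spit "resources/public/css/generated.css"
        (apply str (vals @styles/styles*)))
  build-state)

And the corresponding entry in shadow-cljs.edn (the :app build id is also an assumption):

;; shadow-cljs.edn (excerpt)
{:builds
 {:app {:target :browser
        :build-hooks [(my.build-hooks/write-css)]}}}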