
Hey! :) Have you tried rebuilding, in case this is intermittent?

Karol Wójcik 08:07:47

How can I do it?


There’s a hidden button in the top right, next to the GitHub link. Unfortunately this heap space issue seems persistent; I even restarted cljdoc. Is there anything unusual about your repo? Other builds look fine.

Karol Wójcik 12:07:45

Still don't know where the button is. Hmm, I don't know; I did a pretty standard release.


So v0.2.3 failed, but v0.2.2 worked. I just hit the rebuild button on v0.2.2 and it has now failed.


Seems to be failing on the clone? I can take a look at this sometime later today if nobody else beats me to it.


Ok, I’ll try to reproduce locally on my dev box shortly.


It ingested fine on my dev box, but I have 32 GB of RAM. @U050TNB9F If I understand cljdoc infrastructure correctly,


I guess something interesting about the holy-lambda repo is that it has some larger files. I’ll restrict RAM on my local test and see if I can trigger the same error.


Ya, I can trigger a similar error if I run cljdoc ingest with -J-Xmx128m.


I’ll have a look-see at what cljdoc/jgit are up to.

Karol Wójcik 18:07:30

Hmm, maybe it looks in the modules folder?


Might be easiest/most reasonable just to bump up the RAM of the cljdoc server? I dunno who pays for that, though. @U050TNB9F?


Or… we could switch to shelling out to a git executable instead of using jgit, and see if that helps. Clojure tools.deps has switched from jgit to the git executable.


Or… we could think more about other options. :simple_smile:


@UJ1339K2B would you be ok with creating an issue? Probably a better spot to record findings and options.

Karol Wójcik 19:07:06

I have a babashka runtime in a zip file; maybe that's the cause.


I’m open to upgrading to a bigger box, but it’s been a while since I set this all up, so it’ll take some time. In the meantime, maybe not packaging babashka as part of your jar is also an option?

Karol Wójcik 12:07:26

Yeah. I can move it to releases.


I think the issue is triggered by the repo and not the jar?


oh yeah sorry, meant the repo


cool, thanks

Karol Wójcik 14:07:45

@U050TNB9F @UE21H2HHD Just took a second look. I have only one file, which weighs about 80 MB. I think cljdoc should not fetch all the branches and the full git history. What is the rationale behind fetching full history? Isn't something like this (or some variant which doesn't clone the repo with full history) an option?

git clone --depth 1
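To sketch what the shallow-clone idea would look like (this is not how cljdoc currently works; it uses a throwaway local repo as the "remote", and the tag name is illustrative):

```shell
# Sketch: a shallow clone still lets us resolve release tags afterwards.
# A throwaway local repo stands in for the remote.
set -e
tmp=$(mktemp -d)

# A "remote" with two commits and a version tag.
git init -q "$tmp/src"
git -C "$tmp/src" -c user.email=t@t -c user.name=t commit -q --allow-empty -m one
git -C "$tmp/src" -c user.email=t@t -c user.name=t commit -q --allow-empty -m two
git -C "$tmp/src" tag v0.2.3

# Shallow clone: only the newest commit (file:// is needed for shallow
# clones of local repos; a plain path would silently ignore --depth).
git clone -q --depth 1 "file://$tmp/src" "$tmp/dst"
git -C "$tmp/dst" rev-list --count HEAD   # only 1 commit, not 2

# Tags can still be fetched shallowly, so releases stay resolvable
# without pulling the full history.
git -C "$tmp/dst" fetch -q --depth 1 origin "+refs/tags/*:refs/tags/*"
git -C "$tmp/dst" tag
```

The trade-off: a shallow clone can't resolve arbitrary old shas, which may matter if cljdoc needs to check out a sha that isn't near a tag tip.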


good question, before my time, would have to dig in to understand


full history might be to support fetching all tags?


dunno why all branches...


maybe because a sha can point to anything in the repo?


I think that’s roughly the reason yeah


There might be other ways of getting at the info cljdoc needs without consuming as much RAM for repos with large blobs. But I suppose we’d be optimizing for RAM usage (and, as a side effect, disk usage). If that’s worth it (i.e., we don’t want to bump up RAM on the cljdoc server), we can explore.


Maybe we could use git’s ls-remote, for example. And not check out on clone, then check out only what we need? It would all be experimental, to see whether it uses less RAM in the end.
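A sketch of that combination: ls-remote lists refs without cloning anything, and a --no-checkout clone defers materializing the working tree so only needed paths are checked out. (Again a throwaway local repo; the doc/ path and file names are illustrative, not cljdoc's actual layout.)

```shell
# Sketch: ls-remote for refs, --no-checkout + selective checkout for files.
set -e
tmp=$(mktemp -d)

# A "remote" with docs plus a large binary blob.
git init -q "$tmp/src"
mkdir "$tmp/src/doc"
echo "# docs" > "$tmp/src/doc/intro.md"
head -c 100000 /dev/zero > "$tmp/src/big.bin"   # stand-in for a large file
git -C "$tmp/src" add .
git -C "$tmp/src" -c user.email=t@t -c user.name=t commit -qm one
git -C "$tmp/src" tag v0.2.3

# All tags, with no clone at all:
git ls-remote --tags "file://$tmp/src"

# Clone without populating the working tree...
git clone -q --no-checkout "file://$tmp/src" "$tmp/dst"

# ...then check out only the paths we actually need; big.bin stays in
# the pack but is never materialized in the working tree.
git -C "$tmp/dst" checkout -q HEAD -- doc/
ls "$tmp/dst"
```

Note that --no-checkout still downloads every blob into the pack; with a newer git and server-side support, a partial clone (`--filter=blob:none`) could additionally avoid downloading large blobs until they're needed.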