2019-03-04
Channels
- # adventofcode (6)
- # announcements (1)
- # aws (18)
- # beginners (104)
- # boot (11)
- # cljsrn (31)
- # clojure (49)
- # clojure-dev (16)
- # clojure-europe (2)
- # clojure-greece (9)
- # clojure-houston (1)
- # clojure-italy (12)
- # clojure-nl (3)
- # clojure-spec (46)
- # clojure-uk (148)
- # clojurescript (12)
- # community-development (13)
- # core-async (7)
- # cursive (35)
- # data-science (13)
- # datomic (70)
- # events (1)
- # fulcro (22)
- # hyperfiddle (1)
- # jobs-discuss (10)
- # kaocha (3)
- # off-topic (7)
- # om (2)
- # other-languages (32)
- # parinfer (1)
- # portkey (4)
- # re-frame (3)
- # reitit (12)
- # shadow-cljs (49)
- # spacemacs (1)
- # specter (6)
- # sql (5)
- # tools-deps (58)
@cfleming GraalVM added HTTPS support a few months ago, but it looks like there’s another issue here https://github.com/cognitect-labs/aws-api/blob/master/src/cognitect/aws/util.clj#L286 preventing native image generation:
Error: unbalanced monitors: mismatch at monitorexit, 96|LoadField#lockee__5699__auto__ != 3|LoadField#lockee__5699__auto__
Call path from entry point to cognitect.aws.util$dynaload$fn__637.invoke():
at cognitect.aws.util$dynaload$fn__637.invoke(util.clj:265)
at clojure.lang.AFn.applyToHelper(AFn.java:152)
at clojure.lang.Ref.applyTo(Ref.java:366)
at aws_api_cli.core.main(Unknown Source)
at com.oracle.svm.core.JavaMainWrapper.run(JavaMainWrapper.java:152)
IIRC there was a ticket/patch for this in Clojure? I tried with 1.9 and 1.10 but got the same error (w/ diff stack traces)
https://github.com/ghadishayban/clojure/commit/8acb995853761bc48b62190fe7005b70da692510 this is the patch I was thinking of, and this ticket isn’t the one I remember but seems relevant https://dev.clojure.org/jira/browse/CLJ-1472
FWIW I was able to build a native image after replacing that dynaload with a patched version, but then it fails at run-time with Cannot find resource cognitect/aws/s3/service.edn. {} (because I used S3 as an example call). I might take a deeper look at this if I have time later this week :man-shrugging:
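A minimal sketch of what a patched dynaload could look like, assuming the usual CLJ-1472 workaround of swapping the `locking` macro (whose monitorenter/monitorexit bytecode trips Graal’s verifier) for an explicit ReentrantLock. This is illustrative only, not the actual aws-api source:
```
;; Hypothetical patched dynaload: ReentrantLock compiles to plain
;; method calls, so Graal's bytecode verifier has nothing to object to.
(import '(java.util.concurrent.locks ReentrantLock))

(defonce ^:private dynalock (ReentrantLock.))

(defn dynaload [s]
  (let [ns (namespace s)]
    (assert ns)
    (.lock dynalock)
    (try
      ;; serialize require, same intent as the original `locking` form
      (require (symbol ns))
      (finally (.unlock dynalock)))
    (or (resolve s)
        (throw (RuntimeException. (str "Var " s " is not on the classpath"))))))
```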
@U3DAE8HMG Thanks! I’d appreciate any info you can provide after digging a bit.
1472 is the relevant ticket
that service.edn is from one of the other deps that needs to be included as a resource on the classpath (not sure what graal does about stuff like that)
thinking that resources looked up at runtime are out of scope of the static analysis that Graal’s native-image performs
Not of immediate gain regarding graal/aws-api, but @U3E46Q1DG did something a bit similar with bytecode analysis in the #portkey project, in which we had an option to specify keeps for resources/classes that were dynamically loaded. I think some forms can be captured by static analysis, say .getResource("path/to/resources"), where the resource is a compile-time literal
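To make the compile-time-literal point concrete, a sketch (the dynamic lookup shape below is made up for illustration, not the actual aws-api code); IIRC native-image also has a -H:IncludeResources=<regexp> flag to force such resources into the image:
```
(require '[clojure.java.io :as io])

;; Discoverable by static analysis: the resource name is a string
;; constant sitting in the bytecode.
(io/resource "cognitect/aws/s3/service.edn")

;; Not discoverable: the name only exists at runtime, so the image
;; won't bundle it unless it's listed explicitly (e.g. via
;; -H:IncludeResources).
(defn load-service-edn [api]
  (io/resource (str "cognitect/aws/" api "/service.edn")))
```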
https://github.com/ghadishayban/clojure/commit/8acb995853761bc48b62190fe7005b70da692510 clj-1472
@cfleming I presume CLJS is not an option? I have a couple of CLJS/Shadow lambdas that do AWS calls and cold start in a couple of seconds
they are not in a VPC, which dramatically improves cold start. The VPC ENI setup can add up to 8 seconds to a cold start. I suspect a Graal-based Lambda would suffer that as well if a VPC is required in your case
@steveb8n Yes, I’m currently using CLJS for my lambdas. But I like the new Cognitect API and AFAIK that’s not available for CLJS yet. And I would dearly love to leave all the funky async bit behind me forever (promesa helps, but it’s still not as nice as blocking)
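For reference, the blocking style in question with aws-api on the JVM looks roughly like this (the S3 ListBuckets op is just an example):
```
(require '[cognitect.aws.client.api :as aws])

;; Create a client; credentials and region come from the usual
;; provider chain.
(def s3 (aws/client {:api :s3}))

;; Blocking call: returns the response map directly, no channels or
;; promises to thread through the code.
(aws/invoke s3 {:op :ListBuckets})
```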
I agree with that. The new AWS lib is much nicer. I’ll be interested to hear how fast you can get the cold start. Although there’s nothing we can do about ENI - for that we just keep waiting for AWS
Also interesting is that Graal is 50% slower than the JVM once loaded (mentioned at ClojuTre last year), so by using Graal we are favouring cold starts over warmed-up invocations. I suppose 50% slower is ok if the total time is in the hundreds of ms; users won’t notice that
hum, isn’t jaotc running on the C2/HotSpot, and isn’t GraalVM throughput probably going to be better in the future?
was remembering this 🙂 https://www.graalvm.org/docs/reference-manual/aot-compilation/ > What is the typical performance profile on the SVM? > Right now peak performance is a bit worse than HotSpot, but we don’t want to advertise that (and we want to fix it of course).