2020-07-06
Channels
- # aws-lambda (6)
- # babashka (1)
- # beginners (204)
- # calva (10)
- # chlorine-clover (17)
- # cider (57)
- # cljs-dev (3)
- # cljsrn (3)
- # clojure (148)
- # clojure-bangladesh (1)
- # clojure-berlin (3)
- # clojure-europe (30)
- # clojure-france (1)
- # clojure-italy (4)
- # clojure-nl (5)
- # clojure-spec (4)
- # clojure-uk (14)
- # clojurescript (15)
- # code-reviews (8)
- # conjure (27)
- # data-science (9)
- # datomic (38)
- # duct (6)
- # figwheel-main (11)
- # fulcro (78)
- # helix (11)
- # jobs (1)
- # malli (18)
- # meander (22)
- # mount (4)
- # nrepl (3)
- # off-topic (93)
- # pathom (2)
- # pedestal (4)
- # re-frame (5)
- # reagent (6)
- # reitit (1)
- # ring-swagger (1)
- # sci (1)
- # shadow-cljs (19)
- # spacemacs (1)
- # sql (1)
- # tools-deps (76)
- # unrepl (1)
- # vim (5)
- # xtdb (8)
That is so helpful! Thanks so much @chris441!
Would you please do a docs treatment on libpython-clj as well?
libpython-clj is a bit harder as some of the namespaces are based directly on Python modules, and requiring them will fail if a loadable Python or numpy isn't installed. I will look into it a bit and see what happens.
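(For anyone following along, a minimal sketch of where that bites, assuming libpython-clj 1.x; the numpy require is just an illustrative example, not something specified above:)

(require '[libpython-clj.python :as py])
(py/initialize!)   ;; fails here if no loadable Python shared library is found

(require '[libpython-clj.require :refer [require-python]])
(require-python '[numpy :as np])   ;; fails if numpy isn't installed in that Python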
Ah... interesting.
One specific question: How do you handle named args to function calls with libpython-clj?
You can do (py/call-kw f [] {:foo "bar"})
(the vector is for positional args)
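(A slightly fuller sketch of that pattern, assuming libpython-clj 1.x with a working Python and numpy available; numpy.linspace is only an illustrative target, not anything from the thread above:)

(require '[libpython-clj.python :as py])
(py/initialize!)

;; grab a Python callable, then invoke it with positional + keyword args
(def np (py/import-module "numpy"))
(def linspace (py/get-attr np "linspace"))

;; vector = positional args, map = keyword args,
;; i.e. the Python call numpy.linspace(0, 10, num=5, endpoint=False)
(py/call-kw linspace [0 10] {:num 5 :endpoint false})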
Thanks again!
How do you all share models across AI applications? I have dockerized applications that I need to scale, and I was hoping for a solution without the network layer. If possible, maybe a way to share loaded GPU models.