2019-12-24
Channels
- # adventofcode (6)
- # announcements (4)
- # aws (21)
- # babashka (36)
- # beginners (58)
- # calva (3)
- # cider (2)
- # clj-kondo (21)
- # clojars (3)
- # clojure (35)
- # clojure-dev (4)
- # clojure-europe (5)
- # clojure-nl (8)
- # clojure-uk (8)
- # clojuredesign-podcast (7)
- # clojurescript (10)
- # core-async (3)
- # data-science (2)
- # datomic (2)
- # defnpodcast (11)
- # duct (4)
- # figwheel-main (1)
- # fulcro (34)
- # graalvm (12)
- # graphql (4)
- # joker (14)
- # kaocha (1)
- # midje (1)
- # off-topic (5)
- # pedestal (1)
- # re-frame (3)
- # reagent (4)
- # reitit (1)
- # shadow-cljs (4)
- # testing (12)
@dominicm no, just http. I'm not concerned about the cost of crawling the repo via http. We do publish a list of all jars in the repo, updated daily. So that could be used to pull down a copy of the repo, then later check for jars that aren't in the local copy and pull just those. Though that process would also have to have smarts around knowing that maven-metadata.xml files would need to be re-downloaded as well.
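To make that concrete, here's a minimal Clojure sketch of the incremental pull described above. It assumes the daily list is the all-jars.clj file under the repo root, with one [group/artifact "version"] vector per line; the exact file name, its format, and all helper names here are assumptions for illustration, not official Clojars tooling.

```clojure
(ns clojars-mirror.sketch
  (:require [clojure.edn :as edn]
            [clojure.java.io :as io]
            [clojure.string :as str]))

;; Repo root and local mirror directory are illustrative values.
(def repo-url "https://repo.clojars.org")
(def local-root (io/file "clojars-mirror"))

(defn group-path
  "Maven directory for a group id: dots become slashes.
   Entries with no namespace use the artifact name as the group."
  [sym]
  (str/replace (or (namespace sym) (name sym)) "." "/"))

(defn jar-path
  "Relative repo path of the jar for a [group/artifact \"version\"] entry."
  [[sym version]]
  (let [a (name sym)]
    (str (group-path sym) "/" a "/" version "/" a "-" version ".jar")))

(defn metadata-path
  "maven-metadata.xml sits beside the version directories and changes on
   every release, so it has to be re-fetched whenever a new jar appears."
  [[sym _]]
  (str (group-path sym) "/" (name sym) "/maven-metadata.xml"))

(defn download!
  "Fetch one relative path from the repo into the local mirror."
  [rel-path]
  (let [dest (io/file local-root rel-path)]
    (io/make-parents dest)
    (with-open [in (io/input-stream (str repo-url "/" rel-path))]
      (io/copy in dest))))

(defn sync-missing!
  "Read the published jar list, download any jar not present locally,
   and refresh that artifact's maven-metadata.xml alongside it."
  []
  (with-open [rdr (io/reader (str repo-url "/all-jars.clj"))]
    (doseq [line (line-seq rdr)
            :when (not (str/blank? line))
            :let [entry (edn/read-string line)]
            :when (not (.exists (io/file local-root (jar-path entry))))]
      (download! (jar-path entry))
      (download! (metadata-path entry)))))
```

Running (sync-missing!) periodically would keep the local copy current at the cost of one list download plus one request per new jar; keying the "already have it" check on the jar file's presence keeps the check a cheap local stat.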