This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
- # announcements (30)
- # aws (7)
- # babashka (7)
- # beginners (64)
- # calva (39)
- # cherry (17)
- # cider (1)
- # clj-on-windows (6)
- # clojure (30)
- # clojure-austin (12)
- # clojure-europe (25)
- # clojure-france (1)
- # clojure-nl (2)
- # clojure-norway (23)
- # clojure-spec (23)
- # clojure-uk (6)
- # clojurescript (20)
- # cursive (18)
- # datahike (3)
- # datalevin (12)
- # datomic (9)
- # etaoin (5)
- # graalvm (45)
- # instaparse (2)
- # interceptors (11)
- # kaocha (1)
- # lsp (102)
- # meander (6)
- # nbb (16)
- # off-topic (30)
- # pathom (83)
- # pedestal (6)
- # portal (5)
- # re-frame (12)
- # reitit (5)
- # rewrite-clj (10)
- # scittle (35)
- # shadow-cljs (49)
- # spacemacs (10)
- # vim (14)
I just spent like 45 minutes writing a function to walk a parse tree and produce a sequence before realizing all I needed was
What is the idiomatic way of mapping a set of functions to a list of maps and triggering side effects? I have got
(dorun (map #(do (f1 (:key1 %)) (f2 (:key2 %))) mymap))
First, to create a function which "projects" a bunch of functions:
(apply juxt fs) will return a new function
Then you want to map it to a bunch of maps. If you care about the results use
mapv, if not,
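Putting those pieces together, a small sketch (the `f1`/`f2` functions and `:key1`/`:key2` keys are placeholders from the question; the rest is just one way to wire it up):

```clojure
;; Placeholder side-effecting functions, standing in for the real f1/f2.
(defn f1 [v] (println "f1 saw" v))
(defn f2 [v] (println "f2 saw" v))

;; juxt "projects" several functions over the same argument:
(def extract (juxt :key1 :key2))
(extract {:key1 1 :key2 2})
;; => [1 2]

(def ms [{:key1 1 :key2 2} {:key1 3 :key2 4}])

;; Purely for side effects: run! eagerly walks the collection and returns nil.
(run! (fn [m] (f1 (:key1 m)) (f2 (:key2 m))) ms)

;; If the results matter, mapv is eager and returns a vector:
(mapv extract ms)
;; => [[1 2] [3 4]]
```

`run!` is built on `reduce`, so it realizes everything with no `dorun` wrapper needed.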
I'd like to call 3 different external APIs, wait for the results (or swap in a default result on timeout), then call another API with the combined data (or default values) from the first 3. Http-kit client calls return a promise, so would that be enough to hold the 4th call until the first 3 reply? Or would I need to wrap the first three calls in a promise or something else that delays the 4th call until they are ready? Background: this is done inside a handler function of a Clojure service using reitit & http-kit server, in response to an external POST request.
Either bring in promesa or copy the implementation of
i'd do something like
(run! deref list-of-httpkit-futures) and then
if youre passing data to the fourth call maybe:
(->> list-of-httpkit-futures (map deref) (doall) (fourth-call))
or return the whole thing in a promise like Ben said; this would be bad for long-running things.
I would put a timeout of a couple of seconds on all API calls; the http-kit client has a timeout key. Then if it times out I'd use a default value instead of whatever the timed-out call returns.
yeah the thing is would you be okay holding onto the client connection while this chain runs?
I have a very generous 5 seconds to respond from what I understand, so hanging on to the initial request that triggers all these calls shouldn’t be an issue (although that will be tested 🙂 )
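For the deref-with-default idea above, a minimal sketch using plain promises in place of http-kit calls (http-kit's client also returns something deref-able); the 100 ms timeout and the default map are made-up values:

```clojure
;; Plain promises stand in for http-kit client calls here.
(defn fake-call [result]
  (doto (promise)
    (deliver result)))          ;; a real call would deliver asynchronously

(let [r1 (fake-call {:a 1})
      r2 (fake-call {:b 2})
      r3 (promise)              ;; never delivered: simulates a timed-out call
      ;; deref's 3-arity takes a timeout in ms and a default value:
      results (mapv #(deref % 100 {:timed-out? true}) [r1 r2 r3])]
  ;; the fourth call would receive the combined data:
  (apply merge results))
;; => {:a 1, :b 2, :timed-out? true}
```

The handler thread blocks for at most the sum of the timeouts, which fits comfortably inside a 5-second budget.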
Somewhat related to the above (and apologies in advance for the question length):
I have a question I have more or less answered for myself in the java world: "when is it worth it to do reactive programming". Reactive in this context refers to the pattern of programming where you use async/await, completablefuture, reactor/javarx in java, js promises etc. My conclusion in the java world is more or less in line to the one this presentation (which is great by the way) by Tomasz Nurkiewicz reaches:
I.e. if you are not Netflix-scale, the drawbacks usually outweigh the benefits by a pretty wide margin: a much more complex code base; way higher cognitive load, which means you need sharper engineers to work with it; debugging context that is pretty much totally lost (your code is probably not even in the stack trace, and where do you go from there?); etc.
Would you say that this still holds true in clojure land or are there features or patterns that somehow alleviate some of the pain points?
PS. I lied. 🙂 HLL-specific things like: • Clojure/Common Lisp macros; • first-class functions; and • dynamic/special variables... ...can do wonders when it comes to hiding wiring/boilerplate. But JS does not have macros or special variables and Matrix/JS turned out OK, so maybe I only fibbed. Java, OTOH, with strong static typing and -- well, how is it doing on first-class functions? I hear it has finally gone there. Watching that video now. I am... concerned already. :)
Thank you @U0PUGPSFR Very informative. I will dig into the links and meditate over this.
@U0PUGPSFR reading through the matrix docs. I’m especially looking forwards to reading the origin story of cl cells : ) Given that matrix is somewhat geared towards solving the ui problem, how good a match would you say it is for the backend-server-at-scale case? (incidentally the case the yt video linked above deals with) I.e where you use reactive to be able to handle more requests per second etc…
Should be noted that in the server-side java-world, the complexity and cognitive load of the code is only one part of the problem. When shit hits the fan and you get an exception, it is quite normal to lose most if not all of the context and for example end up with a stack trace where your code is nowhere in the stack trace...i.e. all you see is library code and there is no way to see what part of your code actually blew up or caused the issue.
UI is simply the poster child for the actual Matrix use case: "any application involving an interesting amount of long-lived state and a stream of unpredictable inputs." I did apply Matrix usefully to a RoboCup simulation league client, which simply gets a complete game view every 100ms. The unpredictability there came from the other players also acting on the simulated game state. @U4VDXB2TU Meanwhile, nice catch on the origin story! https://tilton.medium.com/the-making-of-cells-5ab873d1e6c7 I almost mentioned that, very early on in the evolution of Matrix (aka Cells, né Semaphor), you will see we made an effort to hide the wiring. We also leave the programmer freedom to handle the same property different ways, without resorting to subclassing. The point being that a powerful developer D/X comes from deliberate design. At the time I already had a lot of experience with UI frameworks that were absolutely brutal to make dance, so I was on guard at every turn to keep things simple for the developer.
also on a totally separate track - project loom and virtual threads might be one solution to the server side scalability issue
@U0495TEG9C1 Ah, thanks, I was not aware of Svelte3. But why did they write a compiler to make
count += 1 reactive? Matrix/JS uses
defineProperty. Ah, here it is:
...but in the preceding paragraph they said:
Importantly, we can do all this without the overhead and complexity of using proxies or accessors.
Sounds like overhead -- we can run but not hide! Meanwhile we have a preprocessing step. Ewww! 🙂 Anyway, kudos for their attention to D/X!
Since we're a compiler, we can do that by instrumenting assignments behind the scenes:
Well, I would invoke Occam. In Lisp we say never to use a macro where a function would suffice. I think the same goes for preprocessing. Ha. Come to think of it, they are the same thing! 🙂
@U4VDXB2TU I watched most of the video. I agree with the last slide: think before using reactive. I have done a lot of ETL lately and never considered reactive: it fails the "interesting amount of long-lived state" criterion. As for managing a heavy request load, it is not clear to me how reactive was helping, so not sure what to say. I made good use of core.async for massive ETL, fwiw. I agree somewhat about the stacktrace problem, wherein we cannot see the whole process that was active when an exception is thrown, but that seems to be true of any system that has communicating subprocesses passing things around in queues. This is why we log, right? The exception shows the immediate local failure, the log lets us see the larger context? Mostly I go back to my first point: what reactive mechanism are we discussing? Mr. Nurkiewicz twice says he is describing all reactive mechanisms. Hmmm....
@U0PUGPSFR on phone so limited bandwidth - in javaland the default behavior (at least until project loom and virtual threads) is for each incoming request to be served by a java thread which in turn has a one-to-one relationship with os threads. Thus if you are Netflix and have 10 000 concurrent connections to your server process, you will need 10 000 Java threads and 10 000 os threads and 10 000 sockets - which in turn is a problem for the os to handle (see c10k problem) So then you need to either buy a second machine to run a second server process or you use reactive to make fewer java/os threads do the same work by parking the thread when it is waiting for io or otherwise blocking and allowing the java thread to serve another request while that happens. Again, from a phone, not perfect but captures the gist.
Thx, @U4VDXB2TU. What I am hearing is that Java has issues because of that 1-1 relationship with a scarce OS resource, and that there is something about some reactive tool that lets Netflix work around that. Not sure, but I think this just drives home the ambiguity of "reactive". I guess Matrix could be used to the same end, with one thread serving many customers because each customer was behind its own Matrix cell... OK, now I see why HLL comes up. Would Erlang sneeze at the C10k problem? I am no expert, but methinks Clojure core.async has the same lightweight-process "win" as Erlang.
Adding to the previous answers, with the Clojure CLI/deps.edn there is a trend (at least to some extent) to switch from maven to git as the delivery mechanism, so, if you use that, you might also consider your favorite code hosting service as a package server.
On that note, I saw that https://github.com/technomancy/leiningen moved from GitHub to https://codeberg.org/leiningen/leiningen, which I applaud heartily. Is there also a trend to move away from GitHub in Clojure generally, or was that a one off? (I am also leaving GitHub as fast as my feet can carry me)
No, but Clojure uses GitHub only as a public repository; issues and CI are delegated to other tools like Jira.
Also, deps.edn doesn't care about the actual hosting of the sources; they just must be available via the git protocol.
deps.edn features do favor GitHub over the others, though, e.g. the reverse-notation syntax for defining coordinates, and I don't think there is a request to support more hosts or make it more generic (let alone a plan).
https://clojure.org/reference/deps_and_cli lists shorthand for GitHub, gitlab, bitbucket, beanstalk, and one more I don’t recognize. I believe Alex adds in these as the community desires. Don’t think it’s particularly favored towards GitHub although that is surely the most popular
Alex is extremely responsive to community requests/questions! I just meant that nobody asked for the less popular ones, which makes sense by definition. 🙂 > beanstalk, and one more Those must be recent additions because last I checked (months ago) I only saw a couple (github, gitlab and bitbucket I think). It's great that support is improving. By the way the one you don't recognize is https://sourcehut.org/!
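For reference, a hypothetical deps.edn fragment using the reverse-notation shorthand discussed above (the library name, tag, and sha are placeholders, not a real project):

```clojure
;; deps.edn — the io.github.* prefix lets the CLI infer the git URL
;; from the coordinate name; :git/tag and :git/sha below are placeholders.
{:deps
 {io.github.someuser/somelib
  {:git/tag "v1.0.0" :git/sha "0000000"}}}
```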
I'm looking for a persistent async task scheduler (like sidekiq or delayed_job for Ruby). Is there anything like that for Clojure?
Is it bad to use
for for impure functions? I want to read a series of files and transform the data into a map and return it. A sketch
What's the idiomatic way for doing this?
(defn files->map [file-paths]
  (for [file-path file-paths
        ls        (line-seq (io/reader file-path))]
    ;; does stuff to generate a map from the contents of ls
    ))
Ensure it evaluates strictly, using something like doall or into to realize the entire lazy computation.
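To see why realization matters, a tiny sketch: the body of a lazy `for` doesn't run until something consumes the sequence.

```clojure
(def ran? (atom false))

(def squares
  (for [x [1 2 3]]
    (do (reset! ran? true)   ;; stand-in for a side effect like reading a file
        (* x x))))

@ran?             ;; => false, nothing has executed yet
(doall squares)   ;; => (1 4 9), forces the whole seq and keeps it
@ran?             ;; => true
```

`(into [] squares)` would force it just as well while producing a vector. (At a REPL, printing the seq also realizes it, which can hide the laziness.)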
As an alternative to
for, is using
map multiple times better or an abomination? I can change it to
mapv to make it eager
(defn files->map! [file-paths]
  (let [line-seqs  (->> file-paths
                        (map io/reader)
                        (map line-seq))
        delimiters (->> line-seqs
                        (map second)
                        (map detect-delimiter))
        headers    (->> line-seqs
                        (map first)
                        (map transform-header))
        rows       (->> line-seqs
                        (map rest)
                        (map split-row delimiters))]
    (map transform-header-rows->map headers rows)))
map is particularly good at hiding side effects from someone reading your code, because of the implication of its meaning.
mapv would at least give me pause, and in fact make me suspicious of side effects, because "Why else would you not just use map?" 👍
I would consider this a good spot for https://clojure.org/reference/transducers. You can separate out the "what we're doing" by combining your calls to map into a single transducer, which gives you the benefit of only doing a single pass over your data instead of multiple. You can then use the transducer with either
into for eager evaluation or
sequence for lazy.
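A sketch of that suggestion, with made-up parsing steps standing in for the real header/row transforms:

```clojure
(require '[clojure.string :as str])

;; One composed transducer = one pass over the data,
;; instead of one intermediate seq per map call.
(def xf
  (comp (map str/trim)
        (map #(str/split % #","))
        (map #(zipmap [:name :age] %))))

(def lines ["  alice,30" "bob,25  "])

(into [] xf lines)    ;; eager
;; => [{:name "alice", :age "30"} {:name "bob", :age "25"}]

(sequence xf lines)   ;; lazy
```

The same `xf` works with either consumer, so the "what" (the transforms) stays separate from the "how" (eager vs. lazy).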