This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2023-01-16
Channels
- # announcements (2)
- # babashka (51)
- # beginners (165)
- # biff (39)
- # clara (1)
- # clj-kondo (20)
- # cljsrn (6)
- # clojure (64)
- # clojure-belgium (11)
- # clojure-conj (2)
- # clojure-europe (12)
- # clojure-nl (3)
- # clojure-norway (7)
- # clojure-uk (6)
- # clojurescript (11)
- # conf-proposals (1)
- # conjure (1)
- # core-async (19)
- # cursive (6)
- # data-science (16)
- # datomic (6)
- # deps-new (4)
- # fulcro (60)
- # funcool (3)
- # graalvm (9)
- # helix (14)
- # introduce-yourself (4)
- # jobs-discuss (13)
- # joyride (1)
- # kaocha (2)
- # malli (12)
- # off-topic (25)
- # polylith (9)
- # portal (3)
- # practicalli (1)
- # rdf (43)
- # re-frame (7)
- # reagent (5)
- # releases (5)
- # remote-jobs (8)
- # sci (5)
- # shadow-cljs (42)
- # squint (6)
- # xtdb (5)
I don't hear often from people that prefer not to use tools like Spec or Malli. If this is you, what are your reasons for preferring not to use these things?
I prefer not to use them, because I got disheartened after going all-in on spec and getting rugpulled a little, and stayed away from Clojure for a while, coincidentally, and didn't know about Malli. I'm writing my first Malli schema at the moment
i had some rough experiences with long debug times writing code in cljs, compared to how productive i was being in typescript, which i barely know, so i figured i'd give the whole approach another chance
I'm not sure there are many who don't. But most people only use it at the boundaries, at least I do. So my code isn't using spec or Malli, but input and output external to my program is checked with them. So what I write to the database, what I receive as input to an API or return as output to an API, etc. And sometimes a few key additional places in the code as well. For small programs I won't bother either, like definitely in scripts I don't really use them.
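The boundary-only approach described above can be sketched roughly like this with Malli (the schema, handler name, and shapes here are hypothetical, just to illustrate validating at the edge and trusting data inside):

```clojure
(require '[malli.core :as m])

;; A hypothetical schema for data crossing the boundary:
(def User
  [:map
   [:name :string]
   [:age [:int {:min 0}]]])

;; Validate only at the boundary, e.g. on API input; internal code
;; then works with the data without further checks:
(defn handle-create-user [input]
  (if (m/validate User input)
    {:status 200 :body input}
    {:status 400 :body "invalid user"}))
```

Inside the program, plain maps flow around unchecked; only the entry and exit points pay the validation cost.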
It's not that I prefer not to use spec/malli, but I do think some teams over-spec or prematurely spec their system, or make their specs overly complex. Specs can be great but I like them more as an optional and intentionally patchwork contract layer rather than a pseudo-type system where the team feels compelled to enumerate every piece of information.
@U0K064KQV I have been trying to spec the boundaries of my system, but I find that while I can use m/validate easily, I almost never want to use that. I want to use m/explain so I can actually throw an error or respond with why the input violates the spec.
But then I need to parse that error with a function. And I feel like I've undone the point of using spec validation. If the spec changes, I might have to update my parser of the errors. So it's kinda like "Why aren't I just doing this with my own function to begin with?"
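For what it's worth, Malli ships a humanizer that can reduce the need for a hand-written error parser; a minimal sketch (schema is hypothetical, and the exact message strings may differ by Malli version):

```clojure
(require '[malli.core :as m]
         '[malli.error :as me])

(def User [:map [:age [:int {:min 0}]]])

;; m/explain returns a data description of the failure...
(def problem (m/explain User {:age -1}))

;; ...and malli.error/humanize turns it into readable messages
;; keyed by the offending path, e.g. {:age ["should be at least 0"]}:
(me/humanize problem)
```

This doesn't remove the coupling to the schema entirely, but it does avoid writing and maintaining a bespoke explain-data parser.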
So for instance, form state might be a different shape from the map (schema) it will eventually get transformed into and validated against. Once that validation is done, I basically have to transform it back to something more appropriate for the form state so I can display it on the web page. It seems like Spec is complicating things more than anything.
Normally what I do is: I take input, transform it to my internal Clojure representation, then validate that with spec. And similarly, take the Clojure representation, validate it with spec, and then convert it to the DB representation. And I just make sure my conversion code back/forth is well tested. But for user-readable validation error messages, Spec is not good.
I actually have mostly done service to service stuff, where I can return the Clojure explain to the calling service, since it's just another dev looking at it. But if I had to do user errors, I think this is where Malli and Plumatic Schema are better. And I believe there are some Clojure libs that can do it with Spec as well, like this one: https://github.com/igrishaev/soothe or that one: https://github.com/alexanderkiel/phrase
But, there's nothing wrong with doing it in your own functions, Spec is basically just a kind of DSL for validation, if you don't find it clearer or faster to define the validation with Spec, it's also fine.
I guess generally a function is even more difficult to like reverse engineer into what are the rules around the shape and values of the data
To clarify, I'm not sure why Malli would be better. I'm actually doing something similar to what you're doing, using Malli. I have a form state that can get converted to the Clojure interpretation before validation. The issue I ran into for user errors is that if your form state is not in the exact same shape as the schema, there's not really a way to match the errors to the form state without doing it manually. Maybe your Clojure interpretation is a nested object but your form state is a flat list. So you have to make two functions to turn it to and from the representation you need. That's not inherently difficult; it just means that Malli is not really doing anything except documenting what the shape is. And if the schema changes, both your transformations change, and maybe it's just better to leave Malli out entirely at that point.
I'm curious if your well tested conversion functions also validate between before and after conversions or if you do that manually
Or if your tests involve using the Malli schema
Well, I was thinking Malli would be better because it can do coercion, and I thought it already had a feature for user-friendly errors. But I see your issue. In my opinion, I would have two schemas for that: one for the form, and one for the service that's supposed to receive it in a different structure. They won't be completely independent schemas, because spec/Malli do per-field specs. So you can reuse the field specs but have a different container spec; one could say the fields are meant to be in a flat map, the other in a deeply nested structure. That said, I don't have this problem because I'm not dealing with user forms, but remote service calls. So my APIs can just take the same shape (but in JSON) as what I'd expect for my Clojure representation. So I do believe you bring up a good issue; maybe someone with more experience with user forms might need to chime in. My last thought is: even if you've got a separate validation on the user form side of things, and that can be done with just normal functions or whatever, I think on the backend a strong and precise API validation is still important, probably more so than the user validation, because errors past the backend are going to cause more issues and could also corrupt database state and all kinds of second-order breakage. Also, you might eventually use the same API from more than one frontend view, etc. And it's important just for people integrating with it to know what the expected payload is that needs to be sent, especially if it differs from the data captured by the form.
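The "reuse field specs across two container specs" idea above might look something like this (all names and shapes are hypothetical, just showing a flat form schema and a nested service schema sharing field-level schemas):

```clojure
(require '[malli.core :as m])

;; Per-field schemas, shared by both containers:
(def Email [:re #".+@.+"])
(def Age   [:int {:min 0}])

;; Flat shape, matching the form state:
(def FormSchema
  [:map [:email Email] [:age Age]])

;; Nested shape, matching the service payload:
(def ServiceSchema
  [:map [:user [:map [:email Email] [:age Age]]]])
```

If a field-level rule changes (say, the minimum age), it only changes in one place, even though the two container shapes stay independent.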
All good advice, thank you
I want to do some concurrency, with the following constraints: • make ~200 requests to S3, where I PUT a file in each request • maximum time budget for this operation is about 5 seconds (a bit arbitrary, but it's good if it's short) • it's not critical that they all succeed • there are no per-request timeout options in my http client • I need to wait for them to complete locally, because my process terminates when the function returns Any recommendations on approaches?
If you have resources to fire those requests simultaneously, then the simplest approach is probably to create a vector of futures and then simply Thread/sleep for 5000. Or use deref sequentially on that vector with its timeout argument if you care about the result (of course, the timeout will be dynamic, since the second future will have "5 seconds minus the time it took for the first future to complete" total seconds).
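The "vector of futures with a shared deadline" idea above can be sketched like this; do-put! is a hypothetical stand-in for the real S3 PUT call:

```clojure
;; Hypothetical stand-in for the real S3 PUT (here it just sleeps):
(defn do-put! [req]
  (Thread/sleep 50)
  {:ok req})

(def results
  (let [futs     (mapv #(future (do-put! %)) (range 200))
        deadline (+ (System/currentTimeMillis) 5000)]
    ;; deref each future with whatever remains of the 5s budget,
    ;; so the overall wait never exceeds the deadline:
    (mapv (fn [f]
            (let [left (- deadline (System/currentTimeMillis))]
              (deref f (max 0 left) :timeout)))
          futs)))
```

Requests that miss the deadline come back as :timeout, which fits the "not critical that they all succeed" constraint.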
Agree with the advice so far. The JVM can easily handle 200 futures/threads (they'll mostly be blocked on IO rather than CPU anyway). Futures are fine, or you can use claypoole:
[com.climate.claypoole.lazy :as cpl]
(count (cpl/upmap 200 run-s3-request collection-of-s3-requests))
If you want to detect if a given request has taken more than 5 seconds, you can launch it in a future and use deref with 3 args:
(let [f (future (Thread/sleep 100))]
  (deref f 10 :timeout))
You can then try to cancel the future or call https://docs.oracle.com/javase/7/docs/api/java/lang/Thread.html on the underlying thread object.
Java 19 has virtual threads, which sound perfect for this. I didn't try them myself yet, but I hear a lot of good opinions.
Yeah, depending on the task, regular Java threads are often fine even at 20,000 threads, so I think virtual threads start becoming more important past that kind of number.
AFAIK virtual threads in the first place have good performance for IO-blocking operations; it's a little deeper than just the raw number of threads.
You can also use an async http client, like https://github.com/babashka/http-client with {:async true}, and then just fire and forget, or use java.net.http + .sendAsync directly.
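Using java.net.http directly via interop (Java 11+) might look roughly like this; the URL and body here are placeholders, and nothing is actually sent in this sketch:

```clojure
(import '(java.net URI)
        '(java.net.http HttpClient HttpRequest
                        HttpRequest$BodyPublishers
                        HttpResponse$BodyHandlers))

(def client (HttpClient/newHttpClient))

;; Each call returns a CompletableFuture immediately:
(defn put-async [url body]
  (.sendAsync client
              (-> (HttpRequest/newBuilder (URI/create url))
                  (.PUT (HttpRequest$BodyPublishers/ofString body))
                  (.build))
              (HttpResponse$BodyHandlers/ofString)))
```

Fire all 200, collect the CompletableFutures, and either forget them or join on them before the process exits.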
I figured out that my http library had timeouts, so I started out with a core.async implementation, since I'm a little bit familiar with Golang's CSP implementation:
(defn concurrent-puts
[bucket-name transformed-files]
(let [file-count (count transformed-files)
s3-responses (chan file-count)]
;; fan out 1 go routine per file / put request
(doseq [{:keys [file key]} transformed-files]
(go
(>! s3-responses (put-file timeout bucket-name key file))))
;; gather the results by doing a blocking loop that in turn does a blocking take from the response channel
(loop [i file-count
result []]
(if (zero? i)
result
(let [response (<!! s3-responses)]
(recur (dec i)
(conj result response)))))))
I'm a little bit fuzzy on how the "gather the results" part of the function would look in that case
https://clojuredocs.org/clojure.core.async/pipeline-blocking - see if this is what you want
Just use the non-blocking client: https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/s3/S3AsyncClient.html#putObject(java.util.function.Consumer,java.nio.file.Path)
Then you just fire 200 putObject calls, each returning a CompletableFuture, and if you want to wait for them all to be done you can do:
(mapv #(.join %) list-of-completable-futures)
Or if you don't want to use the official client, then like @U04V15CAJ said you'll need a http-client that can do non-blocking multiplexed http requests.
Otherwise you'll need to use blocking threads, so either future like others have said, or pipeline-blocking with core.async, or some executor service.
Note that your core.async code might be wrong if put-file is blocking. If that's the case, you're going to starve the go threads.
Unlike in Go-lang, Clojure's core.async does not automatically park on blocking IO and manage the IO threads for you.
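For blocking IO, core.async's thread macro (which uses a real thread per call, unlike the limited go-block pool) sidesteps that starvation problem; a small sketch, with put-file as a hypothetical blocking stand-in:

```clojure
(require '[clojure.core.async :as a])

;; Hypothetical blocking call (real code would do the S3 PUT):
(defn put-file [x]
  (Thread/sleep 20)
  {:ok x})

(def results
  (let [chans (mapv #(a/thread (put-file %)) (range 10))]
    ;; a/thread returns a channel that yields the body's result;
    ;; take once from each channel to gather everything:
    (mapv a/<!! chans)))
```

This keeps the fan-out/gather shape of the original code while doing the blocking work on dedicated threads.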
If you want to try with futures:
(->> (mapv #(future (put-files ... %)) files)
(mapv deref))
This would call put-files concurrently, each in a separate thread for every file in files, and wait for all of them to complete, returning a vector of their results when they are all done.
Is it correct to say that clj-kondo + LSP is the de facto standard for providing custom syntax support (such as warnings and code actions)? Are there other popular solutions?
I think so. For linting there is also https://github.com/jonase/eastwood. And for altering code there is also https://github.com/clojure-emacs/clj-refactor.el.
I suggest Clojure LSP (which uses clj-kondo) is the de facto standard for 'live linting', i.e. syntax support whilst typing or otherwise viewing in an editor, and the same for refactoring. clj-refactor has been around a lot longer, but lost a maintainer for a while; it seems to be back now. However, if already using Clojure LSP, I am not clear whether clj-refactor has additional benefits (apart from familiarity to those who have been using it for a while).
For continuous integration the tool choice seems a bit more open. I use https://github.com/marketplace/actions/setup-clojure, which includes clj-kondo for syntax checking and cljstyle for format checks/fixes. This action also includes zprint, which can also be used for formatting Clojure code. There is also a https://github.com/marketplace/actions/setup-clojure-lsp, although I haven't tried that (curious to know what it could do over and above setup-clojure).
https://github.com/jonase/eastwood and https://github.com/jonase/kibit provide a good command-line experience, or can be used as Leiningen plugins. I used to use kibit to help write more idiomatic code, although clj-kondo does much of that now as I type. It could be useful to run kibit as a pre-commit check to see if clj-kondo missed something.
The most common alternative is Cider + clj-refactor + clj-kondo. I would actually think this might still be the most popular setup. Another very common alternative is Cursive on IntelliJ + clj-kondo. I'd say these two and the option you mentioned are all really on-par with each other and good in different ways.
Afterthought: I love clj-kondo and its instant feedback in my editor. I also very much appreciate eastwood and its ability to report on such things as usages of deprecated Java things.
Getting a reflection warning where I'm not expecting one.
(defn- code-point-at [^CharSequence s n]
(Character/codePointAt s n))
;;=> Reflection warning, com/colinphill/extra_special/unicode.clj:46:3 - call to static method codePointAt on java.lang.Character can't be resolved (argument types: java.lang.CharSequence, unknown).
The overloads of that method are (char[], int), (char[], int, int), and (CharSequence, int), so afaict the type hint I provided should fully disambiguate. Anyone know what's happening here?
Hinting n as ^long fixes it, but it doesn't seem like that should be necessary.
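For reference, the fixed version from the exchange above, with both arguments hinted so the (CharSequence, int) overload resolves without reflection (with only the ^CharSequence hint on s, the compiler reports n as "unknown" and falls back to reflection):

```clojure
;; Enable reflection warnings so unresolved calls are visible:
(set! *warn-on-reflection* true)

;; Both hints together let the compiler pick Character/codePointAt
;; (CharSequence, int) statically:
(defn code-point-at [^CharSequence s ^long n]
  (Character/codePointAt s n))
```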
> can't be resolved (argument types: java.lang.CharSequence, unknown).
The message indicates that the reflection is caused by n being unknown.
Please reread my last message.
Yes. I see you found a solution. But my point is that the message already indicated that the reflection was not caused by a failure to recognize s, as your original post indicated you were confused about.
That is not what my original post indicated.
> afaict the type hint I provided should fully disambiguate
This is still true. Only one overload has a CharSequence in that position.
openjdk 17.0.4 2022-07-19
hm, I'm a couple versions behind latest, I should change that
I've had a couple of these when upgrading to a newer Java when there were extra methods, but that doesn't seem to be the case here
Yeah, the overloads I listed above are as of Java 19
It was briefly discussed recently, Alex suggested to create an Ask if you think it's a bug: https://clojurians.slack.com/archives/C03S1KBA2/p1672746703205099
I'm considering enabling a linter that warns about un-initialized vars in clj-kondo by default. I've been bitten by this myself today with:
(def ^:dynamic *reload*)
(when *reload* (do-expensive-reload))
Without me realizing, the "expensive reload" was always happening and has been happening for months, because an un-initialized var is always truthy in JVM Clojure. I could have known this because SCI works in the exact same way, but it's an easy mistake to make.
If there are reasons to not initialize a var, please leave some feedback here:
https://github.com/clj-kondo/clj-kondo/issues/1954
You then cannot distinguish between uninitialized vars and vars initialized to nil.
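The pitfall from the messages above can be demonstrated in a few lines; bound? is one way to make the uninitialized/initialized-to-nil distinction explicit:

```clojure
;; An uninitialized var: it exists, but holds an Unbound object.
(def ^:dynamic *reload*)

;; Unbound is truthy, so a bare (when *reload* ...) always fires:
(bound? #'*reload*)   ;; false - no value was ever set
(boolean *reload*)    ;; true  - the Unbound placeholder is truthy

;; Checking bound? first makes the intent explicit:
(when (and (bound? #'*reload*) *reload*)
  :would-reload)
```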
I think a similar concept is discussed in https://www.cambridge.org/core/books/lisp-in-small-pieces/66FD2BE3EDDDC68CA87D652C82CF849E (notably page 60) but I forgot the details.
Sure but have you ever needed to make that distinction in real programs? I haven’t
I think I saw the compiler using this in some way but it didn't look extremely important