#clojure
2021-03-01
richiardiandrea04:03:58

Does anyone print logging data in edn directly? The goal would be to copy paste directly to the repl. Edit: of course only in Dev mode 😃

richiardiandrea20:03:22

Yes I was thinking about that but it would require more work

richiardiandrea20:03:07

TIL there also is https://clojure.github.io/tools.logging/#clojure.tools.logging.readable that does print with pr-str wrapping 😄 That's cool stuff. Thanks to the maintainers!
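For reference, the readable variants wrap each logged argument in pr-str instead of the plain print-str formatting, which is why the output round-trips as edn. A minimal core-only sketch of the difference (the example map is made up):

```clojure
;; print-str (what plain log messages use) drops string quotes, so the
;; output is for humans only; pr-str emits readable edn.
(print-str {:user "ada"})   ; => "{:user ada}"
(pr-str    {:user "ada"})   ; => "{:user \"ada\"}"

;; the pr-str form round-trips through the reader:
(= {:user "ada"} (read-string (pr-str {:user "ada"})))  ; => true
```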

hiredman04:03:21

Pedestal at one time logged in edn to some degree

richiardiandrea05:03:54

Thank you! that's what this code base is actually using, will check the link

hiredman04:03:32

https://gist.github.com/hiredman/64bc7ee3e89dbdb3bb2d92c6bddf1ff6 is a little library for using java util logging to log in edn

Noah Bogart18:03:49

this looks super cool. do you have any examples of usage?

Noah Bogart19:03:15

awesome, thank you!

dpsutton05:03:38

i almost linked to that. i use it constantly now

👍 3
hiredman05:03:46

People get excited about macros writing macros, but what about non-macros writing macros

🤯 6
richiardiandrea05:03:50

A bit magical indeed 😃

jeongsoolee0906:03:14

I am unsure that's possible.

p-himik08:03:21

> non-macros writing macros
Oh, I am one! :)

aw_yeah 3
pez11:03:47

What impresses me the most are those non-macros that write macros that write macros.

borkdude12:03:12

You mean functions that emit code as a string / .clj file? Legit.

hiredman18:03:53

I didn't mean for this to be enigmatic, if you look back in main chat there is a gist I posted of some code, and it generates macros by doseq'ing over a list, interning some functions, then call the setMacro method on the var
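A hedged sketch of that mechanism (the macro names here are made up, not taken from the gist): intern a function whose first two parameters stand in for the implicit &form and &env every macro receives, then flip the var's macro flag:

```clojure
;; Generate a family of logging macros without ever writing defmacro.
(doseq [[sym level] [['log-debug :debug]
                     ['log-warn  :warn]]]
  (let [v (intern *ns* sym
                  ;; a macro is just a fn taking [&form &env & args]
                  (fn [_form _env & args]
                    `(println ~level ~@args)))]
    ;; tell the compiler to treat the var as a macro
    (.setMacro ^clojure.lang.Var v)))

(macroexpand '(log-debug "x =" 1))
;; => (clojure.core/println :debug "x =" 1)
```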

pez11:03:56

Is there some kind of pfilter around? Like pmap, with its nice interface similarity to map. The lack of a pfilter in the core library makes me think I might not be reasoning correctly about the problem… (Which is to filter a sequence of integers as fast as I possibly can. 😄 )

pez11:03:06

I might add that the filter predicate is fast, afaik. So this note on pmap seems to tell me I should be looking for other ways to speed the process up: > Only useful for computationally intensive functions where the time of f dominates the coordination overhead.

dharrigan11:03:25

Although it doesn't have a pfilter, it works with transducers so you can supply a filter

borkdude11:03:47

If your predicate is fast, why do you need pmap at all?

borkdude11:03:04

because the collection is huge?

pez11:03:13

Thanks @U11EL3P9U! I’ll have a look!

borkdude11:03:17

in this case you might be better off with reducers perhaps

pez11:03:14

Yes, the collection can potentially be huge, and then I want it to go much quicker than it does today.

pez11:03:20

So, I filter 500K in 20ms and imagine that if all 6 cores of my machine took a slice each it would be done in less than 4ms. 😃

borkdude11:03:44

@U0ETXRFEW I don't think pmap will buy you anything here. Take a look at clojure.core.reducers

borkdude11:03:53

reducers will slice the collection in multiple parts and then do the work on each slice in separate threads and then concat the result

borkdude11:03:57

this is not how pmap works
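A sketch of that slicing with r/fold (the numbers are mine). One caveat that may matter here: r/fold only runs in parallel over foldable collections (vectors and maps); over a lazy seq like (range n) it silently degrades to a serial reduce, so (into [] (r/filter ...)) alone won't parallelize anything.

```clojure
(require '[clojure.core.reducers :as r])

;; r/fold splits the vector into chunks (512 elements by default),
;; reduces each chunk on the ForkJoin pool, then merges chunk results
;; with the combine fn (+ here, whose 0-arity supplies the identity).
(def v (vec (range 1000000)))

(r/fold + (fn [acc x] (if (even? x) (inc acc) acc)) v)
;; => 500000
```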

pez11:03:08

I will. Interestingly, @U11EL3P9U linked to pfold from that parallel lib. 😃

vemv11:03:40

I tend to want pfilter from time to time, but always procrastinate implementing one (that also suits my sensibilities). My usual workaround is to run the predicate through pmap and then use a vanilla filter identity as the next step (which won't be parallel, but can be assumed to be fast since identity is a simple pred)
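That workaround might look something like this sketch (pfilter is a hypothetical name, not a core fn):

```clojure
;; Run the (expensive) predicate in parallel via pmap, keeping the
;; element on a hit and nil on a miss; then strip the nils in a cheap
;; serial pass. Caveat: this drops legitimate nil elements of the input.
(defn pfilter [pred coll]
  (->> coll
       (pmap (fn [x] (when (pred x) x)))
       (filter some?)))

(pfilter odd? (range 10))
;; => (1 3 5 7 9)
```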

pez11:03:52

Also interesting that in that beginner’s guide to Clojure I am writing, yesterday I wrote “I won’t be going into reducers here”. 😃

borkdude11:03:22

That approach only helps if the predicate itself is slow

raspasov11:03:48

Have you considered the core.async pipeline utils?

pez11:03:51

My predicate is an index lookup in a boolean array.

vemv11:03:08

ah whoops, didn't read “I might add that the filter predicate is fast”

raspasov11:03:35

They are quite powerful and nice to use in my experience https://clojuredocs.org/clojure.core.async/pipeline

p-himik11:03:38

If you do a lot of number crunching, perhaps using Neanderthal would be worth it. Map/reduce tutorial section: https://neanderthal.uncomplicate.org/articles/tutorial_native.html#fast-mapping-and-reducing

pez11:03:58

Not considered core.async, @raspasov. I started to think about the option to parallelize this some minutes before I asked the question and hadn’t found pfilter in the core library.

pez11:03:14

I’ll have a look at that. Even if the number crunching is done for the particular task: it takes 0.3 ms, and then filtering out the results takes 20 ms. Very frustrating!

raspasov11:03:44

@U0ETXRFEW pipeline would allow you to write a transducer like (filter my-fn) and then just give it “n” (defonce p1 (pipeline 21 to-ch (filter my-fn) from-ch))

dharrigan11:03:44

....`does the database dance`...

dharrigan11:03:54

🙂 From neanderthal 🙂

p-himik11:03:17

Do note that Neanderthal is IIRC hundreds of MBs because it requires BLAS and/or MKL.

raspasov11:03:21

Then simply start put! -ing elements onto to-ch

pez11:03:07

Sounds nice!

pez11:03:33

(pipeline, not hundreds o MBs 😄 )

raspasov11:03:37

… and receive the filtered result onto ‘from-ch’

raspasov11:03:51

actually…. reverse

raspasov11:03:05

start with ‘from-ch’

raspasov11:03:09

receive in ‘to-ch’

borkdude11:03:16

Hm, if these results are coming from a database, you might be able to do this work inside the database instead (dharrigan's database word triggered that thought)

raspasov11:03:20

Hopefully that was clear 🙂

raspasov11:03:18

clojure docs has some nice examples of pipeline
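Putting raspasov's pieces together, a minimal wiring might look like this (toy data, parallelism 4 assumed):

```clojure
(require '[clojure.core.async :as a])

(let [from-ch (a/to-chan (range 10)) ; feed the input onto from-ch
      to-ch   (a/chan 10)]           ; filtered results arrive on to-ch
  ;; run the (filter odd?) transducer over from-ch with parallelism 4;
  ;; to-ch is closed when from-ch is exhausted, and order is preserved
  (a/pipeline 4 to-ch (filter odd?) from-ch)
  (a/<!! (a/into [] to-ch)))
;; => [1 3 5 7 9]
```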

vemv11:03:35

Wondering if using transducers and no parallelization would result in a noticeable speedup (at the very least it tends to be more memory-efficient)

raspasov12:03:42

@U45T93RA6 It depends on what you’re doing… it can be significant but rarely an order of magnitude improvement (just switching from collections to transducers without something like pipeline)

raspasov12:03:41

(pipeline …) really shines if you have a big server with many real cores and a bunch of tasks that you need to get done in parallel and they require minimal coordination (for example, web scraping)

raspasov12:03:22

I’ve launched a server on AWS with 32+ cores and used pipeline… it’s pretty neat

pez12:03:04

The easiest one to test was transduce. It gained me 10%. Next thing to try is reducers, I think. But later, my lunch break is over. 😃

pez17:03:39

The gain from reducers is a tad better, but still nothing major. I don’t quite understand why. Next experiment will be pipeline, but I expect it to not help too much either, because I suspect I have not analyzed the problem correctly.

pez22:03:41

With pipeline things go about 200 times slower :thinking_face:

pez22:03:18

I think pipeline might not be suited for parallelizing things that go fast.

raspasov04:03:50

@U0ETXRFEW hmm that sounds quite strange (esp the 200x)… what kind of code are you running and how many cores is the machine? What’s the (pipeline n …) number?

raspasov04:03:40

Is it pure functions or is there IO in the code?

pez06:03:39

I’ll make a repro.

pez07:03:37

I think this is similar enough to what I do in my program, where I have a boolean-array where some positions are set and some are not. Most are not. Then I want the indexes of the set bits at every odd position of the array, as a sequence.

(require '[clojure.core.async :as a]    ; aliases used below
         '[clojure.core.reducers :as r])

(let [n 1000000
      ba (boolean-array n)
      every-other (range 1 n 2)
      prob 0.15
      sample-size (long (* n prob))
      to-ch (a/chan 1)
      from-ch (a/to-chan every-other)]
  (doseq [i (take sample-size
                  (random-sample prob (range n)))]
    (aset ba i true))

  (println "filter")
  (time
   (count
    (filter #(aget ba %) every-other)))

  (println "transduce")
  (time
   (count
    (transduce
     (filter #(aget ba %))
     conj
     every-other)))

  (println "core.reducers/filter")
  (time
   (count
    (into [] (r/filter #(aget ba %) every-other))))

  (println "core.async/pipeline")
  (time
   (do
     (a/pipeline 4 to-ch (filter #(aget ba %)) from-ch)
     (count
      (a/<!! (a/into [] to-ch))))))

pez07:03:04

Output:

filter
"Elapsed time: 24.779003 msecs"
transduce
"Elapsed time: 23.854725 msecs"
core.reducers/filter
"Elapsed time: 23.759016 msecs"
core.async/pipeline
"Elapsed time: 3535.955765 msecs"
=> 74885

pez08:03:37

If I use criterium/quick-bench instead, the transduce and reducers wins are a bit more apparent:

filter
Evaluation count : 30 in 6 samples of 5 calls.
             Execution time mean : 23,654346 ms
    Execution time std-deviation : 301,776435 µs
   Execution time lower quantile : 23,352585 ms ( 2,5%)
   Execution time upper quantile : 24,050820 ms (97,5%)
                   Overhead used : 14,507923 ns
transduce
Evaluation count : 36 in 6 samples of 6 calls.
             Execution time mean : 20,129352 ms
    Execution time std-deviation : 595,084459 µs
   Execution time lower quantile : 19,646010 ms ( 2,5%)
   Execution time upper quantile : 21,079716 ms (97,5%)
                   Overhead used : 14,507923 ns
core.reducers/filter
Evaluation count : 36 in 6 samples of 6 calls.
             Execution time mean : 17,903643 ms
    Execution time std-deviation : 186,971423 µs
   Execution time lower quantile : 17,675913 ms ( 2,5%)
   Execution time upper quantile : 18,138291 ms (97,5%)
                   Overhead used : 14,507923 ns
core.async/pipeline
Evaluation count : 22230 in 6 samples of 3705 calls.
             Execution time mean : 27,089463 µs
    Execution time std-deviation : 136,898899 ns
   Execution time lower quantile : 26,919838 µs ( 2,5%)
   Execution time upper quantile : 27,291471 µs (97,5%)
                   Overhead used : 14,507923 ns
(For some reason, it fails to measure the pipeline code. It doesn’t fail in my real code.)

pez08:03:05

Interestingly (to me, at least 😄 ) for performs on par with transduce with this task:

(println "for")
    (quick-bench #_time
                 (count
                  (for [i every-other
                        :when (aget ba i)]
                    i)))

for
Evaluation count : 36 in 6 samples of 6 calls.
             Execution time mean : 19,456574 ms
    Execution time std-deviation : 100,364503 µs
   Execution time lower quantile : 19,370228 ms ( 2,5%)
   Execution time upper quantile : 19,618312 ms (97,5%)
                   Overhead used : 14,507923 ns

p-himik09:03:07

Out of curiosity - what about a plain loop?

raspasov09:03:01

@U0ETXRFEW I see… OK, I think the biggest gains from pipeline are to be had when the pipeline transducer is CPU-intensive (think parsing HTML into data, file compression, etc.); here you have a pretty straightforward xf: (filter #(aget ba %)). Also, I think 1,000,000 samples is not that much really, so (pipeline …) would be suffering from all the channel etc. overhead of passing the data around.

raspasov09:03:51

Also, a sidenote, (time …) is almost never a good benchmark strategy (but quick-bench is); I’ve seen cases where a simple (time …) benchmark would be “slow” but quick-bench would actually show a huge improvement since the JVM does its JIT magic and code really speeds up after a few iterations in some cases;

raspasov09:03:27

I think that’s a good idea @U2FRKM4TW (loop []…)

raspasov09:03:59

That’s probably the fastest thing you can get in terms of raw single thread perf… pretty much Java speed;

pez10:03:36

BOOM

(println "loop")
(quick-bench #_time
             (count
              (loop [res []
                     i 1]
                (if (<= i n)
                  (recur (if (aget ba i)
                           (conj res i)
                           res)
                         (+ i 2))
                  res))))

loop
Evaluation count : 84 in 6 samples of 14 calls.
             Execution time mean : 7,518441 ms
filter
Evaluation count : 30 in 6 samples of 5 calls.
             Execution time mean : 23,020098 ms
transduce
Evaluation count : 36 in 6 samples of 6 calls.
             Execution time mean : 19,090405 ms
core.reducers/filter
Evaluation count : 42 in 6 samples of 7 calls.
             Execution time mean : 16,328693 ms
for
Evaluation count : 36 in 6 samples of 6 calls.
             Execution time mean : 19,678977 ms

raspasov10:03:27

Yup, loop is the king 🙂

raspasov10:03:48

If you really care about perf. I highly recommend YourKit

raspasov10:03:18

I bet it will help you gain 50% in no time

raspasov10:03:44

I’ve used it, it’s like magic; the gains will come from a place you least expect… some reflection call that’s using 50% of your CPU time

p-himik10:03:47

@U0ETXRFEW Now try making res a transient. :)

pez10:03:26

transient, huh? Doin’ it!

raspasov10:03:51

Try also unchecked-math 🙂

3
p-himik10:03:26

In my previous adventure with single-threaded high perf, I ended up writing a Java class. :D All my data consisted of integers and Clojure doesn't really like them.

raspasov10:03:31

Also, http://clojure-goes-fast.com (various ideas how to go fast)

pez10:03:25

I’ll be trying YourKit too. Though only out of curiosity really. I don’t have performance tasks often. This is a little toy challenge I have, mainly to learn more about Clojure. I profile it with tufte right now, which is pretty nice.

pez10:03:11

Seems like I should be able to parallelize the loop, no?

p-himik10:03:25

Absolutely, your problem is a textbook map(filter)/reduce problem.

pez18:03:35

transient shaves some more of the time, as hinted at 😃

loop
             Execution time mean : 7,704050 ms
loop-transient
             Execution time mean : 5,017702 ms
filter
             Execution time mean : 24,047486 ms
transduce
             Execution time mean : 19,687393 ms
core.reducers/filter
             Execution time mean : 17,303117 ms
for
             Execution time mean : 21,142251 ms

👍 3
pez19:03:42

Unchecked math doesn’t seem to make much of a difference for the particular problem.

p-himik19:03:10

I think that's because there's only a single math operation there, and its arguments' types are well known by the compiler. If you really want to pursue it further, I would try to get the bytecode for that code and see if there's something fishy going on. I've had some success with https://github.com/gtrak/no.disassemble/ and https://github.com/clojure-goes-fast/clj-java-decompiler before.

pez20:03:40

Unchecked doesn’t attract me so much. I would rather figure out how to parallelize it. I can’t immediately see how:

(quick-bench #_time
                 (count
                  (loop [res []
                         i 1]
                    (if (<= i n)
                      (recur (if (aget ba i)
                               (conj res i)
                               res)
                             (+ i 2))
                      res))))

p-himik20:03:10

- Split ba into N chunks
- For each chunk, run a thread that creates its own res
- Combine the resulting collection of res vectors in a single vector, preserving the order

p-himik20:03:51

Just out of interest - why (+ i 2)? Does ba store something unrelated at even indices?

pez21:03:13

Yes, I am only interested in the odd indices. ba contains the results of an Eratosthenes sieve, where I have skipped sieving even numbers, b/c we all know there’s only one even prime number. 😃

pez21:03:46

I was hoping there was some reducer or something that would do all those steps for me.

p-himik21:03:10

Oh, is that code just to find prime numbers up to n? If so, then even constructing the sieve could be made parallel. And I'm 95% certain there's already a Java library that does it. :)

pez21:03:36

Haha, I’m in this to learn about Clojure. 😃

pez21:03:24

That code is only to pick out the prime numbers I have found up to n.

pez21:03:49

Here’s the full thing, using loop and transient:

(defn pez-ba-loop-transient-sieve [^long n]
  (let [primes (boolean-array (inc n) true)
        sqrt-n (int (Math/ceil (Math/sqrt n)))]
    (if (< n 2)
      '()
      (loop [p 3]
        (if (< sqrt-n p)
          (loop [res (transient [])
                 i 3]
            (if (<= i n)
              (recur (if (aget primes i)
                       (conj! res i)
                       res)
                     (+ i 2))
              (concat [2] (persistent! res))))
          (do
            (when (aget primes p)
              (loop [i (* p p)]
                (when (<= i n)
                  (aset primes i false)
                  (recur (+ i p p)))))
            (recur  (+ p 2))))))))

pez21:03:42

I haven’t ventured into how to speed up the sieving (beyond the obvious optimizations) b/c most of the time has been spent in picking out the indices from the sieve.

pez21:03:50

I’m trying to figure out how to parallelize the work of converting my boolean-array to indexes. Parallelizing with filter was so easy that I get surprised by how much things grow when I try to do it with the loop. I have this so far.

(comment
  (import '(java.util.concurrent Executors ExecutorService))
  (let [n 1000000
        ba (boolean-array n)
        prob 0.15
        sample-size (long (* n prob))]
    
    (doseq [i (take sample-size
                    (random-sample prob (range n)))]
      (aset ba i true))
    
    (let [^ExecutorService
          service (Executors/newFixedThreadPool 6)
          ^Callable
          mk-collector (fn [^long start ^long end]
                         (fn []
                           (loop [res (transient [])
                                  i start]
                             (if (<= i end)
                               (recur (if (aget ba i)
                                        (conj! res i)
                                        res)
                                      (+ i 2))
                               (persistent! res)))))
          num-slices 10
          slice-size (/ n num-slices)]
      (doseq [[start end] (partition
                           2
                           (interleave
                            (range 1 (inc n) slice-size)
                            (range slice-size (inc n) slice-size)))
              :let [f (.submit service (mk-collector start end))]]
        @f))))
There are two unsolved things here:
1. My future f contains nil even though I know that the collector I create with mk-collector produces the collection I want.
2. I don’t know how to combine my slices in the order I start the threads.
Also, this is slower than my single-thread solution. Not by very much, but anyway. Am I even on the right track?

p-himik21:03:28

Things that I notice immediately:
- ^Callable there marks mk-collector and not the result of calling (mk-collector ...). And that, I think, is useless because the compiler already knows that mk-collector is callable. If you want to say "mk-collector returns a callable", you have to tag its arguments list.
- Don't deref within doseq - this way, you start a thread and immediately wait for its completion, then start the second one, and so on. Instead, create a vector of futures and only then deref all of them in order. And that will be the exact order in which you have created them. In fact, you can deref them in order in reduce - it even optimizes it further, albeit not substantially since all threads have roughly the same amount of work in your case.
- You start 6 threads but create 10 slices - why? Choose the number of threads you want to have and create the same amount of slices, one per thread.
- That (partition ...) form makes my head spin. I have a strong feeling that whatever it does could be rewritten in a much simpler way in the overall context. I might be wrong though.

pez21:03:21

Thanks. About the partition… I’m sure you are right. I had it hard-coded at first and then just translated the way I hard-coded it. 😃

pez21:03:46

Threads vs slices. I tried using 10 for both, but it didn’t make a difference. I have six cores on my machine so went for that, but my partition blows up with 6 slices. Haha.

p-himik21:03:22

> partition blows up What exactly does that mean? It was working just fine with 1 huge slice after all.

p-himik21:03:47

It makes a difference in the overall code - you won't need any explicit executor, you would be able to just use future.

p-himik21:03:16

And it also should make some performance difference as well. It might not be noticeable in this context, but in general it should exist.

pez21:03:37

Blows up means that my start and end indices get out of whack and I get index-out-of-bounds errors. I didn’t want to focus on this before I had got the basic infrastructure right.

p-himik21:03:37

Ah, it just means that your partition incantation is incorrect. :) It has nothing to do with threads.

pez21:03:23

Yeah, nothing to do with threads. I just didn’t succeed with this naive partition to create 6 slices for my 6 threads. But it won’t matter if I don’t need the executor service, anyway.

p-himik21:03:23

You should end up with something like this:

(let [n-partitions 6
      ;; Notice how it says `mapv` and not `map` - this is important.
      ;; You want to be eager to start all the futures right away.
      futures (mapv (fn [partition]
                      (future
                        %magic%))
                    (range n-partitions))]
  (into []
        (mapcat deref)
        futures))

❤️ 3
pez22:03:38

Yes. now it runs about 2X faster than the non-future version. And, even produces the right result.

pez22:03:15

(let [mk-collector (fn [^long start ^long end]
                           (fn []
                             (loop [res (transient [])
                                    i start]
                               (if (<= i end)
                                 (recur (if (aget ba i)
                                          (conj! res i)
                                          res)
                                        (+ i 2))
                                 (persistent! res)))))
            num-slices 10
            slice-size (/ n num-slices)
            slices (partition
                    2
                    (interleave
                     (range 1 (inc n) slice-size)
                     (range slice-size (inc n) slice-size)))
            futures (mapv (fn [[start end]]
                            (future
                              ((mk-collector start end))))
                          slices)]
        (into []
              (mapcat deref)
              futures))

p-himik22:03:20

Great! Although I would personally inline mk-collector. Using (( is a hint to that. And you can do all the partition work inside future, thus making it parallel as well.

pez22:03:41

The partition work takes zero time though?

pez22:03:48

Interesting that you suggest inlining mk-collector. I thought the same, but it then started to take 3X more time…

p-himik22:03:12

Depends on its inputs. But moving it inside futures will make the code much simpler.

pez22:03:20

With the above I have Execution time mean : 3,113634 ms Inlining:

(let [num-slices 10
            slice-size (/ n num-slices)
            slices (partition
                    2
                    (interleave
                     (range 1 (inc n) slice-size)
                     (range slice-size (inc n) slice-size)))
            futures (mapv (fn [[start end]]
                            (future
                              (loop [res (transient [])
                                     i start]
                                (if (<= i end)
                                  (recur (if (aget ba i)
                                           (conj! res i)
                                           res)
                                         (+ i 2))
                                  (persistent! res)))))
                          slices)]
        (into []
              (mapcat deref)
              futures))
Execution time mean : 10,080578 ms

p-himik22:03:17

How about this? I haven't tested it, might not even work:

(let [num-slices 10
        slice-size (int (/ n num-slices))
        offset 2
        _ (assert (zero? (mod slice-size offset))
                  "Dealing with slices that have fractional chunks would be too complicated.")
        futures (mapv (fn [slice-idx]
                        (future
                          (let [start (* slice-idx slice-size)
                                end (if (= slice-idx (dec num-slices))
                                      n
                                      (+ start slice-size))]
                            (loop [res (transient [])
                                   i start]
                              (if (< i end)
                                (recur (cond-> res
                                         (aget ba i) (conj! i))
                                       (+ i offset))
                                (persistent! res))))))
                      (range num-slices))]
    (into []
          (mapcat deref)
          futures))

p-himik22:03:14

The difference in your code is not only inlining but also the lack of type hints. Try adding ^long wherever necessary. To help further analyze such issues, always do this:

(set! *unchecked-math* :warn-on-boxed)
(set! *warn-on-reflection* true) ;; Doubt it will be useful here, but it's useful in general.

pez22:03:40

I simplified the ranges similar to what you suggest here. Did try throwing in type hints, but didn’t seem to bite. Will look closer at where you suggest they should be…

pez23:03:28

Unfortunately it doesn’t gain me the slightest in with my prime number sieve. 😃 But this was very, very good for me to investigate and get to know a bit about, so I am good and happy. Many thanks for the guidance!

p-himik23:03:03

Sure thing. I'm actually quite curious for why extracting that fn makes the code faster.

pez23:03:31

Oh, it didn't in the end. Setting those warning levels helped me find where I lost the time. Using quot instead of / fixed it.

👍 3
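The quot-vs-/ difference, for anyone following along: / on longs can return a Ratio, so the compiler can't keep the result as a primitive and every downstream operation gets boxed, which is exactly what :warn-on-boxed flags; quot always yields an integer and stays primitive.

```clojure
(class (/ 10 4))    ; => clojure.lang.Ratio  - non-even division
(class (/ 10 5))    ; => java.lang.Long
(class (quot 10 4)) ; => java.lang.Long      - truncating, always integral
(quot 10 4)         ; => 2
```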
restenb12:03:23

so I have this chain of async events where I need to wait on a status condition for each step in order to proceed to the next. struggling a bit with how to structure this code. currently looks like this:

restenb12:03:27

looking for input on how to handle this sort of thing. gets pretty ugly when we're talking about a chain of 8-10 steps.

borkdude12:03:49

This is called callback hell. You might be able to structure this better using core async

borkdude12:03:09

or some monadic library like promesa maybe (never tried it)

borkdude12:03:53

The way you write your code right now it seems you're not doing it async btw, it seems like a series of sync operations

borkdude12:03:38

Recently someone showed me how he used https://github.com/adambard/failjure to solve this kind of problem

restenb12:03:41

yeah it is. each wait-for hides a loop polling some HTTP API for a specific status

borkdude12:03:28

You might also be able to use an async http lib like httpkit or a java 11 based one

restenb12:03:50

the whole thing is essentially one big sync operation in that each step absolutely needs to wait for the next before proceeding

restenb12:03:07

but consisting of async HTTP calls underneath

borkdude12:03:39

then handle the error/success in the http callback

borkdude12:03:14

you either wait, or you async, there's no waiting + async

borkdude12:03:03

unless you are waiting for a promise that gets delivered by an async request for example
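That promise pattern in miniature (the sleep stands in for an async HTTP callback firing later):

```clojure
(let [p (promise)]
  ;; the "callback" delivers the result later, from another thread
  (future
    (Thread/sleep 50)
    (deliver p {:status :done}))
  ;; the caller blocks for up to 1000 ms, with a per-step timeout value
  (deref p 1000 :timeout))
;; => {:status :done}
```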

restenb12:03:13

hm. the whole point of this was to wait different amounts of time for each step before issuing a timeout to the client, but iirc you can perhaps do something like

restenb12:03:00

the http client (clj-http) returns futures if you tell it to (which I hadn't)

borkdude12:03:33

it might be better to set the timeout on the request though, if possible

restenb12:03:48

nah the request doesn't time out. you get a response with current status from the API. so this won't really work either

restenb13:03:31

so it's essentially a daisy chain of polling loops

raspasov13:03:46

I feel like I’m having a core.async day 🙂 (already talked about pipeline elsewhere) @restenb https://clojuredocs.org/clojure.core.async/pipeline-async might be helpful for your case

restenb13:03:31

@raspasov i'll take a look, thanks

Elso16:03:39

lately I've been getting massive hangs when retrieving libs from Central, using Leiningen 2.9.5 - is that a lein problem or a Maven one?

Elso16:03:42

always hangs on different libs and restarting deps a few times eventually succeeds

dpsutton21:03:24

i'm surprised i've never made this silent but terrible error before: [{:keys [a :as thing]}]. thing here is not the whole object being destructured. my eyes glanced right over it for a while

🙂 3
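What the typo actually does, spelled out (the toy map is mine): inside the :keys vector, :as and thing are just two more keys to look up, so thing binds to (:thing m) rather than the whole map, and a local named as appears too.

```clojure
;; :as inside :keys binds a local `as` to (:as m) - usually nil -
;; and `thing` to (:thing m), not to the map being destructured.
(let [{:keys [a :as thing]} {:a 1 :thing 2}]
  [a as thing])
;; => [1 nil 2]

;; the intended form puts :as outside the :keys vector:
(let [{:keys [a] :as thing} {:a 1 :thing 2}]
  [a thing])
;; => [1 {:a 1, :thing 2}]
```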
Alex Miller (Clojure team)21:03:24

syntactically, that's all legal :)

Alex Miller (Clojure team)21:03:33

but :as in a :keys list seems like something you could lint

borkdude21:03:31

@dpsutton @alexmiller clj-kondo will already kind of make you notice by saying that :as is an unused binding

❤️ 3
devn01:03:30

it also does the nice thing and will ignore thing if it is prepended with _, like _thing to signal unused

dpsutton21:03:41

yeah. i guess i missed it in the font-locking for :as as a keyword

borkdude21:03:33

Speaking about keywords, I'd like some input on this proposal for an :invalid-ident linter which will warn about things like :1.10.2: https://github.com/clj-kondo/clj-kondo/issues/1179

borkdude21:03:30

The reason :1.10.2 is problematic is that when you take the name and convert it to a symbol and try to read that as EDN, it will fail for example. We ran into this issue when outputting keywords in the analysis.