#clojure
2021-07-23
seancorfield02:07:47

Yeah, I remember an impassioned talk at Clojure/West about the (poor) state of web security in the Clojure world... by Aaron Bedra back in 2014 https://www.youtube.com/watch?v=CBL59w7fXw4

👀 4
emccue02:07:45

I guess honeysql is the answer to 20:00

seancorfield02:07:45

A comment about SQL injection?

seancorfield02:07:39

I would have thought clojure.java.jdbc/`next.jdbc` were the answer to that security issue since that's where parameterized statements happen but, yeah, HoneySQL probably contributes to making that easier too...
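For illustration, here is a minimal sketch of what "parameterized statements" look like with HoneySQL 2.x and next.jdbc (the ds datasource and the users table/column are made-up names):

(require '[honey.sql :as sql]
         '[next.jdbc :as jdbc])

(def user-input "alice@example.com")   ; imagine this came from a web form

;; HoneySQL turns the value into a ? parameter rather than splicing it into the SQL string:
(sql/format {:select [:*] :from [:users] :where [:= :email user-input]})
;;=> ["SELECT * FROM users WHERE email = ?" "alice@example.com"]

;; next.jdbc sends the SQL and the parameter separately as a PreparedStatement,
;; which is what defeats SQL injection:
;; (jdbc/execute! ds (sql/format {:select [:*] :from [:users] :where [:= :email user-input]}))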

emccue02:07:59

it was kinda a tangent he went off on

emccue02:07:45

where he said i guess at 21:00

emccue02:07:36

supplanting 2014 korma as the "sql abstraction"

seancorfield02:07:43

Korma is... ORM-ish... so I have always recommended against it.

seancorfield02:07:21

I don't remember when we first started using HoneySQL but a former colleague gave a talk about it at Clojure/West... I want to say in 2015?

danielglauser13:07:53

I remember Aaron’s talk, that was a good one.

West03:07:46

(def all-root-dirs
  (for [path (java.io.File/listRoots)]
    path))

(= (take 10 (map file-seq all-root-dirs))
   (take 10 (file-seq (first all-root-dirs))))
Why is this expression false? (There is only one root, /, as I'm on macOS.) I get my entire filesystem when I run (take 10 (map file-seq all-root-dirs)) but I get only 10 like I asked when I run (take 10 (file-seq (first all-root-dirs))). Is this an edge case where take doesn't work on lazy sequences?

lsenjov03:07:43

What does (count (map file-seq all-root-dirs)) return?

seancorfield03:07:41

@c.westrom Since file-seq returns a sequence, map file-seq is going to return a sequence of sequences. That's why the results are different.

seancorfield03:07:06

Try (mapcat file-seq all-root-dirs) and see if that gives you what you want.
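To make the difference concrete, a small sketch (assuming a single filesystem root with at least 10 files under it):

(def all-root-dirs (seq (java.io.File/listRoots)))   ; same roots as in the question

(count (take 10 (map file-seq all-root-dirs)))
;;=> 1    ; one element: the (lazy) file-seq of the single root
;; printing that one element at the REPL walks the whole tree, which is why the
;; original expression appeared to return the entire filesystem

(count (take 10 (mapcat file-seq all-root-dirs)))
;;=> 10   ; a flat sequence of files, so take really stops at 10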

FiVo08:07:51

Hey, I am using core.cache and was wondering why the creation of an lru cache with a large threshold and no initial data was taking so long.

(require '[clojure.core.cache.wrapped :as cw])
(time (cw/lu-cache-factory {} :threshold 10000000))
The bottleneck seems to be this line https://github.com/clojure/core.cache/blob/master/src/main/clojure/clojure/core/cache.clj#L210 where the lru list gets filled with dummy values. Why is this necessary? I looked at the LRUCache code and don't see any reason for the lru list to not grow organically and only initialize it with the base. Maybe I am missing something and someone can enlighten me.

FiVo08:07:28

cc @seancorfield. Ccing you as you are maintainer of the lib.

seancorfield17:07:03

Can you write this up on http://ask.clojure.org please and tag it with core.cache?

seancorfield17:07:03

It'll be a while before I get time to cycle around to that lib in my OSS time and I didn't write that part (@U050WRF8X did) so I'll have to figure it out in detail -- or maybe Fogus will see this and answer here...

FiVo21:07:03

Will write it up. Can also try to do a patch if you want, but maybe @U050WRF8X can chime in first to confirm the issue.

seancorfield21:07:25

@UL638RXE2 There's a signed CA on file for you?

FiVo21:07:41

No not yet I think, but would love to sign it.

FiVo11:07:47

I signed the CA this morning. I don't know if you have the authority to add me to JIRA so I can propose a patch.

seancorfield14:07:34

I don't, I'm afraid, and I know Alex is not around much for the next week or two so I think you'll just have to be patient, until that happens. It'll take me a while to analyze the problem anyway before I can even look at a patch, and I may have to defer to Fogus since he wrote the LRU and LU cache implementations...

fogus (Clojure Team)13:07:14

I'll look into the CA and Jira thing this morning. Alex will be back a couple of days this week so I'll make sure to get his eyes on it if I don't happen to have the keys to that kingdom. 🙂

West11:07:46

@seancorfield Ah, that makes sense. Thank you. I finally have a use for mapcat now lol.

Jim Newton12:07:02

I find the diagnostic messages from clojure.test confusing. For example the following assertion fails and gives the message below:

(is (= (rte/canonicalize-pattern-once '(:cat ::x  ::y))
       '(:cat (:cat String (:* String)) (:cat Double (:* Double)))))
message
expected: (:cat Long (:* Long) Double (:* Double))            

  actual: (:cat String (:* String) Double (:* Double))          
Why does (is (= ...)) think that the lhs is the expected value and the rhs is the generated value? Isn't that really arbitrary since = is a symmetric relation?

borkdude12:07:36

it's a convention I think

Jim Newton12:07:11

I always find it confusing. in fact I'd find it confusing either way. to me it'd be better to say rhs and lhs as it very well might be that both values are generated, and I just want to assert that they're the same, but not assert which one is the correct value

jsn12:07:11

it says nothing about which one is the correct value, it just says that after seeing the lhs it expected to see it again, but sees rhs instead

Jim Newton12:07:54

ah ha. I read the message as: this is the "expected" value as opposed to this actual value.

Jim Newton12:07:08

I wonder if it was written by a native English speaker?

Jim Newton12:07:47

perhaps it is a silly argument, because I can always just put a message string of my own there, and everyone will be happy.
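For reference, the message goes in as the optional second argument to is and is printed alongside the default expected/actual report when the assertion fails (a small sketch):

(require '[clojure.test :refer [deftest is]])

(deftest both-sides-computed
  ;; neither side is more "expected" than the other; the message makes that explicit
  (is (= (sort [3 1 2]) (range 1 4))
      "both sides are computed; they should simply be equal"))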

jsn12:07:21

it is the expected value, after you've seen it in the left hand side 🙂

Jim Newton12:07:11

not convinced.

Jim Newton12:07:22

clearly whichever one is the constant is the expected value.

p-himik12:07:07

Which one is the constant here? (is (= (map identity (range 10)) (range 10)))

vemv13:07:08

range 10 is the constant. Hint: (class (range 10))

p-himik13:07:39

Uhm, class on any value will return you something. Because you already have that value. Replace (range 10) in my code with the same (map identity (range 10)) - which is the constant now?

vemv13:07:15

> Uhm, `class` on any value will return you something.

vemv13:07:54

you missed the point of what I said. a clojure.lang.LongRange is a value per se, so range 10 can be considered a constant

p-himik13:07:30

Alright. What do you consider to be a constant?

vemv13:07:46

in the context of the conversation, the thing that needs the least computation is the "constant". The actual terminology is expected/actual and can reasonably apply in 99% of cases

vemv13:07:05

there will be cases where it doesn't, which presumably is your point, but those are rare

Jim Newton13:07:33

exactly, if neither is constant, then neither is the expected value. it is just expected that the two generated values are equal.

p-himik13:07:47

> the thing that needs the least computation
With that definition, can you expect anyone to actually implement that, given how little value-add it has and how much ambiguity it brings? :)

vemv13:07:55

I don't know. Does an informal definition given by @U45T93RA6 affect the validity of a practice that has been around for 20 years? Feel free to research

dpsutton13:07:07

@jason358 that’s a clever framing. I’ve never thought of it like that

3
dgb2313:07:50

This convention is very common. For example if you see a function like assert_equals or similar in any language you'd expect it to be read from left to right, where left is the "expected" value.

3
p-himik13:07:20

> in any language
Not in Python. :P unittest doesn't declare anything as expected - it's just first and second. pytest treats the second argument in e.g. assert x == y as the expected value.

👍 6
dgb2313:07:04

ty for mentioning. Python is one of the mainstream languages I have almost never used.

dgb2313:07:27

sorry this has become #off-topic

cdpjenkins14:07:01

At the risk of contributing to the off-topic-ness, when I'm coding in Java (which is far too much of the time at the moment), I like to use an assertion library like Hamcrest or AssertJ because they make it really obvious which is the actual value, and which is the assertion. This discussion has reminded me of the bad old days when all we had was JUnit's assertEquals() and I always felt that expected and actual were the wrong way round.

dgb2314:07:19

clojure often has a strong notion of left to right sequentiality as well. For example (or 1 2) => 1, (or (do (println 1) 1) (do (println 2) 2)) => 1 1

dgb2314:07:52

(= (do (println 1) 1) (do (println 1) 1) (do (println 2) 2) (do (println 2) 2)) => 1 1 2 2 false

dpsutton14:07:11

I doubt there is a language where or doesn't have this property. I know Dijkstra was big on random choice out of lists and some languages leave argument order undefined. but Clojure guarantees argument evaluation in order (edit: i believe. i see ghadi typing so perhaps i will be corrected)

Phil Shapiro15:07:07

Pretty sure there are a few languages that allow you to choose which behavior you want. I seem to recall Ada supports both short circuit and non-short circuit boolean forms.

dpsutton15:07:43

do you know if the non-short circuit forms are arbitrary element of the or statement and not left to right?

dpsutton15:07:33

non-short circuit could mean evaluating all the arguments left to right even after you have found a true one. doesn't scramble the order

dpsutton15:07:57

> In the absence of short-circuit forms, Ada does not provide a guarantee of the order of expression evaluation, nor does the language guarantee that evaluation of a relational expression is abandoned when it becomes clear that it evaluates to False (for and) or True (for or).

dpsutton15:07:03

nice. you are right

Phil Shapiro15:07:32

I think there are others where ordering isn’t defined but can’t think of them offhand. Probably there are other languages where OR/AND are implemented as functions rather than primitive language features, or macros as in clojure.

dpsutton15:07:15

in haskell they are functions and laziness deals with the expression. i imagine anyone can write a version that shuffles the values

dgb2314:07:46

I fully agree with this. This is very expected under the assumption that we encode control.

ghadi14:07:30

it gets more interesting with maps:

{:a (do (println "a") :foo)
 :b (do (println "b") :bar)}

ghadi14:07:42

no one should rely on eval order here, but many do
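One concrete reason: small literals like the one above happen to evaluate in source order because they are read as array maps, but past 8 entries the reader produces a hash map and the value expressions are evaluated in hash order (a sketch):

{:a (println "a") :b (println "b") :c (println "c")
 :d (println "d") :e (println "e") :f (println "f")
 :g (println "g") :h (println "h") :i (println "i")}
;; with 9 entries the letters are typically not printed in a..i order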

dgb2315:07:17

interesting!

ghadi15:07:25

(side-effects in a map literal are gross enough on their own)

dgb2315:07:30

but = didn’t do what I expected

dpsutton15:07:47

what did you expect?

dgb2315:07:59

short circuiting on the first falsy value

dpsutton15:07:21

it can do that, but in order to do that it needs the values

ghadi15:07:24

function arguments are evaluated before the function

dgb2315:07:24

it doesn’t “need” to evaluate all expressions right

ghadi15:07:29

and = is a function

dpsutton15:07:31

it "has" to

dgb2315:07:39

applicative order

dgb2315:07:48

its a function yeah 🙂

dgb2315:07:40

(macroexpand '(or (do (println 1) 1) (do (println 2) 2)))

=>

(let*
 [or__5533__auto__ (do (println 1) 1)]
 (if or__5533__auto__ or__5533__auto__ (clojure.core/or (do (println 2) 2))))

Phil Shapiro15:07:24

If you wanted to play around with macros, you could write your own version of = that has the behavior you were expecting. There’s nothing special about or other than it’s written as a macro instead of a function, so it can control when its arguments are evaluated.

dgb2315:07:52

not entirely sure if it’s right! and I have to go 😮

(defmacro short=
  ([_] true)
  #_([x y]
   (= x y))
  ([x & args]
   `(if (not= ~x ~(first args))
      false
      (short= ~(first args) ~@(rest args)))))

dgb2315:07:58

seems ok at first glance:

(macroexpand-all '(short= 1 1 2))

=>

(if (clojure.core/not= 1 1) false (if (clojure.core/not= 1 2) false true))

ghadi15:07:56

you're double evaluating all the args

👍 3
ghadi15:07:12

need to let bind as in or
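Something like this, a sketch of the same hypothetical short= with each argument let-bound so it is evaluated exactly once, in the spirit of or's expansion:

(defmacro short=
  ([_] true)
  ([x & args]
   `(let [x# ~x
          y# ~(first args)]
      (if (not= x# y#)
        false
        (short= y# ~@(rest args))))))

(short= (do (println 1) 1) (do (println 1) 1) (do (println 2) 2))
;; prints 1 1 2 (each argument evaluated once) and returns false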

Jim Newton15:07:33

Does anyone know whether the clojure compiler does some optimization with map which might interfere with dynamic binding? I have a function that looks like the following:

(defn conversion-cat-99
  [self]
  (rte/create-cat (map canonicalize-pattern-once (operands self))))
and canonicalize-pattern-once is a dynamic variable. When I rebind the variable, sometimes (it seems) the old value gets used inside conversion-cat-99. When I try to debug this, the problem is reproducible until I redefine this function to the same thing as it already is, and the problem goes away. it is as if the compiler has inserted the function object into the code rather than the variable, so that new bindings are ignored. Perhaps the problem is elsewhere, probably a bug in my code, but I've been searching for days and my evidence points to this.

borkdude15:07:40

laziness is the problem

ghadi15:07:18

negative - it's the value inside the var #'canonicalize-pattern-once that is passed as the argument to map, so any rebinding you do after that doesn't apply

ghadi15:07:24

you could do (map #'canonicalize-pattern-once ....) and then rebindings will be seen, but then you're subject to the laziness problem that @borkdude mentions

Jim Newton15:07:32

ghadi, If you're right I could rewrite it as:

(defn conversion-cat-99
  [self]
  (rte/create-cat (map (fn [re] (canonicalize-pattern-once re)) (operands self))))
to fix the problem, right?

borkdude15:07:02

it's often better to pass the value of the dynamic var explicitly as an additional arg or part of a map

ghadi15:07:57

it's true you shouldn't mix dynvars and laziness
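A tiny illustration of that interaction (a sketch; the binding is gone by the time the lazy seq is realized):

(def ^:dynamic *x* :root)

(def s
  (binding [*x* :bound]
    (map (fn [_] *x*) (range 3))))   ; nothing is realized inside the binding

(first s)
;;=> :root   ; the dynamic binding was already popped when the seq was walked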

Jim Newton15:07:12

but laziness happens implicitly all over the place.

ghadi15:07:26

but there is a conceptual issue going on: the value inside the var is passed to map, not the var itself

Jim Newton15:07:15

and does that value extraction happen at compile time or at evaluation time?

Jim Newton15:07:25

if it happens at compile time then rebinding will have no effect.

ghadi15:07:48

it happens at evaluation time

ghadi16:07:14

the only thing that happens at compile time is macroexpansion

ghadi16:07:18

and compilation

Jim Newton16:07:30

WAIT A MINUTE!!!! maybe I see the problem.

Jim Newton16:07:23

My code basically looks like the following, very roughly

(declare f)
(defn conversion-cat-99 [self]
  (rte/create-cat (map f (operands self))))
(def ^:dynamic f (fn ...))
(defn g []
  (binding [f something-new]
    (conversion-cat-99 ...)))

Jim Newton16:07:19

I believe that the #' optimization must have occurred because I used declare, so clojure knows it's a var, but doesn't know it's dynamic. then when I redefine the function later, clojure knows it's dynamic.

Jim Newton16:07:35

it's a theory. I can test it to see if that is the case

borkdude16:07:20

it's probably best if you made a repro that other people can actually run instead of guessing what you are doing

borkdude16:07:13

e.g. your pseudocode above doesn't actually show the usage of f anywhere

Jim Newton16:07:23

well I'm glad to discuss it here because I've been banging my head for days. and discussing it here, if I'm correct, it is solved in minutes.

ghadi16:07:28

it's best to avoid dynvars

☝️ 3
ghadi16:07:34

unless absolutely necessary

Jim Newton16:07:48

I'm using dynvars to avoid exhausting the java heap.

ghadi16:07:41

need to expand on that

borkdude16:07:51

turn f into an argument (of conversion-cat-99?) and enjoy the weekend :)

Jim Newton16:07:08

doesn't really work to turn f into the argument of conversion-cat-99.

hiredman16:07:14

mixing dynamic vars and laziness is always exciting

3
hiredman16:07:39

I should say non-strictness

Jim Newton16:07:46

I don't think the problem is laziness. I think it is forward declaration. Here is a test case:

(ns jimka-test
  (:require [clojure.pprint :refer [cl-format]]))

(declare f)

(defn g []
  (map f '(1 2 3)))

(def ^:dynamic f (fn [x] (* x x)))

(defn h []
  (assert (= (g) '(1 4 9)))
  (binding [f (fn [x] (+ x x))]
    (assert (= (g) '(2 4 6))
            (cl-format false "(g) returned ~A" (g)))))

(h)

hiredman16:07:20

guess which bindings are in place while guessing when a seq is realized

hiredman16:07:28

you need the dynamic on the declare

Jim Newton16:07:29

the second time g is called, in the second assertion, f has been rebound. However, g was compiled before the compiler knew f was a dynamic variable.

hiredman16:07:47

g doesn't know f is dynamic when it is compiled

Jim Newton16:07:20

obviously the compiled code looks very different if f is a dynamic variable or a lexical variable.

Ed16:07:02

(declare ^:dynamic f)
that works though, right?

Jim Newton16:07:23

I didn't know you could do that. But I'll give it a try.

Jim Newton16:07:52

do I need ^:dynamic on the declare and also the def ?

Ed16:07:16

yes I would think so

Ed16:07:46

but +1 on borkdude's comment - make the dynamic var an argument to conversion-cat-99 instead

Jim Newton16:07:46

oops my assertion was wrong. I need (2 4 6) not (1 2 6) blush

Jim Newton16:07:00

that seems to work, indeed.
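So, for the record, the fix to the earlier test case is just the declare line (assuming the rest of the snippet stays the same):

(declare ^:dynamic f)                 ; now the var is dynamic from the start

(def ^:dynamic f (fn [x] (* x x)))    ; keep ^:dynamic on the real definition too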

Jim Newton16:07:22

@borkdude, hi Michiel .... that's a really interesting static check to make. a declared variable which is later decorated as dynamic.

Jim Newton16:07:39

no, it cannot be made an argument to conversion-cat-99. It only appears that way because I've reduced the test case for the example.

borkdude16:07:09

it's the first time in 10 years I've seen this problem. why are you forward declaring this dynamic var in the first place?

Jim Newton16:07:37

the values of dynamic variables are memoized versions of global functions. it is not a good idea to edit all the functions in all the possible call chains to add 20 extra parameters for all the memoized functions.

Jim Newton16:07:49

why is it dynamic? so that I can effectively do the following:

(binding [f (memoize f-implementation)]
   ...)
That way after the binding form finishes, all the memoized information about f is GC'ed.

Jim Newton16:07:00

this is great for running 1000s of test cases and avoiding filling up the VM with all the memoized information.

ghadi16:07:59

memoization, now you got 3 problems

3
Jim Newton16:07:21

the pre-binding and post-binding of f in my case have the same semantics. just binding forces re-memoizing the values.

Ed16:07:29

sounds like it could be a map with a bunch of symbols->fns in? which could be lexically scoped and gc'd that way?

Jim Newton16:07:50

the 2 hardest problems in computer science are naming, caching, and off-by-1 errors.

😄 3
Jim Newton16:07:06

not sure what you mean by symbols->fns. but the functions are defined in different packages, thanks to the limitation that clojure only allows one package per UNIX file.

Ben Sless16:07:20

The suggestion is, instead of binding in a global environment, to create a local mapping of names to functions and pass it as an argument

Jim Newton16:07:42

refactor 100s of functions to pass that argument around?

Jim Newton16:07:14

no, I think dynamic variables are in the language because they are useful. Embrace the power of lisp.

Jim Newton16:07:21

just as I've learned clojure lets you declare dynamic variables AFTER it is already assumed they are not dynamic, and it doesn't warn you. presumably a bug in the compiler.

Jim Newton16:07:32

but one I know about now and I can guard against.

Ben Sless16:07:49

Useful, yes, but mixed with laziness it is like gasoline in a highly pressurized, hot, environment. Powerful and useful when contained. Explosive otherwise.

Jim Newton16:07:17

indeed. it is an argument non-lispers use against many of the features of lisp. Oh that's dangerous. I've heard the argument for 30 years.

Ben Sless16:07:54

It also breaks referential transparency. While they're useful, it seems like Clojure users have drifted towards general avoidance of dynamic environment. If you look at older libs they make pervasive use of them. New ones, not so much

Jim Newton16:07:59

Paul Graham writes about it in Hackers and Painters

dpsutton16:07:04

i think you could do (def ^{:declared true :dynamic true} *f*)

dpsutton16:07:14

instead of (declare *f*)

Russell Mull16:07:14

Stylistic arguments aside: it is definitely the case that typical Clojure code uses dynamic vars infrequently. Far less than you might see in some Common Lisp or even Scheme codebases. It is also the case that you'll lose out on https://clojure.org/reference/compilation#directlinking, which is a pretty big help in production.

emccue16:07:08

Well, take for example next.jdbc

emccue16:07:18

I like passing the db manually

emccue16:07:28

Other codebases make it a global var

emccue16:07:18

The price of manually partialing all the functions with a dynamic var isnt nothing

emccue16:07:52

But for a library

emccue16:07:57

I think it's the better choice

Jim Newton16:07:19

@dpsutton what is :declared true :dynamic true intended to do? I don't understand the intriguing suggestion.

dpsutton16:07:12

actually it looks like you can just (declare ^:dynamic works?)

emccue16:07:21

And I like the general trend away from them in the ecosystem

dpsutton16:07:32

check the meta on that and it will have both :declared and :dynamic

dpsutton16:07:58

i was worried that declare wouldn't marshal along the metadata but it does

lassemaatta16:07:43

do correct me if I'm wrong, but doesn't this approach look a lot like with-redefs?

dpsutton16:07:13

But declare just emits (simplified) (def ^declared your-name). and you know that to mark something as dynamic you just add the appropriate :dynamic metadata. So combine the two

emccue16:07:49

Yeah, with redefs is kinda the tool for testing since it does the right thing across all threads

emccue16:07:15

Dynamic vars are more for twiddling a config type thing

Jim Newton16:07:24

does (declare ^:dynamic a b c) declare only a to be dynamic or a, b, and c?

emccue16:07:49

Because the metadata would only be attached to the first symbol

emccue16:07:58

If it isn't I would be very surprised

Jim Newton16:07:43

@lassemaatta I believe with-redefs redefines in all threads, which I want to avoid. if the test cases are run in different threads, I don't want one test redefining a function being used by another thread

emccue16:07:47

Ah parallel tests

emccue16:07:09

Now you are playing with portals

Jim Newton16:07:47

There's no reason to intentionally make your code non-thread-safe unless there's a good reason to do so. right?

Russell Mull16:07:55

It's definitely more common / idiomatic to do this using with-redefs, if it's only done under test. The reason to do this is so the compiler can do a better job with your code, when not under test. But you're right to be cautious; it's a heavy hammer that can have unexpected side effects if you're not careful.

Jim Newton16:07:34

Great discussion guys (and ladies) I've been hitting my head against a wall for almost a week with this issue.

Jim Newton16:07:48

discussing it here solves the problem in minutes. bravo!

Ben Sless17:07:14

On the topic of dynamic scope, this was written by Stuart Sierra in 2013 https://stuartsierra.com/2013/03/29/perils-of-dynamic-scope

borkdude17:07:48

@jimka.issy I didn't ask why it was dynamic, I asked why you needed declare + (def ^:dynamic x). Why the declare?

borkdude17:07:39

but even then, I think it's worth refactoring those 100 functions. I've done it myself in a project where a datomic db was referenced using a dynvar. This didn't work, functions referenced the wrong as-of due to laziness.

Jim Newton17:07:53

ah why the declare. without declare you have to define functions in an illogical order. It is better, in my opinion, to define similar functions together, or functions would work on the same problem or which treat the same object. without declare you have to define functions AFTER their dependencies have been defined.

Jim Newton17:07:26

also you have to declare (as I understand) if you want mutually recursive functions or functions which call each other even in a non recursive way.

borkdude17:07:43

ok, thanks for clarifying

Jim Newton17:07:14

One thing I do wonder is, what is supposed to tell me that I have unnecessary declarations. I.e., declarations which might have been necessary at some time but may no longer be?

borkdude17:07:05

@jimka.issy If you're using clojure-lsp (recommended! #lsp) it will show you the number of references next to the var.

3
clojure-lsp 3
Jim Newton17:07:00

what is clojure-lsp ? is that an emacs mode?

borkdude17:07:25

there is an lsp package

borkdude17:07:37

but lsp = language server protocol and this also needs a server running

borkdude17:07:52

and this server is implemented to support a specific language

Jim Newton17:07:04

sounds complicated

Jim Newton17:07:25

how does it relate to cider?

borkdude17:07:44

it sounds complicated but it provides you features that you don't need a REPL for, such as navigation, renaming, reference count, etc. it is not related to CIDER at all

borkdude17:07:58

clojure-lsp uses clj-kondo for static analysis but provides additional tooling on top of this

ericdallo18:07:08

@jimka.issy this may help you understand it: https://emacs-lsp.github.io/lsp-mode/tutorials/clojure-guide/ It also explains that you can use CIDER together if you want it

👀 3
Jim Newton09:07:23

I'm tempted to try it. Question: how much of the feature set of emacs-lsp (and of clojure-lsp in general) depends on adherence to idiomatic use of the clojure language, and how much really is based on correct language semantics? I quite often violate idiomatic usage, expecting the language to work as documented, and find myself fighting with the IDE because of it. For example, I tried out cursive/intelliJ at one point, but abandoned it because it seems cider understood the language better, and required less idiomatic programming. I admit I entered that short experiment already biased toward emacs, so my conclusions may be dubious.

Jim Newton09:07:16

I'm looking at the pages about getting started. and links lead me here https://clojure-lsp.github.io/clojure-lsp/building/

Jim Newton09:07:35

Do I really need to install GraalVM ?

Jim Newton09:07:41

I tried to install lsp-mode from emacs using M-x package-install lsp-mode, and I got the error: package-install-from-archive: http://melpa.org/packages/lv-20181110.1740.el: Not found

borkdude09:07:46

@jimka.issy You will have the same behavior as clj-kondo pretty much with clojure-lsp because it uses clj-kondo for static analysis

borkdude09:07:03

But that also means that if you configure clj-kondo correctly, clojure-lsp will also work better for you

borkdude09:07:23

You don't need to build clojure-lsp yourself, you can install or download a pre-compiled binary.

borkdude09:07:49

> package-install-from-archive: http://melpa.org/packages/lv-20181110.1740.el: Not found This may mean you have to run package-refresh-contents first

3
Jim Newton10:07:19

that did the trick. now installing lsp-mode installed hundreds of things.

Jim Newton10:07:11

I don't really understand the installation page. https://emacs-lsp.github.io/lsp-mode/tutorials/clojure-guide/. It is not clear what I need to install myself, and what M-x package-install lsp-mode takes care of for me.

borkdude10:07:36

@jimka.issy This is my personal config: https://github.com/borkdude/prelude/blob/2466381f2cc438f9c01fcb413fd73bd56903175e/personal/init.el#L357-L410 You have to require lsp-mode manually or using the tool you normally use, e.g. use-package or emacs prelude

Jim Newton10:07:56

yes. did that.

Jim Newton10:07:45

This is what I see in the *Messages* buffer.

LSP :: Download clojure-lsp started.
LSP :: Starting to download  to /Users/jnewton/.emacs.d/.cache/lsp/clojure/clojure-lsp.zip...
Contacting host: 
You can run the command 'lsp-install-server' with M-x l-i-se RET
Contacting host: 
LSP :: There are language server((clojure-lsp)) installation in progress.
The server(s) will be started in the buffer when it has finished.
Mark set
does that mean it is still in the process of installing, or has it finished installing?

Jim Newton10:07:52

or was there an installation error?

borkdude10:07:17

depending on your project and computer, it can take a while before it's finished indexing

borkdude10:07:28

does it say something in your buffer, like some sort of progress bar?

Jim Newton10:07:00

here is what I see on the window decoration:

borkdude10:07:37

oh that's interesting, it's installing clojure-lsp automatically. I've never seen that

Jim Newton10:07:23

the emacs gods are kind to me

Jim Newton10:07:31

I wonder how to know if it finished, or failed ???

Jim Newton10:07:29

and you downloaded the executable from where?

borkdude10:07:41

The line lsp-diagnostics-provider :none is atypical: it disables all clj-kondo linting. I do this because I run clj-kondo myself, most users just use it with clojure-lsp.

borkdude10:07:26

depending on your OS there might be package managers that can install it for you as well

borkdude10:07:35

I do this using brew

borkdude10:07:41

brew also works for linux nowadays

borkdude10:07:09

but that's another tool, just try to do it manually for now

Jim Newton10:07:18

brew install clojure-lsp ?

Jim Newton10:07:20

brew remove clojure-lsp

Jim Newton10:07:31

brew install clojure-lsp/brew/clojure-lsp-native

Jim Newton10:07:13

seems to be working.

Jim Newton10:07:01

I see something intriguing:

Jim Newton10:07:22

why does it think there are tests for some functions but not others? How has it determined that I have a test or not?

Jim Newton10:07:29

Also I see it has problems understanding my code: 😞

Jim Newton10:07:07

it says on one line that gns/or? has 0 references, and just below, where gns/or? is referenced, it says unresolved var.

borkdude11:07:13

You are probably using a non-standard def here?

borkdude11:07:22

usually def does not take a fully qualified symbol

Jim Newton11:07:32

no it is the clojure def.

Jim Newton11:07:57

higher in the file I have

(alias 'gns 'clojure-rte.genus)

Jim Newton11:07:37

no, def can take qualified or unqualified names. defn only takes unqualified names.

Jim Newton11:07:13

and obviously defmethod takes qualified or unqualified names, else it would be impossible for other applications to add methods to a multimethod

borkdude11:07:31

this is not very typical Clojure code, I've never seen this before :)

borkdude11:07:48

for defmethod I agree, but for def, first time I see this

borkdude11:07:22

user=> (def clojure.core/dude 1)
Syntax error compiling def at (REPL:1:1).
Can't refer to qualified var that doesn't exist

Jim Newton11:07:46

I had a long discussion about this somewhere. maybe ClojureVerse, maybe Clojurians. I don't understand why defn explicitly disallows such names, when defn is just a macro expanding to def which DOES allow them.

borkdude11:07:42

it may allow it, but it's probably not intended to allow it, but relying on an implementation detail that the var already existed before you ran this

Jim Newton11:07:09

not sure why you get that error. works fine for me.

borkdude11:07:34

$ clj
Clojure 1.11.0-alpha1
user=> (def clojure.core/dude 1)
Syntax error compiling def at (REPL:1:1).
Can't refer to qualified var that doesn't exist
This is my full REPL output

borkdude11:07:01

The way you would usually do this is using intern

borkdude11:07:17

user=> (intern 'clojure.core 'dude 1)
#'clojure.core/dude

Jim Newton11:07:41

which is the default namespace of the repl?

Jim Newton11:07:56

try it with (def user/dude 1)

borkdude11:07:21

so gns is an alias to the current namespace? then why are you using that alias?

Jim Newton11:07:53

because I have other functions of the same name in other namespaces, and I never want to accidentally confuse them. even when I refactor and move code around

borkdude11:07:11

I guess we could teach clj-kondo about it, although in my opinion this is very much a niche use case

Jim Newton11:07:21

As I mentioned before, my code sometimes depends on the semantics of the language despite commonly used idioms. I know maintaining a tool is difficult. However, in my opinion a tool like clj-kondo should implement the language semantics as much as possible/practical, not try to create a better language.

borkdude11:07:38

I agree with that, but there is a backlog, you know, and this is not my paid full time job, although I would very much like it to be. You're welcome to post an issue about it and eventually it will be solved.

❤️ 3
borkdude11:07:16

There is a way to work around this btw.

borkdude11:07:20

using hooks

borkdude11:07:26

let me try something

Jim Newton11:07:56

is that the article I should be reading about how to handle macros? Or is there a better resource?

borkdude11:07:46

@jimka.issy I've got a workaround

borkdude11:07:07

The config:

{:hooks {:analyze-call {clojure.core/def def-hook/transform-def}}}

borkdude11:07:14

The hook:

(ns def-hook
  (:require [clj-kondo.hooks-api :as api]))

(defn transform-def [{:keys [:node]}]
  (let [[name-node & arg-nodes] (rest (:children node))
        name-sym (api/sexpr name-node)]
    (when-not (simple-symbol? name-sym)
      (let [new-node (with-meta
                       (api/list-node
                        (list*
                         (api/token-node 'def)
                         (api/token-node (symbol (name name-sym)))
                         arg-nodes))
                       (meta node))]
        {:node new-node}))))

borkdude11:07:23

For now it just ignores the prefix in def

borkdude11:07:01

Place this in a file called def_hook.clj in your .clj-kondo dir

borkdude11:07:07

and the config should go into .clj-kondo/config.edn

Jim Newton11:07:10

so do I need to create a .clj-kondo directory somewhere?

borkdude11:07:28

yes, in the root of your project, in the same dir as project.clj or deps.edn

Jim Newton11:07:04

done. now do I need to restart something?

borkdude11:07:24

try touching the file you had before, just type something

borkdude11:07:08

if that doesn't work, remove .lsp/sqlite.db and .clj-kondo/.cache and run lsp-workspace-restart

Jim Newton11:07:08

hmm. seems to work at first glance!

Jim Newton11:07:29

question: at the top of the buffer emacs is now displaying some sort of path to the cursor. Sometimes it is underlined in a squiggly green line but sometimes in a squiggly red line. There doesn't seem to be any hover text telling me what this means. Any idea what it's trying to tell me?

borkdude11:07:41

I think this means there are some warnings or errors in the code. Personally I've turned this off

Jim Newton11:07:50

BTW I was trying to create an issue for this problem. But the new-issue template asks me lots of questions which I don't know the answer to. For example. the kondo version. I don't know how to find this. when I type the suggested clj-kondo --version at the shell, the command is not found.

Jim Newton11:07:13

also I don't know which editor plug-in I am using.

borkdude11:07:48

ok, in this case, just mention the clojure-lsp plugin version + lsp-mode emacs

borkdude11:07:01

then I can backtrack which clj-kondo version it's using

borkdude11:07:39

for this issue it's not really important though

borkdude11:07:57

the version is mostly for reminding people that they probably should upgrade if they have an old version

borkdude11:07:07

sometimes the issues they report is already fixed in a newer one

Jim Newton11:07:15

Thanks for the quick workaround.

Jim Newton11:07:50

sorry, but I didn't understand your response about the latest-and-greatest macros documentation.

borkdude11:07:52

usually there is an easier way to configure macros using :lint-as but :hooks can be used for more advanced macros that have no counterpart syntax-wise
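For a macro whose surface syntax matches an existing core macro, the config can be a one-liner; a sketch with a made-up macro name, telling clj-kondo to lint it as if it were clojure.core/def:

;; .clj-kondo/config.edn
{:lint-as {my.lib/defthing clojure.core/def}}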

Jim Newton12:07:47

reading the section in that file. It is not 100% clear. If I have a macro named xyzzy it seems I need to create a file hooks/xyzzy.clj in the .clj-kondo directory. and in that file define a namespace hooks.xyzzy and use defn to create a function named xyzzy . is that correct?

Jim Newton12:07:06

and then register the hook. with {:hooks {:analyze-call {my-lib/xyzzy hooks.xyzzy/xyzzy}}} where exactly?

borkdude12:07:23

actually, the name of the file and the name of the function in the file don't have to correspond to the macro

borkdude12:07:34

but in .clj-kondo/config.edn you need to make the correct mapping

Jim Newton12:07:15

and this code must be able to run WITHOUT loading my project. correct?

borkdude12:07:18

so {:analyze-call {foo.bar/baz my-hooks/hook1}} is a valid config, but you have to name the file accordingly to this config

borkdude12:07:42

the code runs in an interpreter, isolated from your project

borkdude12:07:58

you cannot use any project dependencies in these hooks, it must be pure clojure (for now)

borkdude12:07:27

remember, you don't need a REPL to use this tooling

Jim Newton12:07:33

so can I just copy the macro code there and return macro-expand blah blah blah ?

borkdude12:07:36

it all works independently from your REPL state or classpath

Jim Newton12:07:58

well it looks like the function must return some sort of wrapper {:node insert-expansion-here}

Jim Newton12:07:23

but the argument of :node, is it just plain old clojure expression? or is it somehow decorated ?

borkdude12:07:32

no, you cannot just copy the macro code there from your original macro

borkdude12:07:46

these are nodes which is a richer format than s-expressions

borkdude12:07:01

they retain more information, more specifically, location information

borkdude12:07:14

so you have to transform the incoming node into another node

borkdude12:07:23

into a syntax that clj-kondo understands

borkdude12:07:34

it doesn't necessarily have to be the macroexpansion of your macro

borkdude12:07:52

as long as you re-use most of the incoming nodes and re-arrange them syntactically in a way that makes sense

Jim Newton12:07:56

but isn't there already a kondo function which takes such an sexpression and returns such a data structure?

borkdude12:07:52

kind of, but clj-kondo really needs the location information as metadata on the nodes in order to produce useful diagnostics

borkdude12:07:11

and transforming nodes into s-expressions is lossy

Jim Newton12:07:24

ok, I admit that I don't yet see the final solution. but shouldn't there be an approximation function where I can just return the macro expansion? and let all the annotation information simply go onto the macro-name in the user code, which clj-kondo already knows the location of?

borkdude12:07:11

that question comes up more often. I have tried this when I implemented these hooks, but it breaks down rather quickly.

Jim Newton12:07:12

am I thinking too naively?

borkdude12:07:32

(defmacro foo [x] x)
(inc (foo 1))
This macro does nothing but return its argument. When calling (inc (foo :foo)) one would like to have a type mismatch warning that you can't call inc with a keyword.

Jim Newton12:07:38

I had another idea. try this out. The first time clj-kondo encounters a new use site for a given macro, it flags it as unknown macro usage. Then provide the user with a way to just do a macro expansion. Then clj-kondo could register that expansion statically with the sexpression being expanded. if it ever finds the exact same macro usage, it uses the same expansion. it would need to save the correspondence between in/out somewhere.

borkdude12:07:36

But when transforming the node that represents (foo :foo) to a s-expression, there isn't a way to hold on location information anymore for the keyword, as keywords don't take metadata. Thus, the transformation of nodes into s-exprs is lossy and doesn't fit well with how the static analysis works.

borkdude12:07:48

This is only a small example, but for macros that take a body representing some function, it becomes more problematic.

Jim Newton12:07:27

I don't doubt that there is a subtle and difficult problem. just it is hard for someone who doesn't understand the code to understand the problem.

borkdude12:07:32

Of course one could try to "repair" the transformed s-expression into a node and try to detect which location corresponds to an original node, but this is not trivial.

Jim Newton12:07:31

when trying to re-calculate the location, why not just assume it is exactly at the position of the macro name. foo in this case, even if the macro call is 100 lines long?

Jim Newton12:07:40

isn't that better than giving false errors?

borkdude12:07:49

The node representation is based on rewrite-clj. https://github.com/clj-commons/rewrite-clj Anyone who knows a bit about this library understands how to write hooks. I agree that direct macro-expansion is more ideal, but this wasn't possible without negative effects.

Jim Newton12:07:19

you mean if there are side effects in macro expansion?

borkdude12:07:04

Yes, one could attach all warnings to the original top level node, that could work, but is less precise. And yes, macro expansion would only work if it was pure Clojure, without any library code in the compilation phase.

Jim Newton12:07:47

that would work for 99% of my macros. maybe 100%

borkdude12:07:01

maybe it's worth revisiting this

Jim Newton12:07:05

well, 99%. I have one macro which is a phd thesis.

Jim Newton12:07:10

wouldn't work for that one 😞

borkdude12:07:26

You're welcome to experiment with this

Jim Newton12:07:12

in summary: give the user a hook where you pass him the macro body from the call site, let him expand it and return the expansion, and you annotate everything as if it is at the open-paren of the macro-usage.

Jim Newton12:07:21

I'll be your test case.

borkdude12:07:10

I think you would have to walk the expansion to annotate it with the location of the outer expression

borkdude12:07:44

I can make a branch in clj-kondo for experimentation

ericdallo12:07:54

I'm proud of borkdude knowing almost all LSP features :p

ericdallo12:07:40

BTW lsp-mode has a automatic installation of clojure-lsp feature that downloads latest release binary :)

borkdude12:07:56

I think that step failed for Jim

borkdude12:07:17

or at least he could not see that the process was finished

ericdallo12:07:58

Probably something to improve on lsp-mode side, that is a particularly new feature

ericdallo12:07:17

Opening a issue on lsp-mode would help a lot

borkdude12:07:44

brew is good enough for me

👍 3
borkdude12:07:02

@jimka.issy Can you give me one of your macros + one example call?

borkdude12:07:03

I tried to go with the def one but that case doesn't work, since a call to clojure.core/def expands into clojure.core/def which will loop forever. This is also a problem that hooks solve: returning no node, just means that the original node will be processed instead.

borkdude12:07:19

I'll just make up another one

Jim Newton12:07:58

(defmacro defn-memoized
  [[public-name internal-name] docstring & body]
  (assert (string? docstring))
  `(let []
     (declare ~public-name) ;; so that the internal function can call the public function if necessary
     (defn ~internal-name ~@body)
     (def ~(with-meta public-name {:dynamic true}) ~docstring (gc-friendly-memoize ~internal-name))
     ))

Jim Newton12:07:32

(defn-memoized [sort-method-keys sort-method-keys-impl]
  "Given a multimethod object, return a list of method keys.
  The :primary method comes first in the return list and the :default
  method has been filtered away."
  [f]
  (cons :primary (remove #{:primary :default} (keys (methods f)))))

(defn-memoized [class-primary-flag class-primary-flag-impl]
  "Takes a class-name and returns either :abstract, :interface, :public, or :final,
  or throws an ex-info exception."
  [t]
  (let [c (find-class t)
        r (refl/type-reflect c)
        flags (:flags r)]
    (cond
      (= c Object)
      :abstract
      (contains? flags :interface)
      :interface
      (contains? flags :final)
      :final
      (contains? flags :abstract)
      :abstract
      (= flags #{:public})
      :public
      
      :else
      (throw (ex-info (format "disjoint? type %s flags %s not yet implemented" t flags)
                      {:error-type :invalid-type-flags
                       :a-type t
                       :flags flags})))))

Jim Newton12:07:47

here's an easier one

(defmacro exists
  "Test whether there exists an element of a sequence which matches a condition."
  [[var seq] & body]
  `(some (fn [~var]
           ~@body) ~seq))
and the callsite
(defn conversion-C3
  "(and A ( not A)) --> SEmpty, unit = STop, zero = SEmpty
   (or A ( not A)) --> STop, unit = SEmpty, zero = STop"
  [td]
  (if (exists [n (operands td)]
              (and (gns/not? n)
                   (member (operand n) (operands td))))
    (zero td)
    td))

borkdude15:07:36

@jimka.issy I have a prototype working now

borkdude15:07:46

I added an API function macroexpand to which you can pass a macro and a node

borkdude15:07:56

and it will expand into a node you can return in the hook

borkdude15:07:47

@jimka.issy Let me know if you are interested in testing this

3
borkdude17:07:25

also you can use https://github.com/borkdude/carve to detect unused vars

seancorfield17:07:44

@jimka.issy If you're struggling with reading clojure.test's (is (= <expected> <actual>)), you might prefer expectations.clojure.test which is 100% compatible with clojure.test but lets you write (expect <expected> <actual>) instead -- and also lets you write things like (expect ::my-spec <actual>) to validate against a Spec or (expect <predicate> <actual>) to validate using a predicate -- https://cljdoc.org/d/com.github.seancorfield/expectations/
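A small sketch of what that looks like (assuming the expectations.clojure.test dependency is on the classpath):

(ns example-test
  (:require [expectations.clojure.test :refer [defexpect expect]]))

(defexpect equality-and-predicates
  (expect 6 (* 2 3))        ; expected value first, actual second
  (expect even? (* 2 3)))   ; a predicate can stand in for the expected value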

Jim Newton17:07:58

VOILA! finally after many weeks, all my tests pass!!!

🎉 12
dgb2320:07:02

Perfect start into weekend

3
Jim Newton17:07:54

funny thing. I had a test which was taking a long time to run. I spent some time investigating why the function being tested was so slow. After a while, I looked at the actual test, and it had a repeat loop generating 100000 samples. It really needed 1% of that amount. sometimes the bug is not where you think it is.

💯 3
Ben Sless19:07:49

Helped a colleague optimize some algorithm he implemented in Clojure once. While I did speed it up significantly, an equal amount of speedup was gained by moving computation from run-time to when the data structure was built.

🙂 3
Jim Newton13:07:55

That's a great advantage of having the full power of the language at compile time.

Ben Sless13:07:50

This specifically was just done by shoving computation after reading configuration, so it was still in the running application and not "compile time", but yes, having entire language at compile time, and compile time being all the time is very powerful

kristof23:07:02

has anybody here tried clojure with the new zgc collector on Linux? Any aberrations or unexpected memory explosions?
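For anyone who wants to try it, enabling ZGC is just a matter of JVM options; a sketch using a deps.edn alias (the alias name, heap size, and my.app namespace are placeholders, and on JDK 11-14 you also need -XX:+UnlockExperimentalVMOptions):

;; deps.edn
{:aliases
 {:zgc {:jvm-opts ["-XX:+UseZGC" "-Xmx4g"]}}}

;; then run the app with the alias:
;; clj -M:zgc -m my.app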

kristof23:07:59

yeah I was wondering what the difference was... shenandoah is still single-generation mark sweep, but zgc does crazy things with memory regions (each memory region cycles from from-space to to-space) and tagging pointers

kristof23:07:15

linux only however. not sure why that is but i suspect if I reread the docs it would be enlightening

deleted23:07:36

> It scales from a few hundred MB to TB-size Java heaps, while consistently maintaining very low pause times—typically within 2 ms

kristof23:07:06

the most important part though is your worst case pauses (see: the tail at scale, a google paper). zgc and shenandoah are looking at like... 10ms, maybe 20ms at most worst case pauses, and those are your 99th percentile pauses

kristof23:07:56

which part is what - that heap size obliviousness?

kristof23:07:03

or the pause times

kristof23:07:48

the heap stuff is a reason I'm interested, I found out recently that Go has some pathologically bad behavior for small heap sizes... and big ones!

kristof23:07:16

so making it scale up and down like that... that's a hell of a trick

kristof23:07:50

the way all concurrent collectors do it is to pace the collector thread(s) with the allocs. so the more you alloc, the more you collect, and one kind of follows the other. the disadvantage (with any concurrent collector) is that you lose throughput. but that's the tradeoff for being able to satisfy those soft realtime constraints

kristof23:07:01

> having a GC per "process" is just such a cute hack
super agree. Google tried to do something similar to this and the generational hypothesis where they were like "most garbage is scoped to the lifetime of a request" and tried to do something they were calling "request oriented garbage collection". but then it turned out it slowed down most of their go programs so they killed the idea

didibus16:07:57

I think the best I saw was a person trying to hardware accelerate the GC, I thought that was an interesting avenue

kristof23:07:00

I bet for a highly concurrent clojure application though, with true independence between requests, and because data is immutable, you would get a LOT of mileage out of per-request nurseries. fast bump allocs, good cache coherence and you have a better idea of when the objects are going to die (end of the request). would be an interesting experiment

kristof23:07:11

ok I'm going way off topic, sorry all. again, if anyone is using zgc with clojure, let me know your experience... I want to know if it interacts poorly with clojure's "create a ton of objects and never look back" behavior

partywombat 2
Ben Sless08:07:28

I need to finish my stress testing project, it should provide some insight on that question