#beginners
2021-07-28
ChillPillzKillzBillz13:07:44

Noob question... can someone explain the '::' notation in Clojure? I understand the single colon for keys in a hash-map... never understood the double version. What can one do with it?

manutter5113:07:26

That means “define a namespaced keyword in the current namespace.”

manutter5113:07:49

So if you’re in the app.utility namespace, and you say ::this-key, you’re defining a key whose fully-qualified name is :app.utility/this-key
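A quick illustration of the difference, evaluated from a hypothetical app.utility namespace:

(ns app.utility)

:this-key    ;; => :this-key (plain, unqualified keyword)
::this-key   ;; => :app.utility/this-key (qualified with the current namespace)

;; :: also resolves namespace aliases:
(require '[clojure.string :as str])
::str/foo    ;; => :clojure.string/foo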

🙏 3
Rob Haisfield14:07:21

Any rules of thumb for what makes higher / lower performance Clojure programs?

emccue14:07:01

minimizing unneeded intermediate data structures, i would assume

emccue14:07:17

so using transducers and an into versus a chain of maps
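For example, a sketch of the difference: the threaded version realizes an intermediate lazy seq per step, while the transducer version does the work in a single pass into the output vector.

;; chained lazy sequences: each step builds an intermediate seq
(->> (range 1000)
     (map inc)
     (filter even?)
     (into []))

;; transducer version: same result, one pass, no intermediates
(into []
      (comp (map inc) (filter even?))
      (range 1000))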

emccue14:07:48

or using a reducible "stream" into the final shape you want like w/ next.jdbc
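A sketch of the reducible approach with next.jdbc's plan; the datasource, database, and table names here are placeholders.

(require '[next.jdbc :as jdbc])

;; plan returns a reducible: rows are streamed from the ResultSet and
;; reduced straight into the final shape, without building a full
;; intermediate result set.
(def ds (jdbc/get-datasource {:dbtype "h2:mem" :dbname "example"}))

(into #{}
      (map :email)
      (jdbc/plan ds ["SELECT * FROM users"]))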

dgb2315:07:15

As with any language/system: measure and analyse before optimising. There are many useful techniques that enable optimisation in some way or another. A notable one that hasn’t been mentioned yet is transient data structures (there are good examples of their usage in the core lib). Transients are idiomatic for their use case and not necessarily some arcane thing you do “just” for optimisation, if that makes sense. Maybe the best rule of thumb would be to write idiomatic code (sometimes that means choosing the right data structures, functions and techniques, which are assumed to be optimised) and only do additional optimisations manually when and where you need them.
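As a concrete sketch of the transient pattern (an illustrative helper, similar in spirit to how clojure.core/frequencies is written):

(defn index-by
  "Build a map of (key-fn item) -> item, using a transient map
   internally and returning an ordinary persistent map at the end."
  [key-fn coll]
  (persistent!
   (reduce (fn [m item] (assoc! m (key-fn item) item))
           (transient {})
           coll)))

(index-by :id [{:id 1 :name "a"} {:id 2 :name "b"}])
;; => {1 {:id 1, :name "a"}, 2 {:id 2, :name "b"}}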

dgb2315:07:14

Hand-optimised code that doesn’t need to be optimised gets ugly and hard to change, from what I’ve seen, and might not really do anything useful anyway. Also, I think one shouldn’t underestimate how crazy strong the JVM is.

dgb2315:07:50

Look here for a bit of advice about java interop and optimization: https://clojure.org/reference/java_interop#primitives
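The short version of that advice is to add type hints so interop calls don't go through reflection; a small illustrative example:

(set! *warn-on-reflection* true)   ;; report reflective interop calls

(defn string-len ^long [^String s]
  ;; with the ^String hint, .length compiles to a direct method call
  (.length s))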

fubar15:07:31

Do Clojure programmers using Vim style editors also use paredit? I’m having trouble getting paredit to play nicely with Vim in Calva, and wondering if I will be missing out on the Clojure experience without paredit.

dpsutton15:07:16

I think you would be missing quite a bit of developer productivity, which I consider a necessity. My former coworkers at my last spot were all vimmers and used paredit in vim all the time; they work well together conceptually. I don't know what issues you are having.

zackteo15:07:29

There's other perhaps less standard options of course, like https://github.com/abo-abo/lispy . But it is an Emacs package. Basically Paredit but in a sort of modal editing way

❤️ 3
schmee15:07:04

I’ve used https://github.com/eraserhd/parinfer-rust for a long time and I wouldn’t edit Clojure code without it. I would definitely recommend some sort of assistance for all the parentheses, be it parinfer or paredit or something else

👍 3
fubar21:07:32

Thanks! I couldn’t get Calva’s Paredit mode to play nicely with Vim, so I ended up trying this and it seems good so far!

👍 3
dpsutton15:07:09

I think they use Calva so emacs packages are a non-starter

ahungry15:07:14

evil mode + lispy + lispyville = where it's at

👍 3
Jakub Šťastný16:07:43

I really like lispy, but I found it confusing as an EVIL user: I might think I'm in EVIL normal mode and press / for search, only to find I was actually in lispy mode and it did lispy-splice, editing the current expression. I also had to disable evil-collection; I believe this is the same bug I was experiencing: https://github.com/emacs-evil/evil-collection/issues/116 Which is a bummer, since evil-collection is very handy.

For a Clojure developer, what's relevant is that [ and ] are used for navigation (mapped in Emacs mode, though not in EVIL insert mode), so when in Emacs mode one cannot type [ and ] without changing the mapping. Lispy works with Emacs mode, so things like m mark words the Emacs way rather than via EVIL visual mode, which I find very confusing.

I'm yet to really discover how to properly configure lispyville; as of now I'm not sure what is and isn't possible. Lispyville without any configuration doesn't do much: it only makes sure things like dd or D won't unbalance parentheses.

Overall it's been pretty tough up to now, but I'm getting used to it, slowly but surely, and it really helps a lot. I would not like to have all the weird keyboard shortcuts that paredit has; that way of working really ain't for me. So I'll probably just set up a clearer indicator of whether I'm in lispy mode or EVIL mode, and then I should be fine. It's a shame though that the keybindings aren't more Vim-style.

Andrew Berrien21:07:21

I'm having some trouble using loop/recur. My goal is basically to reduce over a list of commands (storing key/values), but one of the commands is to "jump backwards", repeating previous commands. So I figured a traditional reduce would not be enough and went for loop/recur, passing todo and done sequences. The code works on some small tests, but when I load up the full set of commands, I get a StackOverflowError!

Andrew Berrien21:07:44

Can anyone tell me why tail call optimization is not working in my code? https://gist.github.com/APB9785/e24af137b8207ff7fe73376ae5783cdf

Andrew Berrien21:07:27

(I'm new to Clojure so if it looks un-idiomatic please feel free to tell me my mistakes)

sova-soars-the-sora21:07:33

The most immediate thing to me is that there are multiple recur statements, and I think you could get away with just one, using if statements to provide the "return values". For example, since all the recur statements use the same two values in the first and second loop args, this:

(case (first (first todo))
       "cpy"
       (recur (rest todo)
              (cons (first todo) done)
              (run-cpy (first todo) state)) ;unique run-cpy

       "jnz"
       (let [[new-todo new-done new-state] (run-jnz todo done state)]
         (recur new-todo new-done new-state)) ;unique ... the whole let

       "inc"
       (recur (rest todo)
              (cons (first todo) done)
              (run-inc (first todo) state)) ;unique run-inc

       "dec"
       (recur (rest todo)
              (cons (first todo) done)
              (run-dec (first todo) state)))))) ;unique run-dec
could become something like
(recur (rest todo)
       (cons (first todo) done)
       (case (first (first todo))
         "cpy" (run-cpy (first todo) state)
         "inc" (run-inc (first todo) state)
         "dec" (run-dec (first todo) state)))
;; ("jnz" would still need its own branch, since it replaces todo and done as well)
Although I don't know if that gets at the root cause of an excellent stack overflow... sometimes there is too much excellence. But yeah, that is another way of writing it and might make it clearer what is happening in the recur. So, typically a StackOverflow happens when there are too many things stacked up... Do you have an exit condition for your loop? I think that might be the bug we're after... You can check whether something about the args you pass in to the next loop state is equal to some value or greater than some number... and simply return a result.

emccue21:07:40

@andrewpberrien I haven't parsed it fully, but this

emccue21:07:44

(concat (reverse (take offset todo)) done) is my suspect

💯 2
Russell Mull21:07:09

I'm not 100% sure, but I've managed to run your code (against the advent of code 2016 test data, right?) and the stack trace looks like this:

at clojure.lang.LazySeq.seq(LazySeq.java:51)
        at clojure.lang.RT.seq(RT.java:535)
        at clojure.core$seq__5402.invokeStatic(core.clj:137)
        at clojure.core$drop$step__5925.invoke(core.clj:2927)
        at clojure.core$drop$fn__5928.invoke(core.clj:2932)
        at clojure.lang.LazySeq.sval(LazySeq.java:42)
        at clojure.lang.LazySeq.seq(LazySeq.java:51)
        at clojure.lang.RT.seq(RT.java:535)
        at clojure.core$seq__5402.invokeStatic(core.clj:137)
        at clojure.core$drop$step__5925.invoke(core.clj:2927)
        at clojure.core$drop$fn__5928.invoke(core.clj:2932)
        at clojure.lang.LazySeq.sval(LazySeq.java:42)
        at clojure.lang.LazySeq.seq(LazySeq.java:51)
        at clojure.lang.RT.seq(RT.java:535)
        at clojure.core$seq__5402.invokeStatic(core.clj:137)
        at clojure.core$drop$step__5925.invoke(core.clj:2927)
        at clojure.core$drop$fn__5928.invoke(core.clj:2932)
        at clojure.lang.LazySeq.sval(LazySeq.java:42)
        at clojure.lang.LazySeq.seq(LazySeq.java:51)
        at clojure.lang.RT.seq(RT.java:535)
        at clojure.core$seq__5402.invokeStatic(core.clj:137)
        at clojure.core$drop$step__5925.invoke(core.clj:2927)
        at clojure.core$drop$fn__5928.invoke(core.clj:2932)
        at clojure.lang.LazySeq.sval(LazySeq.java:42)
        at clojure.lang.LazySeq.seq(LazySeq.java:51)
        at clojure.lang.RT.seq(RT.java:535)
        at clojure.core$seq__5402.invokeStatic(core.clj:137)
        at clojure.core$drop$step__5925.invoke(core.clj:2927)

emccue21:07:13

concat is real prone to stack overflows when you have recursive concats
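A minimal illustration of the failure mode (not the original code): each concat wraps the previous lazy seq in another lazy layer, and realizing the result has to unwind all those layers at once.

(defn build-up [n]
  ;; wraps the accumulator in a new lazy concat on every step
  (reduce (fn [acc i] (concat acc [i])) [] (range n)))

(count (build-up 100000))
;; => StackOverflowError on a typical default JVM stack size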

emccue21:07:54

based on the stack trace, now this

emccue21:07:56

(concat (reverse (take (- 0 offset) done)) todo) (drop (- 0 offset) done)

emccue21:07:12

just make all those concats eager with a doall

Russell Mull21:07:16

My hunch would be that you're stacking up a big old chain of lazy sequences as you run through this thing, and when you finally evaluate the result, actually evaluating them is too much for the stack

emccue21:07:18

see if that makes a difference

Andrew Berrien21:07:49

I heard about doall but I don't think I tried it on the concats. I'll try it.

emccue21:07:59

idk if it will work on concats tbh

emccue21:07:02

but its a start

Andrew Berrien21:07:14

I wasn't sure what the best way is to basically merge two sequences like that

Russell Mull21:07:18

Think of it this way: every time you do a lazy sequence operation, you're essentially adding a node in a linked list that's going to be recursively processed somewhere down the line. If the size of that list depends on your input, then a large input is going to make you stack overflow later on.

Andrew Berrien21:07:02

I still get the StackOverflowError with (doall (concat ...))

Andrew Berrien21:07:27

Oh but if I put it around the concats AND the drops... now it's running

Russell Mull21:07:08

you probably want to do this once every time through your loop

Russell Mull21:07:05

or just making all your operations eager could be reasonable as well

Andrew Berrien21:07:16

What would it look like to make all my operations eager?

Russell Mull21:07:06

doall, what you've done

Russell Mull21:07:58

a way to do this at the loop point:

Russell Mull21:07:00

(defn run-commands [todo state]
  (loop [todo todo
         done []
         state state]
    (let [state (doall state)
          done (doall done)
          todo (doall todo)]
      (if (= 0 (count todo))
        state
        (case (first (first todo))
          "cpy"
          (recur (rest todo)
                 (cons (first todo) done)
                 (run-cpy (first todo) state))

          "jnz"
          (let [[new-todo new-done new-state] (run-jnz todo done state)]
            (recur new-todo new-done new-state))

          "inc"
          (recur (rest todo)
                 (cons (first todo) done)
                 (run-inc (first todo) state))

          "dec"
          (recur (rest todo)
                 (cons (first todo) done)
                 (run-dec (first todo) state)))))))

Russell Mull22:07:27

It's between useless and harmful to do this on 'todo'

Russell Mull22:07:59

OH, I see what you're doing here. It's a little VM, and you're keeping a list of the upcoming instructions, which is updated when you do 'run-jnz', is that right?

Russell Mull22:07:23

I'd suggest an alternate approach: put a program counter in your state.

Russell Mull22:07:50

If you load the input program into a vector, you can index into it directly
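For example, with a purely illustrative instruction format (not the gist's):

(def program [[:cpy 41 :a] [:inc :a] [:dec :a] [:jnz :a -2]])

(nth program 2)                  ;; => [:dec :a]
;; a jump is just arithmetic on the program counter:
(update {:pc 3 :a 1} :pc + -2)   ;; => {:pc 1, :a 1}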

Andrew Berrien22:07:19

@russell.mull Oh, that is a good idea! You are right that it is a simple VM and I am using todo and done for the instructions, which is straightforward except for when jnz comes up

Russell Mull22:07:42

@andrewpberrien I hacked up a version that works that way, if you'd like to take a look at it. Totally understand if you don't.

Russell Mull22:07:24

(But it doesn't terminate on the input_12 file.. 😓 must have broken something in jnz)

Andrew Berrien22:07:05

Sure! I'm working on my own version of it right now too. If you get it working, I'd love to compare it to mine

Russell Mull22:07:46

some salient points:

Russell Mull22:07:26

• I normalized the input into [:instr arg arg] vectors up front, which is a common way to deal with dynamically dispatched things

Russell Mull22:07:48

• I'm using multimethods for the dispatch, because it's just too clean not to do it here.

Russell Mull22:07:49

• The 'state' argument is always the first argument. If you're taking a single structure and modifying it according to some parameters, the main thing always comes first by convention. This is so it works cleanly with ->

Russell Mull22:07:31

• It's still broken 🙂
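A minimal sketch of that shape: state-first multimethods dispatching on [:instr arg arg] vectors, with a program counter in the state. Names and details are illustrative, not Russell's actual code.

(defmulti exec
  "Execute one instruction against the VM state."
  (fn [_state [instr & _]] instr))

(defmethod exec :inc [state [_ reg]]
  (-> state (update reg inc) (update :pc inc)))

(defmethod exec :dec [state [_ reg]]
  (-> state (update reg dec) (update :pc inc)))

(defmethod exec :cpy [state [_ src dst]]
  (-> state
      (assoc dst (if (keyword? src) (state src) src))
      (update :pc inc)))

(defmethod exec :jnz [state [_ test offset]]
  (let [v (if (keyword? test) (state test) test)]
    (if (zero? v)
      (update state :pc inc)
      (update state :pc + offset))))

(defn run [program]
  (loop [state {:pc 0 :a 0 :b 0 :c 0 :d 0}]
    (if (< (:pc state) (count program))
      (recur (exec state (nth program (:pc state))))
      state)))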

Andrew Berrien23:07:45

I just updated mine using all the advice that was given to me:

Andrew Berrien23:07:49

Just one recur statement, index access, much simpler approach

Andrew Berrien23:07:34

Interestingly, even though this version is much more readable, it doesn't seem to run that much faster than the todo/done version

sova-soars-the-sora04:07:08

bravo! looks very nice 😄

❤️ 3
Russell Mull21:07:13

I /think/ those are all sequences?

Andrew Berrien22:07:07

Thanks @russell.mull for the ideas of using let ... doall and also the alternative implementation using a vector/hashmap for access by index. And @sova @emccue thank you for the quick help getting my code running.

Andrew Berrien22:07:33

In my primary language everything is eager unless you specify otherwise, so I was caught off-guard for a minute, but now I understand what I have to do in the future.

🎆 3
Rob Haisfield22:07:58

@denis.baudinot how would you distinguish between code that needs to be hand optimized and code that doesn’t need to be? Context: https://clojurians.slack.com/archives/C053AK3F9/p1627485014380700?thread_ts=1627481121.379100&cid=C053AK3F9

Russell Mull22:07:48

Measure performance using a profiler, and see what it points at.

seancorfield23:07:52

Avoid premature optimization -- assume your readable code will be fast enough until you have evidence it isn't...

seancorfield23:07:36

(and don't try to do micro-benchmarks -- no one cares if a piece of code runs 50% faster if it isn't a bottleneck in the first place!)

sova-soars-the-sora05:07:24

Yeah. Maybe some ideas I would throw at the question: It works? Great. You can understand what it does at first glance? Superb. If hand-optimization makes it less understandable at first glance, then maybe punt downfield for later. If hand-optimization solves a bottleneck or would speed up an often-called thing, and can be tucked away as its own innovation in its own function you can call, implement it [eventually].

dgb2308:07:25

“Make it work, make it right, make it fast” [Kent Beck](https://en.wikipedia.org/wiki/Kent_Beck)

dgb2309:07:27

Although again, there is a useful distinction between what we mean by “hand optimised” vs “idiomatic and performant”. If we don’t care at all about the intermediate representations of an iteration (or similar), we can use transients (see the implementation of frequencies in clojure.core). If we think of something as an explicit composition of sequential transformations, then transducers are a good fit. On the other hand, if we don’t want to care about which parts of a data structure are realised in a different context, then laziness is the way to go. Sometimes there is clear, duplicated effort/computation that we can avoid with simple things like let or memoization, and sometimes (rarely) with macros to steer evaluation order (when we know enough in advance but want to keep our code declarative).

However, I almost never have to do real hand optimisation of general code, working on small to medium projects, mostly on the web. The most common things I do are caching/memoization of expensive stuff that I cannot make less expensive, and then a little bit of SQL-related work. The most “hand optimised” thing lately was a migration where I read whole (big) documents fully into memory in a loop, so it filled up memory too fast. That was kind of expected though, and easy to fix. It could have been done right in the first place anyway, so I’m not even sure it counts.
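For the caching/memoization case mentioned above, the simplest tool is clojure.core/memoize; an illustrative sketch:

(defn expensive-lookup [x]
  (Thread/sleep 1000)   ;; stand-in for slow work that can't be made cheaper
  (* x x))

(def cached-lookup (memoize expensive-lookup))

(time (cached-lookup 4))   ;; ~1000 ms the first time
(time (cached-lookup 4))   ;; near-instant: the result comes from the cache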

dgb2309:07:05

There is also “When in doubt, use brute force.” by Ken Thompson.

dgb2313:07:57

Another one: “Optimization hinders evolution.” - Alan J. Perlis

phronmophobic18:07:41

My favorite is: "You can't make code run faster. You can only make it do less." Often, you get a lot of bang for your buck just by writing clear, direct code.

🚀 5
Rob Haisfield14:07:20

Multiple people have mentioned only hand optimizing when you have bottlenecks... how do you find the bottlenecks?

dgb2314:07:19

In simple cases you can literally see it from your logs, from your REPL session, or even just from using the thing you wrote. From there you can divide and conquer, for example by skipping intermediate results. A more sophisticated approach is profiling and benchmarking with a proper tool; browsers, for example, have this built in for network requests.
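On the JVM side, a rough-and-ready start is time at the REPL, with something like the criterium library for more careful benchmarks (this assumes criterium is added as a dev dependency):

(time (reduce + (range 1e7)))
;; prints "Elapsed time: ... msecs"

(require '[criterium.core :as crit])
(crit/quick-bench (reduce + (range 1e7)))
;; reports mean execution time, standard deviation, etc. over many runs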