
one thing I've experimented with is adding extraneous semantic terms to the (m/cata) sub expressions


so that it's easy to demarcate which subexpression it's meant to recurse into


my general takeaway this week though is mixed; the syntax/form is so concise that it doesn't feel like a chore, but debugging the recursion patterns is quite frustrating


caveat: that might also be colored by basic clojure debugging knowledge gaps


don't really have a solution/anything actionable; just sharing thoughts out loud


maybe if anything, next week i'll add some debugging walkthrough examples to the cookbook


> one thing I've experimented with is adding extraneous semantic terms to the (m/cata) sub expressions so that it's easy to demarcate which subexpression it's meant to recurse into

There is a pattern that we've used a few times to accomplish what you are talking about:

(m/rewrite [[1 2] [3 4]]
  [?x ?y]
  [(m/cata (`vec-pattern-1 ?x)) (m/cata (`vec-pattern-2 ?y))]

  (`vec-pattern-1 [?a ?b])
  {:vec1 [?a ?b]}

  (`vec-pattern-2 [?a ?b])
  {:vec2 [?a ?b]})
Here I have two completely overlapping patterns, but I can decide which one I want to cata into. In zeta, we use this pretty extensively. Here is an old commit where you can see it directly at play; we have actually abstracted it out a bit in new versions because we used it so often. In the current version we call them meta functions and have a macro that generates some defsyntax patterns for us.

👌 3

A trick I do for debugging these things is to add a catch-all and intentionally misspell something.

(m/rewrite [[1 2] [3 4]]
  [?x ?y]
  [(m/cata (`vec-pattern-1 ?x)) (m/cata (`vec-pattern-2 ?y))]

  (`vec-pattern-misspelled [?a ?b])
  {:vec1 [?a ?b]}

  (`vec-pattern-2 [?a ?b])
  {:vec2 [?a ?b]}
  ?x ?x)

;; =>
[(wander.core10/vec-pattern-1 [1 2]) {:vec2 [3 4]}]
So now I can see exactly what was going to go down the vec-pattern-1 code path. I will admit this isn't ideal but pretty handy. We have talked about how if we had an interpreter we could maybe make debugging these things easier.


ah this is pretty great


i think i'll definitely walk through some debugging examples then in the cookbook


> but debugging the recursion patterns is quite frustrating

I have felt this one too once I’ve built up a large enough system. It’s a problem with a solution and it’s pretty close to the top of the todo list.

👍 3

somewhat related, i'm having trouble determining when non-exhaustive matches trigger failures vs. not


e.g. i thought i had to add (m/some ) everywhere on my map expression matches but that doesn't seem to be the case


and last night I noticed for the first time that non-exhaustive matches sometimes don't trigger errors? I've been using (m/cata) & (m/rewrite) a lot more, if that adds more context


Only match complains about exhaustion.


Normally what I do for rewrite is to make my final clause look something like

(m/rewrite ,,,
  ?x
  [:MISTAKE ?x])
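A fuller sketch of that catch-all idea, assuming meander.epsilon is required as m (the `simplify` name and the arithmetic rules here are hypothetical, just for illustration):

```clojure
(require '[meander.epsilon :as m])

;; A small rewrite system with a catch-all final clause.
;; Any input no earlier clause handles falls through to the
;; [:MISTAKE ...] clause and stands out loudly in the output.
(defn simplify [expr]
  (m/rewrite expr
    (+ ?x 0) ?x          ;; x + 0 => x
    (* ?x 1) ?x          ;; x * 1 => x
    ?x [:MISTAKE ?x]))   ;; everything else is flagged
```

With this clause in place, an input like `(- 2 0)` comes back tagged as `[:MISTAKE ...]` rather than failing in some less visible way, so the hole in the rule set is obvious at a glance.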


that would explain it!


And this isn’t immutable behavior, we can support complaining about exhaustion in rewrite.


I suppose to that extent we could also do so for find.


oh? i was going to leave that convo thread for another day but now that you bring it up.. :)


Ha! Yeah, by all means share your thoughts. I mean, if people don’t share their struggles and other people don’t run into them personally, we can’t help.

👍 3

the meta questions i had were: what are the motivating reasons for said behavior? and assuming it wasn't unintuitive "here be dragons", if it's possible to change the default behavior:
- auto complain on exhaustion
- force {:key val} to match without using (m/some)


my reasons: i'd rather default to "strict mode" and have to put in extra effort to accommodate slop; i prefer immediate failfast => easier to debug


The behavior, I think, fell out of history and not really a particular design choice. I started with match, search, and substitute first, then created find, and then based rewrite on top of find and substitute.


Yeah, and I’m inclined to support an option to make that possible.


I’ve just gotten in the habit of jotting down a catch all “mistake” rule at the bottom of the system when I start.


yeah makes sense and honestly, it might be a moot point/aesthetic thing


bc that's the first thing i do as soon as i hit a bug


i think at this point all my meander matches have a catchall that throws an exception with stacktrace
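For the throwing variant, one shape that works is to use the fact that in m/match the right-hand side is an ordinary Clojure expression — a sketch assuming meander.epsilon as m; `parse-op` and its rules are made-up names:

```clojure
(require '[meander.epsilon :as m])

;; In m/match the right-hand side is plain Clojure, so the
;; catch-all clause can throw with the offending value attached
;; as ex-data, giving an immediate stack trace at the bad input.
(defn parse-op [form]
  (m/match form
    [:add ?a ?b] {:op :add :args [?a ?b]}
    [:mul ?a ?b] {:op :mul :args [?a ?b]}
    ?other (throw (ex-info "unhandled form" {:form ?other}))))
```

match would already complain about exhaustion on its own, but an explicit throwing clause lets you control the message and attach the unmatched value as ex-data.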


what do you & @jimmy put in those clauses? (subtext: a. wonder if there's some nice tricks in there b. is it something you do all the time/common => maybe a case for making it a default?)


Mine typically looks like what @noprompt mentioned above. Something that stands out like [:INCOMPLETE ?x]. That way if I’m doing a recursive pattern I can see where in my recursion I am stopping.

👌 3

So something like

^::m/flag-to-throw (m/find ,,,)

👌 3

@jimmy will probably do a better job explaining this than I will but the idea behind the rules like

(`name ?x)
{:name ?x}

[!xs ...]
[(m/cata (`name !xs)) ...]
is that you’re emulating a function call.
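One way to read the "emulating a function call" point is as a plain-Clojure analogy (the names here are hypothetical): the (`name ?x) wrapper plays the role of a function name, dispatching each element to a dedicated rule, much like mutually calling functions would.

```clojure
;; Plain-Clojure analogue of the tagged-cata rules above:
;; the rule (`name ?x) -> {:name ?x} behaves like name-rule,
;; and the vector rule behaves like mapping it over the elements.
(defn name-rule [x]
  {:name x})

(defn vec-rule [xs]
  (mapv name-rule xs))

(vec-rule [1 2 3])
;; => [{:name 1} {:name 2} {:name 3}]
```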

👌 3

It’s a way to give you control.


It’s also extremely intoxicating. 🙂


haha; yeah, it really is


But a case in point for needing explanations when shit doesn’t work.


i have to admit, it was pretty amazing the first part of the week; unfortunately the debugging aspect was the only bummer, but i think that can be solved


I think it’s really cool when people get to the point you’re at though because in terms of experience I’m more or less where you are and at this point we can just trade techniques.


yeah; it's making very complex term rewriting and graph optimizations really tractable


or rather, i've been able to quickly do simple term rewriting and constant folding passes


that i normally would've relegated to a whole separate pass with work building up necessary "infrastructure"


(unrelated but btw, @jimmy since i saw you on futureofcoding, all of this is for implementing something similar to gibber or s-ol's alv demo but for C/hlsl)

👍 3

This is kind of the best situation I could hope for. It confirms a point I’ve made about regular expressions elsewhere, which is that because the semantics of the symbols don’t change from context to context, you no longer need context to talk about a particular program.


yup. it's also really easy to read the transformation after the fact


and also made me realize how much "unnecessary" work I do that goes into just changing the shape of data


Slightly off topic, are you familiar with the nano pass stuff?


especially in exploratory/design phase


no not at all. what's that?


Basically the idea is that you build a compiler out of all these little passes that do one thing after another to an AST, etc.
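A toy illustration of that nanopass idea in plain Clojure (all names here are made up for the sketch): each pass is a tiny tree-walking function that does exactly one thing, and the "compiler" is just the passes composed.

```clojure
(require '[clojure.walk :as walk])

;; Pass 1: constant-fold (+ a b ...) when every argument is a number.
(defn fold-constants [ast]
  (walk/postwalk
   (fn [node]
     (if (and (seq? node)
              (= '+ (first node))
              (every? number? (rest node)))
       (apply + (rest node))
       node))
   ast))

;; Pass 2: strip (identity x) wrappers.
(defn strip-identity [ast]
  (walk/postwalk
   (fn [node]
     (if (and (seq? node) (= 'identity (first node)))
       (second node)
       node))
   ast))

;; The compiler is nothing more than the passes run one after another.
(def compile-ast (comp strip-identity fold-constants))

(compile-ast '(identity (+ 1 (+ 2 3))))
;; => 6
```

Each pass stays small enough to read in one glance, which is the whole appeal — and it maps pretty directly onto writing each pass as one m/rewrite.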


haha oh my god i'm going to hate cry laugh if it's what i think it is


Anyway, there’s a paper about it and that’s where cata came from.


ah no shit. thanks for the link, i'll definitely read into it tonight


But it’s only on the match side.


i definitely wish there was way more tooling/infra for creating DSLs but I couldn't find much when I started all of this


cata on the substitution side was like a light-year leap for rewrite. It made it possible to do a bunch of stuff that was really awkward to do before it.


Totally. And we have really fantastic stuff like instaparse in Clojure, but doing anything remotely interesting with those ASTs using stock Clojure is tedious.

☝️ 3

Because there are commonly complex relationships between siblings, grandnodes, etc that are painful to code up.


imagine trying to do that in C++ 😅 <=== wrestling with templates at 3 am, I heard the "it was at this moment that he knew he fucked up" and started looking into nim + clojure the next day


Maintaining a pattern, to press the Clojure vocabulary buttons, is simple and easy.


(btw, anything else you'd recommend in the line of nanopass? the only other thing i've seen in that space is the MLIR effort by llvm. i'm mostly a low-level graphics guy, so only 3 months into knowing about compilers/prog. langs. or the space)


Yeah, this is why I’m really excited about what you’re hacking on by the way.


(*ah i should say, i also learned a lot looking at nasser's mage compiler; that approach seemed really good)


I dunno, there’s so much stuff to recommend but normally I don’t unless it’s about a particular topic. I heard about the nano pass framework a few years ago, forgot about it, and then someone here mentioned the cata thing and brought that back to my attention.


This stuff makes me wonder “why the hell aren’t we being encouraged to exploit :inline and :inline-arities?”


@noprompt i'm watching the nanopass talks and it's fantastic; if nothing else, it's highlighting some interesting ways i could architect the dsl compiler. i'm missing the link to meander though?


It’s mentioned on pages 4 - 5.


> The match form also supports catamorphisms [68] to recur through the sub-expressions of the input forms. A catamorphism, for our purposes, recurs through sub-forms in the language until a terminal case, such as x or (quote d), is found. The simplify pattern can be rewritten to use catamorphisms as follows:

(define simplify
  (lambda (x)
    (match x
      [,x (guard (symbol? x)) x]
      [(quote ,d) `(quote ,d)]
      [(if ,[e0] ,[e1]) `(if ,e0 ,e1 (void))]
      [(if ,[e0] ,[e1] ,[e2]) `(if ,e0 ,e1 ,e2)]
      [(begin ,[e*] ... ,[e]) (make-begin e* e)]
      [(lambda (,x* ...) ,[body*] ... ,[body])
       `(lambda (,x* ...) ,(make-begin body* body))]
      [(letrec ([,x* ,[e*]] ...) ,[body*] ... ,[body])
       `(letrec ([,x* ,e*] ...) ,(make-begin body* body))]
      [(,[e] ,[e*] ...) `(,e ,e* ...)])))
> Here, the square brackets (`[ ]`) in the syntax ,[e0] indicate that a catamorphism should be applied.


I really would love to have something as lightweight as [ ] instead of cata.


this is really exciting though; i've been fumbling around trying to architect the compiler and its various passes in an extensible way


and at least from what i've seen so far, this approach is very mappable to how i'm trying to use meander


*rather, the entire compiler chain can be done this way, vs. i'm just using it for the front end/middle end


It’s certainly a sane way to do it.