
In the functional-programming chapter, with Peg Thing, they define `(defn tri* ...)` and later `(def tri (tri*))`, and many functions refer to `tri` (without the star). What is the idea behind this?


e.g. this function refers to `tri` and not `tri*`:

(defn triangular?
  "Is the number triangular? e.g. 1, 3, 6, 10, 15, etc."
  [n]
  (= n (last (take-while #(>= n %) tri))))


In this particular example, `tri*` is a function that builds a lazy sequence. Without the star it's just a `def`, not a `defn`: it calls the `tri*` function once to produce the sequence for you, so that you can then use it more than once without recalculating it.
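For reference, the definitions in the book look roughly like this (reproduced from memory, so treat it as a sketch rather than an exact quote):

```clojure
(defn tri*
  "Generates a lazy sequence of triangular numbers"
  ([] (tri* 0 1))
  ([sum n]
   (let [new-sum (+ sum n)]
     (cons new-sum (lazy-seq (tri* new-sum (inc n)))))))

;; Call tri* once and keep the resulting lazy seq under the name tri
(def tri (tri*))

(take 5 tri) ; => (1 3 6 10 15)
```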


In other words it's like a singleton: once you've run tri* once, you don't need to run it again. Just keep a copy in tri and use that sequence.


Thinking in Java, it's like

if (tri == null) { tri = triStar(); }
return tri;


The page reads: "The next expression calls tri*, actually creating the lazy sequence and binding it to tri: (def tri (tri*))". What does "actually creating" mean here? Because it seems like we're just binding the function to another name.


Is it like this: `tri*` always returns a new lazy seq, and `tri` is the same instance of one of those lazy seqs?


"Actually creating" means it's calling the function that generates the lazy seq. Notice the parens. If you did `(def tri tri*)`, that would be binding the function to another name, but that's not what's happening. `(def tri (tri*))` actually runs the `tri*` function, and the result of running it is stored in `tri`.


so the return value of `(tri*)` is getting bound to `tri`
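A quick REPL check makes the distinction concrete (the `tri*` here is a simplified stand-in for the book's version, just for illustration):

```clojure
;; A simplified stand-in for the book's tri*: returns a fresh lazy seq each call
(defn tri* [] (map #(/ (* % (inc %)) 2) (iterate inc 1)))

(def tri-fn tri*)   ; no parens: binds the function object itself
(def tri    (tri*)) ; parens: calls the function, binds the returned lazy seq

(fn? tri-fn)      ; => true  (still a function)
(seq? tri)        ; => true  (a sequence you can consume directly)
(take 4 tri)      ; => (1 3 6 10)
(take 4 (tri-fn)) ; => (1 3 6 10), but only after calling it
```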


Thank you again man! 🙂


This chapter is odd; I'm sure I'm not catching everything. I don't know if I should go very, very slowly or just start trying the exercises


It’s a bigger “chunk” than other chapters thus far


I haven't read it, but given that it's called "functional programming," I'd recommend "all of the above" 🙂


I have a bit of pseudo-code I use to express kind of the "zen" of functional programming...


10 LET X = 5
20 PRINT X   // <== prints "5"
30 X = X + 1
40 PRINT X   // <== prints "6"


So from line 20 we know that X equals 5, and from line 40 we know that X equals 6, so from line 30 we know that 5 equals 6.
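In Clojure the same computation has no contradiction, because nothing is reassigned; each value just gets its own name (a sketch):

```clojure
(let [x 5
      y (+ x 1)] ; x is still 5; the "new" value gets a new name
  (println x)    ; prints 5
  (println y)    ; prints 6
  [x y])         ; => [5 6]
```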


When you read that and say, "Wait, WAT??" you're smacking head-first into the difference between functional programming and imperative programming.


The immutable aspect?


Immutability is a big part of it


One way to look at it is that with imperative programming, every variable has two dimensions that you have to know in order to get the value of the variable: its name, and its time.


If all you have is the name ("X"), then you can't really know the correct value of X because it's different on line 20 than it is on line 40.


You have to know its name, the value it has at different times, and what the "current" time is. So really, a variable could conceivably have an unlimited number of values, depending on what time it is.


Functional programming is more mathematical, and the thing about math is that in general you're solving equations in which time is not part of the problem. For any one value of X, there's only one value of sin X, for example.


You don't have to worry that the value of X somehow changed between the time you said "X" and the time you said "sin X"


Yes, that seems like a wonderful thing and it feels like part of the pitch to try clojure and FP in general


debugging complex state is so tricky


Also, in a lot of my programming experience, given the problems I've needed to solve, I just smashed data around. Looking back now, I wonder if making all those data-object classes was needed


So the brain-bending part of learning FP is learning how to see functions as descriptions of the "timeless" relationship between values as opposed to seeing them as imperative recipes for beating things into some kind of desired shape.


anyways. 🙂


Yeah, what you said last makes sense


but yeah, I'd say: read slowly, try the exercises, and be ready to backtrack and re-read things as individual concepts become clearer


and of course feel free to post questions here


thanks. Yeah, I'm on my second reading of this chapter. Now I see how the program continues without looping: basically it's always the last call in each function that moves you along. But I don't follow how the board is getting manipulated. I'll keep going. It's day two on this chapter. 🙂


so for lazy sequences, are the parts that are “reified” cached/saved somehow?


Yeah, they stay in memory, and they'll eventually bite you if you keep a reference to them around (aka "holding on to the head").
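You can see the caching at the REPL; the timings are approximate, and `slow-nums` is just an illustrative name:

```clojure
;; Each element takes ~100 ms to compute the first time it's realized
(def slow-nums (map (fn [n] (Thread/sleep 100) n) (range 10)))

(time (doall slow-nums)) ; first walk: roughly a second, elements get realized
(time (doall slow-nums)) ; second walk: nearly instant, the values were cached
```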


as in, you need to stop referring to them for GC to work?


They get garbage collected when there are no longer any references to them, but in the `tri`/`tri*` example, the `tri` var is always pointing to the first item in the lazy seq, so it will never be GC'ed


I thought `tri` is pointing to the sequence itself


because there is `(take-while (conditions) tri)` going on


exactly, yes, but more specifically, it's pointing to the head of the sequence


à la linked lists


yes, exactly


and the first item in the list is the "head," hence the expression "holding on to the head"


so that's a reason why you'd need to use `let`: to have local bindings when needed


Exactly. `(def tri ...)` creates a var that never lets go, so it will always hold onto the head.


You're better off with a `(let ...)` if you think your lazy seq might ever get too big. You can get away with it here because the Peg Thing game only ever realizes a relatively small portion of that lazy seq
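Sketched as a rule of thumb (the numbers here are made up for illustration):

```clojure
;; Holding the head: the top-level var keeps every realized element
;; reachable for the life of the program
(def big-seq (map inc (range 10000000)))

;; Better: keep the seq local; once sum-big returns, nothing references
;; the head anymore and the realized elements can be garbage collected
(defn sum-big []
  (reduce + (map inc (range 10000000))))

(sum-big) ; => 50000005000000
```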