2017-07-27
I've yet to come across a single explanation of how continuations are useful and/or how they make code more clear.
i think in imperative code it's important to be able to "exit early" in the middle of iteration
but various ways of doing that aren't "expression level"
i.e. the call to `return` in a function doesn't itself evaluate. it's context-sensitive, as it needs to be called inside a function. and so it's a kind of special form
and so i think part of the appeal of continuations is that they're a way to somewhat align the idea of exiting early with the idea of functional programming
since `return` in the context of continuations isn't a special form, it's "just a function"
yeah, upon more reading, the common uses for it are: early return, implementing an error handling / exception system, and backtracking during search
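For the early-return case, here's a minimal sketch of what "return is just a function" can look like in Clojure, using an exception as a one-shot escape continuation; the JVM has no real call/cc, and `call-with-escape` is a made-up helper, not anything in core:

```clojure
;; minimal sketch: an escape continuation built on an exception.
;; `call-with-escape` hands f a one-shot "return" function; calling
;; it unwinds back out of f with the given value.
(defn call-with-escape [f]
  (let [tag (Object.)]
    (try
      (f (fn [v] (throw (ex-info "escape" {:tag tag :value v}))))
      (catch clojure.lang.ExceptionInfo e
        (if (identical? tag (:tag (ex-data e)))
          (:value (ex-data e))
          (throw e))))))

;; usage: `return` is an ordinary value we can pass around
(call-with-escape
  (fn [return]
    (doseq [x (range 100)]
      (when (> x 5) (return x)))
    :never-reached))
;; => 6
```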
so I guess in a scheme world this makes sense, since "continuation" is a "primitive" that lets you do things that otherwise are "language level constructs"
i think so
i think their utility is also an effect of the collection model in most lisps
like, in clj we don't feel we need them for early return, because we don't really make open loops in clj. we do lazy seq stuff
a term i invented :-) just the style of collection a language advocates, how to represent n things
reduced, definitely. i would say loop/recur is closer to tail calls, which are also most useful when you have mutable collections
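For contrast, a quick sketch of the idiomatic clj version of early return: `reduced` short-circuits a reduce with no continuation machinery at all:

```clojure
;; `reduced` wraps the accumulator to tell reduce to stop early
(reduce (fn [acc x]
          (if (> acc 100)
            (reduced acc)   ; stop; reduce returns acc as-is
            (+ acc x)))
        0
        (range))            ; even an infinite seq is fine
;; => 105
```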
or when you want to build a result without building anything intermediate
kind of
but you can't, e.g., pass a function an argument that's a function and have a recur inside that passed function, since the binding is lexical
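A quick illustration of that lexical binding, as a minimal sketch:

```clojure
;; recur targets the nearest enclosing fn or loop, so this recur
;; restarts the anonymous fn -- the outer loop never recurs
(loop [n 3]
  (let [f (fn [i] (if (pos? i) (recur (dec i)) :inner-done))]
    (f n)))
;; => :inner-done
```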
I don't know if this was your original intention, but you've now convinced me continuations are a good thing.
i'm not so sure they're good so much as unavoidable
same as exceptions
https://gist.github.com/alandipert/9415252 btw, might interest you. experiment of mine from awhile ago
I'm really starting to wish for a racket-jvm and racket-js (this continuation question came from reading racket docs)
kawa is a really nice jvm scheme, if you haven't seen it
Another way of looking at continuations: they're what you'd reach for if you wanted to implement a primitive like "if" on a semantic level rather than at the parsing / special-form level. They fill some of the roles that a macro would take on (the non-syntax-related ones)
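A minimal sketch of that idea: wrap the two branches as zero-argument thunks (continuations), and branching becomes an ordinary function. `if-fn` is a made-up name, and it still leans on the host's if to test the condition; the point is that the branches are now first-class values:

```clojure
;; branching as a plain function: the branches are thunks, so
;; neither is evaluated until it's chosen
(defn if-fn [condition then-k else-k]
  (if condition (then-k) (else-k)))

(if-fn (even? 4)
       (fn [] :then-branch)
       (fn [] :else-branch))
;; => :then-branch
```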
so I'm looking at porting racket to clojure, and it seems like the two hardest things are continuations and tail call recursion
I have a hunch these things are related pretty strongly
self calls can automatically be turned into recur in tail position, of course; generalized tail call optimization can be transformed into usage of trampoline
you just need to turn f into a function that returns f
and adjust all calls accordingly
oh, returns f, which returns a result or g
(I wonder if that's at all intelligible)
so you could do a full program transform where every f is turned into a function returning a boxed value or a function to trampoline to
then you get generalized tail call jumps
(kind of)
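A rough sketch of that transform using clojure.core/trampoline (the `countdown` example is made up):

```clojure
;; instead of making the tail call, return a thunk; trampoline
;; keeps invoking thunks until a non-function comes back
(defn countdown [n]
  (if (zero? n)
    :done
    #(countdown (dec n))))   ; return a thunk rather than calling

(trampoline countdown 1000000)
;; => :done, and the stack never grows
```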
are you saying: assuming we have TCO, continuations can all be resolved by code rewriting ?
I'm saying: assuming TCO is impossible in the general case on the JVM (which is the case because of method size limits and the semantics of the JVM goto op), the only way to replace it is turning everything into a trampolined function
you duplicate the behavior but don't get the full optimization
it does duplicate the behavior of not growing the stack, but it isn’t as optimized sadly
I actually do miss TCO, there are a number of algorithms that are easy to write recursively but less elegant as loop/recur
if they can't be done with loop/recur they can't be tail call optimized
the reason we need recur is not because self calls can’t be automatically optimized, it’s the bugs caused by coders assuming their calls are tail calls when they are not
oh, sorry, I thought you meant self calls, yes I understand
and trampoline does help with that usage
letfn + trampoline is a nice combo btw
since the letfn binding rules let every function refer to every other function to trampoline
in fact without trampoline I’d probably never need letfn
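A small sketch of the combo (the `my-even?`/`my-odd?` names are made up):

```clojure
;; letfn's binding rules let the two fns refer to each other,
;; and each returns a thunk for trampoline to bounce to
(letfn [(my-even? [n] (if (zero? n) true  #(my-odd? (dec n))))
        (my-odd?  [n] (if (zero? n) false #(my-even? (dec n))))]
  (trampoline my-even? 1000000))
;; => true, no StackOverflowError
```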
how do other languages, like haskell, get away with fast tco? iirc they allocate thunks on the heap and gc the useless ones, yet somehow manage to make it fast
well, they can goto in a general way
while jvm can only goto within one method body
(and jvm method bodies have limited size)
so the compiler can see that a mutual recursion can become jumps between the two lambdas
(without stack allocation that is)
@qqq you're probably familiar with how the standard calling convention from e.g. C works: you put your args and space for locals on the stack, then do a jump to the new code with a pointer to where you came from
that’s how a function call usually works
@qqq a TCO says “hey, we never need to come back, so we can throw away all the stack the previous function used right now, and reuse the return pointer”
and haskell is able to actually do that on a stack level
the jvm intentionally lacks that power
wait, I thought haskell was all spineless tagless G-machine stuff, where it's all thunks instead of a stack
OK - I’m describing the general principle, I guess I misunderstand haskell’s internals though
it still needs a call stack, and the reusing instead of allocating stack still applies, right?
https://ghc.haskell.org/trac/ghc/wiki/Commentary/Rts/HaskellExecution <-- I don't fully understand this either
even if it never stack allocates for data
haskell is lazy, so when you 'call a function' in haskell, it just creates a thunk somewhere on the heap, doesn't eval it, returns immediately
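A loose Clojure analogy for those heap-allocated thunks, using delay as the stand-in:

```clojure
;; a delay is a deferred computation: defining it runs nothing
(def thunk (delay (do (println "evaluating now") (+ 1 2))))

@thunk   ;; prints "evaluating now", then => 3
@thunk   ;; => 3 again; the result is cached, no re-evaluation
```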
@qqq Haskell is a continuation passing language. Every function takes as an argument the continuation to call into next
that's part of it also being lazy. With proper support from a compiler (e.g. LLVM) true TCO isn't hard. A tailcall is basically: 1) move all your arguments into registers/parts of the stack, 2) pop all your locals/args off the stack, 3) push all your new args to the stack, 4) call the new function
Really that's it, TCO is simply saying "remove my stack frame and call X with Y"
BTW, some implementations, like Erjang, do full program transformation to get lightweight threads/processes on the JVM
in practice, what does this mean if I'm trying to run a language with continuations, like racket, on clojure ?
on any platform that doesn't natively support stack manipulation or TCO, it means you have to go to trampolines, which will be slow
On some platforms (like chicken scheme which is in C) there's other tricks you can do.
Because it implements all this at the machine code layer, and doesn't have to fight a virtual machine.
TCO in assembly isn't really hard at all, it's just some VM security models restrict your ability to hand set the stack pointer, or to define your own calling conventions.
@tbaldridge glad you can describe this stuff more lucidly than I, thanks
Notice how call and ret (http://www.cs.virginia.edu/~evans/cs216/guides/x86.html) on x86 use the stack to store the "return to this place" pointer, and the function pointer. The register SP points to the current head of the stack. That means you can do some really wacky stuff, like change the ret pointer after you've called into a function, or completely change the SP to some other value to jump to a different stack.
setjmp/longjmp have been used for years to implement call/cc like behavior in C. And that's all they really do: save the registers to the stack, then change the SP to point somewhere else: https://en.wikipedia.org/wiki/Setjmp.h
But yeah, none of that exists on the JVM. Or .NET, or Python...
that's awesome info
@tbaldridge: does this imply it is basically impossible to get fast continuations/tco in any scheme that runs on the jvm?
Quasar / Pulsar add delimited continuations for the JVM (or fibers / green threads, afaik they are one and the same), by instrumenting the bytecode
@qqq pretty much, you have to define "fast" though
Fast as Java? Fast as Clojure, no I don't think it's possible.
Faster than CPython or some interpreted language, perhaps.
@qqq but to be honest with you, continuations and TCO aren't all roses either. I agree with Guido's take on TCO in Python. It actually makes code harder to reason about, as whole portions of your stack trace are missing.
So it can be really hard to figure out "how did I get to here from where I started". That's even worse with continuations because you may have jumped into some sort of stack without any information about where you came from.
So while I see TCO and continuations as powerful tools, they do make the code more complex.