
I wonder why the #'x/f trick makes it possible to access a private function f in namespace x?


Best not to take private too seriously. I am an adult, just let me use the function that is there, that I can see does exactly what I want, but because it doesn't match your idea of what you want me to use, you mark it private

🎯 1

Hi. Thanks for the reminder. I am more interested in the mechanism of HOW the private access is bypassed (or indeed how the private access restriction works)


Private is only checked by the compiler as it resolves vars


If you resolve vars yourself it isn't checked


If you use a name, say f, the compiler checks if it is a locally bound name; if not, it resolves the name against *ns*


Then it looks in the metadata and throws an error if it is private, then it adds the var to the list of vars used in the compilation unit (typically a fn), and then ...


But the private check is a distinct thing the compiler does there, not part of the general var resolution code
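The distinction is easy to see at a REPL. A small sketch (demo.private and demo.user are made-up namespaces): the direct call fails when the compiler resolves the symbol, but going through the var object with #' skips that compile-time check entirely:

```clojure
(ns demo.private)

(defn- secret
  "A private function: defn- sets :private true in the var's metadata."
  [x]
  (* 2 x))

(ns demo.user
  (:require [demo.private]))

;; Direct reference: the compiler resolves the symbol against the
;; namespace, sees :private metadata on the var, and refuses:
;;   (demo.private/secret 3)  ; throws: var ... is not public

;; Going through the var object yourself is plain var resolution,
;; with no private check; calling the var works at runtime:
(#'demo.private/secret 3) ; => 6
```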


What if the var is dynamically created AFTER the function is compiled?


Then the function cannot refer to it


If the var f is not interned before a function that uses f is compiled, then compiling that function throws an error


Which is why something like declare exists, it interns the var without a value
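The classic case is mutual recursion (my-even? and my-odd? are made-up names here): without the declare, compiling my-even? would fail because the var my-odd? is not yet interned.

```clojure
;; declare interns the var with no value, so the compiler can resolve
;; the name; the value arrives when the second defn runs.
(declare my-odd?)

(defn my-even? [n]
  (if (zero? n) true (my-odd? (dec n))))

(defn my-odd? [n]
  (if (zero? n) false (my-even? (dec n))))

(my-even? 10) ; => true
```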


Right. I was confused by how Python performs LEGB lookup.


what's the best explainer for iteration vs take-while + iterate?


iterate's function "must be free of side effects". iteration (singular) is specifically designed to work with a side-effecting function.


user=> (doc iterate)
([f x])
  Returns a lazy sequence of x, (f x), (f (f x)) etc. f must be free of side-effects


☝️ Is that helpful @lilactown?


I'd like to understand why it ought to be free of side effects


can't tell from a cursory look at the clojure source whether it chunks


Also, iterate's result starts with the initial value (without applying f). iteration's result starts with step applied to initk:

user=> (take 10 (iterate inc 0))
(0 1 2 3 4 5 6 7 8 9)
user=> (take 10 (iteration inc :initk 0))
(1 2 3 4 5 6 7 8 9 10)


It's probably also worth noting that iteration returns an IReduceInit/`Seqable` -- it doesn't actually do anything until you either reduce it or walk the sequence.


(technically, iterate returns an Iterate which is also those things but also a whole lot more so printing it will evaluate and walk it)
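A toy sketch of the shape iteration was designed for: a side-effecting, token-paginated "API" (fake-api and pages are made-up names, and iteration requires Clojure 1.11+). Nothing is fetched until the into reduces the result:

```clojure
;; A fake paginated API: each "fetch" returns a page of items plus a
;; continuation token for the next page, or nil when done.
(def pages
  {nil {:items [1 2 3] :next :p2}
   :p2 {:items [4 5]   :next :p3}
   :p3 {:items [6]     :next nil}})

(defn fake-api
  "The side-effecting step fn: 'fetches' the page for token k.
  (initk defaults to nil, which keys the first page here.)"
  [k]
  (get pages k))

(def all-items
  (into []
        cat                        ; concatenate the per-page item vectors
        (iteration fake-api
                   :kf :next       ; extract the next continuation token
                   :vf :items)))   ; extract the value for each step

all-items ; => [1 2 3 4 5 6]
```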


the original ticket has a little bit more discussion on it


thanks this is great


I'm trying to sketch out a version for CLJS, but most APIs, like the ones used in the examples, are async in CLJS, requiring some amount of continuation passing


this doesn't seem to work very well with lazy seqs


so I'm taking a step back and trying to figure out what concerns drove the design of iteration and think about what kind of design might solve similar concerns in an async context


I've chatted a bit with @ghadi about what it would mean to iterate a process vs. a function


And it is tricky, because basically you are introducing a feedback loop, where the output feeds back into the input

hiredman05:02:18

is the snippet I have in my scratch.clj about it


I think of it as a sort of higher level thing, iteration transforms a function into a process that generates values, but async iteration would transform a process


hmm I'll have to work to understand that


Concretely, something like a web crawler could be a good example of async iteration. You have a downloader process that fetches urls, then an extract process that extracts urls, and they form a loop, with the potential for lockup if you connect them using core.async channels


Is it an oversimplification to assume in such a situation that step would return a promise and vf and kf would wait on that promise?


you could write a simple eager version of iteration doing that, but you would have to consume every page


(thinking in CLJS w/ promises, which are eager)


So for the JavaScript case, if you are reading a file line by line, you'll have a callback or fresh promise for each line


I don't know where I was going with that


Basically you need to be able to handle chains of promises


Right, each step call would produce a promise. Each of somef, vf, and kf would wait on the promise and do something with the value inside. So it would only be eager for a single step -- and then you either reduce it (eager) or get a (lazy) sequence that would only run step as the sequence is consumed?


Because what does an async iteration even produce as a result, if you make the simplifying assumption that step returns a single promise-like thing?


But neither reduce nor lazy seqs will work in that context for ClojureScript or core.async


(I'm asking to try to understand more of the CLJS model which I just don't understand because it's "single-threaded" but inherently supports some limited form of async)


It isn't really limited, it is all callbacks running on a shared single thread cooperatively scheduled


So you cannot block on a js promise to reduce or produce a lazy seq


All you can do is attach a callback


Ah, OK. Then JS promises don't work the way I thought/hoped.


This was the same reaction I had when I learned about them too 😊


Hah... Does anything in JS work the way we initially expect? 🙂


you could essentially loop and build up a chain of promises, but figuring out a way to stop the loop from the outside - a la consuming a seq or reducing a reducible - is the tricky part


this is why I am such a fan of core.async channels vs promise style


Channels model a stream of output from a process


Promises just model an async function call


okay different tack


I've been thinking about trampoline


I have a lib that implements a bunch of helper fns for creating trampolined calculations using CPS

Ben Sless06:02:09

Depends if you want to use naked promises or channels. You can write your own trampoline that checks if the result is a promise and calls .then with the continuation

Ben Sless06:02:14

I could be wrong but you can generalize it as monadic bind


I'm not sure I'm actually going in a direction that ends with a solution to the iteration problem


ultimately I cannot control the flow of API calls outside of the transduction. I have to realize the whole thing or know beforehand to insert a take-while or whatever

Ben Sless06:02:39

I think you can pull in promesa, which would make it simpler, and look at how metosin/sieppari handles async results


this works and has no dependencies so I like it more than pulling in promesa

Ben Sless17:02:38

You can always pull in the pieces of code you need yourself. Anyway, the big idea is: replace thunk production in the core namespace with a bind protocol, extend it to function and promise to work correctly, and trampoline will "run" the computation


I don't see how that's much better than what I have already in cascade.async, where I check for either fn? or IAwaitable


maybe I'm misunderstanding exactly what you mean. the use case I'm thinking of is, "I'd like to insert some async operation in the middle of this process I am executing," which cascade.async/trampoline would allow.


changing the entire library to some monadic protocol seems unnecessary and would break its usage w/ normal trampoline

Ben Sless18:02:38

It shouldn't break it. Let's say you have some CPS function which returns a thunk: (fn f [k a] #(k ,,,)). The continuation is just one execution context for "do the next thing", but what if it's an async calculation? (fn f [k a] (.then k ,,,)). One returns a promise, one returns a thunk; both are things you can coerce to get a value, so we have two operations: building the next calculation and driving the calculation. (fn f [k a] (-bind k ,,,)) will build the next calculation correctly based on the context, and trampoline will "pull" it

Ben Sless18:02:03

that lets you mix async and thunks and nest them arbitrarily, too

Ben Sless18:02:25

(at least, avoid satisfies?, it is evil)
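The bind idea can be sketched in JVM Clojure (all names here, Bind and -bind, are illustrative, and CompletableFuture stands in for a JS promise): one operation that builds the next step of a computation, whether the current value is a plain result, a thunk, or an async value.

```clojure
(import '(java.util.concurrent CompletableFuture)
        '(java.util.function Function))

(defprotocol Bind
  (-bind [v k] "Run continuation k against the value carried by v."))

(extend-protocol Bind
  clojure.lang.Fn
  ;; a thunk: force it, then bind the continuation to its result
  (-bind [thunk k] (-bind (thunk) k))

  CompletableFuture
  ;; an async value: attach k as a callback
  (-bind [fut k] (.thenApply fut (reify Function
                                   (apply [_ v] (k v)))))

  Object
  ;; a settled value: just call k
  (-bind [v k] (k v)))

;; Thunks and promises mix and nest:
(-bind #(+ 1 2) inc)                                ; => 4
(.get (-bind (CompletableFuture/completedFuture 3)
             inc))                                  ; => 4
```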


it sounds like I wouldn't need to change any of the core lib, just extend the monadic protocol to Fn


I don't think we can get rid of satisfies? because we have to have a signal for when to stop trampolining, and that signal is that whatever is returned isn't a thunk
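That stop signal is exactly how clojure.core/trampoline works: it keeps calling while the return value is a function, and anything else is the final answer.

```clojure
(defn countdown [n]
  (if (zero? n)
    :done                    ; not a fn, so trampoline stops and returns it
    #(countdown (dec n))))   ; a thunk, so trampoline calls it

;; bounces through 100000 thunks without growing the stack
(trampoline countdown 100000) ; => :done
```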


what if there was a promise-aware version of trampoline?


The way the go macro works you don't need trampoline


(maybe not strictly accurate)


In general, if you are calling functions whose body is a call to go, you won't meaningfully accumulate stack


Depending, you may accumulate callbacks on channels


And there might be some differences between cljs and clj's go macro, because cljs's version tries to avoid context switches more, if I recall

Timofey Sitnikov15:02:32

Good Morning Clojurians, have a question, if I have some namespaced keywords like so:

{:user/id 1
 :user/name "Joe"}

and I would like to change the namespace to something like:

{:my-user/id 1
 :my-user/name "Joe"}

Is there a canned method to change them all at the same time? Or do you just use rename-keys to rename each, assuming that there are 20 namespaced keywords.


There is no function to change a keyword's namespace, so you have to do it yourself for each keyword. If you know keywords in advance then clojure.set/rename-keys will work just fine. If you don't know them in advance, a manual reduce or into + map where you change the namespace of each keyword will do the job.

👍 1
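For the don't-know-them-in-advance case, a hand-rolled reduce-kv does the job on any Clojure version (rename-ns is a made-up helper, not part of clojure.core):

```clojure
(defn rename-ns
  "Returns m with every keyword key moved to namespace new-ns."
  [m new-ns]
  (reduce-kv (fn [acc k v]
               (assoc acc (keyword new-ns (name k)) v))
             {}
             m))

(rename-ns {:user/id 1 :user/name "Joe"} "my-user")
;; => {:my-user/id 1, :my-user/name "Joe"}
```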

also the brand new update-keys in the current beta clojure release is nice for this:

(update-keys {:user/id 1 :user/name "Joe"}
  (fn [k] (keyword "my-user" (name k))))

;;=> {:my-user/id 1, :my-user/name "Joe"}

👍 5
Timofey Sitnikov15:02:34

Wow, this is interesting


dumb q: the hash value of a persistent structure isn't stable across process restarts, right?


e.g. if I write to disk some data and read it back out, the hash wouldn't necessarily be the same


Judging by the code, it is stable. But I wouldn't rely on it for two reasons:
• It's an implementation detail
• It may change


no it is not guaranteed


Assuming two things are true: • Clojure version is the same (1.10.3 in my case) • Only numbers, strings, and immutable Clojure collections are used what exactly would make hashes unstable?


within a specific clojure version it is consistent

👍 2

In all Clojure versions released so far 🙂


(I know of no plans to change this in future Clojure versions, by the way. Just causin' trouble)


for the same reason that map iteration order is undefined


when the hash algorithm changed (1.6.0?) a bunch of test suites had breakages because they relied on order


when I need some succinct stable identifier for a map, I use or my own similar scheme


as in it may change between versions of clj?


that's understandable


If a function sent to an agent does another send to the same agent, I know that it will be queued appropriately. Is it queued at the back of the queue or does it go to the front? I assume the back, but the docs are mute on this point. In fact, the docs are mute on how multiple sends are queued at all, other than sends from a thread will be executed in the same order they are sent, possibly interleaved with sends from other threads. This implies a single FIFO queue but doesn’t explicitly require one.

Alex Miller (Clojure team)00:02:03

The back, and I think that's the only sane interpretation


OK, thanks. I figured that was the case, but just wanted to make sure.
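A small sketch of the FIFO behavior (all names made up). It relies on the documented rule that dispatches made during an agent action are held until the action has set the agent's state, then enqueued at the back of the queue:

```clojure
(def log-agent (agent []))
(def ready (promise))       ; just to make the ordering deterministic

(send log-agent
      (fn [v]
        @ready                        ; wait until :second is queued
        (send log-agent conj :nested) ; held, then queued behind :second
        (conj v :first)))
(send log-agent conj :second)
(deliver ready true)

;; two awaits: the first covers :first and :second, the second covers
;; the :nested send released when :first completed
(await log-agent)
(await log-agent)

@log-agent ; => [:first :second :nested]
```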


Is this expected?

user> (deref (delay (Thread/sleep 5000) :foo) 0 :still-working-on-it)
Execution error (ClassCastException) at clj-tda.tqqq/eval19509 (REPL:925).
class clojure.lang.Delay cannot be cast to class java.util.concurrent.Future (clojure.lang.Delay is in unnamed module of loader 'app'; java.util.concurrent.Future is in module java.base of loader 'bootstrap')

Seems like deref with a timeout can't be used with a delay. Is this because the first thread to force/`deref` a delay is the thread used to run the computation? Therefore, it can't timeout?


This seems like a subtlety in deref explained in its docstring


> The variant taking a timeout can be used for blocking references (futures and promises), and will return timeout-val if the timeout (in milliseconds) is reached before a value is available. See also - realized?.


I have to think about it more, but it seems like what you are asking is impossible. delay implements IPending and IDeref, but not IBlockingDeref, which is what the deref ref ms default-value signature cares about. And you've come up with the perfect example for why it cannot possibly do that.


When you call Thread/sleep you are denying any opportunity for the current thread to do anything on your timeout-ms. You've slept the thread and nothing can resurrect it, so the timeout-ms cannot possibly be honored.


OK, that’s what I suspected.


delay is meant to delay execution until you deref it. it is NOT meant for async/thread purposes. that would be future


Yes, understood. I was actually interested in doing something of the form (delay (future …)) in order to delay the creation of a future (I’d like to effectively create a future without starting it running yet) and was playing around with deref in the REPL when I found that behavior and just wanted to make sure I was thinking straight. Apparently, I was.
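The pattern being described might look like this (deferred-work is a made-up name): wrapping a future in a delay defers the future's creation, and therefore its execution, until the delay is first forced.

```clojure
(def deferred-work
  (delay (future
           (Thread/sleep 50) ; stand-in for real work
           :result)))

;; Nothing has started yet:
(realized? deferred-work) ; => false

;; Forcing the delay creates and starts the future:
(def fut @deferred-work)

;; fut is a real java.util.concurrent.Future, so a timeout deref works:
(deref fut 1000 :timed-out) ; => :result
```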