This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2022-07-11
Channels
- # announcements (2)
- # asami (10)
- # aws (2)
- # babashka (28)
- # beginners (103)
- # calva (2)
- # clj-kondo (10)
- # clojure (69)
- # clojure-austin (11)
- # clojure-europe (48)
- # clojure-nl (10)
- # clojure-switzerland (1)
- # clojure-uk (2)
- # clojurescript (6)
- # conjure (2)
- # consulting (1)
- # core-async (2)
- # core-typed (2)
- # cursive (5)
- # datomic (15)
- # jobs (1)
- # malli (4)
- # meander (7)
- # membrane (26)
- # missionary (6)
- # nbb (39)
- # reagent (3)
- # releases (1)
- # ring (3)
- # shadow-cljs (28)
- # spacemacs (2)
- # sql (6)
- # vim (5)
Is there anything that can cause an io/reader built from a socket to yield an empty string on readLine?
Well, since there are kind of two sides here, I'll explain both. I'm basically running my code in a repl, and I expect to type stuff into the repl, which would be my client; my server is just netcat listening on a port, where whatever I typed into my repl should be printed. If I hit enter on netcat, it writes a newline; if I hit enter on my repl, nothing is printed at all
I'm... not really sure, I just got even more confused. I'm not even sure how that got returned, since I have a function for that which uses .write, and it didn't do anything here, so how did it get written in the first place
I can send the code if that's useful because I'm just confused at this point to be honest
Cider uses nrepl, which has historically not handled code that calls read-line from the repl well
I would make sure you are both using the latest cider version in emacs (which includes the nrepl client) and make sure the nrepl server you are using is the latest (you are likely using whatever is baked into lein, so latest lein should suffice)
As long as you are connecting to/starting a repl via cider you are almost certainly doing it via the nrepl protocol, which, because the protocol makes a distinction between code sent for evaluation and "input", has historically had weirdness around running code that tries to read from *in*
There is some kind of solution in place for it, but it would not surprise me at all if that was still wonky
Ah, actually I forgot, because it is so rare that I call read-line on *in* like that, but I think what is actually going on here is: you typed in (read-line) and hit enter, then the clojure reader read up to the closing ) and returned the form, which was then evaled, and when read-line looked at *in*, the newline from when you hit enter was still unread, so it read up until that newline, producing an empty string
Clojure built in repl skips newlines after forms https://github.com/clojure/clojure/blob/master/src/clj/clojure/main.clj#L169 but the nrepl server (which cider is sending your code to) does not https://github.com/nrepl/nrepl/blob/master/src/clojure/nrepl/middleware/interruptible_eval.clj#L104
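As an aside, here is a hedged sketch of that "skip the newline after a form" idea; the name skip-pending-eol and the details are mine, not the actual clojure.main or nrepl internals:
(defn skip-pending-eol [^java.io.PushbackReader in]
  (let [c (.read in)]
    (cond
      (= c (int \newline)) :skipped   ; swallow the leftover newline
      (neg? c)             :eof       ; end of stream
      :else                (do (.unread in c) :kept))))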
@U0NCTKEV8 I don't really think the problem is cider or using an nrepl, as running from a command line using clj leads to the same thing as running it from an nrepl
Basically, I'm reading the line doing
;; assumes (:require [clojure.core.async :as a] [clojure.java.io :as io])
(defn read-input
  [socket]
  (let [input (a/chan)
        reader (io/reader (.getInputStream socket))]
    (a/go (loop []
            (let [message (.readLine reader)]
              (a/>! input message)
              (recur))))
    input))
nc as a server can behave unexpectedly (and may even behave differently depending on the OS); one way to verify would be to connect to the nc server with nc as a client and see if typing a line and hitting enter in the server shows up in the client
Although if I use nc as a client and a server it kind of works as the function I use for writing the output works
When you implement a protocol in Clojure, is the resulting object immutable? I would think that unless you intentionally make something an atom it would be immutable, but I wanted to ask
Ahhhhh!!! That's the sort of thing I'm looking for. What does one need to do to have both?
if you have something you can mutate, it is not immutable; if you have something you cannot mutate, it is immutable
If all attributes of an object are immutable, does it follow that everything about the object is immutable?
it depends what you mean by the attributes (fields) being immutable. An immutable (assigned once at creation) field can reference a mutable thing just fine
and then the inverse, (into-array Object [[]]), a mutable array referencing an immutable vector
Backing up…
If you “implement a protocol” in Clojure then you have used defrecord or deftype and extended the protocol to it. You can also extend protocols to Java classes. I don’t know if you call that “implementing”, but let’s assume that it is. What are the implications for mutability then?
Well, Java classes can be (and often are) mutable. So that’s an “implementation” that may be mutable.
Meanwhile, records and types are both immutable structures.
So:
(defprotocol Named (showname [n] "shows the name of the object"))

(defrecord N [n]
  Named
  (showname [_] (str "The name is: " n)))

(extend-protocol Named
  String
  (showname [s] (str "The string's name is: " s)))

=> (def a (->N "apple"))
=> (showname a)
"The name is: apple"
=> (showname "banana")
"The string's name is: banana"
However, the fields within those structures can be mutable.
(defrecord Changeable [n]
  Named
  (showname [_] (str "The changeable name is: " @n)))

(defn new-changeable [n] (->Changeable (atom n)))

=> (def c (new-changeable "carrot"))
=> (showname c)
"The changeable name is: carrot"
=> (reset! (:n c) "durian")
=> (showname c)
"The changeable name is: durian"
So we couldn’t mutate the record itself, but the data that it refers to can be changed.
As for the protocol… that’s really just a way to find functions that can operate on the data. It doesn’t know or care about what is in that data
I have a list inside a map
(def a {:a {:b {:c [{:d "d" :e "e"}]} }})
How can I update the key :d so the result would be like {:a {:b {:c [{:d "updated" :e "e"}]} }}?
I could do that trick because you had a vector (not a list), and vectors are associative like maps, except the keys are numbers
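The snippet being referred to as "that trick" isn't in this log, but it is presumably something along the lines of assoc-in (or update-in) with a numeric index into the vector:
(def a {:a {:b {:c [{:d "d" :e "e"}]}}})
(assoc-in a [:a :b :c 0 :d] "updated")
;; => {:a {:b {:c [{:d "updated", :e "e"}]}}}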

@U051N6TTC How to update this?
(def a {:a {:b {:c [:k [{ 0 {:d "d" :e "e"}}] ]} }})
The 1 will get to the second element of that vector that starts with :k. Then the first 0 gets the first element of the embedded vector, which is a map, and the second 0 does a lookup in the map for that number, finding the embedded map
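Putting that description together, the path is presumably this (my own sketch, not shown in the log):
(def a {:a {:b {:c [:k [{0 {:d "d" :e "e"}}]]}}})
(assoc-in a [:a :b :c 1 0 0 :d] "updated")
;; => {:a {:b {:c [:k [{0 {:d "updated", :e "e"}}]]}}}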
Can someone explain where in the dedupe source the flushing happens? > In the completion step, a transducer with reduction state should flush state prior to calling the nested transformer’s completion function
dedupe doesn’t have any state impacted by that. It just has a notion of the last seen value. There’s no cleanup, as everything happens as it sees a value in the two-arity case
(defn dedupe
  "Returns a lazy sequence removing consecutive duplicates in coll.
  Returns a transducer when no collection is provided."
  {:added "1.7"}
  ([]
   (fn [rf]
     (let [pv (volatile! ::none)]
       (fn
         ([] (rf))
         ([result] (rf result))
         ([result input]
          (let [prior @pv]
            ;; store current value as new last-seen value
            (vreset! pv input)
            ;; if current value is the same as the prev last-seen value
            ;; don't call `(rf result input)` on it, just return result
            ;; no cleanup needed. We either have a value or it is duplicated
            ;; and we don't want it. No "state" that needs cleaning up
            (if (= prior input)
              result
              (rf result input))))))))
  ([coll] (sequence (dedupe) coll)))
I see, and it makes sense when comparing to partition-all, which does call .clear to do the flushing it needs. Why does dedupe return the value when it does find a dupe? Wouldn't returning nil or something like :dupe be clearer?
read (rf result input) as doing work with your reducing function, and result as just returning work already done with your reducing function without taking action on the (duplicated) input
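For contrast with dedupe, here is a partition-all-style transducer, paraphrased rather than the exact clojure.core source, showing what "flush state in the completion step" looks like in practice:
(defn partition-all-ish [n]
  (fn [rf]
    (let [buf (java.util.ArrayList.)]   ; mutable buffer = reduction state
      (fn
        ([] (rf))
        ([result]
         ;; completion: flush whatever is still buffered, then complete rf
         (let [result (if (.isEmpty buf)
                        result
                        (let [v (vec (.toArray buf))]
                          (.clear buf)
                          (unreduced (rf result v))))]
           (rf result)))
        ([result input]
         ;; step: buffer the input, emit a partition once it reaches n
         (.add buf input)
         (if (= n (.size buf))
           (let [v (vec (.toArray buf))]
             (.clear buf)
             (rf result v))
           result))))))
(into [] (partition-all-ish 2) [1 2 3 4 5])
;; => [[1 2] [3 4] [5]]   ; the trailing [5] only appears because of the flush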
That's a good tip for interpreting it. One more question about transducers here. It says that the init step eventually uses rf to "call out to the transducing process". What does this mean?
>Init (arity 0) - should call the init arity on the nested transform rf, which will eventually call out to the transducing process.
https://clojure.org/reference/transducers under Creating Transducers.
Somewhere along the way the reducing function needs some concrete accumulator. For instance, conj just creates a vector
(def
  ^{:arglists '([] [coll] [coll x] [coll x & xs])
    :doc "conj[oin]. Returns a new collection with the xs
    'added'. (conj nil item) returns (item).
    (conj coll) returns coll. (conj) returns [].
    The 'addition' may happen at different 'places' depending
    on the concrete type."
    :added "1.0"
    :static true}
  conj (fn ^:static conj
         ([] []) ;; the init arity returns a vector for accumulating
         ([coll] coll)
         ([coll x] (clojure.lang.RT/conj coll x))
         ([coll x & xs]
          (if xs
            (recur (clojure.lang.RT/conj coll x) (first xs) (next xs))
            (clojure.lang.RT/conj coll x)))))
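To make that concrete, a small example (mine, not from the thread): when no init value is supplied, transduce asks conj for one, and conj's init arity hands back []:
(transduce (map inc) conj [1 2 3])
;; (conj) => [] is the starting accumulator, then [] -> [2] -> [2 3] -> [2 3 4]
;; => [2 3 4]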
So it's basically just trying to describe that at some point the transformation stack bottoms out at the reducing function which has to return something concrete?
There's a bit more description in the transducers talk, https://github.com/matthiasn/talk-transcripts/blob/master/Hickey_Rich/Transducers.md > OK, there's a 3rd kind of operation that's associated with processing in general, which is Init. [...] . The basic idea is just, sometimes it's nice for a transformation operation to carry around an initialization capability. It need not be the identity value or anything like that. It does not matter. What does matter is that a reducing function is allowed to, may, support arity-0. In other words, given nothing at all, here's an initial accumulated value. From nothing. Obviously, a transducer can't do that because it's a black box. One thing it definitely does not know how to do is to make a black box out of nothing. Can't do it. So all it can ever do is call down to the nested function. So transducers must support arity-0, init, and they just define it in terms of the call to the nested step. They can't really do it but they can carry it forward except the resulting transducer also has an init, if the bottom transducer has an init.
depending on the transducing context (I think that is the clearer name over transducing process) the 0 arity may never be called
Yes, I understand that some step arities may do something like filter every single element and so the invocation would never reach the rf.
the 3 different arities of a reducing function are called at different places; a "filter" which is processing individual data elements is only going to affect the 2-argument arity
Ah, I think that's what the snippets here are also illustrating https://www.zhiqiangqiao.com/blog/transducers-in-clojure-explained#init-value-of-transduce
So the arities are not always necessarily called in the order init > step(s) > completion?
read the source of transduce. The init is only called if no init value is provided
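A simplified paraphrase of what transduce does (my own sketch, not the exact clojure.core source):
(defn transduce-ish
  ;; no init supplied: ask the reducing function f for one via (f),
  ;; before f has been wrapped by xform
  ([xform f coll] (transduce-ish xform f (f) coll))
  ([xform f init coll]
   (let [rf (xform f)                 ; build the transformed reducing fn
         ret (reduce rf init coll)]   ; the step arity drives the reduction
     (rf ret))))                      ; the completion arity runs once, at the end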
That transducers reference page is good but the wording in the Creating Transducers section really loses me.
I think you may be confusing which steps are called per value and which steps are called just once
init and completion are called at most once, at the start and end of a transducing context (process? whatever)
I think I understand that part. I think I presumed init will always be called, which I now understand is not the case. It's only there to provide an init value when there isn't one otherwise. Gotta watch Rich's talk again.
so a manual transducing context that does something like reduce with a reducing function f might look like (loop [acc (f) values coll] (if (seq values) (recur (f acc (first values)) (rest values)) (f acc)))
and the way you get your reducing function f is you take whatever your original reducing function is and apply your transducer to it
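Spelled out with a concrete transducer and reducing function (my own example, not from the thread):
(def f ((map inc) conj))          ; apply the transducer to the original rf
(loop [acc (f) vs [1 2 3]]        ; (f) is (conj), i.e. [] as the accumulator
  (if (seq vs)
    (recur (f acc (first vs)) (rest vs))
    (f acc)))                     ; completion arity, a no-op for map
;; => [2 3 4]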
> read the source of transduce. The init is only called if no init value is provided
It can be pretty confusing. transduce does not call the init arity of the transducer, but the init arity of the reducing function.
(def bad-xform
  (fn [rf]
    (fn
      ([] (throw (Exception. "hi")))
      ([result] (rf result))
      ([result input]
       (rf result input)))))

(transduce bad-xform
           +
           [1 2])
;; 3
However, you can use a transducer to build a reducing function, so that's why transducers must support the init arity.
(transduce identity
           (bad-xform +)
           [1 2])
;; throws Exception
(def useless-xform
  (fn [rf]
    (fn
      ([] (rf))
      ([result] (rf result))
      ([result input]
       (rf result input)))))

(transduce identity
           (useless-xform +)
           [1 2])
;; 3
Thank you for the input everyone. I now understand the answer to the question that started this thread. Also, the role of the init arity is more nuanced than I first thought. I'm going to take some hammock time on the tips in this thread before coming back with more questions. Thanks!
@U7RJTCH6J I think your snippets are the clearest illustration of the role of init.
In transduce, does the xform's (the transducer's) init ever get called, or is the init arity really only called in the case where the rf is a transducer itself?
but yes, for some odd reason transduce invokes f's init before it transforms f via xform
Wait, that doesn't sound quite right. My understanding is that a reducing function only has 2 arities - zero and two - and completing is used to add the third arity (one) if you need to use a reducing fn in a transducer.
completing will add the missing arity if needed, but it adds a dummy noop version, if you actually need to do something during completion then you don't use completing
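For reference, completing is roughly this (paraphrased from memory, so check the clojure.core source); the one-argument version plugs in identity as the completion arity, and the two-argument version lets you supply your own:
(defn completing-ish
  ([f] (completing-ish f identity))
  ([f cf]
   (fn
     ([] (f))            ;; init: defer to f
     ([x] (cf x))        ;; completion: identity by default, or a supplied cf
     ([x y] (f x y)))))  ;; step: defer to f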
> but yes, for some odd reason transduce invokes f's init before it transforms f via xform lol this is so confusing 🤯
yea, it is a bit odd, but I don't think it makes a difference in practice.
(def useless-xform
  (fn [rf]
    (fn
      ;; init
      ([] (rf))
      ;; completion
      ([result] (rf result))
      ;; step
      ([result input]
       (rf result input)))))
While the completion and step arities can do interesting things, the init arity in transducers is exclusively (I think) just deferring to the reducing function at the bottom. It's pretty safe to just include the boilerplate and forget about it.
@U7RJTCH6J thank you for the added clarification. I'm going to presume this is basically the gist of it unless I come across anything to the contrary: > the init arity in transducers is exclusively (I think) just deferring to the reducing function at the bottom. It seems like a description that should be included somewhere in the http://clojure.org transducers reference page, because the way it currently describes it as "rf eventually calls out to the transducing process" is very confusing language. Your snippets provide a much clearer explanation.