
Is there anything that can cause an io/reader built from a socket to yield an empty string on readLine


What if they just hit enter?


Or alternatively ctrl+d ?


(Not saying it is, just don't know the behaviour if those happen)


Well, since there are kind of two sides here, I'll explain both. I'm basically running my code in a REPL, and I expect to type stuff into the REPL, which would be my client. My server is just netcat listening on a port, where whatever I typed into my REPL should be printed. If I hit enter in netcat, it writes a newline; if I hit enter in my REPL, nothing gets printed at all


I think it's clear already, but this is an echo server


I mean, if I go read-line in a repl and hit enter, it returns an empty string


Try it with .read one character at a time and see if you've got input?
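A sketch of that character-at-a-time suggestion, using a StringReader as a stand-in for the socket stream (the input string is illustrative, not from the thread):

```clojure
(require '[clojure.java.io :as io])

;; .read returns one character as an int, or -1 at end of stream
(with-open [r (io/reader (java.io.StringReader. "hi\n"))]
  (loop [acc []]
    (let [c (.read r)]
      (if (neg? c)
        acc
        (recur (conj acc (char c)))))))
;; => [\h \i \newline]
```

Reading raw characters like this shows whether the newline (or anything at all) is actually arriving on the stream, independent of readLine's buffering.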


It... Kind of works? It returns the string in cider


Does that give you enough to investigate on?


I'm... Not really sure, I just got even more confused. I'm not even sure how that got returned since I got a function for that, which uses .write and it didn't do anything here, so how did it get written in the first place


I can send the code if that's useful because I'm just confused at this point to be honest


Cider uses nREPL, which has historically not handled calling read-line from the REPL well


(this has to do with how the nrepl protocol works)


Building the program gives exactly the same result


I would make sure you are both using the latest cider version in emacs (which includes the nrepl client) and make sure the nrepl server you are using is the latest (you are likely using whatever is baked into lein, so latest lein should suffice)


What do you mean by building the program gives the same result?


As long as you are connecting to/starting a REPL via Cider, you are almost certainly doing it via the nREPL protocol, which, because its protocol makes a distinction between code sent for evaluation and "input", has historically had weirdness around running code that tries to read from *in*


There is some kind of solution in place for it, but it would not surprise me at all if that was still wonky


Ah, actually I forgot, because it is so rare that I call read-line on *in* like that. I think what is actually going on here is: you typed in (read-line) and hit enter, then the Clojure reader read up to the closing ) and returned the form, which was then evaled. When read-line looked at *in*, the newline from when you hit enter was still unread, so it read up to that newline, producing an empty string
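A small sketch of the leftover-newline effect described above, simulating the stream with a StringReader (the input string is illustrative):

```clojure
;; The reader consumes "(read-line)" up to the closing paren; the newline
;; typed to submit the form is still sitting unread in the stream,
;; so read-line immediately hits it and returns "".
(let [r (clojure.lang.LineNumberingPushbackReader.
         (java.io.StringReader. "(read-line)\nmore input\n"))]
  (binding [*in* r]
    (let [form (read r)]
      [form (read-line)])))
;; => [(read-line) ""]
```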


@U0NCTKEV8 I don't really think the problem is Cider or using nREPL, as running from the command line using clj leads to the same thing as running it from nREPL


Also, I'm using Java's readLine, not Clojure's read-line


Basically, I'm reading the line doing

(defn read-input [socket]
  (let [input  (a/chan)
        reader (io/reader (.getInputStream socket))]
    (a/go (loop []
            (let [message (.readLine reader)]
              ;; .readLine returns nil at end of stream, and putting nil on a
              ;; channel throws, so only put and loop while there is a line
              (when message
                (a/>! input message)
                (recur)))))
    input))


nc as a server can behave unexpectedly (and may even behave differently depending on the OS). One way to try and verify would be to connect to the nc server with nc as a client and see if typing a line and hitting enter in the server shows up in the client


I have tried that and nothing happens


Although if I use nc as a client and a server it kind of works as the function I use for writing the output works

Jim Strieter 16:07:38

When you implement a protocol in Clojure, is the resulting object immutable? I would think that unless you intentionally make something an atom it would be immutable, but I wanted to ask


depends what you mean by the resulting object


mutability and protocol satisfaction are orthogonal

Jim Strieter 16:07:26

Ahhhhh!!! That's the sort of thing I'm looking for. What does one need to do to have both?


if you have something you can mutate, it is not immutable; if you have something you cannot mutate, it is immutable

Jim Strieter 16:07:29

If all attributes of an object are immutable, does it follow that everything about the object is immutable?


it depends what you mean by the attributes (fields) being immutable. An immutable (assigned once at creation) field can reference a mutable thing just fine


[(object-array 1)] immutable vector contains a mutable array


and then the inverse, (into-array Object [[]]), a mutable array referencing an immutable vector


Backing up… If you “implement a protocol” in Clojure then you have used defrecord or deftype and extended the protocol to it. You can also extend protocols to Java classes. I don’t know if you call that “implementing”, but let’s assume that it is. What are the implications for mutability then? Well, Java classes are totally mutable. So that’s an “implementation” that’s mutable. Meanwhile, records and types are both immutable structures. So:

(defprotocol Named (showname [n] "shows the name of the object"))
(defrecord N [n]
  Named
  (showname [_] (str "The name is: " n)))
(extend-protocol Named
  String
  (showname [s] (str "The string's name is: " s)))
=> (def a (->N "apple"))
=> (showname a)
"The name is: apple"
=> (showname "banana")
"The string's name is: banana"
However, the fields within those structures can be mutable.
(defrecord Changeable [n]
  Named
  (showname [_] (str "The changeable name is: " @n)))
(defn new-changeable [n] (->Changeable (atom n)))
=> (def c (new-changeable "carrot"))
=> (showname c)
"The changeable name is: carrot"
=> (reset! (:n c) "durian")
user=> (showname c)
"The changeable name is: durian"
So we couldn’t mutate the record itself, but the data that it refers to can be changed. As for the protocol… that’s really just a way to find functions that can operate on the data. It doesn’t know or care about what is in that data


I have a list inside a map

(def a {:a {:b {:c [{:d "d" :e "e"}]} }})
How can I update the key :d so the result would be like
{:a {:b {:c [{:d "updated" :e "e"}]} }}


(assoc-in a [:a :b :c 0 :d] "updated")


I could do that trick because you had a vector (not a list), and vectors are associative like maps, except the keys are numbers


A vector of [:a :b :c] behaves a lot like a map of: {0 :a, 1 :b, 2 :c}
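A quick sketch of that vector-as-map view, using only standard core functions:

```clojure
;; Vectors answer get/assoc just like maps keyed by index
(get [:a :b :c] 1)          ;; => :b
(assoc [:a :b :c] 2 :z)     ;; => [:a :b :z]
(get {0 :a, 1 :b, 2 :c} 1)  ;; => :b
```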


oh great! Thanks


@U051N6TTC How to update this?

(def a {:a {:b {:c [:k [{ 0 {:d "d" :e "e"}}] ]} }}) 


so the resultant will be

(def a {:a {:b {:c [:k [{ 0 {:d "d" :e "updated"}}] ]} }})


Same as earlier: (assoc-in a [:a :b :c 1 0 0 :e] "updated")


The 1 will get to the second element of that vector that starts with :k. Then first 0 gets the first element of the embedded vector, which is a map, and the second 0 does a lookup in the map for that number, finding the embedded map
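Walking the path step by step with get-in shows why each index lands where it does (same data as above):

```clojure
(def a {:a {:b {:c [:k [{0 {:d "d" :e "e"}}]]}}})

(get-in a [:a :b :c 1])      ;; => [{0 {:d "d", :e "e"}}]  second element of the vector
(get-in a [:a :b :c 1 0])    ;; => {0 {:d "d", :e "e"}}    first element of the inner vector
(get-in a [:a :b :c 1 0 0])  ;; => {:d "d", :e "e"}        map lookup by the number 0
(assoc-in a [:a :b :c 1 0 0 :e] "updated")
;; => {:a {:b {:c [:k [{0 {:d "d", :e "updated"}}]]}}}
```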


Let me try this! Thank you 🙂


Can someone explain where in the dedupe source the flushing happens? > In the completion step, a transducer with reduction state should flush state prior to calling the nested transformer’s completion function


dedupe doesn’t have any state impacted by that. It just has a notion of last seen value. There’s no cleanup as everything happens as it sees a value in the two arity case


(defn dedupe
  "Returns a lazy sequence removing consecutive duplicates in coll.
  Returns a transducer when no collection is provided."
  {:added "1.7"}
  ([]
   (fn [rf]
     (let [pv (volatile! ::none)]
       (fn
         ([] (rf))
         ([result] (rf result))
         ([result input]
          (let [prior @pv]
            ;; store current value as new last-seen value
            (vreset! pv input)
            ;; if current value is the same as the prev last-seen value
            ;; don't call `(rf result input)` on it, just return result
            ;; no cleanup needed. We either have a value or it is duplicated
            ;; and we don't want it. No "state" that needs cleaning up
            (if (= prior input)
              result
              (rf result input))))))))
  ([coll] (sequence (dedupe) coll)))


I see, and it makes sense when comparing to partition-all, which does call .clear to do the flushing it needs. Why does dedupe return result when it does find a dupe? Wouldn't returning nil or something like :dupe be more clear?


Oh, it's not returning the value, it's the accumulated result.


read (rf result input) as doing work with your reducing function and result as just returning work already done with your reducing function without taking action on the (duplicated) input
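Seen in action (standard core functions; the sample input is illustrative):

```clojure
;; when the step arity sees a duplicate, it just hands back the accumulated
;; result untouched, which drops the duplicated input from the output
(into [] (dedupe) [1 1 2 2 3 3 1])  ;; => [1 2 3 1]
```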


That's a good tip for interpreting it. One more question about transducers here. It says that the init step eventually uses rf to "call out to the transducing process". What does this mean?


where does it say that?


>Init (arity 0) - should call the init arity on the nested transform rf, which will eventually call out to the transducing process.


where is that from?


which docstring is that? or is it some random book? blog post, etc


it is a not super great description of the 0 arity call that calls rf


([] (rf)) <- this


Somewhere along the way the reducing function needs some concrete accumulator. For instance, conj just creates a vector

(def ^{:arglists '([] [coll] [coll x] [coll x & xs])
       :doc "conj[oin]. Returns a new collection with the xs
       'added'. (conj nil item) returns (item).
       (conj coll) returns coll. (conj) returns [].
       The 'addition' may happen at different 'places' depending
       on the concrete type."
       :added "1.0"
       :static true}
  conj (fn ^:static conj
         ([] []) ;; the init arity returns a vector for accumulating
         ([coll] coll)
         ([coll x] (clojure.lang.RT/conj coll x))
         ([coll x & xs]
          (if xs
            (recur (clojure.lang.RT/conj coll x) (first xs) (next xs))
            (clojure.lang.RT/conj coll x)))))


rf is the reducing function passed in to the transducer


which will usually bottom out at calling the 0 arg arity of the reducing function


(describing it as calling out to the transducing process seems muddled)


So it's basically just trying to describe that at some point the transformation stack bottoms out at the reducing function which has to return something concrete?


There's a bit more description in the transducers talk, > OK, there's a 3rd kind of operation that's associated with processing in general, which is Init. [...] . The basic idea is just, sometimes it's nice for a transformation operation to carry around an initialization capability. It need not be the identity value or anything like that. It does not matter. What does matter is that a reducing function is allowed to, may, support arity-0. In other words, given nothing at all, here's an initial accumulated value. From nothing. Obviously, a transducer can't do that because it's a black box. One thing it definitely does not know how to do is to make a black box out of nothing. Can't do it. So all it can ever do is call down to the nested function. So transducers must support arity-0, init, and they just define it in terms of the call to the nested step. They can't really do it but they can carry it forward except the resulting transducer also has an init, if the bottom transducer has an init.


depending on the transducing context (I think that is the clearer name over transducing process) the 0 arity may never be called


it is there to fill in the initial accumulator value if it is missing


(transduce identity conj (range 10)) vs. (transduce identity conj [] (range 10))


Yes, I understand that some step arities may do something like filter every single element and so the invocation would never reach the rf.


the 3 different arities of a reducing function are called at different places; a "filter" which is processing individual data elements is only going to affect the 2 argument arity


the 0 argument arity is used to fill in a missing init value at the start


the [] in my transduce example


conj's 0 arity returns []
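A smaller version of the example above, showing the init arity filling in the missing accumulator:

```clojure
;; with no init value, transduce calls (conj) => [] to get one
(transduce identity conj (range 3))     ;; => [0 1 2]
(transduce identity conj [] (range 3))  ;; => [0 1 2]
(conj)                                  ;; => []
```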


So the arities are not always necessarily called in the order init > step(s) > completion?


they are, but init may not be called


read the source of transduce. The init is only called if no init value is provided


That transducers reference page is good but the wording in the Creating Transducers section really loses me.


I think you may be confusing which steps are called per value and which steps are called just once


init and completion are called at most once, at the start and end of a transducing context (process?, whatever)


and step is called for each value


I think I understand that part. I think I presumed init will always be called, which I now understand is not the case. It's only there to provide an init value when there isn't one otherwise. Gotta watch Rich's talk again.


so a manual transducing context that does something like reduce with a reducing function f might look like

(loop [acc (f), values coll]
  (if (seq values)
    (recur (f acc (first values)) (rest values))
    (f acc)))


and you can see all the places where f is called


and the way you get your reducing function f is by taking whatever your original reducing function is and applying your transducer to it


((map inc) +) or whatever
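Spelled out with that shape, using reduce for the step calls (the values are illustrative):

```clojure
;; (map inc) transforms + into a new reducing function rf:
;; init (rf) => 0, step (rf acc x) => (+ acc (inc x)), completion (rf acc) => acc
(let [rf ((map inc) +)]
  (rf (reduce rf (rf) [1 2 3])))
;; => 9
```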


> read the source of transduce. The init is only called if no init value is provided
It can be pretty confusing. transduce does not call the init arity of the transducer, but the init arity of the reducing function.

(def bad-xform
  (fn [rf]
    (fn
      ([] (throw (Exception. "hi")))
      ([result] (rf result))
      ([result input]
       (rf result input)))))

(transduce bad-xform
           +
           [1 2])
;; 3
However, you can use a transducer to build a reducing function, so that's why transducers must support the init arity.
(transduce identity
           (bad-xform +)
           [1 2])
;; throws Exception

(def useless-xform
  (fn [rf]
    (fn
      ([] (rf))
      ([result] (rf result))
      ([result input]
       (rf result input)))))

(transduce identity
           (useless-xform +)
           [1 2])
;; 3


Thank you for the input everyone. I now understand the answer to the question that started this thread. Also, the role of the init arity is more nuanced than I first thought. I'm going to take some hammock time on the tips in this thread before coming back with more questions. Thanks!


@U7RJTCH6J I think your snippets are the clearest illustration of the role of init. In transduce, does the xform's (the transducer's) init ever get called, or is the init arity really only called in the case where the rf is a transducer-built reducing function itself?


xform doesn't have an init arity


transducers are functions from a reducing function (rf) to a reducing function


reducing functions have 3 arities


but yes, for some odd reason transduce invokes f's init before it transforms f via xform


Wait, that doesn't sound quite right. My understanding is that a reducing function only has 2 arities - zero and two - and completing is used to add the third arity (one) if you need to use a reducing fn in a transducer.


completing will add the missing arity if needed, but it adds a dummy noop version, if you actually need to do something during completion then you don't use completing
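A sketch of what completing adds, using the standard clojure.core/completing:

```clojure
;; completing wraps a 0/2-arity reducing function with an identity
;; completion arity so it can be used in a transducing context
(let [rf (completing conj)]
  [(rf)         ;; defers to (conj) for init
   (rf [1 2])   ;; the added no-op completion just returns its argument
   (rf [1] 2)]) ;; the step defers to conj
;; => [[] [1 2] [1 2]]

;; it also accepts an explicit completion function when you do need one
((completing + str) 5)  ;; => "5"
```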


> but yes, for some odd reason transduce invokes f's init before it transforms f via xform lol this is so confusing 🤯


yea, it is a bit odd, but I don't think it makes a difference in practice.

(def useless-xform
  (fn [rf]
    (fn
      ;; init
      ([] (rf))
      ;; completion
      ([result] (rf result))
      ;; step
      ([result input]
       (rf result input)))))
While the completion and step arities can do interesting things, the init arity in transducers is exclusively (I think) just deferring to the reducing function at the bottom. It's pretty safe to just include the boilerplate and forget about it.


@U7RJTCH6J thank you for the added clarification. I'm going to presume this is basically the gist of it unless I come across anything to the contrary: > the init arity in transducers is exclusively (I think) just deferring to the reducing function at the bottom. It seems like a description that should be included somewhere in the transducers reference page, because the way it currently describes it as "rf eventually calls out to the transducing process" is very confusing language. Your snippets provide a much clearer explanation.


Thanks to everyone in this thread for helping me improve my understanding of this challenging area.