
one thing I found inconvenient is the short variable name s


s could be a parameter for string, or for the clojure.string namespace.


so oftentimes I have to think about whether it's really this s or that s.


s -> param for a string, s/SOMETHING -> reference to a namespace


really kind of impossible to mix them up, IMO


It's considered "good style" to generally use the last segment of a namespace as its alias, as long as that would be unique in your file and consistent across namespaces -- with some short aliases being "common practice". I've not seen s as an alias for clojure.string -- but I have seen str as alias for it very commonly. Where I've seen s as a common alias is for clojure.spec.alpha -- it's all over the docs.
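A sketch of that convention (the namespace name is hypothetical), showing how the alias and the core function coexist:

```clojure
(ns example.core
  (:require [clojure.string :as str]      ;; common alias for clojure.string
            [clojure.spec.alpha :as s]))  ;; s is conventionally clojure.spec.alpha

;; str the core function vs. str/ the namespace qualifier -- no ambiguity:
(str "Hello, " "World!")       ;; => "Hello, World!"
(str/join ", " ["a" "b" "c"])  ;; => "a, b, c"
(s/valid? string? "hi")        ;; => true
```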


But, yeah, you can't confuse foo for a symbol with foo/ for a namespace qualifier really...


The REPL guide uses str for clojure.string.

Alex Miller (Clojure team)03:04:00

s as an alias can only be used in the qualifier position of a keyword or symbol - there is no place in the language where it can be ambiguous with an unqualified local symbol or var name

Alex Miller (Clojure team)03:04:20

these are independent naming domains


I can see how beginners might find (str "Hello, " "World!") and (str/join ["Hello, " "World!"]) confusing on first glance -- but the context is fundamental and just one of those things you get used to.


is using the function str and the string namespace as str a common practice that is not frowned upon?


It is very common and definitely not frowned on. I'm on my phone right now but I'll check my clojure books tomorrow and see how various books alias clojure.string

Alex Miller (Clojure team)05:04:42

str, s, and string are all common aliases for clojure.string

Alex Miller (Clojure team)05:04:10

I nearly always use str


Yup, I mostly use str, though sometimes I go with string. It is fine to have an alias "clash" with a function, like others have said, they don't actually shadow each other, the context of each is pretty clear str/wtv or str. But s for me right now is dedicated to Spec, and I do get confused if I see a s/... that isn't for Spec.

👍 4

Books: Clojure in Action str, Joy of Clojure str and string, Clojure Cookbook str, Clojure Programming str, Clojure Applied (no alias), Functional Programming Patterns in Scala and Clojure str, Getting Clojure (no mentions of clojure.string at all), Programming Clojure str, Web Development with Clojure string. Several of these also use the full namespace with no alias. I don't have Living Clojure. I did not search any of my Packt Clojure books (I have several). So str seems to be the most popular with authors 🙂 @i


@U04V70XH6 thanks for that. really great efforts to compile such a list.


Hey! Yeah, I can search the Packt books too if you want -- but I mostly don't think they're very good books in general so I don't pay much mind to them.

ryan echternacht04:04:00

@i As someone who started spamming clojure about a month ago, I agree that clojure has some different conventions. And then one day it clicks and you’re good to go


Hello all, I have several projects on my Firebase console, but now they're all gone... projects are all gone! Is Firebase down?

Braden Shepherdson14:04:00

Firebase, and Google Cloud more broadly, are having a major outage; ~none of the management APIs or consoles are working.

Braden Shepherdson14:04:02

and it seems to be recovered now, more or less.


Yes disaster


I have the following docker file: and I'm trying to deploy on heroku. But I get the error that clojure.main is not found.


Even when running locally using: docker build -t test . and docker run -ti test I get the same error


What am I doing wrong, and how do I fix this?


how do you cope with multiple if guards?


I feel some-> and cond are still not elegant


I find cond quite elegant. As an alternative you could try writing a macro that meets your aesthetic requirements.


@i I really enjoyed reading a blog post on a similar issue a few days ago: "attempt-all" - a monadic error-checking construct


What do you mean multiple if guards?


@U0K064KQV saw your post: . very enlightening. multiple if guards means several error checks so the function early returns.


Oh thanks 🙂


I see what you mean now. That's an interesting ask; I know I've seen others ask for things in that vein before. I think, though, that Clojure trying not to fight Java too much, and instead embracing it, is why you don't have more of those.


For example, you have some-> and some->> to guard against nil. You can see those a bit like the safe navigation operator some languages have, which is sometimes written ?.


That one is really useful when working with Java objects actually, but it can be used in Clojure too, if you'd have a function that doesn't take nil and another that can return it.
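A small sketch of that nil guard, with a hypothetical lookup function that may return nil:

```clojure
(require '[clojure.string :as str])

(defn lookup-user
  "Hypothetical: returns nil for unknown ids."
  [id]
  (get {1 {:name "Rich"}} id))

;; each step runs only if the previous result was non-nil
(some-> (lookup-user 1) :name str/upper-case)  ;; => "RICH"
(some-> (lookup-user 2) :name str/upper-case)  ;; => nil, short-circuits safely
```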


So I'm guessing here though, you're asking where are the similar "safe navigators" for errors?


I think the answer here is that by default, Clojure models errors using Java's exceptions, and those are always short-circuiting, no need for any special operator for that.


But, I know some people like to model their errors as values, maybe returning :error, or something else cooked up that is meant as an error. The thing is, since those types of error values are all custom, you can't really build a generic safe navigator for them


ah... you raise the point that clojure throws exceptions instead of returning error values.


Is “Clojure models errors using Java’s exceptions” a common practice?


That's where you could build your own though. And a lot of these "value error" libraries do provide such a macro already, with failjure being the most popular one:


for example, in validating the user input


Yes, I'd say it is more common to model errors by throwing exceptions


Normally you'd throw an ex-info
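A minimal sketch of that approach for input validation (the function name and the keys in the ex-info map are hypothetical):

```clojure
(defn parse-age
  "Hypothetical validator: returns a number or throws ex-info on bad input."
  [s]
  (let [n (try (Long/parseLong s)
               (catch NumberFormatException _ nil))]
    (if (and n (<= 0 n 150))
      n
      ;; ex-info attaches a data map that the catch site can inspect
      (throw (ex-info "Invalid age" {:type :validation :input s})))))

(try
  (parse-age "abc")
  (catch clojure.lang.ExceptionInfo e
    (ex-data e)))
;; => {:type :validation, :input "abc"}
```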


thanks. then i’d go with the exception routine.


Personally, I think it's the smoothest route. The problem is, whenever you don't go with them, you hit a point where they still leak out into your code, because underneath there is Java or JS, and both throw exceptions


If you're going to go the other way, I do recommend failjure, it does provide nice utilities to deal with the leaking exceptions from Java and JS, and other nice ones for working with value errors. But still, I'd say best to just accept exceptions and work with them


Hum... though you gave me an idea 😛


(defmacro unless->
  [pred & [expr & forms]]
  (let [g (gensym)
        steps (map (fn [step] `(if (~pred ~g) nil (-> ~g ~step)))
                   forms)]
    `(let [~g ~expr
           ~@(interleave (repeat g) (butlast steps))]
       ~(if (empty? steps)
          g
          (last steps)))))


(defn might-error [e]
  (if (rand-nth [true false])
    :error
    e))

(unless-> #{:error} 10 inc might-error inc)


So now you can give a predicate, and between each invocation, it will check the result against it, and if it is true, it will short-circuit and return nil.


Kind of a more generic version of some->:

(defn returns-nil [_] nil)
(some-> 10 inc returns-nil inc)
;; => nil
(unless-> nil? 10 inc returns-nil inc)
;; => nil


This one might be more useful:

(defmacro until->
  [pred & [expr & forms]]
  (let [g (gensym)
        steps (map (fn [step] `(if (~pred ~g) ~g (-> ~g ~step)))
                   forms)]
    `(let [~g ~expr
           ~@(interleave (repeat g) (butlast steps))]
       ~(if (empty? steps)
          g
          (last steps)))))


It returns the last value instead of nil, so with the above you get:

(defn might-error [e]
  (if (rand-nth [true false])
    :error
    e))

(until-> #{:error} 10 inc might-error inc)
;; => :error (when might-error fires)

Jakub Holý (HolyJak)15:04:46

Hello! I get the feared StackOverflowError with a stack trace that only contains some high-level app code and "Caused by" has just a useless repetition of

               [clojure.lang.LazySeq sval "" 42]
               [clojure.lang.LazySeq seq "" 51]
               [clojure.lang.RT seq "" 535]
               [clojure.core$seq__5402 invokeStatic "core.clj" 137]
               [clojure.core$concat$fn__5493 invoke "core.clj" 725]]
Can I do something somewhere to not get these clojure.* frames and get rather my app frames that called core? Now I have no idea where in the app this happened...

Alex Miller (Clojure team)15:04:47

do you really have no frames that are outside core? sometimes you might need to actually look at the exception stack directly vs using pst or the data representation (.printStackTrace *e)

Alex Miller (Clojure team)15:04:28

the big clue above though is "concat" - that probably narrows it down quite a bit

Jakub Holý (HolyJak)15:04:24

Thank you for the tip, I will try. I use concat at a few places so I could wrap them with try-catch and retry...

Alex Miller (Clojure team)15:04:49

the most common way concat gets into something like this is if you're doing some kind of recursive concat - that creates a stack of suspended "firsts" which will blow up on first realization

Alex Miller (Clojure team)15:04:19

so look at whether you're doing concats in a loop/recur or something equivalent (and stop doing that :)
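A sketch of that failure mode and the eager fix (function names hypothetical):

```clojure
;; each iteration wraps the previous lazy seq in another concat;
;; realizing the head then has to unwind all the suspended frames
(defn build-bad [n]
  (loop [acc () i 0]
    (if (= i n) acc (recur (concat acc [i]) (inc i)))))
;; (first (build-bad 100000)) ;; => StackOverflowError

;; into is eager, so no stack of pending concats builds up
(defn build-good [n]
  (loop [acc [] i 0]
    (if (= i n) acc (recur (into acc [i]) (inc i)))))
(count (build-good 100000)) ;; => 100000
```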

Jakub Holý (HolyJak)15:04:57

There is no obvious place where I would be doing that so I must search more... I have looked at the stack trace but there only seems to be core:

(first (seq (.getStackTrace *e)))
=> [clojure.lang.LazySeq seq "" 51]
(last (seq (.getStackTrace *e)))
=> [clojure.core$concat$fn__5493 invoke "core.clj" 725]


Also note that poring over stack traces is a good reason to avoid using the source file name core.clj for your own code.

Jakub Holý (HolyJak)15:04:23

Hm, I have replaced two concats with into and now it works. Thank you!

Alex Miller (Clojure team)15:04:19

well it would certainly be good to understand why that helped


Can someone help me understand this docstring at ?


(utilization-executor utilization)
(utilization-executor utilization max-threads)
(utilization-executor utilization max-threads options)

Returns an executor which sizes the thread pool according to target utilization, within
`[0,1]`, up to `max-threads`.  The `queue-length` for this executor is always `0`, and by
default has an unbounded number of threads.

It sounds like it's just like a fixed-thread-executor where N = max-threads * utilization. But I really doubt that, because then there would be no reason for a different implementation.


Ah, I think I understand. It allows max-threads but it will try to keep the pool size just so that the utilization is at the preferred level.


it's probably easier to just understand the source - after chasing down var imports and thin abstractions you get this


Yeah, I got to that part. I just blanked on the concept of utilization in this context.

Johnny Hauser18:04:27

I am greatly confused by the completion aspect of transducers. It makes sense when transducing synchronous collections, but not collections which produce their member values over time. You put such a value into the transduce function and immediately get the result back, just like synchronous collections, but the input collection doesn't complete until later (if ever). That makes the semantics described by Rich impossible for such a collection, right? Because the completion function would have to be called immediately, and the return value of it should be the return value of the call to transduce, but that is before all the step functions would have been called, since those are happening later. On the other hand, calling the completion function when the input process actually completes seems favorable for those types, but that breaks compatibility among different collections. Am I missing something?


> You put such a value into the transduce function and immediately get the result back

you don’t always get a result back immediately. if it can’t produce a value, the transducer will hold state until the next value

💯 4

exactly, see for example partition-all

Johnny Hauser18:04:37

Hmm. Let me try to digest this for a few... It doesn't seem like we're talking about the same things.


(defn partition-all
  "Returns a lazy sequence of lists like partition, but may include
  partitions with fewer than n items at the end.  Returns a stateful
  transducer when no collection is provided."
  {:added "1.2"
   :static true}
  ([^long n]
   (fn [rf]
     (let [a (java.util.ArrayList. n)]
       (fn
         ([] (rf))
         ([result]
          (let [result (if (.isEmpty a)
                         result
                         (let [v (vec (.toArray a))]
                           ;;clear first!
                           (.clear a)
                           (unreduced (rf result v))))]
            (rf result)))
         ([result input]
          (.add a input)
          (if (= n (.size a))
            (let [v (vec (.toArray a))]
              (.clear a)
              (rf result v))
            result))))))
  ... other arities)

Johnny Hauser18:04:16

Yeah, I'm pretty lost. To try to say it again, in case it helps, I put in a collection and immediately get back a transformed collection, but neither input nor output collection have any values yet, because they come to exist later on. It's just that the transformed collection that was returned will have transformed values at those times, in contrast to the input collection.

Johnny Hauser18:04:50

Which means that I was able to produce a transduced collection immediately, but the iteration and so step functions happen after that


now, I’m not sure I follow. do you have a small code example?

Johnny Hauser18:04:45

Unfortunately, I'm from JavaScript, so I couldn't offer any clojure 😕

Johnny Hauser18:04:28

RxClojure is probably a good point of reference

Johnny Hauser18:04:23

If you were to transduce an Rx Observable, you would put an observable in and get an observable back immediately.


one difference between transducers and similar ideas elsewhere (like in observable) is that the transducer (or step function) is just the step function and isn’t coupled to an observable, collection, or anything else

Johnny Hauser18:04:52

Rx for JS used to have a transduce function, which would take an observable and a transducer/pipeline and return an observable that, on each value from the input observable, would run it through the step function and if it reaches the end, it goes into the resulting observable. Then when the input observable completes, it calls the transducer completion function.


in Rx, the map they provide is coupled to an observable

Johnny Hauser18:04:25

Yes... fwiw, I have written a whole transducer lib. I'm not that much a noob, I hope 🙂

Johnny Hauser18:04:50

I particularly don't want operators specific to my frp lib - which is why I wrote the transducer lib.

Johnny Hauser18:04:37

A lot of stuff works.


yea, hopefully I’m not just telling you a bunch of stuff you already know 😬. I’m just trying to understand the crux of the question.

Johnny Hauser18:04:37

I am just confused in that the protocol seems to believe that the return value and the completion of the input are one and the same.


in the clj transducer protocol? or the Rx protocol?

Johnny Hauser18:04:21

transducer protocol

Johnny Hauser18:04:01

it builds up the result, then passes that result into the transducer result fn, and returns the value from it

Johnny Hauser18:04:08

exactly as Rich said in the talk about transducers

Johnny Hauser18:04:24

His slide said the transduction has to return the result from the completion function

Johnny Hauser18:04:46

But that means that the return value from the completion function and the return value from the transduce call have to be the same, and that's impossible for observables.

Johnny Hauser18:04:11

You get the resulting observable immediately/synchronously, and the values/steps/iteration come later

Johnny Hauser18:04:15

and so completion is later


per the docs:
> The inner function is defined with 3 arities used for different purposes:
> Init (arity 0) - should call the init arity on the nested transform rf, which will eventually call out to the transducing process.
> Step (arity 2) - this is a standard reduction function but it is expected to call the rf step arity 0 or more times as appropriate in the transducer. For example, filter will choose (based on the predicate) whether to call rf or not. map will always call it exactly once. cat may call it many times depending on the inputs.
> Completion (arity 1) - some processes will not end, but for those that do (like transduce), the completion arity is used to produce a final value and/or flush state. This arity must call the rf completion arity exactly once.

the completion is separate from the step. there is a transduce function that will do a bunch of this stuff for you by calling the step function for each value of the supplied collection and then calling the completion function
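Those three arities, written out by hand for a map-like transducer (a sketch -- the name my-map is made up, but the shape is what clojure.core's transducer arities look like):

```clojure
(defn my-map [f]
  (fn [rf]
    (fn
      ([] (rf))                                 ;; init: delegate to rf
      ([result] (rf result))                    ;; completion: call rf's completion once
      ([result input] (rf result (f input)))))) ;; step: call rf exactly once per input

(transduce (my-map inc) conj [] [1 2 3]) ;; => [2 3 4]
```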


you don’t have to use transduce to use your transducer, if that makes sense


those arities in the docs refer to a reduction function, also known as a stepped process

Johnny Hauser18:04:50

That doesn't seem to mention this thing I'm referring to.


transducers are about altering or augmenting a process


transducers do not see values from the collection


transducers are called with processes


and return processes


it is the process that sees the input / collection


my point is that you can use a transducer without ever calling its completion function if a completion function doesn’t make sense for your use case


my point is being precise about terminology


transducers don't have completing functions


it's the processes that transducers return that have the completing processes

Johnny Hauser18:04:15

weelll, it's a losing battle, because that slide is directly from Rich and muddies it all up


that slide is a bit muddled, IMHO

Johnny Hauser18:04:38

It's the "must return what it returns" thing that I'm referring to

Johnny Hauser18:04:55

He also specifically mentioned that transducers may want to modify the final accumulated value


so that last slide should read:


last bullet, rather:


the completion operation of the updated process returned by a transducer...


...must call the original process's completion op


transducers are process -> process

Johnny Hauser19:04:15

So we agree that returning the result from the completion function as the result of the transduction is not necessary?


give an example?


it depends on the transducer, but I don’t think the completion function ever has to be called if your input never completes

Johnny Hauser19:04:45

I had linked to the js transducer lib above, where it passed the accumulated value into the completion function and returns that


completion must be called because processes can abort early

Johnny Hauser19:04:11

return xf["@@transducer/result"](acc);

// vs.

return acc

Johnny Hauser19:04:27

I don't mean whether it's called - I mean whether it's returned.


consider the take transducer


that’s for arrayReduce. presumably, it’s a finite array and calling the completion function does make sense

Johnny Hauser19:04:11

I have the take transducer and it works fine for my arrays and observables


even if the input collection is infinite, a reducing function transformed by the take transducer will abort

Johnny Hauser19:04:26

heh, I still feel like my point is missed


and when it aborts, must call the completion op


to flush any nested processes

Johnny Hauser19:04:02

Of course we should call the completion function, I'm asking whether the return value of the completion function is supposed to be the return value of the transduction

Johnny Hauser19:04:07

Rich said it should be


yes, that is correct

Johnny Hauser19:04:58

But that's the paradox


some reduction contexts do not reveal the completed value though


like channels

Johnny Hauser19:04:30

You have an input observable, you get back an immediate observable, but there hasn't even been any step functions yet, and those will happen later and the completion even later. You can't have both.


the completion function doesn’t return the full result, it emits any accumulated values that might need to be flushed


so values can have been emitted long before a completion function is called, or even if the completion function is never called


do you mind restating the question here?

Johnny Hauser19:04:06

const transduce = transducer => etc => input => {
  const result = reduceSomehow(transducer.step, input, etc)
  return transducer.complete(result) // this!
}
Johnny Hauser19:04:17

for the sake of demonstration - obviously not real and coherent code

Johnny Hauser19:04:54

You get the result immediately, and return it immediately, but Rich says you're supposed to pass it through the completion function and return that

Johnny Hauser19:04:05

but all the code there is synchronous

Johnny Hauser19:04:31

inside that reduction is where the setup is done which will hook up future values to the transformation pipeline


that code above is not a good representation of how transduce works


transducers don't have steps


processes have steps

Johnny Hauser19:04:30

I was trying to get to the meat of it without dealing with that stuff


(defn transduce
  [xf f init coll]
  (let [tricked-out-process (xf f)]
    (-> (reduce tricked-out-process init coll)
        (tricked-out-process)))) ;; <- completion step
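Concretely, with a stateful transducer you can see that final completion call doing a flush:

```clojure
;; (partition-all 2) buffers one trailing element; the completion
;; step at the end of transduce is what flushes [3] into the result
(transduce (partition-all 2) conj [] [1 2 3]) ;; => [[1 2] [3]]
```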

Johnny Hauser19:04:04

yes, you need a builder, and you pass it into the transducer to get back what I have usually heard referred to as the transformer, which has the step, result, init properties


transducer is xf


process is f


here’s an example that is maybe more concrete that we can fix:

(def my-transducer (map (fn [x] (inc x))))

(def my-atom (atom []))
(defn add-to-atom! [atm val]
  (swap! atm conj val))

(def process (my-transducer add-to-atom!))

(process my-atom 1)
@my-atom ;; [2]

(process my-atom 2)
@my-atom ;; [2 3]


so you can see that every time we call process, the steps are emitting values


....keep going

Johnny Hauser19:04:53

You know what I just noticed. Rich said a process may want to do a final transformation of the value being built up


keep in mind add-to-atom! is missing the completion arity


here’s another example:

(def my-transducer (comp (map (fn [x] (inc x)))
                         (partition-all 2)))

(def my-atom (atom []))
(defn add-to-atom! [atm val]
  (swap! atm conj val))

(def process (my-transducer add-to-atom!))

(process my-atom 1)
(process my-atom 2)
(process my-atom 1)
(process my-atom 2)
(process my-atom 1)

;; notice that there is a 1 missing
@my-atom ;;[[2 3] [2 3]]

Johnny Hauser19:04:39

What does he mean by a process?

Johnny Hauser19:04:12

I understand your examples, btw


process is something with: a beginning (arity-0), step (arity 2) called many times, completion (arity 1)

Johnny Hauser19:04:35

ah yeah, so back in the same boat


sometimes loose usage of process refers to the step (arity 2)

Johnny Hauser19:04:51

In other words, it's the thing that takes the next thing and returns a thing


in your example add-to-atom! needs to return the atom for the next step


you're manually calling it with the same atom every time


but you couldn't feed that function into transduce/reduce and have the intended behavior

Johnny Hauser19:04:47

Then, in your example, the issue is that any process that wants to modify the final accumulated value can't do it


the process's arity-1 is exactly when it can complete the accumulated value


look at into:


like this?

(def my-transducer (comp (map (fn [x] (inc x)))
                         (partition-all 2)))

(def my-atom (atom []))
(defn add-to-atom! [atm val]
  (swap! atm conj val)
  atm)

(def process (my-transducer add-to-atom!))

(-> my-atom
    (process 1)
    (process 2)
    (process 1)
    (process 2)
    (process 1))

@my-atom ;;[[2 3] [2 3]]

Johnny Hauser19:04:38

Is this example not making sense?

Johnny Hauser19:04:42

return xf["@@transducer/result"](acc);
// vs.
return acc


yes @U7RJTCH6J that is more correct

👍 4

I can’t connect how the js example from arrayReduce applies to your question


the "into" process does: make a mutable collection (arity-0), add to the collection (the step), make the mutable collection persistent (completion / arity-1)

Johnny Hauser19:04:35

In the first case, the accumulated value is passed to the completion fn and the result from the completion fn is returned, and in the second case, the completion fn is called, and it receives the accumulated value, but it's return value goes out into the void. Nothing cares. The accumulated value is just returned, independent of the completion fn.


(transduce
  identity ;;  <-- don't modify the process!
  (fn
    ([] (transient []))
    ([tc] (persistent! tc))
    ([tc v] (conj! tc v)))
  coll)

Johnny Hauser19:04:19

Rich said the first one, which that array reduce does, it what is supposed to be done

Johnny Hauser19:04:22

You know, I've got an idea of a solution


what is the first case vs. second case?

Johnny Hauser19:04:42

heh, fun times


here’s the example above using the completion function:

(def my-transducer (comp (map (fn [x] (inc x)))
                         (partition-all 2)))

(def my-atom (atom []))
(defn add-to-atom!
  ([atm] atm)        ;; completion arity
  ([atm val]
   (swap! atm conj val)
   atm))

(def process (my-transducer add-to-atom!))

(-> my-atom
    (process 1)
    (process 2)
    (process 1)
    (process 2)
    (process 1)
    (process))       ;; completion call flushes the buffered [2]

@my-atom ;; [[2 3] [2 3] [2]]

Johnny Hauser19:04:10

perhaps when he said "a process may want to do a final transformation of the value being built up", there is the rule that it is a mutation of that value, and not a replacement of it. If so, then my concern is irrelevant.

Johnny Hauser19:04:24

Imagine a completion fn that, no matter what, just returns the number 7. You could put any array through that process, and you're just going to get back a 7 with that transducer lib I linked, because it returns the result of passing the accumulated value into the completion fn.

Johnny Hauser19:04:42

And so it should be true that no matter what you're operating on, you should just get back a 7.

Johnny Hauser19:04:22

But that would only be true if you called the completion fn immediately/synchronously, which is only relevant to some collections. Which means something is whacked.

Johnny Hauser19:04:11

Given a collection that is an observable, you would get back an observable, but given a collection that is an array, you would get back a 7? Bananas!


there are different types of transducing contexts, and not all of them even expose the completed value


core.async channels vs reduce/transduce -- totally different


the general idea of transducers being "process transformation" is the key


in a channel the "process/step" is adding to the channel

Johnny Hauser19:04:21

Well, maybe I'll settle this in my mind from another angle. I would like to generalize some common operations, for example, last


in reduce/transduce, the process is the reducing function that the user passed

Johnny Hauser19:04:49

I am surely coming across a lot dumber than I hopefully am


last doesn’t make sense for an infinite collection. right?

Johnny Hauser19:04:32

It would be dumb to do it haha

Johnny Hauser19:04:45

An unresolved promise would be a comparable idea


again transducers and processes have nothing to do with the input


I guess the last value of an infinite number of 1s might make sense, but I’m not sure that’s what you’re talking about

Johnny Hauser19:04:06

I really, really do understand transducers

Johnny Hauser19:04:38

It'd be amazing if I wrote a transducer lib and am transducing all kinds of things and don't actually understand transducers lol


you could have a reducing process that threw away all prev values except the immediate one

Johnny Hauser19:04:45

and you could return a promise of the one that it is at the completion of the input collection


i hope we’re not coming off as patronizing. I’m really just trying to help and understand the question. I feel kinda dumb for not understanding the question exactly


as @U050ECB92 said, I think most-recent-value makes more sense than last for an infinite collection, but maybe that’s just a naming thing

Johnny Hauser19:04:37

What you need for totally collection generic last is just that collections provide a reduce fn (so you can step it), a thing to keep holding the latest value of the step, and access to the value in a way that accords with the semantics of the collection, which would just be synchronous or async access to it, meaning, you could get it down to either a value now or a promise of a value.


you can do that

Johnny Hauser19:04:45

Yeah, I have done it, but some confusing stuff with the transducer protocol is preventing me from really moving forward with that stuff.

Johnny Hauser19:04:21

I really suspect the rule I'm looking for would mean that that array reduce I had linked could have (and should have) passed the accumulated value into the completion function and returned it after that, rather than returning the result of the completion function.

Johnny Hauser19:04:15

because that completion function should just be mutating that value, if anything


imho transducers.ITransformer in the transducers-js library is a really bad name

Johnny Hauser19:04:42

never, for example, saying "well, I've decided the result of this operation is going to be a 7, instead of the array I was given"


IProcess would have been better


I think those semantics make sense for arrayReduce, but another consumer of the transducer can use the transducer in a different context with different rules

Johnny Hauser19:04:39

Another way of asking my original question would be whether the return value of transduce has to be the same thing that is (thisResultRightHere, value) => in the step/reducer.

Johnny Hauser19:04:56

Do you have an example of a different context?

Johnny Hauser19:04:08

I'd really like that not to be the case... seems like a bad idea.


the return value of transduce has to be the result of calling complete on the accumulator

Johnny Hauser19:04:46

sigh... but that's once again, impossible for some collections


the two main contexts for transducers are: 1. concrete collections 2. core.async channels


right, arrayReduce is a particular transducing context


in Clojure, arrays know how to be reduced, and transduce is built on top of that


but they’re really flexible, so you can use the same transducers (map, filter, partition-all) in other contexts (eg. observables)

Johnny Hauser19:04:33

by 'context', do you mean having a different transduce fn altogether?

Johnny Hauser19:04:53

That seems to be the common thing to do, but so far, I don't agree with it.


yeah, things like Iterators have a "stepping process" that can be transformed


same with channels


I suspect observables too, but haven't considered them

Johnny Hauser19:04:05

In either case, do you not put one in and immediately get one back?


put what in?

Johnny Hauser19:04:31

input a channel, output a channel

Johnny Hauser19:04:47

no return value?


channels have values flowing through them


a transducer on a channel changes the flow


like (partition-all 500) would batch things

Johnny Hauser19:04:43

Ofc, just like with observables


you put in a value, but no one sees it on the other side until 500 values are put in


channels do not expose the completion value

Johnny Hauser20:04:18

Can you show me like a two-liner code sample for this, because that doesn't make any sense... that you don't get a return value


which is very different than the collection transducing context


(def my-transducer (comp (map (fn [x] (inc x)))
                         (partition-all 2)))
(def my-atom (atom []))
(defn add-to-atom! [atm val]
  (swap! atm conj val))
(def process (my-transducer add-to-atom!))
(process my-atom 1)
(process my-atom 2)
(process my-atom 1)
(process my-atom 2)
(process my-atom 1)
;; notice that there is a 1 missing
@my-atom ;;[[2 3] [2 3]]


notice the missing 1

Johnny Hauser20:04:15

channelA = makeAChannel()
channelB = transform(channelA) // no??


(sequence (halt-when even? (fn [ret i] {:ret ret :i i})) [1 3 5 2])


not channels, but a one-liner ^

Johnny Hauser20:04:43

dudes, I really have no problem understanding abstract special collection semantics 🙂


(you keep bringing it up, but no one is making any judgements on your capacity)


(sequence xf coll) <-- returns a lazy seq subject to a transducer
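As a quick illustration of the sequence context (transducer and batch size chosen arbitrarily):

```clojure
;; sequence applies the transducer lazily over a source collection
(def xf (comp (map inc) (partition-all 2)))

(sequence xf [1 2 3 4 5])
;;=> ([2 3] [4 5] [6])   ; the completion step flushes the trailing [6]
```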

Johnny Hauser20:04:55

We just keep getting back to examples talking about how the transduction is independent of the collection semantics, but sort of naturally affects them in cool ways. I get that.

Johnny Hauser20:04:02

I thought we were almost to a good thought, but we got sidetracked somehow


notice halt-when is a transducer that allows you to barge in a different completed value


but the sequence transducing context never shows the completed value
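By contrast, the transduce context does return the completed value, so the same halt-when one-liner shows the barged-in result (here with conj and [] supplied as the reducing fn and init):

```clojure
;; transduce exposes the completed value, so halt-when's return is visible
(transduce (halt-when even? (fn [ret i] {:ret ret :i i}))
           conj [] [1 3 5 2])
;;=> {:ret [1 3 5], :i 2}
```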


I believe you are conflating source collections with transducible contexts

Johnny Hauser20:04:56

strong possibility that is true

Johnny Hauser20:04:15

What's the difference?


- transduce (the function in Clojure) is a context
- async channels (another context)
- sequence (another context)


> input a channel, output a channel
within the context of core.async channels, the channel is still returned, but it’s not a new or different channel (it’s the exact same channel). the important part is the side effect of stepping with the channel, which may have had 0 or more values put onto it

Johnny Hauser20:04:39

That seems like equivocation


transducible contexts just means a place where a process is exposed to a user's transformation


channel's process is the "put this thing in the channel" operation


sequence's process is the act of pulling from a source


transduce's process is directly supplied by the user


not every step with a channel will put a value on the channel though (eg. if your transducer is filtering as part of its step).

Johnny Hauser20:04:40

Let's say you have a channel, you can transduce it to put a bunch of stuff into it, and then transduce it again afterward and put more stuff in?

👍 4

sequence & transduce contexts allow a user to supply the source collection


channel contexts do not allow the user to supply the source


yes, you can use a transducer with a channel, put some stuff on it, then wait, then put more stuff on it later. however, you won’t use the transduce function to do this


your sentence doesn't parse to me


someone produces values into a channel, and a consumer sees values subject to transformation


so I wouldn’t say you were “transducing” the channel, but hopefully the idea makes sense

Johnny Hauser20:04:29

Yeah, that's like an observable, except mutable 😕


I mean, channels are synchronization tools

Johnny Hauser20:04:16

It's an observer/emitter kind of thing, right?


"mutable" doesn't really capture it -- certainly some of the transformations are stateful


take or partition being the classic stateful transducers


(channels are like channels in golang)


anyways... I gotta run but let me recap:

Johnny Hauser20:04:26

What I mean is, this is how I would do that sort of thing:

const a = Event.create()
// this is the impure/nasty thing -
// putting values 'in', but this is the only nasty thing
const b = T.transduce( (etc), T.filter (etc)) (a) // ah, delicious purity

Johnny Hauser20:04:07

occur being like pushing/mutating an array, sure, but that's only at those source events. b is exactly what it says it is and only what it says it is.

Johnny Hauser20:04:27

It would be horror nightmare code if you could do occur on b, in that example


there exist different transducible contexts, which can act slightly differently, but they all have a "step"
processes are things with an init, step, complete
transducers: process -> process
Not all transducible contexts expose the value returned from the 0/1 arities


all of them will call the arity-1


but might not reveal that return value (sequence & channels do not)

Johnny Hauser20:04:12

Thanks for chatting! @U050ECB92

ghadi20:04:25

^ a transducible context for Iteration (not directly exposed in Clojure)


fundamentally, how would partition-all (for example) do anything that partition doesn't if there's no "completion" (that is, if the source never ends)


if a source of data can stop, then a completion is relevant, if it can't, a completion can't be relevant
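A tiny illustration of why completion matters for partition-all: the into context completes the transducer, while the lazy partition function has no completion step at all:

```clojure
(into [] (partition-all 2) [1 2 3])
;;=> [[1 2] [3]]    ; completion flushes the trailing partial group

(partition 2 [1 2 3])
;;=> ((1 2))        ; the seq fn simply drops the trailing 3
```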

Ben Grabow20:04:15

I'm not sure this is accurate. Consider Rich's (take-while not-ticking?) example from his talk. Even if you have an infinite stream of bags coming to the baggage handler, the handler can still stop the loading process if it encounters a ticking bag.

Ben Grabow20:04:14

Completion is relevant if either the source of data is finite, or if the transducing process decides the process should terminate early.


thanks - that's true, apologies for overgeneralizing


also, pedantically, transducers are not tied to collections - that's a big part of the point


They aren't? I don't understand. I thought that transducers had to operate over collections. Am I missing something?


transducers operate on a data context, see for example core.async


thanks to transducers, we were able to deprecate async/map, async/filter etc. which were otherwise re-implementing the basic collection operations, now we can reuse the transduce version of those on channels


a channel can take a transducer, if provided the transducer manages the data on that channel, no need for a collection conversion in between
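For example (a sketch; the buffer size is arbitrary), instead of the deprecated async filtering ops you hand the channel the ordinary filter transducer:

```clojure
(require '[clojure.core.async :as a])

(def evens (a/chan 10 (filter even?)))
(a/>!! evens 1)   ; filtered out, never lands on the channel
(a/>!! evens 2)
(a/<!! evens)     ;;=> 2
```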


We have an error reporting mechanism which can be used from multiple threads and which writes error details in JSON format to a file. We got some bug reports from customers suggesting that the JSON data stored in the error file are corrupted. My colleague analyzed the issue and came up with this fix to prevent multiple threads overwriting their data:

(ns concurrency
  (:require [clojure.java.io :as io]
            [cheshire.core :as json]
            [taoensso.timbre :as log]))

(let [lk (Object.)]
  (defn- known-errors-from
    "Read known errors from file - call only with lock taken!"
    [destination]
    (when (.exists (io/as-file destination))
      (with-open [r (io/reader destination)]
        (doall (json/parse-stream r keyword)))))
  (defn- write-errors
    "Write errors to file - call only with lock taken!"
    [errors destination]
    (with-open [w (io/writer destination)]
      (json/generate-stream errors w)))
  (defn- with-lock-persist-to-file
    "Persisting errors to file must be thread safe.
     Use a local lock and make sure everything is:
     - realized before taking the lock,
     - persisted when releasing the lock"
    [path-fn log-entry]
    (let [destination (path-fn "error.json")]
      (locking lk
        (let [known-errors (known-errors-from destination)
              updated-errors (conj known-errors log-entry)]
          (write-errors updated-errors destination))))))

(defn persist-to-file
  [{:keys [path-fn] :as _context}
   {:keys [category description] :as details}]
  (try
    (with-lock-persist-to-file path-fn {:category category :error (:error details)})
    (catch Throwable t
      (log/error t "Failed to persist the reported error: " description))))
I instinctively didn't like the (let [lk (Object.)] part; the lock is then used via locking. I didn't find a better way to do it, though; my first thought was to at least lock something more meaningful, like the destination file, but for that to work, strings representing the same destination path would have to be the same object (which would require "interning", I think).

Notice it's not possible to simply append to the error file (I think) because the format is JSON, so you basically need to "insert" the new element at the proper position (although I believe in this case it's really just inserting it before the final ] marking the end of the JSON array).

So my question is: does the solution look reasonable, or do you know a better way to do it?

Alex Miller (Clojure team)19:04:41

Some people use an agent to write logs (as agents already force single queue)

💡 4
Alex Miller (Clojure team)19:04:56

The locking / Object. thing is common in Java, no inherent problem with that (I usually put it in a private defonce though rather than use the big let)


Great, thanks! Yeah, I'd define it like ordinary def I think. Agent-based solution looks interesting - I also thought about somehow using a queue for that but it felt a bit too complicated at a first sight.

Alex Miller (Clojure team)19:04:19

Yeah, agents have that all built in
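A minimal sketch of the agent approach Alex mentions, assuming the same io/json aliases as in the snippet above (persist-via-agent! is a hypothetical name, not from the discussion):

```clojure
;; sketch: an agent serializes the read-modify-write without an explicit lock;
;; send-off is used because the action does blocking IO
(def error-writer (agent nil))

(defn persist-via-agent! [destination log-entry]
  (send-off error-writer
            (fn [_]
              (let [known (when (.exists (io/as-file destination))
                            (with-open [r (io/reader destination)]
                              (doall (json/parse-stream r keyword))))]
                (with-open [w (io/writer destination)]
                  (json/generate-stream (conj (vec known) log-entry) w))))))
```

Since an agent processes its queue of actions one at a time, all file writes are serialized without any thread ever blocking on a lock.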

👍 4

If you're using something like logback, you can define remote agents for ingestion (fluentd etc) or use logback's own configuration to write to a file, setup logrotate etc)


Another minor thing - why the time macro casts to double?

(prn (str "Elapsed time: " (/ (double (- (. System (nanoTime)) start#)) 1000000.0) " msecs"))
It looks to me as if it would be perfectly fine without it (the same Double as a result of the division by 1000000.0):
(prn (str "Elapsed time: " (/ (- (. System (nanoTime)) start#) 1000000.0) " msecs"))


I think the precision of double is greater than the precision of OS timing reports, so there's nothing to lose in the cast?


I mean, both these expressions yield a Double, so I'm wondering why one would want to cast it at all...

(let [start (System/nanoTime)]
  (type (/ (double (- (System/nanoTime) start)) 1000000.0)))
;;=> java.lang.Double

(let [start (System/nanoTime)]
  (type (/ (- (System/nanoTime) start) 1000000.0)))
;;=> java.lang.Double


there's actually a very subtle bug in this code, and it's not related to double casting


@U053XQP4S By "this" you mean the first version (`clojure.core/time`) or the second version? And do you have an idea why the cast is there?


they both have the same problem


read the doc for System/nanoTime and -


I have no idea why the cast, it looks redundant to me as well


I’m not sure what you meant by that. The docs for - look pretty harmless to me, apart from “Does not auto-promote longs, will throw on overflow”, which given the range of longs should never happen. And the System.nanoTime docs suggest the same approach for computing a duration. So the only difference I can see is that Clojure’s - throws an overflow error, while Java’s - will silently overflow.

The fact that System.nanoTime can return negative values (if the origin is in the future) is surprising, but it should still work:

(- -10 -20)
;;=> 10
(And I doubt it’s actually ever implemented like that but of course I can never know in what kind of weird environment a java program might run)


The problematic situation is when the period start is positive and the period end is negative, then - will throw. System/nanoTime explicitly allows that and relies on arithmetic overflow to stay consistent in that case. I've no idea how problematic it is in the wild but I would be pretty worried to have that kind of code in production.
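The failure mode is easy to demonstrate directly (using extreme values rather than actual nanoTime readings):

```clojure
;; Java's - silently wraps around, which nanoTime arithmetic relies on:
(unchecked-subtract Long/MIN_VALUE Long/MAX_VALUE)
;;=> 1

;; clojure.core/- throws on the same inputs:
(- Long/MIN_VALUE Long/MAX_VALUE)
;; ArithmeticException: integer overflow
```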


Ah, right - I interpreted javadoc as either "positive" (the common case) or "negative", but never both. Thanks for clarification!


Yeah, that was my instinct as well. There’s been a lot of work around coordinating writes to a file on the JVM. I would start there 😉


But it's not an ordinary log file; it's a custom JSON file which serves the purpose of presenting some errors to end users. But maybe logback offers something in that space I'm not familiar with...


It helps with coordinating writes to a file - what's in that file is a different matter, as you can configure appenders for different namespaces, with different formatters and output formats. Admittedly - I'm not a logback expert, but I've seen some impressive configs in the wild


I mean, you basically set the pattern to a no-op


and get all the benefits of a logging framework (e.g. optimized writes, possible buffering, etc)


Interesting, I haven't thought about that - I'll have a look at it later to see what can be done 🙂


looks like

would work


so you serialize to JSON before you write to the log


In this particular case at least part of the problem is that you need to read the whole file before you can append the next element


Yeah, other people get around that by parsing line-by-line (not assuming proper JSON)


hang on, searching

👍 4

Yeah, I dunno. It’s kind of an interesting question. Streaming to a file is kind of antithetical to JSON.


e.g. what happens if the proc just dies before it prints the final ]?


At any rate, feel free to ignore. You happened to hit one of my soapbox issues: Just Use Logback! 😄


It sounds like you have interesting constraints here.


(e.g. logback)


(println "!!!!"
           (total-sell-rate room-rate)
           (/ 9626 100)
           (class (money/minor-of (total-sell-rate room-rate)))
           (type (money/minor-of (total-sell-rate room-rate))))
!!!! #USD 96.26 4813/50 java.lang.Long java.lang.Long
(println (/ (money/minor-of (total-sell-rate room-rate)) 100))
CompilerException java.lang.IllegalArgumentException: Unable to resolve classname: [email protected]
money is from a clojure library on github:
(defn ^long minor-of
  "Returns the amount in minor units as a long"
  [^Money money]
  (.getAmountMinorLong money))
What am I missing here? Why do I see this exception?

Alex Miller (Clojure team)20:04:25

^long is being resolved to the long function in this location

Alex Miller (Clojure team)20:04:36

(defn minor-of
  "Returns the amount in minor units as a long"
  ^long [^Money money]
  (.getAmountMinorLong money))
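The difference is visible in the var metadata (a sketch; bad-hint and good-hint are hypothetical names):

```clojure
;; ^long before the name is *evaluated*, so the tag becomes the
;; clojure.core/long function object rather than a primitive type hint
(defn ^long bad-hint [x] x)
(fn? (:tag (meta #'bad-hint)))
;;=> true

;; on the arg vector, ^long is read as a real primitive return hint
(defn good-hint ^long [^long x] x)
```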


Not my function, this is a library function


what exactly is happening here? I don’t get it.

Alex Miller (Clojure team)20:04:10

var metadata is evaluated

Alex Miller (Clojure team)20:04:27

here, the hint is evaluating not to a long type hint, but to the long function object

Alex Miller (Clojure team)20:04:47

because the hint is in the wrong place


hmm, and it works for class and type and is able to print this. This is the part which confuses me. Why does it work at all?

Alex Miller (Clojure team)20:04:14

hints are hints, and typically ignored if not right

Alex Miller (Clojure team)20:04:50

depending how you're calling it, you may be running into a situation where you're causing it to matter (primitives invoke a different execution path)

Alex Miller (Clojure team)20:04:25

you haven't shared enough context for me to know what you're doing to actually get the error, so I'm just guessing

Alex Miller (Clojure team)20:04:33

but the root cause is that the code is bad


ok, I understand the general idea. Not exactly why the issue is with this call specifically, but I understand the point. Thank you.


I need to create a private API for my full-stack app (clj+cljs). I'd rather avoid creating a full-blown REST API as it makes no sense here. Ideally I'd like to have something taking advantage of cljc, so the backend and frontend somehow use the same code in many places... I'd like to avoid having "server" and "client" in one project. Do you have any suggestions what to use?


what's the advantage of having client and server in separate projects if they can only be used together?


of course you could have a client lib (that the server uses to get the js payload to send to the frontend)


and bundle the cljc either in the client lib, or in a third lib that both use


I don't understand the benefit offered by this split though


Oh, you misunderstood me!


I am exactly trying to avoid such split within one project.


I am looking for something like in Fulcro.


clj and cljs will both use cljc files if present on classpath when loading, there's no special setup needed


But it's not very useful in my case as I have to rely heavily on react components.


Yes, but my problem is: if I have backend and ui in one project, then in this single project I will have to create "explicit" API server and API client.


Imagine creating full-blown REST API with documentation, swagger, etc in project only to be used by itself.


so you're looking for something that abstracts the communication between the two, and also the definition of the two processes (browser and server)?


Something like that.


It feels stupid to build REST with all its HTTP shenanigans only to talk between Clojure and ClojureScript.


I haven't done cljs work in years, but back then sente worked decently for sharing clojure native data between client and server with an event sourcing communication model

👍 4

then each side effectively runs a reduce across the events offered by the other


Thanks, I will have a look.


sente could be good for simple transfer of data between the UI and backend. You could also just use Ring + Compojure for a very simple HTTP server with a few routes (and pass JSON).


@slawek098 You should have a look at It's much simpler than sente and based on what you describe sounds like it may be exactly the sort of thing you want.