
is there a way to add protocol implementations to a class instance? i.e., extend a particular java.util.Date with some protocol, rather than any java.util.Date?


similar to extend-via-metadata but for arbitrary java class instances
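
One workaround sketch (all names here are hypothetical): extend-via-metadata needs an object that can carry metadata, and a raw java.util.Date cannot, but a wrapper that holds the instance can, so the implementation travels with that one value only:

```clojure
(defprotocol Describable
  :extend-via-metadata true
  (describe [this]))

;; a raw java.util.Date can't carry metadata, but a wrapper map can,
;; so this implementation applies to this one wrapped instance only:
(def special-date
  (with-meta {:date (java.util.Date. 0)}
    {`describe (fn [m] (str "the epoch: " (:date m)))}))

(describe special-date)
;; other maps (and other Dates) remain unaffected
```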


{:cluster-id nil, :app-id nil, :message-id nil, :expiration nil, :type nil, :user-id nil, 
:headers #object[java.util.HashMap 0x1af91489 {, 
I’m getting this response in a queue message header, and I want to get the x-jwt-payload from it. I’m using cheshire.core/parse-string to parse it, but it’s failing with this error:
#error {
 :cause "class java.util.HashMap cannot be cast to class java.lang.String (java.util.HashMap and java.lang.String are in module java.base of loader 'bootstrap')"
 :via
 [{:type java.lang.ClassCastException
   :message "class java.util.HashMap cannot be cast to class java.lang.String (java.util.HashMap and java.lang.String are in module java.base of loader 'bootstrap')"
   :at [cheshire.core$parse_string invokeStatic core.clj 209]}]
 :trace
 [[cheshire.core$parse_string invokeStatic core.clj 209]


parse-string takes a string containing JSON; what you have there is already a map


How can I get x-jwt-payload then?


The problem is it’s a java hashmap?


parse-string takes a string; passing it any kind of map is an error


So what’s the alternative?


(get-in m [:headers "x-jwt-payload"])


Thanks @U2FRKM4TW but that’s not working, I have tried (.get value key) and it worked for me


Strange, Java maps support get.


Does java.util.HashMap support get?


user=> (def m (java.util.HashMap.))
#'user/m
user=> (.put m "a" "b")
nil
user=> (get m "a")
"b"
user=> (def x {:a m})
#'user/x
user=> (get-in x [:a "a"])
"b"


let me try this

Endre Bakken Stovner08:10:32

Is it possible to extend keywords and numbers in Clojure so they walk and quack like keywords/numbers but can have metadata?


So you could try an experiment like going into the clojure.lang.Keyword class defined in Java in Clojure's implementation, and adding a field to hold metadata for that, and methods to get and add metadata to a keyword, implementing the appropriate interfaces.


But that might violate an assumption of keywords that is pervasive in Clojure -- that two occurrences of the same keyword are the same identical object in memory, to enable fast equality comparisons between keywords using identical?, i.e. JVM reference / pointer equality.


If you have the same keyword appearing two places in a text input, and you want different line/column metadata on each, they cannot be the same JVM object. The keyword without the metadata can be identical, but the keyword with the line/column metadata must be different objects.

Endre Bakken Stovner08:10:54

It would be fun to play around with and see. I just want it to work for the limited use-case described earlier (when specter is navigating the data-structure)


Changing numbers to have metadata is also possible, but it might mean changing the Java implementation of most/all arithmetic operations, which are quite a few.


You would end up with your own custom modified version of Clojure that no one else in the world was using.

😂 3

and likely, a custom version of Clojure that maybe one or two other people might ever want to use.


For numbers, I suspect it would break the interaction with some third-party numeric libraries and your hacked version of Clojure.


because some of those probably assume that Clojure's numbers are the classes that it use now, e.g. JVM's java.lang.Long, java.lang.Double, java.lang.Number, and your modified version could not add metadata fields to those classes, or at least I don't know a way without modifying the implementation of the JVM itself, which sounds like something I doubt anyone but the JVM implementers would be well-prepared to do in less than a year.


(The year there is a wild guess -- maybe you know a JVM implementer who likes you and can pull it off in a weekend for beers.)

😂 3

actually, yes 😄


but mostly no


err… There is no way to put metadata on top of individual numbers/keywords, but you can have a registry that allows registering metadata on keywords/numbers. Sort of like specs — they are identified mostly by keywords, and s/def is a way to assoc a spec to a keyword. This is not quite metadata, but maybe good enough for you? what’s your use case?
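
A minimal sketch of that registry idea (names hypothetical): an atom keyed by keyword, with an s/def-like registration function:

```clojure
;; hypothetical registry: keyword -> metadata map
(defonce meta-registry (atom {}))

(defn register-meta! [kw m]
  (swap! meta-registry assoc kw m))

(defn lookup-meta [kw]
  (get @meta-registry kw))

(register-meta! :user/id {:line 3 :column 7})
(lookup-meta :user/id) ;; => {:line 3, :column 7}
```

Like spec's registry, this is one entry per keyword, which is exactly the limitation discussed below for repeated keywords.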


hmmm, so this is a sort of a visual exploration tool?

✔️ 3

is it clj only, or clj/cljs?

Endre Bakken Stovner08:10:05

Does not matter I think, because they should be identical. I can write it in whichever language is easier. Specter is a tool for navigating data structures, not code.


yeah I know


but how do you show highlights? not in the repl I assume, because there is only text?

Endre Bakken Stovner08:10:22

Yes, I can show them in emacs or a SPA. I just want to write the backend first.


maybe your specter tool could, given a specter selection pattern, transform the values at that pattern with #(tagged-literal 'selected %), and your output panel can just show that?

👏 3
clj 3
👍 3
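
A rough sketch of that tagged-literal suggestion, using clojure.walk in place of specter so it runs standalone (marking every keyword stands in for a real specter selection):

```clojure
(require '[clojure.walk :as walk])

;; wrap every selected value in a tagged literal; the printer then
;; renders it as #selected <value>, which an output panel can highlight
(defn highlight [selected? data]
  (walk/postwalk
    (fn [x] (if (selected? x) (tagged-literal 'selected x) x))
    data))

(pr-str (highlight keyword? [1 :a [2 :b]]))
;; => "[1 #selected :a [2 #selected :b]]"
```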

or it can process the resulting data structure to highlight those?


The registry idea won't work for multiple occurrences of the same keyword in the data, without changing Clojure/JVM's Java code, because all Keyword JVM objects for :foo are the same JVM object in memory.

✔️ 3
🤯 3

That is also true for some integers with small absolute value, in many JVMs, as a memory optimization for commonly occurring small numbers.


And changing the implementation of the JVM itself is a step of difficulty beyond changing the Java code that implements Clojure.


"Any place is walking distance if you have the time." is a phrase I am reminded of in situations like this. The approach of trying to read standard Clojure data structures, but add metadata with line/col info for keywords and numbers, is a long walk away.


Creating a custom type that "wraps" an object that doesn't have metadata, so that the wrapper object has the metadata, and contains the original value, seems like an approach worth thinking about. You would need your own modified reader to read text and create those objects instead of what clojure.core/read or clojure.edn/read does. You would probably want your own custom print/display functions for that, which might be straightforward with Clojure's print-method -- not sure.

👍 3

Or, I should back up and say, if you want to read a string representation of Clojure data that looks like EDN, and the result of reading it had all the line/col metadata, then you need a modified version of clojure.core/read.

Endre Bakken Stovner08:10:18

But specter can use user-defined functions to find not only numbers, but, for example, numbers greater than 5. These functions cannot understand my wrapped numbers easily, right?

Endre Bakken Stovner08:10:37

That is the downside of the wrapping approach as I see it.


I think my suggestion with wrapping using tagged literal still holds..

Endre Bakken Stovner08:10:20

@U47G49KHQ I will play around with your idea. To begin with I can create a limited version of the highlighter 🙂 But if there are multiple keywords named :a they can only have one place in the register, right? I cannot think of an easy way to tell these apart 🙂


This might be a very bad and unworkable idea, but off the top of my head it might be worth thinking of the registry idea, but instead of the registry pointing at a single line/column location for a keyword/number/whatever, it could point at an ordered list of locations.

👍 3

I don't know your use case well enough to determine if it would help you do what you want.

Endre Bakken Stovner09:10:56

That might help me get closer! Like if the result from specter is ["a", :a, "b", :a] And I have the location data of "a" and "b" I should be able to deduce the locations of the two :a s.

Endre Bakken Stovner09:10:32

But if you have a dict [{:a "bla"} {:a "blo"} {:a "bli"}] and want to get every :a in every map with an odd index, you cannot tell them apart, really. But come to think of it, you cannot tell them apart in the regular specter result either really.

Endre Bakken Stovner09:10:17

Now I understand more what you mean by transforming and tagging @U47G49KHQ! Brilliant. That is what I should try.

👍 3

@dominicm About LSP: Take a look at which is an LSP implementation for clj-kondo, written in Clojure based on lsp4j


It's used in VSCode. The gnarly interop code is heavily borrowed from clojure-lsp (I now see that's been mentioned already)


hi what is an idiomatic name for an arbitrary data structure? e.g. in the context of:

(defn as-set [data]
  (if (coll? data)
    (set data)
    #{data}))


vec calls its argument coll.


Same for set.


thanks i see - wondering now why this function as-set is necessary in the first place in this context actually


why is set not good enough? 🙂


and then there’s probably a better name depending on the context


Ah, wait - I somehow failed to notice the check for coll?. In this case, set is not good enough if data is e.g. 1.


FWIW I would name such an argument maybe-coll. But I would try to get rid of the need for as-set altogether in the first place.


i think prefixing with a question mark, like ?set instead of data, is not so idiomatic, since that is mostly used in logic programming (Datalog, core.match, or core.logic) for unbound vars

Eloy Pazos Lema09:10:24

hello guys, anyone here with experience with cider clojure?


Probably plenty of folks — you might want to try the #cider channel, though.


Hi! I have this function and I wonder if there’s a nicer way to write it. It looks something like this:

(defn determine-command [some-map]
  (or (when-let [arg ((comp :key2 :key1) some-map)]
        [#'get-using-method-1! arg])
      (when-let [arg ((comp :key4 :key3) some-map)]
        [#'get-using-method-2! arg])))
In other words: it returns a vector that says which command to use based on the structure of the input, but it also passes the thing that was extracted from the map to the output of the function. This function is easy to unit-test: I verify both the expected function to be used, as well as the expected arg. I don’t think this way of writing it is too bad, but I’m curious if there’s a better way. Thanks!


LGTM Some minor things: - A nested if-some might look a bit better - There's a difference between when-let and when-some that sometimes might be important


Ah, I would replace ((comp :key2 :key1) some-map) with just (-> some-map :key1 :key2). Or get-in.

👍 3
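
Putting both suggestions together, the function might read like this (sketch; the two get-using-method fns below are hypothetical stand-ins so the snippet runs):

```clojure
;; hypothetical stand-ins for the real commands
(defn get-using-method-1! [arg] [:method-1 arg])
(defn get-using-method-2! [arg] [:method-2 arg])

;; nested if-some + threading instead of (comp :key2 :key1);
;; if-some (unlike if-let) only falls through on nil, not on false
(defn determine-command [some-map]
  (if-some [arg (-> some-map :key1 :key2)]
    [#'get-using-method-1! arg]
    (when-some [arg (-> some-map :key3 :key4)]
      [#'get-using-method-2! arg])))

(determine-command {:key1 {:key2 42}})
```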

Thanks @U2FRKM4TW, I wasn’t aware of if-some and the difference between when-let and when-some, so that’s a very helpful reply!


core.async is great, but sometimes everything just stops because of some bug. in cases like these it would be really useful to be able to inspect the channels. how many callbacks (preferably named) are queued, etc. Any support for this?

Alex Miller (Clojure team)13:10:20

that would be my guess anyways


Got a PR that marks a constant as ^:const. I remember some issues cropping up over use of this. Is this something I should be wary of or is it pretty innocuous?

Alex Miller (Clojure team)14:10:01

depends whether it's being used correctly :)


ha. that makes sense. i just don't know where to go to determine that

Alex Miller (Clojure team)14:10:35

is it a constant compile-time value?


(def ^:const date-format
  "Standard date format for :type/Date objects"

(def ^:const datetime-format
  "Standard date/time format for any of the :type/Date variants with a Time"
  "m/d/yy HH:MM:ss")


yes it is. some date formats

Alex Miller (Clojure team)14:10:45

seems fine then. doubt it really matters though


thanks. yeah i figured the benefits are absolutely marginal and possibly evaporate with jit magic eventually but was worried about foot guns.

Alex Miller (Clojure team)14:10:00

I mean, I assume this being used in some kind of formatter object

Alex Miller (Clojure team)14:10:34

depending on whether that's an old school Java formatter or a new school java.time formatter different advice...


its for excel cells using docjure

Alex Miller (Clojure team)15:10:27

in the former case, they are not thread safe so you either need to create the formatter every time (in which case, you're using an interned string regardless so it really doesn't matter). the trick is to use a threadlocal and create it once. in the latter case, they are thread safe so you want to ensure you just make it once (which is not a const but can be done in a defonce or something)
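
For the old-school java.text case, the threadlocal trick Alex describes looks roughly like this; the java.time formatter, being thread-safe, just needs a single def (pattern strings here are placeholders):

```clojure
;; SimpleDateFormat is NOT thread safe: give each thread its own instance
(def date-format-tl
  (ThreadLocal/withInitial
    (reify java.util.function.Supplier
      (get [_] (java.text.SimpleDateFormat. "M/d/yy")))))

(defn format-date [^java.util.Date d]
  (.format ^java.text.SimpleDateFormat (.get ^ThreadLocal date-format-tl) d))

;; DateTimeFormatter IS thread safe: create once and share
(defonce datetime-formatter
  (java.time.format.DateTimeFormatter/ofPattern "M/d/yy HH:mm:ss"))
```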

Alex Miller (Clojure team)15:10:20

if you're just literally passing in the string, then sure, but I seriously doubt it matters much

👍 3

Hello guys! Any library which I could use for putting a watermark on my images? It is for a webshop. I want to manage it on backend. Any help appreciated!

Alex Miller (Clojure team)16:10:21

image import, manipulation, and export are all part of java2d in the jdk so you probably don't even need any additional library. can't say I know the magic code to do so but if you google for java2d watermark yada yada you can probably find something

❤️ 3

Does anyone have a good idea how to solve the problem in general of an infinite stream of data copied to another thing, using or similar? Repro:

(require '[ :as io])

(let [os (io/output-stream "/tmp/log.txt")
      out (io/writer os)]
  (binding [*out* out]
      (loop []
        (println "Hello")
        (Thread/sleep 1000)

(io/copy (io/input-stream "/tmp/log.txt") *out*)
This only copies the first line. I guess it makes sense since at the time the stream reaches the end of the file, it doesn't know more is coming. I'm running into this with


(with-open [fs (java.io.FileInputStream. "/tmp/log.txt")]
  (let [buf (byte-array 24)]
    (loop [bytes-read (.read fs buf)]
      (if (= bytes-read -1)
        (do
          (println "Reached end of file, waiting...")
          (Thread/sleep 500)
          (recur (.read fs buf)))
        (do
          (println "Got some bytes")
          (.print *out* (String. buf 0 bytes-read))
          (recur (.read fs buf)))))))


yep that works. I guess there is no general solution since you don't know if you're reading from some infinite thing or not, and at one point you'd like the thread to stop reading, if you're not dealing with something infinite


In unix they have the pipe signal for this.


Hmm yea I think I remember better abstractions for this on .NET, but not sure


is core.async applicable here?


can put lines of input on a chan


a half-baked observation, but that’s my goto for managing e.g. file resources that you want to stream


@U07S8JGF7 My use case is making something like work, but I guess there can't be a general solution without knowing the source


(I mean to link to the piping infinite input section)


Hmm, it works with a slight patch on io/copy:

(defn copy [in out opts]
  (let [buffer (make-array Byte/TYPE (buffer-size opts))]
    (loop []
      (let [size (.read in buffer)]
        (when (pos? size)
          (.write out buffer 0 size)
          (.flush out)
          (recur))))))


so when I flush, the output will be visible


So, it just turns out to be a buffering issue


With deps.edn and the Clojure CLI tools, is it possible to compile .java source files? (if necessary, I can explain why gen-class is not ideal in my scenario)

Alex Miller (Clojure team)17:10:50

well no, with just those things. yes, with other additional tools


( "javac" "")

Alex Miller (Clojure team)17:10:08

well it's javac, and there is more to it than that

Alex Miller (Clojure team)17:10:45

support for this is coming in the upcoming


I will try to explain the situation then. Actually, this is kind of a fun side-project. I am trying to write a Minecraft server plugin in Clojure. The problem is with how the classloader works in the server with its plugins. Basically, the plugins use one type of classloader and the Clojure runtime uses another.


so it's a chicken-egg problem if I use gen-class


or it just doesn't work. The Clojure code can't access the APIs from the server libraries


afaik, when you use gen-class, your class actually loads the Clojure runtime, if it isn't already loaded


@cpmcdaniel You can get a classpath using clojure -Spath. Maybe combining that into a little bash script which calls javac works. There might be tools here which can automate that for you: Else, use leiningen maybe?

Alex Miller (Clojure team)18:10:48

some people have had success with badigeon with clj


really wish I could get gen-class to work


the chicken and egg happens because the resulting class needs to extend, which the Clojure runtime classloader can't find


The server loads the plugin class, which would thus init the Clojure runtime, then it tries to resolve its parent class, I guess


anywho, lein is probably the least friction way to do this then

Alex Miller (Clojure team)18:10:34

you might end up finding it easier to write a plugin stub in Java that invokes the Clojure runtime


@cpmcdaniel FWIW, this was some gen-class stuff I had to figure out because I never used it like this before:


@alexmiller make the Java plugin class its own library... it could then use plugin config to know which namespace to load.


I have this record:

(defrecord Process [proc exit in out err args]
  clojure.lang.IDeref
  (deref [this]
    (wait this)))
When I want to print one, I get the error:
Error printing return value (IllegalArgumentException) at clojure.lang.MultiFn/findAndCacheBestMethod (
Multiple methods in multimethod 'print-method' match dispatch value: class babashka.process.Process -> interface clojure.lang.IDeref and interface clojure.lang.IRecord, and neither is preferred
When I add:
(prefer-method print-method Process clojure.lang.IRecord)
I get:
Syntax error (IllegalStateException) compiling at (babashka/process.clj:50:1).
Preference conflict in multimethod 'print-method': interface clojure.lang.IRecord is already preferred to class babashka.process.Process


Oh I see:

(prefer-method print-method clojure.lang.IRecord clojure.lang.IDeref)


hmm, that's not great to have in a library, since this will override people's preferences for printing things?


perhaps if you define custom print-method specifically for Process there won't be any conflicts
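
A self-contained sketch of that (the wait fn here is a hypothetical stand-in): a print-method registered on the Process class itself dominates both the IDeref and IRecord matches, so no prefer-method is needed:

```clojure
(defn wait [p] p)  ;; hypothetical stand-in for the real wait fn

(defrecord Process [proc exit in out err args]
  clojure.lang.IDeref
  (deref [this] (wait this)))

;; an exact-class method beats both interface matches in the
;; print-method multimethod, so the ambiguity never arises:
(defmethod print-method Process
  [p ^java.io.Writer w]
  (.write w (str "#Process " (pr-str (into {} p)))))

(pr-str (->Process nil 0 nil nil nil []))
;; => "#Process {:proc nil, :exit 0, :in nil, :out nil, :err nil, :args []}"
```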


Good idea 💡


Is there a solution like core.async, but can go across JVMs?


Currently I'm thinking using Redis lists, but is there good alternatives?


You could always use a message broker like rabbitmq or artemis


Here is an intriguing one! What is meant exactly in (doc sequence) by "..Won't force a lazy sequence.."? The following example reveals a lazy seq input to sequence which is realized, against my expectations:

(defn lazify [[x & xs :as coll]]
  (lazy-seq
    (when (seq coll)
      (cons x (lazify xs)))))
(def my-lazy-input (lazify [1 2 1 3 1]))
(type my-lazy-input)  ;; => clojure.lang.LazySeq
(type (rest my-lazy-input)) ;; => clojure.lang.LazySeq
(def not-lazy (sequence (comp (map inc) (distinct)) my-lazy-input)) 
(type not-lazy);; => clojure.lang.LazySeq  so far so good!
(type (rest not-lazy)) ;; => clojure.lang.ChunkedCons Ho HUh???
(type (rest (rest not-lazy)));; => clojure.lang.ChunkedCons WHERE'S my lazy stuff??
What am I missing? How do I prevent chunking? Thanks!

Alex Miller (Clojure team)19:10:18

there are several overlapping topics here, not really sure which part you're already familiar with or care about


final public class ChunkedCons extends ASeq implements IChunkedSeq {

    final IChunk chunk;
    final ISeq _more;
the fields of a ChunkedCons can give some insight here why it still qualifies as lazy


@alexmiller - I simply want to prevent evaluation other than the front element

Alex Miller (Clojure team)19:10:19

Clojure generally does not guarantee when lazy element will be realized

Alex Miller (Clojure team)19:10:32

if you want that level of control over realization, don't use lazy seqs

Alex Miller (Clojure team)19:10:49

(use loop/recur or reduce, etc)
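
A tiny sketch of that loop/recur shape: the loop decides, element by element, whether the (expensive) next computation happens at all, so nothing beyond what you asked for is ever computed (expensive-step and the stopping predicate are hypothetical):

```clojure
(defn expensive-step [n]   ;; stand-in for a half-hour computation
  (* n n))

(defn process-until [input done?]
  (loop [[x & xs] input, acc []]
    (if (or (nil? x) (done? acc))
      acc                   ;; stop here: nothing further is computed
      (recur xs (conj acc (expensive-step x))))))

(process-until [1 2 3 4 5] #(>= (count %) 2))
;; => [1 4]  (only two elements were ever computed)
```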


OK Thank you Alex. So I simply need to roll my own..?

Alex Miller (Clojure team)19:10:21

what is your actual problem? why are you trying to avoid realization? are there side effects like io?


No side effects - suppose each element takes 1/2 hour to eval. I want to do as little as possible.


No, I am just learning 🙂 No io yet, but expensive computation

Alex Miller (Clojure team)19:10:47

then use loop/recur to process each element and decide whether to process the next one


OK…so I just return a lazy-seq when pausing?


To prevent an unrequested computation.


The same reason I would be using a lazy seq…or (map inc range) etc

Alex Miller (Clojure team)19:10:26

how do you know when it's time to do the next one? there's not enough problem here to answer this well


The client code dictates…sorry if I am not clear.


Concretely, I am generating stuff that may be, or is time consuming for each element. Just the same as having an infinite stream of elements that take e.g. 1/2 hour to process. If I pass that stream to (sequence ….) I would expect it not to eval more than one single element at a time, on request, right?


…At least if my input is truly a lazy seq I would think…

Alex Miller (Clojure team)19:10:06

it sounds like something like this would be useful: which is coming to Clojure, probably in 1.11


Will check it out. In the mean time, hand rolling my task works for me. Thank you!

Alex Miller (Clojure team)19:10:04

or maybe it's simple enough to handle with something like iterate


Isn't the problem that if he transforms the seq with sequence, it also changes how lazy the resulting sequence becomes? (E.g., if he did (sequence (map inc) (iteration ...)) he would still have the same problem.)

Alex Miller (Clojure team)19:10:54

all of these things depend what you use them with in combination


if you want to limit speed of consumption for performance reasons, I think a queue fits better than laziness, thanks to things like chunking


Sounds like good advice - thank you all!


@alexmiller - very convoluted, but it works. Thank you for your advice and comments.


can you show a sample consumer of this?


You mean other than what’s in the gist?


I would have to make something up, since my true usage is more complex.


> Clojure generally does not guarantee when lazy element will be realized
with this advice, i still think you're going to trip up at some point


as a general rule, preventing chunking is not the right solution for controlling timing of side effects / long running tasks


But the entire motivation rests on the need / desire to avoid consuming more than the front element. Suppose that in my example the thread is sleeping before outputting a result..


laziness is the wrong solution for this


I am not trying to control any timing…just the trigger


i think you're stuck in a box with the phrasing "consuming the front element"


I have no control over the timing, but I do over when to ask for it..


Isn’t this what laziness is about? The front element, and only when requested?


right. and a loop or a reduce with reduced might be far better


@kingcode if realizing 10 items instead of 1 when calling first is an error, then what you are doing is controlling timing


not in clojure.


But I am <not> trying to realize 10 items! Only one 🙂


yes. and the constant refrain has been tucking that behind a lazy sequence is not a good way to go


right - but chunking means you can't control that


loop/recur and reduce are eager, precisely the opposite of lazy !?


And why I don’t want chunking ! 🙂


you control the iterations in both of those. consume as many or as few as you like


if chunking causes errors, don't use laziness to control execution


ok ok.. so a queue then?


use: doseq, loop, reduce, or run!, these will all act on a single item at a time, guaranteed


or use a queue (which you then consume from eg. a loop)


Hmm… I am trying to understand how reduce/loop would prevent eager execution though. Sorry if we’re going circles with my question


a reduce or loop can wait on some external condition before realizing the next element


Ah, I didn’t know this about waiting within reduce/loop. Other than within a transducer, how would you wait on an external condition in reduce? Other than returning the accumulator untouched? Or in loop?


you can use a queue with an atom, or use a core.async channel


the manifold library has some constructs for these things too


@noisesmith sure, but I would prefer an immutable situation, and going to something heavy as core.async/manifold seems overdone?


What is wrong with what is in the gist, other than being hand-rolled?


because you are attempting to prevent over-realization of a lazy seq with disastrous performance implications and
> Clojure generally does not guarantee when lazy element will be realized


Plus, my external condition is the client calling my stuff


you still have to manage consumption of the result - either you have a closure that is cut off from the rest of your code, or a top level memory leak


with a closure you've punted the same problem you already have: how do you control when the next item is consumed, and how it's delivered to the thing that needs it


that's not a problem that lazy-seqs can solve


(without a memory leak)


Hmmm….I agree entirely about the memory leak issue, but my lazy-seqs are all small, and will be garbage collected after use. My issue is generally purely performance.


In other words, I am lazy-seq’ing to control evaluation


"they will be garbage collected" how and when? if they are not a top level binding, they are closed over, if they are closed over there are better ways to use the values than wrapping in a lazy seq


while providing sequences to my client

Daniel Stephens21:10:42

sounds like a sequence of delay or no arg functions that the client can then realise/call when they need the answer would be one option
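
That suggestion in miniature: each element is wrapped in a delay, so nothing runs until the client explicitly forces it (the expensive fn and the logging atom are illustrative stand-ins):

```clojure
(def log (atom []))            ;; records which elements actually ran

(defn expensive [n]            ;; stand-in for the costly computation
  (swap! log conj n)
  (* n 10))

;; building the delays is cheap; no expensive call happens yet
(def steps (mapv #(delay (expensive %)) [1 2 3]))

@(nth steps 1)  ;; => 20, and only element 1 was computed
@log            ;; => [2]
```

Forcing the same delay twice also caches, so each element is computed at most once.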


I may be out of my depth on subtleties involved here, but queues are one of the simplest of all the mutable thing that exist, perhaps?


right, and they are made for precisely this kind of situation


Now, suppose you are generating nodes within a DFS search tree, and each of these nodes have children generated as lazy seqs, and consuming on demand during traversal. You want the nodes’ and their children to be consumed on demand, one at a time, but still be seqs, not queues?


the queue isn't the data, it's a governor that lets you control the timing of consumption (something that clojure laziness emphatically doesn't provide)


the return value of a DFS and the internal stacks or queues used for traversal don't have to coincide


Concretely, I am providing the sequences to a clojure zipper.


@kingcode the typical shape: a queue that gets all the inputs (from your search tree or whatever), a queue that gets the results, and N processing loops in the middle, all reading from and writing to the same pair of queues, that N controls the parallelism / rate of consumption


So I was hoping that the nodes not traversed are left un’evaled


you can even have recursion, where one of those workers puts more input back into the incoming queue


This may be the wrong time to toss this in, but there is an unchunk function that has been used in a few contexts like this that I believe correctly unchunks a lazy sequence? I believe that even then, you will not get 100% ironclad promises that it will never realize even 1 more element than you ask for, in all situations.


I wouldn't make any assumptions about the eagerness of a zipper consuming nested lazy structures


@andy.fingerhut and unchunk isn't in core because these sorts of constructs still end up being buggy, even if you unchunk


Hmmm. queues processing loops and unchunking sound like more complications than a simple lazy seq of nodes…. but thanks for your advice, I will try to digest all of this 🙂


Here is one way of writing an unchunk function, that helped avoid at least some level of evaluating too far ahead in a math.combinatorics library function. Again, if you want 100% guarantees in all cases that it will prevent evaluating even one more element, I don't think it provides such guarantees:
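
The commonly seen version of unchunk rebuilds the seq one cons cell at a time, and a small side-effecting map shows the difference in how far ahead realization runs:

```clojure
(defn unchunk [s]
  (lazy-seq
    (when (seq s)
      (cons (first s) (unchunk (rest s))))))

(def realized (atom 0))

;; chunked: asking for the first element realizes a whole chunk of 32
(first (map (fn [x] (swap! realized inc) x) (range 100)))
@realized        ;; => 32

(reset! realized 0)
;; unchunked: asking for the first element realizes exactly one
(first (map (fn [x] (swap! realized inc) x) (unchunk (range 100))))
@realized        ;; => 1
```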

Alex Miller (Clojure team)21:10:10

imo, unchunk is a smell that you're using lazy seqs when you cannot accept their constraints (and thus should be doing something else)

💯 3

that's what I've been trying to say, but better articulated


I won't deny it is a code smell. I don't think anyone had the strong interest of reworking the implementation of the affected math.combinatorics function in a more significant way to avoid the out-of-memory issues that were occurring before unchunk was added to it.


yeah - in that case it's kind of a corner case where you want the combinatoric function to be side effect free, but the memory consumption became a side effect


Interesting - Thanks for sharing.


In my case I will use delays with regular seqs to get rid of the smell :)


@noisesmith I agree about zipper eagerness, will have to investigate

Daniel Stephens21:10:31

I think in the DFS example each child is quick to find, so the fact that a few in a chunk get realised eagerly is probably not going to cause an issue, it only sounds like an issue where each next item is a big performance hit.


if you need to control the timing of realizing elements, lazy seqs are not actually simple - they are not made for task management, and if you need to control things that strictly (because of the expense of some computation) they are the wrong abstraction


It all depends on what is being done to generate your DFS child…in my case it could be expensive 🙂

Daniel Stephens21:10:53

ahh, sorry, had misunderstood the example 😊


@noisesmith ok, I will try to find a way to do it right and still consume computations as a seq.


another thing to consider: if you have a cheap input but expensive processing step, you can use run! on the input lazy seq, and write to a blocking queue for each result


then the reader controls the workload


Ok sure…but blocking queues and run! make me think “architecture”, when I just have a very small library


I will start with hand-rolling for now as in the gist, and look for a better way to do exactly what I need.


Thanks 🙂


either it's safe to pretend the calculation is side effect free (so chunking doesn't matter), or you need to control execution, and side effects are in fact your primary concern


I don't see a third option?


No side effects! Just expensive values, and potentially a lot of backtracking.


an expense is a side effect


The only third option I see is the gist :)


Good point..


that only displaces the problem, because the consumers of your lazy value have to deal with the same issues you do


So indeed I need to control execution of expensive (and potentially a lot of) computations producing immutable values.


OK…indeed if it comes to that, then definitely architecture becomes a concern.


Thank you @noisesmith and all


Well, I wouldn't go and call an expensive operation a side effect


the consumption of resources (and the way that effects program correctness) is a side effect


and CPU power is one resource


I mean, that's not the normally accepted definition of side effect. If you want it to mean that, sure


It's just quite confusing to use it like that, to most people I feel.


Agreed that it is a use of the term you typically won't find in a functional programming course or talk.


It certainly can be "an event you are trying explicitly to avoid happening, even in code that is otherwise dealing only with pure functions"

👍 3

I wasn’t thinking of it that way but it certainly can be, at least indirectly, e.g. by virtue of the domino effect if you lose a user/customer because of an unmitigated concern.


It's definitely a concern to consider, especially in a lazy context, since it might surprise the consumer that reading the next element blocks for X amount of time. But it doesn't fall in the classical definition of a side effect


The thing to know is that Clojure's lazy seq are not meant as a way to implement lazy behavior, but as an internal optimization for chaining operations.


That's why they are chunking most of the time, as that is an attempt at further optimizing them for that purpose


In theory, you can construct non-chunking lazy seqs, but even then you're probably relying on implementation details of various functions. It's possible that anything taking one would return a chunked version of it.


And even non chunking lazy seq are not guaranteed lazy in all circumstances, some operations might still realize one or more elements


So when using them, you need to ask yourself: Am I okay with anything I do with it possibly realizing 1 or more elements up to probably about two full chunks, so about 64 elements


Now, in general, if realizing an element causes a side effect, most likely you won't be okay with it. Which is why, in general, people say to avoid side effects with lazy-seq. The other thing is that when the side effect happens generally matters a lot when dealing with side effects, and with laziness you lose control over that, so it can be a source of bugs.


But, in your case, it might be that if, say, realizing an element takes 1 minute of pure computation, you might also not be okay with not being able to reliably predict whether getting the next element will take 1 minute or 32


Thx @U0K064KQV, well noted and interesting stuff. Some of my initial confusion had to do with lazy-seq doc’s mention that “..[it] will not force a lazy seq..”


@kingcode Which doc exactly says that?


I see that the doc string for the function sequence contains that phrase, but not the doc string for lazy-seq


Ooops sorry, indeed I meant sequence. I was feeding it a true lazy seq, but in spite of “..will not force ..“, got the ChunkedCons version at the other end.


returning a chunked collection and not forcing the input are entirely compatible


it doesn't force the first chunk until you realize an element


as discussed previously, if realizing 1 element at a time vs. realizing 32 at a time actually matters, laziness is the wrong abstraction to use in clojure


got it 👍:skin-tone-5:


Ya, or at least, not an easy one to use as such, because you'd need to be super careful that you only use things that keep the seq as a lazy-seq, and operations over it that don't ever realize one more than you want. And I think Clojure makes no guarantee that those assumptions would hold in the next minor version bump. So ya, all in all, better to avoid for those use cases. Delay is way way better, or use core.async channels, agents, something custom, etc.


That said, I've had use cases before that were IO, and variable laziness was fine. For example, I have a lazy-seq-backed AWS SQS queue abstraction. SQS itself returns variable-length batches of messages if you use the batch API, and so for that it works really well for me.


But that doesn't mean it isn't a relevant consideration


@kingcode Have you looked into core.async? That should let you write very similar code to what you wanted. And I don't think Rich is going to wake up tomorrow and decide he is going to start chunking puts/takes on channels
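
A sketch of that shape with core.async (assuming it's on the classpath; the expensive fn is a stand-in): on an unbuffered channel the producer parks between takes, so it stays at most one element ahead of the consumer:

```clojure
(require '[clojure.core.async :as a])

(defn expensive [n] (* n n))   ;; stand-in for the costly computation

(def ch (a/chan))              ;; unbuffered: the producer parks between takes

(a/thread
  (doseq [n (range 5)]
    (a/>!! ch (expensive n)))  ;; blocks until a consumer takes
  (a/close! ch))

(a/<!! ch) ;; => 0
(a/<!! ch) ;; => 1  (the producer stays at most one element ahead)
```

No chunking is involved: each take hands over exactly one value.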


Thanks @isak, I have thought some more about my problem and from advice given earlier, using regular sequences and wrapping my tasks in delays should do the trick...indeed chunking is a good and necessary thing. Channels are cool but not what I need in this case - thanks!