#clojure
2020-10-16
dpsutton03:10:01

is there a way to add protocol implementations to a class instance? ie, extend a particular java.util.Date. with some protocol rather than any java.util.Date?

dpsutton03:10:14

similar to extend-via-metadata but for arbitrary java class instances

svt06:10:45

{:cluster-id nil, :app-id nil, :message-id nil, :expiration nil, :type nil, :user-id nil, 
:headers #object[java.util.HashMap 0x1af91489 {__TypeId__=fooservice.bar.events.EventMessage, 
x-jwt-payload={"exp":1573686594705,
                "iat":1573621794705,
                "session-id":"c06fbf0c-04bb-4da8-91f9-4c44d439ca7b"}}]}
I’m getting this response as a queue message, and I want to get the x-jwt-payload from its headers. I’m using cheshire.core/parse-string to parse it, but it’s not working; I get this error:
#error {
 :cause class java.util.HashMap cannot be cast to class java.lang.String (java.util.HashMap and java.lang.String are in module java.base of loader 'bootstrap')
 :via
 [{:type java.lang.ClassCastException
   :message class java.util.HashMap cannot be cast to class java.lang.String (java.util.HashMap and java.lang.String are in module java.base of loader 'bootstrap')
   :at [cheshire.core$parse_string invokeStatic core.clj 209]}]
 :trace
 [[cheshire.core$parse_string invokeStatic core.clj 209]

hiredman06:10:35

parse-string takes a string containing JSON; what you have there is a map

svt06:10:38

How can I get x-jwt-payload then?

svt06:10:41

The problem is it’s a java hashmap?

hiredman06:10:16

parse-string takes a string; any kind of map will be an error

svt06:10:36

So what’s the alternative?

p-himik07:10:56

(get-in m [:headers "x-jwt-payload"])

svt07:10:29

Thanks @U2FRKM4TW, but that’s not working. I tried (.get value key) and that worked for me

p-himik07:10:33

Strange, Java maps support get.

svt07:10:42

Does a Java HashMap support get?

p-himik07:10:38

user=> (def m (java.util.HashMap.))
#'user/m
user=> (.put m "a" "b")
nil
user=> (get m "a")
"b"
user=> (def x {:a m})
#'user/x
user=> (get-in x [:a "a"])
"b"

svt07:10:52

let me try this

Endre Bakken Stovner08:10:32

Is it possible to extend keywords and numbers in Clojure so they walk and quack like keywords/numbers but can have metadata?

andy.fingerhut08:10:28

So you could try an experiment: go into the clojure.lang.Keyword class defined in Java in Clojure's implementation, add a field to hold metadata, and add methods to get and set metadata on a keyword, implementing the appropriate interfaces.

andy.fingerhut08:10:30

But that might violate an assumption of keywords that is pervasive in Clojure -- that two occurrences of the same keyword are the same identical object in memory, to enable fast equality comparisons between keywords using identical?, i.e. JVM reference / pointer equality.

andy.fingerhut08:10:19

If you have the same keyword appearing two places in a text input, and you want different line/column metadata on each, they cannot be the same JVM object. The keyword without the metadata can be identical, but the keyword with the line/column metadata must be different objects.

Endre Bakken Stovner08:10:54

It would be fun to play around with and see. I just want it to work for the limited use-case described earlier (when specter is navigating the data-structure)

andy.fingerhut08:10:21

Changing numbers to have metadata is also possible, but it might mean changing the Java implementation of most/all arithmetic operations, which are quite a few.

andy.fingerhut08:10:43

You would end up with your own custom modified version of Clojure that no one else in the world was using.

😂 3
andy.fingerhut08:10:35

and likely, a custom version of Clojure that maybe one or two other people might ever want to use.

andy.fingerhut08:10:15

For numbers, I suspect it would break the interaction with some third-party numeric libraries and your hacked version of Clojure.

andy.fingerhut08:10:38

because some of those probably assume that Clojure's numbers are the classes it uses now, e.g. the JVM's java.lang.Long, java.lang.Double, java.lang.Number. Your modified version could not add metadata fields to those classes, or at least I don't know a way without modifying the implementation of the JVM itself, which sounds like something I doubt anyone but the JVM implementers would be well-prepared to do in less than a year.

andy.fingerhut08:10:34

(The year there is a wild guess -- maybe you know a JVM implementer who likes you and can pull it off in a weekend for beers.)

😂 3
vlaaad08:10:31

actually, yes 😄

vlaaad08:10:44

but mostly no

vlaaad08:10:13

err… There is no way to put metadata on top of individual numbers/keywords, but you can have a registry that allows registering metadata on keywords/numbers. Sort of like specs — they are identified mostly by keywords, and s/def is a way to assoc a spec to a keyword. This is not quite metadata, but maybe good enough for you? what’s your use case?

vlaaad08:10:16

hmmm, so this is a sort of a visual exploration tool?

✔️ 3
vlaaad08:10:27

is it clj only, or clj/cljs?

Endre Bakken Stovner08:10:05

Does not matter I think, because they should be identical. I can write it in whichever language is easier. Specter is a tool for navigating data structures, not code.

vlaaad08:10:44

yeah I know

vlaaad08:10:41

but how do you show highlights? not in the repl I assume, because there is only text?

Endre Bakken Stovner08:10:22

Yes, I can show them in emacs or a SPA. I just want to write the backend first.

vlaaad08:10:51

maybe your specter tool could, given a specter selection pattern, transform the values at that pattern with #(tagged-literal 'selected %), and your output panel can just show that?
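a minimal sketch of that, assuming Specter and a path that selects all numbers:

(require '[com.rpl.specter :as s])

;; wrap every selected value in a tagged literal the output panel can render specially
(s/transform [s/ALL number?] #(tagged-literal 'selected %) [1 :a 2 :b])
;; => [#selected 1 :a #selected 2 :b]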

👏 3
clj 3
👍 3
vlaaad08:10:30

or it can process the resulting data structure to highlight those?

andy.fingerhut08:10:33

The registry idea won't work for multiple occurrences of the same keyword in the data, without changing Clojure/JVM's Java code, because all Keyword JVM objects for :foo are the same JVM object in memory.

✔️ 3
🤯 3
andy.fingerhut08:10:19

That is also true for some integers with small absolute value, in many JVMs, as a memory optimization for commonly occurring small numbers.

andy.fingerhut08:10:57

And changing the implementation of the JVM itself is a step of difficulty beyond changing the Java code that implements Clojure.

andy.fingerhut08:10:24

"Any place is walking distance if you have the time." is a phrase I am reminded of in situations like this. The approach of trying to read standard Clojure data structures, but add metadata with line/col info for keywords and numbers, is a long walk away.

andy.fingerhut08:10:00

Creating a custom type that "wraps" an object that doesn't have metadata, so that the wrapper object has the metadata, and contains the original value, seems like an approach worth thinking about. You would need your own modified reader to read text and create those objects instead of what clojure.core/read or clojure.edn/read does. You would probably want your own custom print/display functions for that, which might be straightforward with Clojure's print-method -- not sure.
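A very rough sketch of that wrapper shape (names are made up; the custom reader and print-method are left out):

(deftype Wrapped [value mta]
  clojure.lang.IMeta
  (meta [_] mta)
  clojure.lang.IObj
  (withMeta [_ m] (Wrapped. value m))
  clojure.lang.IDeref
  (deref [_] value))

(def w (Wrapped. 42 {:line 3 :column 7}))
(meta w) ;; => {:line 3, :column 7}
@w       ;; => 42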

👍 3
andy.fingerhut08:10:10

Or, I should back up and say, if you want to read a string representation of Clojure data that looks like EDN, and the result of reading it had all the line/col metadata, then you need a modified version of clojure.core/read.

Endre Bakken Stovner08:10:18

But specter can use user-defined functions to find not only numbers, but, for example, numbers greater than 5. These functions cannot understand my wrapped numbers easily, right?

Endre Bakken Stovner08:10:37

That is the downside of the wrapping approach as I see it.

vlaaad08:10:33

I think my suggestion with wrapping using tagged literal still holds..

Endre Bakken Stovner08:10:20

@U47G49KHQ I will play around with your idea. To begin with I can create a limited version of the highlighter 🙂 But if there are multiple keywords named :a they can only have one place in the register, right? I cannot think of an easy way to tell these apart 🙂

andy.fingerhut09:10:40

This might be a very bad and unworkable idea, but off the top of my head it might be worth thinking of the registry idea, but instead of the registry pointing at a single line/column location for a keyword/number/whatever, it could point at an ordered list of locations.

👍 3
andy.fingerhut09:10:35

I don't know your use case well enough to determine if it would help you do what you want.

Endre Bakken Stovner09:10:56

That might help me get closer! Like if the result from specter is ["a" :a "b" :a] and I have the location data of "a" and "b", I should be able to deduce the locations of the two :as.

Endre Bakken Stovner09:10:32

But if you have a dict [{:a "bla"} {:a "blo"} {:a "bli"}] and want to get every :a in every map with an odd index, you cannot tell them apart, really. But come to think of it, you cannot tell them apart in the regular specter result either really.

Endre Bakken Stovner09:10:17

Now I understand more what you mean by transforming and tagging @U47G49KHQ! Brilliant. That is what I should try.

👍 3
borkdude08:10:39

@dominicm About LSP: Take a look at https://github.com/borkdude/clj-kondo.lsp which is an LSP implementation for clj-kondo, written in Clojure based on lsp4j

borkdude08:10:54

It's used in VSCode. The gnarly interop code is heavily borrowed from clojure-lsp (I now see that's been mentioned already)

erwinrooijakkers09:10:29

hi what is an idiomatic name for an arbitrary data structure? e.g. in the context of:

(defn as-set [data] 
  (if (coll? data) 
    (set data) 
    data))

p-himik09:10:07

vec calls its argument coll.

p-himik09:10:41

Same for set.

erwinrooijakkers09:10:13

thanks i see - wondering now why this function as-set is necessary in the first place in this context actually

erwinrooijakkers09:10:23

why is set not good enough? 🙂

erwinrooijakkers09:10:41

and then there’s probably a better name depending on the context

p-himik09:10:11

Ah, wait - I somehow failed to notice the check for coll?. In this case, set is not good enough if data is e.g. 1.

p-himik09:10:13

FWIW I would name such an argument maybe-coll. But I would try to get rid of the need for as-set altogether in the first place.

erwinrooijakkers09:10:10

i think prefixing with a question mark like ?set instead of data is not so idiomatic, since that is mostly used in logic programming (datalog, core.match or core.logic) for unbound vars

Eloy Pazos Lema09:10:24

hello guys, anyone here with experience with cider clojure?

flowthing09:10:41

Probably plenty of folks — you might want to try the #cider channel, though.

Stefan09:10:41

Hi! I have this function and I wonder if there’s a nicer way to write it. It looks something like this:

(defn determine-command [some-map]
  (or (when-let [arg ((comp :key2 :key1) some-map)]
        [#'get-using-method-1! arg])
      (when-let [arg ((comp :key4 :key3) some-map)]
        [#'get-using-method-2! arg])))
In other words: it returns a vector that says which command to use based on the structure of the input, but it also passes the thing that was extracted from the map to the output of the function. This function is easy to unit-test: I verify both the expected function to be used, as well as the expected arg. I don’t think this way of writing it is too bad, but I’m curious if there’s a better way. Thanks!

p-himik10:10:25

LGTM. Some minor things:
- A nested if-some might look a bit better
- There's a difference between when-let and when-some that sometimes might be important
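The difference in one line: when-let treats any falsey value as a miss, when-some only nil:

(when-let  [x false] :ran) ;; => nil
(when-some [x false] :ran) ;; => :ran
(when-some [x nil]   :ran) ;; => nil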

p-himik10:10:57

Ah, I would replace ((comp :key2 :key1) some-map) with just (-> some-map :key1 :key2). Or get-in.

👍 3
Stefan11:10:11

Thanks @U2FRKM4TW, I wasn’t aware of if-some and the difference between when-let and when-some, so that’s a very helpful reply!

hanDerPeder12:10:44

core.async is great, but sometimes everything just stops because of some bug. in cases like these it would be really useful to be able to inspect the channels. how many callbacks (preferably named) are queued, etc. Any support for this?

Alex Miller (Clojure team)13:10:20

that would be my guess anyways

dpsutton14:10:11

Got a PR that marks a constant as ^:const. I remember some issues cropping up over use of this. Is this something I should be wary of or is it pretty innocuous?

Alex Miller (Clojure team)14:10:01

depends whether it's being used correctly :)

dpsutton14:10:17

ha. that makes sense. i just don't know where to go to determine that

Alex Miller (Clojure team)14:10:35

is it a constant compile-time value?

dpsutton14:10:14

(def ^:const date-format
  "Standard date format for :type/Date objects"
  "m/d/yy")

(def ^:const datetime-format
  "Standard date/time format for any of the :type/Date variants with a Time"
  "m/d/yy HH:MM:ss")

dpsutton14:10:27

yes it is. some date formats

Alex Miller (Clojure team)14:10:45

seems fine then. doubt it really matters though

dpsutton14:10:45

thanks. yeah i figured the benefits are absolutely marginal and possibly evaporate with jit magic eventually but was worried about foot guns.

Alex Miller (Clojure team)14:10:00

I mean, I assume this being used in some kind of formatter object

Alex Miller (Clojure team)14:10:34

depending on whether that's an old school Java formatter or a new school java.time formatter different advice...

dpsutton15:10:05

its for excel cells using docjure

Alex Miller (Clojure team)15:10:27

in the former case, they are not thread safe so you either need to create the formatter every time (in which case, you're using an interned string regardless so it really doesn't matter). the trick is to use a threadlocal and create it once. in the latter case, they are thread safe so you want to ensure you just make it once (which is not a const but can be done in a defonce or something)
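For the old-school java.text.SimpleDateFormat case, the threadlocal trick looks roughly like this (the pattern here is a placeholder, not the Excel format above):

(def thread-local-format
  (proxy [ThreadLocal] []
    (initialValue [] (java.text.SimpleDateFormat. "M/d/yy"))))

(defn format-date [^java.util.Date d]
  (.format ^java.text.SimpleDateFormat (.get ^ThreadLocal thread-local-format) d))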

Alex Miller (Clojure team)15:10:20

if you're just literally passing in the string, then sure, but I seriously doubt it matters much

👍 3
p4ulcristian16:10:10

Hello guys! Any library I could use for putting a watermark on my images? It is for a webshop, and I want to handle it on the backend. Any help appreciated!

Alex Miller (Clojure team)16:10:21

image import, manipulation, and export are all part of java2d in the jdk so you probably don't even need any additional library. can't say I know the magic code to do so but if you google for java2d watermark yada yada you can probably find something
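a rough java2d sketch via interop (file names and watermark text are made up, untested):

(import '(javax.imageio ImageIO)
        '(java.io File)
        '(java.awt AlphaComposite Color Font))

(let [img (ImageIO/read (File. "product.png"))
      g   (.createGraphics img)]
  ;; draw translucent text in the bottom-left corner
  (.setComposite g (AlphaComposite/getInstance AlphaComposite/SRC_OVER (float 0.4)))
  (.setColor g Color/WHITE)
  (.setFont g (Font. "SansSerif" Font/BOLD 36))
  (.drawString g "my-webshop.example" (int 20) (int (- (.getHeight img) 20)))
  (.dispose g)
  (ImageIO/write img "png" (File. "product-watermarked.png")))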

❤️ 3
borkdude17:10:17

Does anyone have a good idea how to solve, in general, the problem of copying an infinite stream of data to another thing, using clojure.java.io or similar? Repro:

(require '[clojure.java.io :as io])

(let [os (io/output-stream "/tmp/log.txt")
      out (io/writer os)]
  (binding [*out* out]
    (future
      (loop []
        (println "Hello")
        (Thread/sleep 1000)
        (recur)))))

(io/copy (io/input-stream "/tmp/log.txt") *out*)
This only copies the first line. I guess it makes sense since at the time the stream reaches the end of the file, it doesn't know more is coming. I'm running into this with https://github.com/babashka/process#piping-infinite-input

isak17:10:31

(with-open [fs (java.io.FileInputStream. "/tmp/log.txt")]
    (let [buf (byte-array 24)]
      (loop [bytes-read (.read fs buf)]
        (cond
          (= bytes-read -1)
          (do
            (println "Reached end of file, waiting...")
            (Thread/sleep 500)
            (recur (.read fs buf)))

          :else
          (do
            (println "Got some bytes")
            (.print *out* (String. buf 0 bytes-read))
            (recur (.read fs buf)))))))

borkdude17:10:45

yep that works. I guess there is no general solution since you don't know if you're reading from some infinite thing or not, and at some point you'd like the thread to stop reading if you're not dealing with something infinite

borkdude17:10:25

In unix they have the pipe signal for this.

isak17:10:53

Hmm yea I think I remember better abstractions for this on .NET, but not sure

potetm18:10:32

is core.async applicable here?

potetm18:10:06

can put lines of input on a chan

potetm18:10:43

a half-baked observation, but that’s my goto for managing e.g. file resources that you want to stream

borkdude18:10:02

@U07S8JGF7 My use case is making something like https://github.com/babashka/process#piping-infinite-input work, but I guess there can't be a general solution without knowing the source

borkdude18:10:33

(I mean to link to the piping infinite input section)

borkdude20:10:25

Hmm, it works with a slight patch on io/copy:

(defn copy [in out opts]
  (let [buffer (make-array Byte/TYPE (buffer-size opts))]
    (loop []
      (let [size (.read in buffer)]
        (when (pos? size)
          (.write out buffer 0 size)
          (.flush out)
          (recur))))))

borkdude20:10:36

so when I flush, the output will be visible

borkdude22:10:27

So, it just turns out to be a buffering issue

cpmcdaniel17:10:55

With deps.edn and the Clojure CLI tools, is it possible to compile .java source files? (if necessary, I can explain why gen-class is not ideal in my scenario)

Alex Miller (Clojure team)17:10:50

well no, with just those things. yes, with other additional tools

borkdude17:10:42

(clojure.java.shell/sh "javac" "Foo.java")

Alex Miller (Clojure team)17:10:08

well it's javac, and there is more to it than that

Alex Miller (Clojure team)17:10:45

support for this is coming in the upcoming tools.build

cpmcdaniel17:10:17

I will try to explain the situation then. Actually, this is kind of a fun side-project. I am trying to write a minecraft server plugin in Clojure. The problem is with how the classloader works in the server with its plugins. Basically, the plugins use one type of classloader and the Clojure runtime uses another.

cpmcdaniel17:10:31

so it's a chicken-egg problem if I use gen-class

cpmcdaniel17:10:41

or it just doesn't work. The Clojure code can't access the APIs from the server libraries

cpmcdaniel18:10:25

afaik, when you use gen-class, your class actually loads the Clojure runtime, if it isn't already loaded

borkdude18:10:29

@cpmcdaniel You can get a classpath using clojure -Spath. Maybe combining that into a little bash script which calls javac works. There might be tools here which can automate that for you: https://github.com/clojure/tools.deps.alpha/wiki/Tools Else, use leiningen maybe?
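a rough sketch of that from Clojure itself (file and output dir names made up):

(require '[clojure.java.shell :as sh]
         '[clojure.string :as str])

(let [cp (str/trim (:out (sh/sh "clojure" "-Spath")))]
  (sh/sh "javac" "-cp" cp "-d" "classes" "src/java/com/example/MyPlugin.java"))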

Alex Miller (Clojure team)18:10:48

some people have had success with badigeon with clj

cpmcdaniel18:10:26

really wish I could get gen-class to work

cpmcdaniel18:10:23

the chicken and egg happens because the resulting class needs to extend org.bukkit.plugin.java.JavaPlugin, which the Clojure runtime classloader can't find

cpmcdaniel18:10:12

The server loads the plugin class, which would thus init the Clojure runtime, then it tries to resolve its parent class, I guess

cpmcdaniel18:10:45

anywho, lein is probably the least friction way to do this then

Alex Miller (Clojure team)18:10:34

you might end up finding it easier to write a plugin stub in Java that invokes the Clojure runtime

borkdude18:10:11

@cpmcdaniel FWIW, this was some gen-class stuff I had to figure out because I never used it like this before: https://github.com/borkdude/sci/blob/ecb4cba114793c07566768ccc3abcdf9db5813e4/libsci/src/sci/impl/libsci.clj#L4

cpmcdaniel18:10:37

@alexmiller make the Java plugin class its own library... it could then use plugin config to know which namespace to load.

borkdude18:10:43

I have this record:

(defrecord Process [proc exit in out err args]
  clojure.lang.IDeref
  (deref [this]
    (wait this)))
When I want to print one, I get the error:
Error printing return value (IllegalArgumentException) at clojure.lang.MultiFn/findAndCacheBestMethod (MultiFn.java:179).
Multiple methods in multimethod 'print-method' match dispatch value: class babashka.process.Process -> interface clojure.lang.IDeref and interface clojure.lang.IRecord, and neither is preferred
When I add:
(prefer-method print-method Process clojure.lang.IRecord)
I get:
Syntax error (IllegalStateException) compiling at (babashka/process.clj:50:1).
Preference conflict in multimethod 'print-method': interface clojure.lang.IRecord is already preferred to class babashka.process.Process
🤔

borkdude18:10:11

Oh I see:

(prefer-method print-method clojure.lang.IRecord clojure.lang.IDeref)

borkdude18:10:54

hmm, that's not great to have in a library, since this will override people's preferences for printing things?

vlaaad19:10:22

perhaps if you define a custom print-method specifically for Process there won't be any conflicts
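something like this (output format made up); an exact class match dominates the interface matches, so no prefer-method is needed:

(defmethod print-method Process [p ^java.io.Writer w]
  (.write w (str "#babashka.process.Process " (into {} p))))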

borkdude19:10:03

Good idea 💡

zhuxun218:10:47

Is there a solution like core.async, but one that can go across JVMs?

zhuxun218:10:19

Currently I'm thinking of using Redis lists, but are there good alternatives?

hiredman19:10:05

You could always use a message broker like rabbitmq or artemis

kingcode19:10:01

Here is an intriguing one! What is meant exactly in (doc sequence) by “..Won’t force a lazy sequence..“? The following example reveals a lazy seq input to sequence which is realized, against my expectations:

(defn lazify [[x & xs :as coll]]
  (lazy-seq
   (when (seq coll)
     (cons x (lazify xs)))))
(def my-lazy-input (lazify [1 2 1 3 1]))
(type my-lazy-input)  ;; => clojure.lang.LazySeq
(type (rest my-lazy-input)) ;; => clojure.lang.LazySeq
(def not-lazy (sequence (comp (map inc) (distinct)) my-lazy-input)) 
(type not-lazy);; => clojure.lang.LazySeq  so far so good!
(type (rest not-lazy)) ;; => clojure.lang.ChunkedCons Ho HUh???
(type (rest (rest not-lazy)));; => clojure.lang.ChunkedCons WHERE'S my lazy stuff??
What am I missing? How do I prevent chunking? Thanks!

Alex Miller (Clojure team)19:10:18

there are several overlapping topics here, not really sure which part you're already familiar with or care about

dpsutton19:10:03

final public class ChunkedCons extends ASeq implements IChunkedSeq{

final IChunk chunk;
final ISeq _more;
the fields of a ChunkedCons can give some insight here why it still qualifies as lazy

kingcode19:10:07

@alexmiller - I simply want to prevent evaluation other than the front element

Alex Miller (Clojure team)19:10:19

Clojure generally does not guarantee when lazy element will be realized

Alex Miller (Clojure team)19:10:32

if you want that level of control over realization, don't use lazy seqs

Alex Miller (Clojure team)19:10:49

(use loop/recur or reduce, etc)

kingcode19:10:51

OK Thank you Alex. So I simply need to roll my own..?

Alex Miller (Clojure team)19:10:21

what is your actual problem? why are you trying to avoid realization? are there side effects like io?

kingcode19:10:50

No side effects - suppose each element takes 1/2 hour to eval. I want to do as little as possible.

kingcode19:10:18

No, I am just learning 🙂 No io yet, but expensive computation

Alex Miller (Clojure team)19:10:47

then use loop/recur to process each element and decide whether to process the next one

kingcode19:10:22

OK…so I just return a lazy-seq when pausing?

kingcode19:10:57

To prevent an unrequested computation.

kingcode19:10:21

The same reason I would be using a lazy seq…or (map inc range) etc

Alex Miller (Clojure team)19:10:26

how do you know when it's time to do the next one? there's not enough problem here to answer this well

kingcode19:10:03

The client code dictates…sorry if I am not clear.

kingcode19:10:02

Concretely, I am generating stuff where each element may be (or is) time-consuming to produce. It's the same as having an infinite stream of elements that take e.g. 1/2 hour each to process. If I pass that stream to (sequence ….) I would expect it not to eval more than one single element at a time, on request, right?

kingcode19:10:51

…At least if my input is truly a lazy seq I would think…

Alex Miller (Clojure team)19:10:06

it sounds like something like this would be useful: https://clojure.atlassian.net/browse/CLJ-2555 which is coming to Clojure, probably in 1.11

kingcode19:10:53

Will check it out. In the mean time, hand rolling my task works for me. Thank you!

Alex Miller (Clojure team)19:10:04

or maybe it's simple enough to handle with something like iterate

isak19:10:48

Isn't the problem that if he transforms the seq with sequence, it also changes how lazy the resulting sequence becomes? (E.g., if he did (sequence (map inc) (iteration ...)) he would still have the same problem.)

Alex Miller (Clojure team)19:10:54

all of these things depend what you use them with in combination

noisesmith20:10:54

if you want to limit speed of consumption for performance reasons, I think a queue fits better than laziness, thanks to things like chunking

kingcode20:10:38

Sounds like good advice - thank you all!

kingcode20:10:20

@alexmiller https://gist.github.com/KingCode/8970fe4e2308127ba467ac7f57d3f78b - very convoluted, but it works. Thank you for your advice and comments.

dpsutton20:10:43

can you show a sample consumer of this?

kingcode20:10:21

You mean other than what’s in the gist?

kingcode20:10:41

I would have to make something up, since my true usage is more complex.

dpsutton20:10:52

> Clojure generally does not guarantee when lazy element will be realized
with this advice, i still think you're going to trip up at some point

noisesmith20:10:02

as a general rule, preventing chunking is not the right solution for controlling timing of side effects / long running tasks

kingcode20:10:22

But the entire motivation rests on the need / desire to avoid consuming more than the front element. Suppose that in my example the thread is sleeping before outputting a result..

noisesmith20:10:37

laziness is the wrong solution for this

kingcode20:10:40

I am not trying to control any timing…just the trigger

dpsutton20:10:52

i think you're stuck in a box with the phrasing "consuming the front element"

kingcode20:10:00

I have no control over the timing, but I do over when to ask for it..

kingcode20:10:22

Isn’t this what laziness is about? The front element, and only when requested?

dpsutton20:10:22

right. and a loop or a reduce with reduced might be far better

noisesmith20:10:28

@kingcode if realizing 10 items instead of 1 when calling first is an error, then what you are doing is controlling timing

dpsutton20:10:30

not in clojure.

kingcode20:10:56

But I am <not> trying to realize 10 items! Only one 🙂

dpsutton20:10:14

yes. and the constant refrain has been tucking that behind a lazy sequence is not a good way to go

noisesmith20:10:17

right - but chunking means you can't control that

kingcode20:10:23

loop/recur and reduce are eager, precisely the opposite of lazy !?

kingcode20:10:35

And why I don’t want chunking ! 🙂

dpsutton20:10:41

you control the iterations in both of those. consume as many or as few as you like

noisesmith20:10:50

if chunking causes errors, don't use laziness to control execution

kingcode20:10:05

ok ok.. so a queue then?

noisesmith20:10:17

use: doseq, loop, reduce, or run!, these will all act on a single item at a time, guaranteed

noisesmith20:10:29

or use a queue (which you then consume from eg. a loop)

kingcode20:10:25

Hmm… I am trying to understand how reduce/loop would prevent eager execution though. Sorry if we’re going circles with my question

noisesmith20:10:49

a reduce or loop can wait on some external condition before realizing the next element
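e.g. a minimal shape of that, where the expensive work lives in the loop body rather than inside a lazy seq (inputs, continue?, and expensive-step are made-up names):

(loop [xs  (seq inputs)
       acc []]
  (if (and xs (continue? acc))
    (recur (next xs) (conj acc (expensive-step (first xs))))
    acc))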

kingcode20:10:22

Ah, I didn’t know this about waiting within reduce/loop. Other than within a transducer, how would you wait on an external condition in reduce? Other than returning the accumulator untouched? Or in loop?

noisesmith20:10:52

you can use a queue with an atom, or use a core.async channel

noisesmith20:10:30

the manifold library has some constructs for these things too

kingcode21:10:00

@noisesmith sure, but I would prefer an immutable situation, and going to something as heavy as core.async/manifold seems overdone?

kingcode21:10:21

What is wrong with what is in the gist, other than being hand-rolled?

dpsutton21:10:18

because you are attempting to prevent over-realization of a lazy seq with disastrous performance implications and
> Clojure generally does not guarantee when lazy element will be realized

kingcode21:10:19

Plus, my external condition is the client calling my stuff

noisesmith21:10:21

you still have to manage consumption of the result - either you have a closure that is cut off from the rest of your code, or a top level memory leak

noisesmith21:10:52

with a closure you've punted the same problem you already have: how do you control when the next item is consumed, and how it's delivered to the thing that needs it

noisesmith21:10:01

that's not a problem that lazy-seqs can solve

noisesmith21:10:31

(without a memory leak)

kingcode21:10:38

Hmmm….I agree entirely about the memory leak issue, but my lazy-seqs are all small, and will be garbage collected after use. My issue is generally purely performance.

kingcode21:10:07

In other words, I am lazy-seq’ing to control evaluation

noisesmith21:10:16

"they will be garbage collected" how and when? if they are not a top level binding, they are closed over, if they are closed over there are better ways to use the values than wrapping in a lazy seq

kingcode21:10:23

while providing sequences to my client

Daniel Stephens21:10:42

sounds like a sequence of delay or no arg functions that the client can then realise/call when they need the answer would be one option
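e.g. (expensive-compute is a made-up name) building the delays is cheap even if the seq chunks; the expensive work only runs on deref:

(def steps (map #(delay (expensive-compute %)) inputs))

@(first steps) ;; only this element's work actually runs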

andy.fingerhut21:10:08

I may be out of my depth on subtleties involved here, but queues are one of the simplest of all the mutable things that exist, perhaps?

noisesmith21:10:28

right, and they are made for precisely this kind of situation

kingcode21:10:21

Now, suppose you are generating nodes within a DFS search tree, and each of these nodes has children generated as lazy seqs, consumed on demand during traversal. You want the nodes and their children to be consumed on demand, one at a time, but still be seqs, not queues?

noisesmith21:10:24

the queue isn't the data, it's a governor that lets you control the timing of consumption (something that clojure laziness emphatically doesn't provide)

dpsutton21:10:27

the return value of a DFS and the internal stacks or queues used for traversal don't have to coincide

kingcode21:10:15

Concretely, I am providing the sequences to a clojure zipper.

noisesmith21:10:41

@kingcode the typical shape: a queue that gets all the inputs (from your search tree or whatever), a queue that gets the results, and N processing loops in the middle, all reading from and writing to the same pair of queues, that N controls the parallelism / rate of consumption

kingcode21:10:02

So I was hoping that the nodes not traversed are left un’evaled

noisesmith21:10:12

you can even have recursion, where one of those workers puts more input back into the incoming queue

andy.fingerhut21:10:45

This may be the wrong time to toss this in, but there is an unchunk function that has been used in a few contexts like this that I believe correctly unchunks a lazy sequence? I believe that even then, you will not get 100% ironclad promises that it will never realize even 1 more element than you ask for, in all situations.

noisesmith21:10:57

I wouldn't make any assumptions about the eagerness of a zipper consuming nested lazy structures

noisesmith21:10:55

@andy.fingerhut and unchunk isn't in core because these sorts of constructs still end up being buggy, even if you unchunk

kingcode21:10:28

Hmmm. queues processing loops and unchunking sound like more complications than a simple lazy seq of nodes…. but thanks for your advice, I will try to digest all of this 🙂

andy.fingerhut21:10:07

Here is one way of writing an unchunk function, that helped avoid at least some level of evaluating too far ahead in a math.combinatorics library function. Again, if you want 100% guarantees in all cases that it will prevent evaluating even one more element, I don't think it provides such guarantees: https://github.com/clojure/math.combinatorics/blob/master/src/main/clojure/clojure/math/combinatorics.cljc#L214-L224
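The linked function is roughly this shape:

(defn unchunk [s]
  (lazy-seq
    (when (seq s)
      (cons (first s) (unchunk (rest s))))))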

Alex Miller (Clojure team)21:10:10

imo, unchunk is a smell that you're using lazy seqs when you cannot accept their constraints (and thus should be doing something else)

💯 3
noisesmith21:10:39

that's what I've been trying to say, but better articulated

andy.fingerhut21:10:15

I won't deny it is a code smell. I don't think anyone had a strong enough interest in reworking the implementation of the affected math.combinatorics function in a more significant way to avoid the out-of-memory issues that were occurring before unchunk was added to it.

noisesmith21:10:30

yeah - in that case it's kind of a corner case where you want the combinatoric function to be side effect free, but the memory consumption became a side effect

kingcode02:10:50

Interesting - Thanks for sharing.

kingcode02:10:38

In my case I will use delays with regular seqs to get rid of the smell :)

kingcode21:10:08

@noisesmith I agree about zipper eagerness, will have to investigate

Daniel Stephens21:10:31

I think in the DFS example each child is quick to find, so the fact that a few in a chunk get realised eagerly is probably not going to cause an issue, it only sounds like an issue where each next item is a big performance hit.

noisesmith21:10:11

if you need to control the timing of realizing elements, lazy seqs are not actually simple - they are not made for task management, and if you need to control things that strictly (because of the expense of some computation) they are the wrong abstraction

kingcode21:10:37

It all depends on what is being done to generate your DFS child…in my case it could be expensive 🙂

Daniel Stephens21:10:53

ahh, sorry, had misunderstood the example 😊

kingcode21:10:44

@noisesmith ok, I will try to find a way to do it right and still consume computations as a seq.

noisesmith21:10:08

another thing to consider: if you have a cheap input but expensive processing step, you can use run! on the input lazy seq, and write to a blocking queue for each result

noisesmith21:10:52

then the reader controls the workload
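rough shape of that (inputs and expensive-step are made up); a bounded blocking queue makes the producer pause until the reader takes:

(def results (java.util.concurrent.LinkedBlockingQueue. 1))

(future
  (run! #(.put results (expensive-step %)) inputs))

;; each take lets the producer move on to the next element
(.take results)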

kingcode21:10:54

Ok sure…but blocking queues and run! make me think “architecture”, when I just have a very small library

kingcode21:10:55

I will start with hand-rolling for now as in the gist, and look for a better way to do exactly what I need.

kingcode21:10:11

Thanks 🙂

noisesmith21:10:33

either it's safe to pretend the calculation is side effect free (so chunking doesn't matter), or you need to control execution, and side effects are in fact your primary concern

noisesmith21:10:47

I don't see a third option?

kingcode21:10:29

No side effects! Just expensive values, and potentially a lot of backtracking.

noisesmith21:10:43

an expense is a side effect

kingcode21:10:49

The only third option I see is the gist :)

kingcode21:10:06

Good point..

noisesmith21:10:25

that only displaces the problem, because the consumers of your lazy value have to deal with the same issues you do

kingcode21:10:14

So indeed I need to control execution of expensive (and potentially a lot of) computations producing immutable values.

kingcode21:10:07

OK…indeed if it comes to that, then definitely architecture becomes a concern.

kingcode21:10:42

Thank you @noisesmith and all

didibus21:10:43

Well, I wouldn't go and call an expensive operation a side effect

noisesmith22:10:13

the consumption of resources (and the way that affects program correctness) is a side effect

noisesmith22:10:30

and CPU power is one resource

didibus22:10:06

I mean, that's not the normally accepted definition of side effect. If you want it to mean that sure

didibus22:10:28

It's just quite confusing to most people to use it like that, I feel.

andy.fingerhut23:10:19

Agreed that it is a use of the term you typically won't find in a functional programming course or talk.

andy.fingerhut23:10:25

It certainly can be "an event you are trying explicitly to avoid happening, even in code that is otherwise dealing only with pure functions"

👍 3
kingcode23:10:26

I wasn’t thinking of it that way but it certainly can be, at least indirectly, e.g. by virtue of the domino effect if you lose a user/customer because of an unmitigated concern.

didibus01:10:56

It's definitely a concern to consider, especially in a lazy context, since it might surprise the consumer that reading the next element blocks for X amount of time. But it doesn't fall under the classical definition of a side effect

didibus01:10:50

The thing to know is that Clojure's lazy seq are not meant as a way to implement lazy behavior, but as an internal optimization for chaining operations.

didibus01:10:18

That's why they are chunking most of the time, as that is an attempt at further optimizing them for that purpose

didibus01:10:43

In theory, you can construct a non-chunking lazy seq, but even then you're probably relying on implementation details of various functions. It's possible that anything taking one would return a chunked version of it.

didibus01:10:15

And even non-chunking lazy seqs are not guaranteed lazy in all circumstances; some operations might still realize one or more elements

didibus01:10:34

So when using them, you need to ask yourself: am I okay with anything I do with it possibly realizing 1 or more elements, up to probably about two full chunks, so about 64 elements?

didibus01:10:00

Now, in general, if realizing an element causes a side effect, most likely you won't be okay with it. Which is why people generally say to avoid side effects with lazy-seq. The other thing is that when a side effect happens usually matters a lot when dealing with side effects, and with laziness you lose control over that, so it can be a source of bugs.

didibus01:10:55

But, in your case, it might be that, say, realizing an element takes 1 minute of pure computation; you might also not be okay with not being able to reliably predict whether getting the next element will take 1 minute or 32

kingcode15:10:02

Thx @U0K064KQV, well noted and interesting stuff. Some of my initial confusion had to do with lazy-seq doc’s mention that “..[it] will not force a lazy seq..”

didibus19:10:02

@kingcode Which doc exactly says that?

andy.fingerhut20:10:45

I see that the doc string for the function sequence contains that phrase, but not the doc string for lazy-seq

kingcode03:10:05

Ooops sorry, indeed I meant sequence. I was feeding it a true lazy seq, but in spite of “..will not force ..“, got the ChunkedCons version at the other end.

noisesmith16:10:39

returning a chunked collection and not forcing the input are entirely compatible

noisesmith16:10:54

it doesn't force the first chunk until you realize an element

noisesmith16:10:37

as discussed previously, if realizing 1 element at a time vs. realizing 32 at a time actually matters, laziness is the wrong abstraction to use in clojure

kingcode21:10:33

got it 👍

didibus21:10:59

Ya, or at least, not an easy one to use as such, 'cause you'd need to be super careful that you only use things that keep the seq as a lazy-seq, and operations over it that don't ever realize more than you want. And I think Clojure makes no guarantee that those assumptions would hold in the next minor version bump of Clojure itself. So ya, all in all, better to avoid it for those use cases. Delay is way better, or use core.async channels, agents, something custom, etc.

didibus21:10:42

That said, I've had use cases before that were IO where variable laziness was fine. For example, I have a lazy-seq-backed AWS SQS queue abstraction. SQS itself returns variable-length batches of messages if you use the batch API, and so for that it works really well for me.

didibus22:10:43

But that doesn't mean it isn't a relevant consideration

isak22:10:38

@kingcode Have you looked into core.async? That should let you write very similar code to what you wanted. And I don't think Rich is going to wake up tomorrow and decide he is going to start chunking puts/takes on channels

kingcode23:10:45

Thanks @isak, I have thought some more about my problem and from advice given earlier, using regular sequences and wrapping my tasks in delays should do the trick...indeed chunking is a good and necessary thing. Channels are cool but not what I need in this case - thanks!
