#clojure
2018-01-02
hiredman00:01:38

I don't know that I've seen Rich discuss the rationale for chunking, just that it is a performance optimization, but not why it is one or how it works

hiredman00:01:16

My guess is it has more to do with the jvm's jit than memory accesses, e.g. it transforms traversal of 32 elements into a tight loop for the jit to chew on, but I don't know

sophiago00:01:38

Interesting guess about the JIT. I understand how the optimization works: less I/O (obviously), plus in most cases when lazy-seqs are chunked they're also cached, e.g. when composing transducers.

hiredman00:01:56

transducers are a whole other thing

hiredman00:01:03

and are not tied to seqs

sophiago00:01:33

In the thread, our best guess is that it's just based on the branching factor of most data structures that get converted to lazy-seqs (a factor determined through experimentation to be ideal on the JVM), and is otherwise arbitrary.
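
A quick REPL sketch makes those 32-element chunks visible (assuming a chunked source such as range; the printed dots are a side effect added purely for illustration):

user=> (def s (map #(do (print ".") %) (range 100)))
#'user/s
user=> (first s)
................................0

Realizing the first element forces the whole first chunk, so 32 dots appear at once.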

sophiago00:01:32

I just meant transducers return chunked lazy-seqs (mainly to cache them when composing several), whereas not all lazy-seqs are chunked.

hiredman00:01:50

transducers don't return seqs

hiredman00:01:48

transducers are functions that take and return reducing functions

hiredman00:01:05

no reference to any kind of collection type, caching or otherwise
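
For reference, a minimal hand-rolled map transducer makes this concrete (a sketch of the shape that (map f) returns when given no collection; the name map-xf is made up for illustration):

(defn map-xf [f]
  (fn [rf]
    (fn
      ([] (rf))                   ;; init
      ([acc] (rf acc))            ;; completion
      ([acc x] (rf acc (f x)))))) ;; step: apply f to each input

;; behaves like clojure.core's (map inc) transducer:
;; (reduce ((map-xf inc) +) 0 [1 2 3]) => 9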

hiredman00:01:42

people get really confused about what things to label as being "transducers", in part because transducers didn't introduce a new named type or interface or collection.

sophiago00:01:34

I'm abusing terminology. Am I correct that all functions that can return transducers also return their output as chunked lazy-seqs? And that these two things are related in purpose, i.e. you want that behavior when threading a seq through several transducers ending with into?

hiredman00:01:47

if you look at the docstring for map it says "Returns a transducer when no collection is provided."

hiredman00:01:18

they are not related at all, it was just convenient to overload the names in clojure.core

sophiago00:01:05

I think it's pretty clear that the use cases for transducers are what call for passing chunked lazy-seqs between them...

hiredman00:01:06

so there is an arity of map that returns a function (when no collection is given) and an arity that returns a seq when a collection is given

sophiago00:01:20

Yes, I understand that.

sophiago00:01:57

When collections are given they return lazy-seqs, though. And same when transducers are applied.

hiredman00:01:55

when a transducer is applied, it is applied to a reducing function and returns a new reducing function, and if you reduce with it you can build up whatever type you build up with the reducing function

hiredman00:01:36

user=> (reduce ((map inc) +) 0 [1 2 3])
9
user=>

hiredman00:01:41

no seqs anywhere

sophiago00:01:00

Ah, ok. I have done that before, but it's not generally the use case I think of: threading between several transducers and calling into at the end.

hiredman00:01:25

sure, and into doesn't create a seq

hiredman00:01:38

it uses conj on whatever collection you supply it

sophiago00:01:50

You need into because all the intermediary structures are lazy-seqs...

hiredman00:01:57

user=> (into [] (map inc) [1 2 3])
[2 3 4]
user=> 

hiredman00:01:18

if you are using transducers well, you will not be creating intermediate seqs

hiredman00:01:00

user=> (into [] (comp (map dec) (map inc)) [1 2 3])
[1 2 3]
user=> 

sophiago00:01:03

I never mentioned seqs. This discussion is about lazy-seqs.

sophiago00:01:26

No, they're not.

sophiago00:01:02

This has gotten really far from my original question of "why chunk at 32 elements?" btw...

hiredman00:01:12

if you want to argue semantics about it, lazy-seqs extend seqs, so saying "no seqs" covers lazy-seqs

hiredman00:01:19

sure, but in the discussion of that issue your remarks led me to believe you had a misunderstanding about what transducers are and how they work. so I am trying to explain to you that transducers have nothing to do with seqs, lazy-seqs, chunked seqs, or cached values for seqs

hiredman00:01:28

they are an entirely different model

johanatan03:01:14

Hi all, what are people’s thoughts on the state of macros in Clojure?

johanatan03:01:41

(Recent experience w/ spec and the “contagious” nature of macros has led me to believe that macros are a huge missed opportunity in Clojure. Homoiconicity is arguably the killer feature of Lisp, and it seems that the existence of the macro/function distinction (and the lack of an eval-less apply for macros) and its contaminating effect are holding us back from a lot of true power).

pablore03:01:56

I never had much experience defining and using macros in lisp in general, so I don't really have an idea of how useful they are. What are your favourite macros that are not in the standard lib?

johanatan03:01:01

Unfortunately due to the aforementioned issues, I have to pretty much avoid macros (and most Clojure teams I’ve encountered do likewise).

johanatan03:01:00

i.e., in practice, Clojure falls short of allowing widespread application of the enlightenment one obtains from reading SICP or the various Lisp-related koans/zen stories

tbaldridge03:01:11

macros introduce new syntax, and there's always risk from that

tbaldridge03:01:32

Once you change the syntax/semantics of your language it becomes harder to read

tbaldridge03:01:55

So I'm not sure what the huge missed opportunity is that you're referencing @johanatan

devn03:01:42

it seems strange to me to say that having macros and functions be separate violates a lisp's homoiconicity in some way.

devn04:01:06

the avoidance of macros usually winds up being the result of being able to do it in a cleaner way with plain functions, or at least that's my experience

devn04:01:13

if you want to not macroexpand until runtime, then you're just using eval

dpsutton04:01:53

and classical lisp advocacy highlights the macro features of lisps. Paul Graham basically argues that lisp is great because it's a substrate on which to build a language with concepts for exactly what you want to do: Lisp is a language for building the perfect language. Or so goes the argument

devn04:01:33

i believe some very old lisps actually had multiple ways to define functions. the definition style that didn't eval its args turned into macros, because that's what everyone was using them for.

devn04:01:54

looking around, it looks like Lisp 1.5 did not have macros, but it did have the FEXPR which you'd call with a list of unevaluated arguments and the environment.

johanatan04:01:41

@tbaldridge the missed opportunity is that most people avoid macros because they do not mix well with functions (particularly higher order ones)

johanatan04:01:06

And yes it’s the advocacy that @dpsutton mentioned that I was referencing

tbaldridge04:01:15

but why use macros when you have higher order functions?

tbaldridge04:01:41

Clojure has a different philosophy: use data, manipulate data, and use that data to drive function composition.

tbaldridge04:01:07

as a last resort you can drive code generation with the data and call eval.

johanatan04:01:33

yea I guess it’s also that most people avoid eval lol

tbaldridge04:01:43

That's a holdover from JS, imo

johanatan04:01:51

With eval all things are truly possible

johanatan04:01:04

Also Python, Perl & Ruby

johanatan04:01:12

(Generally avoid eval)

tbaldridge04:01:16

and perhaps from old JVMs that cached generated code. JVMs >= 7 don't

tbaldridge04:01:15

The one thing I didn't like about SICP was how outdated it was, and like all schemes when all you have is a cons, everything looks like a cell 😄

dpsutton04:01:36

i wasn't advocating that necessarily, just pointing out the argument. I love reading articles about lisp and PG's stuff, but I'm not sure I'd like a big codebase of it

johanatan04:01:14

@dpsutton I was referring to the advocacy of Paul Graham; not you per se

tbaldridge04:01:37

So the SICP stuff is awesome, because it's so simple, but in 2018 we have things like hashmaps, types, JITs, blazing fast GCs, and a lot of the stuff in those older texts isn't quite as applicable today.

dpsutton04:01:44

yeah. it seems like a lot of scheme is defining getters and setters that are all giving function names to the 7th cons cell of a list

tbaldridge04:01:24

@dpsutton you don't like caaar and cddddr?

tbaldridge04:01:34

(or whatever they named those)

dpsutton04:01:58

haha i absolutely hate those

dpsutton04:01:16

i always forget if they are front to back or back to front

johanatan04:01:11

There is a certain appeal to growing your own language and I think unfortunately that is being missed by people avoiding macros and eval. And it’s hard to get right as well; I can think of a few clj libs that don’t get it quite right (and I’m not sure I can offer a counterexample that I would say is done right)

dpsutton04:01:27

i would love to work with a good lisp hacker on a codebase in scheme or common lisp, it just doesn't seem like the opportunity will come up. and the notions of testability and what is paradigmatic there aren't really what modern testable code looks like these days, to me

noisesmith04:01:51

there's an appeal to growing your own language, but trying to work on a codebase where someone else tried to grow a language is hell

noisesmith04:01:22

(except for the rare case where the implementing coders were qualified to make a new language, so far I haven't been that lucky)

dpsutton04:01:48

but i am working in a lisp codebase in the most popular lisp: elisp ha

dpsutton04:01:18

and buffer local variables are uhhh, no fun

tbaldridge04:01:49

@johanatan but growing your own language doesn't take macros and eval

tbaldridge04:01:30

look no further than get-in for that. It's a DSL that uses neither eval nor macros 😄

noisesmith04:01:12

swap! + get-in / update-in - double mini language (a nice one to work with)
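
A small sketch of that swap! + update-in / get-in mini-language (the state shape here is made up for illustration):

user=> (def state (atom {:users {"ada" {:score 1}}}))
#'user/state
user=> (swap! state update-in [:users "ada" :score] inc)
{:users {"ada" {:score 2}}}
user=> (get-in @state [:users "ada" :score])
2

The path vectors are plain data, so the whole "DSL" is driven by data and higher-order functions, with no macros or eval involved.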

tbaldridge04:01:44

For that matter, most of core.logic doesn't need macros, they're more for adding new syntax

johanatan04:01:55

@tbaldridge that’s true. I think good names + higher order functions get you 90% there but some things are not possible without macros

johanatan04:01:30

(Or at the least become vastly easier with macros)

tbaldridge04:01:44

That all depends though. I recently finished a job working on a custom query language in Clojure. Spent about a year working 100% of my time on it, and it had optimizers, profiling, JIT-ing, etc. And I don't think it used any macros. Just instaparse and Clojure data manipulation. Output was sexprs that got run through eval

tbaldridge04:01:31

Although it could have also output functions and composed them. Using eval was more about controlling the JVM JIT than limitations in Clojure

johanatan04:01:39

Hmm, that’s interesting.

johanatan05:01:41

Code gen and code transformation seem to be distinctly separate things though

johanatan05:01:07

Perhaps there’s no need for the latter if you do the former.

tbaldridge05:01:31

So you mentioned spec, what about spec would you see changing with different macro limitations?

johanatan05:01:50

Oh, the thing I ran into was that s/keys et al can’t be applied to a dynamically sized collection

devn05:01:05

RE: @noisesmith on "trying to work on a codebase where someone else tried to grow a language is hell" -- There's a saying I've grown quite fond of: "Hell is someone else's abstractions"

johanatan05:01:17

(Unless you use a macro + eval [which means that any callers needing to apply that will also need to be macros etc etc])

johanatan05:01:39

So macros are contagious like that

johanatan05:01:59

So if s/keys weren’t a leaky abstraction for example (either a function or a macro that for all intents and purposes behaves like a function [which might not be technically possible]), then it’d be fine

johanatan05:01:32

I wonder if this project would alleviate some of the pain associated with macros: https://github.com/brandonbloom/metaclj/blob/master/README.md

tbaldridge05:01:36

There's nothing that stops s/keys from being a function

tbaldridge05:01:41

Except for the initial design of spec, which took a code-first approach (mentioned in the spec rationale)

johanatan06:01:20

Yea, I think I caught wind of some initiative to functionify spec, so maybe that’s already underway, but the bigger problem is that this can happen with any given import (it depends on the whims of the implementors).

johanatan06:01:41

I’m not so sure that “code first” or “data first” should even be things in Lisps though (given the whole “code is data; data is code” maxim).

qqq06:01:08

I've developed this weird technique which has eliminated most of my pains with macros: instead of writing

(defmacro foo [...]
  ... lots of code here ...)
I do
(defn foo-helper [...]
  ... lots of code here ...)

(defmacro foo [...]
  (foo-helper ...))

I'm not sure if it's 100% psychological or what, but I've found writing / debugging functions to be MUCH MUCH easier than dealing with (macroexpand-1 (...))

qqq06:01:20

err, (macroexpand-1 '(...))

benzap06:01:21

@qqq when I was trying to figure out macros, there was a clojure book I got which talks specifically about macros. I think one of the first things it suggested was to try to include most of the functionality in functions, and break it out into macros only when you really want or need to. I still find myself avoiding macros most of the time 🙂

qqq06:01:16

@benzap: glad others also think this way. upon more reflection, I just realized the following: the idea of macros calling macros scares me (and I've never tried it), while functions calling functions is easy. as a result, by pushing the logic into a -helper function, it's easy to get better decomposition too
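
As a concrete sketch of that helper pattern (the when-let* macro here is hypothetical): the macro is a thin shell, and all the expansion logic lives in an ordinary function you can call and test directly at the REPL:

(defn when-let*-helper
  "Builds the expansion for when-let*: nests one when-let per binding pair."
  [bindings body]
  (if (empty? bindings)
    `(do ~@body)
    `(when-let [~@(take 2 bindings)]
       ~(when-let*-helper (drop 2 bindings) body))))

(defmacro when-let* [bindings & body]
  (when-let*-helper bindings body))

;; the helper is testable without macroexpand:
;; (when-let*-helper '[a 1 b 2] '((+ a b)))
;; => (clojure.core/when-let [a 1]
;;      (clojure.core/when-let [b 2] (do (+ a b))))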

Empperi11:01:12

hmm, I have a weird problem with aleph. If I execute my app with Java 8 I get:

SEVERE: error in manifold.utils/future-with
java.lang.NoSuchMethodError: java.nio.ByteBuffer.flip()Ljava/nio/ByteBuffer;
	at byte_streams$fn__4369.invokeStatic(byte_streams.clj:704)
	at byte_streams$fn__4369.invoke(byte_streams.clj:676)
	at byte_streams.protocols$fn__3310$G__3305__3319.invoke(protocols.clj:11)
	at byte_streams$fn__4189$f__3992__auto____4191$fn__4193.invoke(byte_streams.clj:452)
	at clojure.lang.LazySeq.sval(LazySeq.java:40)
	at clojure.lang.LazySeq.seq(LazySeq.java:49)
	at clojure.lang.RT.seq(RT.java:528)
	at clojure.core$seq__5124.invokeStatic(core.clj:137)
	at clojure.core$map$fn__5587.invoke(core.clj:2738)
	at clojure.lang.LazySeq.sval(LazySeq.java:40)
	at clojure.lang.LazySeq.seq(LazySeq.java:49)
	at clojure.lang.RT.seq(RT.java:528)
	at clojure.core$seq__5124.invokeStatic(core.clj:137)
	at clojure.core$empty_QMARK_.invokeStatic(core.clj:6126)
	at clojure.core$empty_QMARK_.invoke(core.clj:6126)
	at manifold.stream.seq.SeqSource.take(seq.clj:41)
	at manifold.stream.graph$sync_connect$f__914__auto____2402.invoke(graph.clj:255)
	at clojure.lang.AFn.run(AFn.java:22)
	at io.aleph.dirigiste.Executor$3.run(Executor.java:318)
	at io.aleph.dirigiste.Executor$Worker$1.run(Executor.java:62)
	at manifold.executor$thread_factory$reify__806$f__807.invoke(executor.clj:44)
	at clojure.lang.AFn.run(AFn.java:22)
	at java.lang.Thread.run(Thread.java:748)
but it works just fine with Java 9

Empperi11:01:23

and to be exact, this error occurs when an HTTP response is being processed

Empperi11:01:07

and to be even more exact, my server gets the HTTP request, it does its job, then passes the return value to aleph which should then create the response, then boom

Empperi11:01:35

oh, no wait. I might have an idea where to look at

Empperi12:01:40

more debugging on the problem: this only happens when the following conditions are met: 1) the app is run from an uberjar, 2) using java 8, 3) the HTTP request tries to send a file whose reference was created with (io/resource path)

Empperi12:01:57

so basically it’s trying to serve a file from within the jar

Empperi12:01:23

connected nrepl to my uberjarred app and the file is there, I can open a reader to it and slurp the contents no problemo

Prakash12:01:45

java.lang.NoSuchMethodError: java.nio.ByteBuffer.flip()Ljava/nio/ByteBuffer;
Is it possible that the signature of the flip method has changed from jdk8 to jdk9?

Empperi12:01:09

that would be a breaking change

Empperi12:01:17

Java tries to avoid those like the plague

Empperi12:01:44

trying to downgrade aleph next

Empperi12:01:01

been using 0.4.5-alphas but I’ll try 0.4.4

Prakash12:01:37

@niklas.collin there is a change though -

Prakash12:01:12

This problem also happens when you build the software with JDK 9 targeting JDK 8, because in JDK 9 ByteBuffer.flip() returns a ByteBuffer... It used to return a Buffer in JDK 8.

Prakash12:01:47

try casting it to a buffer
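
In Clojure, that cast amounts to type-hinting the Buffer supertype so the compiler emits a call to Buffer.flip(), which exists on both JDKs, instead of the JDK 9-only ByteBuffer.flip() override. A sketch (buf is a hypothetical ByteBuffer):

;; instead of (.flip buf) on something hinted ^java.nio.ByteBuffer:
(.flip ^java.nio.Buffer buf)

;; the Java equivalent is ((Buffer) buf).flip();
;; building with javac's --release 8 flag (available on JDK 9+)
;; also avoids the problem by compiling against the JDK 8 API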

Empperi12:01:08

so it only happens if you compile with jdk 9

Prakash12:01:35

you should get this error if you run a jar built with jdk9 on jdk8 or lower, or more generally if you build your project with jdk9 and run it on jdk8 or lower

Empperi13:01:25

yeah, makes sense

tjtolton13:01:50

True or False: having more threads in your threadpool than you have cores on your machine (like 200 max threads) is beneficial for network IO intensive applications

val_waeselynck13:01:04

@U37NPE2H0 well yes, if threads spent a lot of time waiting for I/O, you'd better have more threads than cores (a lot more). The tradeoff is between memory consumption and throughput: too many threads eat up too much RAM, too few threads will decrease your throughput.
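
A rough sketch of why the extra threads help (the Thread/sleep stands in for a 100 ms network call; the counts are made up):

(let [t0    (System/nanoTime)
      tasks (doall (repeatedly 200 #(future (Thread/sleep 100) :done)))]
  (run! deref tasks)
  (println "elapsed ms:" (/ (- (System/nanoTime) t0) 1e6)))
;; prints roughly 100 ms: all 200 "IO waits" overlap because future's
;; pool grows past the core count; an 8-thread pool would need
;; ~(200/8) * 100 = 2500 ms for the same work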

val_waeselynck13:01:39

in any case, non-blocking IO is preferred for IO-intensive applications

tjtolton13:01:45

right, yeah, I'd certainly rather go core.async. Was just curious about whether or not the JVM was able to allocate threads above its core count and why it would ever even attempt that. The answer, I suppose, is that an 8-core processor can allocate a 9th thread if all 8 threads are waiting for network IO.

markmarkmark14:01:49

I’m pretty sure the jvm will allocate as many threads as it can address or fit into memory

tjtolton14:01:05

ah, yeah I suppose I'm not using the right language here. I knew it could assign threads, but I had previously assumed that an assigned 9th thread would still have to wait until one of the other 8 threads released a core before any work could begin on thread 9. I didn't realize net IO was a condition that releases a processor to work on another thread

tjtolton14:01:54

although I probably should have. I assume this is "how computers work 101" stuff I'm asking, 😅

val_waeselynck14:01:37

@U37NPE2H0 this is not JVM specific: the OS allocates many more threads than cores, and orchestrates time slicing between all those threads

tjtolton14:01:49

gotcha. makes sense. Thanks for all your help (again), val!

tjtolton15:01:39

ahh, nice, thanks, that's exactly the vocabulary I was missing

tjtolton13:01:04

I've been told that a new thread will be freed up if all existing threads are blocked on network IO. Seems plausible, I suppose.

markmarkmark13:01:13

there are ThreadPool configurations that will create new threads whenever you submit a task and all other threads are already blocked/in use, but if you set 200 as the max that’s the most it will create

markmarkmark13:01:37

it would probably be worth looking into whether or not you can use asynchronous io instead of spawning a ton of threads

tjtolton14:01:26

> it would probably be worth looking into whether or not you can use asynchronous io
word.

darnok14:01:53

Hi, I’m trying to use the (clojure.core/compile) function and I get “CompilerException java.io.IOException: No such file or directory, compiling:(clojure/core/specs/alpha.clj:1:1)”.

luskwater15:01:11

@johanatan Help me understand: what was the problem with
> Oh, the thing I ran into was that s/keys et al can’t be applied to a dynamically sized collection
Or, better, I’m not sure what you mean by a dynamically sized collection. Can you give me an example?

tbaldridge15:01:11

@luskwater The problem is that if you have a vector of keys, let's call this V, and you want a spec where (s/keys :req V), you have to use a macro.

tbaldridge15:01:37

That call to s/keys won't work, because the s/keys macro expects a vector of keys, not a binding or a var to a vector of keys
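
A sketch of the failure mode (assuming clojure.spec.alpha is required as s, and the keys are made up):

(require '[clojure.spec.alpha :as s])
(def V [::a ::b])

;; fails at macroexpansion time: the s/keys macro tries to read the
;; literal symbol V as its key vector, and never sees V's value
(s/keys :req V)

;; only a literal vector works:
(s/keys :req [::a ::b])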

tbaldridge15:01:20

So macros become infectious:

(defmacro my-keys [V]
   `(s/keys :req ~V))

tbaldridge15:01:35

Or:

(eval `(s/keys :req ~V))

luskwater15:01:03

Ah, OK. I’ve used spec where I have varying numbers of keys for records of various shapes and fields, but that’s using s/merge and s/multi-spec to make magic happen

luskwater15:01:16

A different issue from the one you’re facing

tbaldridge15:01:29

right, there's several ways around this problem

tbaldridge15:01:41

This falls under the "Code is data (not vice versa)" part of the rationale: https://clojure.org/about/spec

luskwater15:01:43

Maybe that’s part of the .alpha in the namespace… don’t give up hope yet…

tbaldridge16:01:33

Well yes, specs-from-data is something that has been mentioned as a future goal of spec, so I think it will be solved fairly soon, but still it's a pain point.

bmaddy17:01:42

I could have sworn there was a clojure function to grab specific keys from a map like this:

> (vals-at {1 :a 2 :b 3 :c} [1 3])
[:a :c]
I can't seem to remember the name of the function though. Does that not exist? Am I going crazy?

bronsa17:01:02

select-keys

bmaddy17:01:13

That returns a map

bronsa17:01:19

ah, sorry, didn't pay attention to the example

bmaddy17:01:20

I thought...

bronsa17:01:24

just use map then

bronsa17:01:32

or mapv if you want a vector

bmaddy17:01:43

I would, but numbers aren't functions.

bronsa17:01:48

no but maps are
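
i.e., since a map is a function of its keys, the lookup is just (a quick sketch):

user=> (mapv {1 :a 2 :b 3 :c} [1 3])
[:a :c]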

bmaddy18:01:06

Ah! Of course!

bmaddy18:01:16

Thanks, I really felt like I was going crazy there for a minute. 😄

eriktjacobsen19:01:51

Has there been any resolution regarding clojureverse not backing up clojurians anymore since 11-17? Given that we lose messages so quickly, this seemed like an integral part of having a community based on slack: https://clojurians-log.clojureverse.org/clojure/2017-11-17.html

puzzler23:01:06

What's a good Clojure library for generating reports about data with charts and graphs? I'm aware there's some stuff built into Gorilla REPL. Any other options I should be looking at?