This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2018-01-02
Channels
- # adventofcode (2)
- # bangalore-clj (1)
- # beginners (26)
- # boot (7)
- # cider (21)
- # clara (45)
- # cljs-dev (1)
- # cljsrn (2)
- # clojure (168)
- # clojure-berlin (1)
- # clojure-india (4)
- # clojure-italy (7)
- # clojure-nl (1)
- # clojure-russia (1)
- # clojure-spec (10)
- # clojure-uk (12)
- # clojurescript (31)
- # datascript (2)
- # datomic (28)
- # defnpodcast (9)
- # emacs (2)
- # events (4)
- # fulcro (193)
- # hoplon (127)
- # hypercrud (1)
- # jobs (1)
- # jobs-discuss (38)
- # keechma (1)
- # luminus (5)
- # off-topic (16)
- # onyx (4)
- # parinfer (9)
- # portkey (2)
- # portland-or (1)
- # precept (5)
- # re-frame (9)
- # reagent (8)
- # remote-jobs (7)
- # rum (3)
- # shadow-cljs (2)
- # spacemacs (19)
- # specter (2)
- # testing (1)
- # unrepl (34)
I don't know that I've seen Rich discuss the rationale for chunking, just that it is a performance optimization, not why it is one or how it works
My guess is it has more to do with the jvm's jit than memory accesses, e.g. it transforms traversal of 32 elements into a tight loop for the jit to chew on, but I don't know
Interesting guess about the JIT. I understand how the optimization works: less I/O (obviously) plus in most cases when lazy-seqs are chunked they're also cached, e.g. composing transducers.
In the thread, our best guess is it's just based on the branching factor (determined through experimentation to be ideal on the JVM) of most data structures that will be converted to lazy-seqs and otherwise arbitrary.
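For reference, a quick REPL sketch of chunking in action; range produces chunked seqs, so realizing one element of a lazy map over it realizes the whole 32-element chunk:

(first (map #(do (print % " ") %) (range 100)))
;; prints 0 1 2 ... 31, then returns 0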
I just meant transducers return chunked lazy-seqs (mainly to cache them when composing several), whereas not all lazy-seqs are chunked.
people get really confused about what things to label as being "transducers", in part because transducers didn't introduce a new named type or interface or collection.
I'm abusing terminology. Am I correct that all functions that can return transducers also return their output as chunked lazy-seqs? And that these two things are related in purpose, i.e. you want that behavior when threading a seq through several transducers ending with into?
if you look at the docstring for map it says "Returns a transducer when no collection is provided."
they are not related at all, it was just convenient to overload the names in clojure.core
I think it's pretty clear that the use cases for transducers are what call for passing chunked lazy-seqs between them...
so there is an arity of map that returns a function (when no collection is given) and an arity that returns a seq when a collection is given
When collections are given they return lazy-seqs, though. And same when transducers are applied.
when a transducer is applied, it is applied to a reducing function and returns a new reducing function, and if you reduce with it you can build up whatever type the reducing function builds up
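A minimal REPL sketch of that:

(def rf ((map inc) conj))              ; conj, transformed by the mapping step
(reduce rf [] [1 2 3])                 ;=> [2 3 4]
(transduce (map inc) conj [] [1 2 3])  ;=> [2 3 4], the idiomatic spelling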
Ah, ok. I have done that before, but it's not generally the use case I think of: threading between several transducers and calling into at the end.
This has gotten really far from my original question of "why chunk at 32 elements?" btw...
if you want to argue semantics about it, lazy-seqs extend seqs, so saying "no seqs" covers lazy-seqs
sure, but in the discussion of that issue your remarks led me to believe you had a misunderstanding about what transducers are and how they work. so I am trying to explain to you that transducers have nothing to do with seqs, lazy-seqs, chunked seqs, or cached values for seqs
(Recent experience w/ spec and the “contagious” nature of macros has led me to believe that macros are a huge missed opportunity in Clojure. Homoiconicity is arguably the killer feature of Lisp, and it seems that the existence of the macro/function distinction (and the lack of an eval-less apply for macros) and its contaminating effect are holding us back from a lot of true power).
I never had much experience defining and using macros in lisp in general, so I don't really have an idea of how useful they are. What are your favourite macros that are not in the standard lib?
Unfortunately due to the aforementioned issues, I have to pretty much avoid macros (and most Clojure teams I’ve encountered do likewise).
i.e., in practice, Clojure falls short of allowing widespread application of the enlightenment one obtains from readings of SICP or the various Lisp-related koans/zen stories
macros introduce new syntax, and there's always risk from that
Once you change the syntax/semantics of your language it becomes harder to read
So I'm not sure what the huge missed opportunity is that you're referencing @johanatan
it seems strange to me to say that having macros and functions be separate violates a lisp's homoiconicity in some way.
the avoidance of macros usually winds up being the result of being able to do it in a cleaner way with plain functions, or at least that's my experience
and classical lisp advocacy highlights the macro features of lisps. Paul Graham basically argues that lisp is great because it's a substrate for building a language that has concepts for exactly what you want to do. For language building, lisp is the perfect language. Or so goes the argument
i believe some very old lisps actually had multiple ways to define functions. the definition style that didn't eval its args turned into macros, because that's what everyone was using them for.
looking around, it looks like Lisp 1.5 did not have macros, but it did have the FEXPR which you'd call with a list of unevaluated arguments and the environment.
@tbaldridge the missed opportunity is that most people avoid macros because they do not mix well with functions (particularly higher order ones)
but why use macros when you have higher order functions?
Clojure has a different philosophy: use data, manipulate data, and use that data to drive function composition.
as a last resort you can drive code generation with the data and call eval.
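A minimal sketch of that last resort, assuming a hypothetical data spec like [:add 1 2]:

(defn op->code [[op & args]]
  ;; turn a data description into a Clojure form
  (cons (case op :add '+ :mul '*) args))

(op->code [:add 1 2])        ;=> (+ 1 2)
(eval (op->code [:add 1 2])) ;=> 3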
That's a holdover from JS, imo
and perhaps from old JVMs that cached generated code. JVMs >= 7 don't
The one thing I didn't like about SICP was how outdated it was, and like all schemes when all you have is a cons, everything looks like a cell 😄
i wasn't advocating that necessarily. just pointing out the argument. I love reading articles about lisp and PG's stuff, but I'm not sure I'd like a big codebase of it
So the SICP stuff is awesome, because it's so simple, but in 2018 we have things like hashmaps, types, JITs, blazing fast GCs, and a lot of the stuff in those older texts aren't quite as applicable today.
yeah. it seems like a lot of scheme is defining getters and setters that are all giving function names to the 7th cons cell of a list
@dpsutton you don't like caaar and cddddr?
(or whatever they named those)
There is a certain appeal to growing your own language and I think unfortunately that is being missed by people avoiding macros and eval. And it’s hard to get right as well; I can think of a few clj libs that don’t get it quite right (and I’m not sure I can offer a counterexample that I would say is done right)
i would love to work with a good lisp hacker in a code base in scheme or common lisp; it just doesn't seem like the opportunity will come up. and the notions of testability and what is paradigmatic there don't really match what modern testable code looks like to me these days
there's an appeal to growing your own language, but trying to work on a codebase where someone else tried to grow a language is hell
@noisesmith precisely! 😀
(except for the rare case where the implementing coders were qualified to make a new language, so far I haven't been that lucky)
@johanatan but growing your own language doesn't take macros and eval
look no further than get-in for that. It's a DSL that uses neither eval nor macros 😄
swap! + get-in / update-in - double mini language (a nice one to work with)
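For anyone unfamiliar, a minimal sketch of that double mini language over nested data (the example map is hypothetical):

(def state (atom {:users {"ada" {:score 1}}}))
(swap! state update-in [:users "ada" :score] inc) ; navigate by path, then update
(get-in @state [:users "ada" :score])             ;=> 2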
For that matter, most of core.logic doesn't need macros, they're more for adding new syntax
@tbaldridge that’s true. I think good names + higher order functions get you 90% there but some things are not possible without macros
That all depends though. I recently finished a job working on a custom query language in Clojure. Spent about a year working 100% of my time on it, and it had optimizers, profiling, JIT-ing, etc. And I don't think it used any macros. Just instaparse and Clojure data manipulation. Output was sexprs that got run through eval
Although it could have also output functions and composed them. Using eval was more about controlling the JVM JIT than limitations in Clojure
So you mentioned spec, what about spec would you see changing with different macro limitations?
Oh, the thing I ran into was that s/keys et al can’t be applied to a dynamically sized collection
RE: @noisesmith on "trying to work on a codebase where someone else tried to grow a language is hell" -- There's a saying I've grown quite fond of: "Hell is someone else's abstractions"
(Unless you use a macro + eval [which means that any callers needing to apply that will also need to be macros etc etc])
So if s/keys weren’t a leaky abstraction for example (either a function or a macro that for all intents and purposes behaves like a function [which might not be technically possible]), then it’d be fine
I wonder if this project would alleviate some of the pain associated with macros: https://github.com/brandonbloom/metaclj/blob/master/README.md
There's nothing that stops s/keys from being a function
Except from the initial design of spec, which took a code first approach (mentioned in the spec rationale)
Yea, I think I heard wind of some initiative to functionify spec so maybe that’s already underway but the bigger problem is that it can happen with any given import (depends on the whims of the implementors).
I’m not so sure that “code first” or “data first” should even be things in Lisps though (given the whole “code is data; data is code” maxim).
I've developed this weird technique which has eliminated most of my pains with macros. Instead of writing
(defmacro foo [...]
  ... lots of code here ...)
I do
(defn foo-helper [...]
  ... lots of code here ...)

(defmacro foo [...]
  (foo-helper ...))
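A concrete instance of the pattern (with-logging and its helper are hypothetical names); the heavy lifting lives in a plain function you can call and test at the REPL:

(defn with-logging-helper [body]
  ;; builds the expansion; just an ordinary function returning a form
  `(let [result# (do ~@body)]
     (println "result:" result#)
     result#))

(defmacro with-logging [& body]
  (with-logging-helper body))

(with-logging (+ 1 2))           ; prints "result: 3", returns 3
(with-logging-helper '((+ 1 2))) ; inspect the expansion without macroexpand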
I'm not sure if it's 100% psychological or what, but I've found writing / debugging functions to be MUCH MUCH easier than dealing with (macroexpand-1 (...))
@qqq when I was trying to figure out macros, there was a clojure book I got which talks specifically about macros. I think one of the first things it suggested was to include most of the functionality in functions, and break it out into macros only when you really want or need to. I still find myself avoiding macros most of the time 🙂
@benzap: glad others also think this way; upon more reflection, I just realized the following: the idea of macros calling macros scares me (and I've never tried it), while functions calling functions = easy. As a result, by pushing it into a -helper function, it's easy to get better decomposition too
btw, an awesome book https://leanpub.com/lisphackers/read
hmm, I have a weird problem with aleph. If I execute my app with Java 8 I get:
SEVERE: error in manifold.utils/future-with
java.lang.NoSuchMethodError: java.nio.ByteBuffer.flip()Ljava/nio/ByteBuffer;
at byte_streams$fn__4369.invokeStatic(byte_streams.clj:704)
at byte_streams$fn__4369.invoke(byte_streams.clj:676)
at byte_streams.protocols$fn__3310$G__3305__3319.invoke(protocols.clj:11)
at byte_streams$fn__4189$f__3992__auto____4191$fn__4193.invoke(byte_streams.clj:452)
at clojure.lang.LazySeq.sval(LazySeq.java:40)
at clojure.lang.LazySeq.seq(LazySeq.java:49)
at clojure.lang.RT.seq(RT.java:528)
at clojure.core$seq__5124.invokeStatic(core.clj:137)
at clojure.core$map$fn__5587.invoke(core.clj:2738)
at clojure.lang.LazySeq.sval(LazySeq.java:40)
at clojure.lang.LazySeq.seq(LazySeq.java:49)
at clojure.lang.RT.seq(RT.java:528)
at clojure.core$seq__5124.invokeStatic(core.clj:137)
at clojure.core$empty_QMARK_.invokeStatic(core.clj:6126)
at clojure.core$empty_QMARK_.invoke(core.clj:6126)
at manifold.stream.seq.SeqSource.take(seq.clj:41)
at manifold.stream.graph$sync_connect$f__914__auto____2402.invoke(graph.clj:255)
at clojure.lang.AFn.run(AFn.java:22)
at io.aleph.dirigiste.Executor$3.run(Executor.java:318)
at io.aleph.dirigiste.Executor$Worker$1.run(Executor.java:62)
at manifold.executor$thread_factory$reify__806$f__807.invoke(executor.clj:44)
at clojure.lang.AFn.run(AFn.java:22)
at java.lang.Thread.run(Thread.java:748)
but it works just fine with Java 9. And to be even more exact: my server gets an HTTP request, it does its job, then passes the return value to aleph, which should then create the response, then boom
more debugging on the problem: this only happens when following conditions are met:
1) app run from uberjar
2) using java 8
3) the HTTP request tries to send a file whose reference has been created with (io/resource path)
connected nrepl to my uberjarred app and the file is there, I can open a reader to it and slurp the contents no problemo
java.lang.NoSuchMethodError: java.nio.ByteBuffer.flip()Ljava/nio/ByteBuffer;
Is it possible that the signature of the flip method has changed from jdk8 to jdk9?
@niklas.collin there is a change though -
This problem also happens when you build the software with JDK 9 targeting JDK 8, because in JDK 9 ByteBuffer.flip() returns a ByteBuffer... It used to return a Buffer in JDK 8.
you should get this error if you run a jar built with jdk9 on jdk8 or less, or if you run your project on jdk8 or less
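For reference, one known workaround when code must build on JDK 9+ but run on JDK 8 is to type-hint the buffer as java.nio.Buffer, so the compiled call targets Buffer.flip() (present on both JDKs) rather than the covariant ByteBuffer.flip() override that JDK 9 added:

(defn flip! [^java.nio.Buffer buf]
  ;; compiles to an invokevirtual on java.nio.Buffer, which exists on JDK 8 too
  (.flip buf))

(for Java sources, compiling with javac's --release 8 avoids the problem)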
True or False: having more threads in your threadpool than you have cores on your machine (like 200 max threads) is beneficial for network IO intensive applications
@U37NPE2H0 well yes, if threads spend a lot of time waiting for I/O, you'd better have more threads than cores (a lot more). The tradeoff is between memory consumption and throughput: too many threads eat up too much RAM, too few threads will decrease your throughput.
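For example, a minimal sketch of an oversized fixed pool for blocking IO (the pool size and URL are hypothetical):

(import '(java.util.concurrent Executors ExecutorService))

(def ^ExecutorService io-pool (Executors/newFixedThreadPool 200))

;; each blocked thread just waits on the socket, freeing its core for others
(.submit io-pool ^Runnable (fn [] (slurp "http://example.com")))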
in any case, non-blocking IO is preferred for IO-intensive applications
right, yeah, I'd certainly rather go core.async. Was just curious about whether or not the JVM was able to allocate threads above its core count and why it would ever even attempt that. Answer is I suppose that an 8 core processor can allocate a 9th thread if all 8 threads are waiting for network IO.
I’m pretty sure the jvm will allocate as many threads as it can address or fit into memory
ah, yeah I suppose I'm not using the right language here. I knew it could assign threads, but I had previously assumed that an assigned 9th thread would still have to wait for one of the other 8 to release its core before any work could begin on thread 9. I didn't realize blocking on net IO was a condition that releases a core to work on another thread
although I probably should have. I assume this is "how computers work 101" stuff I'm asking, 😅
@U37NPE2H0 this is not JVM specific: the OS allocates many more threads than cores, and orchestrates time slicing between all those threads
@U37NPE2H0 see https://en.wikipedia.org/wiki/Preemption_(computing)#Preemptive_multitasking
I've been told that a new thread will be freed up if all existing threads are blocked on network IO. Seems plausible, I suppose.
there are ThreadPool configurations that will create new threads whenever you submit a task and all other threads are already blocked/in use, but if you set 200 as the max that’s the most it will create
it would probably be worth looking into whether or not you can use asynchronous io instead of spawning a ton of threads
> it would probably be worth looking into whether or not you can use asynchronous io
word.
Hi, I’m trying to use the (clojure.core/compile) function and I get “CompilerException java.io.IOException: No such file or directory, compiling:(clojure/core/specs/alpha.clj:1:1)”.
@johanatan Help me understand: what was the problem with
> Oh, the thing I ran into was that s/keys et al can’t be applied to a dynamically sized collection
Or, better, I’m not sure about what you mean by a dynamically sized collection. Can you give me an example?
@luskwater The problem is that if you have a vector of keys, let's call this V, and you want a spec where (s/keys :req V), you have to use a macro.
That call to s/keys won't work, because the s/keys macro expects a vector of keys, not a binding or a var to a vector of keys
So macros become infectious:
(defmacro my-keys [V]
`(s/keys :req ~V))
Or:
(eval `(s/keys :req ~V))
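In use, with hypothetical specs ::a and ::b:

(require '[clojure.spec.alpha :as s])
(s/def ::a int?)
(s/def ::b string?)

(def V [::a ::b])
(def dyn-spec (eval `(s/keys :req ~V))) ; V is spliced in as a literal vector
(s/valid? dyn-spec {::a 1 ::b "x"})     ;=> true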
Ah, OK. I’ve used spec where I have varying numbers of keys for records of various shapes and fields, but that’s using s/merge and s/multi-spec to make magic happen
right there's several ways around this problem
This falls under the "Code is data (not vice versa)" part of the rationale: https://clojure.org/about/spec
Well yes, specs-from-data is something that has been mentioned as a future goal of spec, so I think it will be solved fairly soon, but still it's a pain point.
I could have sworn there was a clojure function to grab specific keys from a map like this:
> (vals-at {1 :a 2 :b 3 :c} [1 3])
[:a :c]
I can't seem to remember the name of the function though. Does that not exist? Am I going crazy?
Has there been any resolution regarding clojureverse not backing up clojurians anymore since 11-17? Given that we lose messages so quickly, this seemed like an integral part of having a community based on slack: https://clojurians-log.clojureverse.org/clojure/2017-11-17.html