This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2019-03-28
Channels
- # aleph (48)
- # announcements (3)
- # bangalore-clj (1)
- # beginners (131)
- # cider (30)
- # cljdoc (6)
- # cljs-dev (53)
- # cljsrn (24)
- # clojure (312)
- # clojure-austin (2)
- # clojure-europe (4)
- # clojure-finland (6)
- # clojure-nl (24)
- # clojure-spec (24)
- # clojure-uk (66)
- # clojurescript (185)
- # core-async (46)
- # cursive (10)
- # data-science (9)
- # datomic (15)
- # devcards (2)
- # emacs (50)
- # fulcro (28)
- # jobs (1)
- # jobs-discuss (2)
- # kaocha (11)
- # lein-figwheel (12)
- # nyc (1)
- # off-topic (105)
- # other-languages (80)
- # pedestal (6)
- # re-frame (50)
- # reagent (5)
- # reitit (1)
- # remote-jobs (2)
- # ring (10)
- # rum (1)
- # shadow-cljs (10)
- # spacemacs (19)
Trying an 'official' reducers example, but adding a println:
;; assumes (require '[clojure.core.reducers :as r])
(->> (range 100000)
     (r/map (fn [x]
              (println (Thread/currentThread))
              x))
     (r/filter even?)
     r/foldcat)
...how come it always prints the same thread? I'd expect things to use all available cores
Should have googled 🙂 https://stackoverflow.com/a/26291136/569050
how do I get reducers to process each member of the coll in a separate thread?
I tried with (fold 1 ...)
but no luck. The forkjoin pool is used, but each member is processed in the same thread, which proves sequentiality
(I might be better off with pmap. But reducers might give some flexibility)
Maybe this is my answer: > . Similarly, reducers is elegant, but we can't control its level of parallelism. https://github.com/TheClimateCorporation/claypoole
The key to this is to get the 'right' chunk size for fork/join packets. If it is too small you will just context switch. Too large (for your collection size) won't make enough packets to cover the available cores. Have a look at this https://github.com/jsa-aerial/aerial.utils/blob/d451f81fc308a7fbae2ca1de22890c2e4f726880/src/aerial/utils/coll.clj#L468, it has worked very well for me.
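To make the chunk-size knob concrete, here is a hedged sketch (not the linked aerial.utils code) using r/fold's explicit partition size; the size 512 and the function name are illustrative:

```clojure
(require '[clojure.core.reducers :as r])

;; r/fold's first argument is the fork/join partition size: too small
;; and you pay in task overhead, too large and there aren't enough
;; packets to keep all cores busy.
(defn parallel-sum-evens [coll]
  (r/fold 512 + +
          (r/filter even? (r/map inc (vec coll)))))

;; note: only vectors (and maps) fold in parallel; lazy seqs fall
;; back to a sequential reduce, which may explain single-threaded runs
(parallel-sum-evens (range 100000))
```

The `vec` call matters: feeding a `range` seq straight into `r/fold` quietly degrades to sequential reduction.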
Hi, I'm new to Docker and have to add a ClojureScript project to an existing Dockerfile. It uses ubuntu:latest. Adding leiningen to apt-get install fails. Does anyone have a basic configuration to show how to do this in an ubuntu image?
here is a dockerfile that has leiningen + nodejs https://github.com/ipfs-shipyard/cube/blob/master/Dockerfile
if you're planning to use in production, you probably want to lock down the dependency versions, as they are not in the example linked above
RUN wget https://raw.githubusercontent.com/technomancy/leiningen/stable/bin/lein
RUN chmod +x lein
RUN mv lein /usr/local/bin
I like your way better... but someone made a ppa...
sudo apt-add-repository ppa:mikegedelman/leiningen-git-stable
sudo apt-get update
sudo apt-get install leiningen
It just tracks the stable lein repo on github
@UEJ5FMR6K is 100% correct. Lock the Clojure version, lein version and dependency versions... or you could be in dependency hell if just 1 library version changes
I need something like this: http://jsondb.io/ with a focus on performance. But with the ability to query JSON/EDN without parsing the whole database up front (performance). And the ability to write objects very fast, without re-indexing the whole database. Right now I’m doing the naive thing: read a giant EDN file, query it and write a giant EDN file.
One way of optimizing this is to split the EDN file into multiple partitions and only read/write the necessary partitions, but there might already be a tool which does this. Another requirement: it must run with GraalVM. I already tried nippy/codox which currently doesn’t run with that.
smartest function composition I ever wrote:
(defn- wrap-dedupe-args [f]
  (let [dedupe-fn ((dedupe) (fn [_ args] (apply f args)))]
    (fn [& args]
      (dedupe-fn nil args))))

(let [f (wrap-dedupe-args prn)]
  (f 2)
  (f 2)
  (f 2)
  (f 3)
  (f 3))
2
3
=> nil
It is formatted very nicely... but I don't understand what it does
What does prn mean?
@UGQJZ4QNR prn is similar to println, which prints its arguments. The difference is that prn prints objects so they can be read back. In practice it usually means that prn prints some additional data that helps to understand what exactly is being printed. For example, (prn false "false") will print false "false", making it easy to understand what exactly these printed values are (compare to (println false "false"), which will print false false).
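A small REPL sketch of that read-back property (outputs noted as comments):

```clojure
(prn "hi")      ; prints "hi" – with quotes, readable back via read-string
(println "hi")  ; prints hi   – the quotes are lost

;; pr-str produces a readable string, so values round-trip:
(read-string (pr-str {:a "1"}))  ; => {:a "1"}
```

The `read-string` round-trip is the practical test: if `println` output were fed back in, the string `"1"` would come back as the symbol `1`.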
ahh, it makes it more readable
Does anyone know of a library that normalises local variables? clojure.tools.analyzer.passes.jvm.emit-form/emit-hygienic-form seems to be the closest, but just appends a __#<int> to the var labels. I would like to rewrite variables into the form x__#<int> where x is an arbitrary constant. This would mean that (defn f [a b] (let [a (+ a b)] a)) would become (def f (fn* [x__#0 x__#1] (let* [x__#2 (+ x__#0 x__#1)] x__#2)))
you could walk the ast and rename them probably? take inspiration from tools.analyzer.passes.uniquify
yeah, that’s exactly what I have been doing, was having some problems with dynamic bindings so I just thought I would see if someone had already written this…
Is dynamic binding the same as calling a function by variable name?
How does it know what nth is?
It is basically a more robust way to encapsulate lexical scope of vars
And then return control back to the original scope
Or no?
binding is used when you want to redefine the value of a variable within a code block. The variable needs to have the ^:dynamic meta - so let's say you have a variable that is defined in some other namespace. I think this example should make sense:
analysis.core> (def ^:dynamic hello "hello")
#'analysis.core/hello
analysis.core> (defn print-hello []
(println hello))
#'analysis.core/print-hello
analysis.core>
analysis.analysis> (binding [analysis.core/hello "bye"]
(analysis.core/print-hello))
bye
nil
analysis.analysis>
It is thread local - the macro expansion makes this clearer:
analysis.analysis> (macroexpand '(binding [analysis.core/hello "bye"]
(analysis.core/print-hello)))
(let*
[]
(clojure.core/push-thread-bindings
(clojure.core/hash-map #'analysis.core/hello "bye"))
(try (analysis.core/print-hello) (finally (clojure.core/pop-thread-bindings))))
In Ruby we use { } for a code block. It has a local scope but also has access to outside scope (closure). But I don't think Ruby can access different namespace scopes like Clojure's binding appears to be doing.
So instead of passing parameters to a function, you just bind new variable values to the namespace before running code block or functions. It's almost like a global var but probably immutable?
hey everyone, i was wondering if there is a consensus on the best pattern for a fn that can either perform an operation on one thing or an operation on a collection of those things
i guess the front runner in my mind is defn operation [ & args ]
and then either (operation thing)
or (apply operation things)
IIRC it boils down to the fact that you are creating short-lived lists in runtime, adding pressure to the GC
my advice would be not to do that
in that, every time I've ever done that, I regretted it later
either pick "one thing" and use it with seq ops like map/filter etc or make it take a coll
Does apply operate 1 by 1, on each thing?
Or just apply the new list of arguments to the function?
Reading the docs... it looks like it applies new args to the function
Function overloading? Argument overloading
(apply + 1 2 3)
that wouldn't work
you need to do (apply + [1 2 3])
which is the same thing as (+ 1 2 3)
ahhhh ok, so it does apply the arguments as a whole
not 1 by 1
I was thinking that apply operated on each argument, 1 at a time
e.g. apply the function to each argument
yeah you will also see sometimes (reduce + [1 2 3])
which is something like (+ (+ 1 2) 3)
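A couple of REPL checks contrasting the two shapes discussed above (outputs noted as comments):

```clojure
(apply + [1 2 3])    ; => 6  – the vector is spread into (+ 1 2 3)
(apply + 1 2 [3 4])  ; => 10 – leading args are allowed; the last must be a seq
(reduce + [1 2 3])   ; => 6  – folds pairwise, like (+ (+ 1 2) 3)
```

So `apply` hands the whole argument list to one call, while `reduce` makes a series of two-argument calls.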
generally i’m looking for the optimal way to write one or more functions that do the same thing but either operate on one item or several
you can use multi arity functions for that
you need a map
i don’t think multi arity functions would work; i should have specified the ‘several’ is a collection
map is used to work on collections
@gtzogana I think the idiomatic thing to do is to define a function that operates on one, and then compose that with the various seq operations like map
reduce
transduce
etc.
@alexmiller suggested map
right, i guess normally that would be the answer. i guess my question is too abstract. i’m specifically looking at database inserts.
gotcha not sure what you're trying to do maybe i missed an earlier convo
so you have a fn like insert-item!
that you might call like (insert-item! item1)
or (insert-item! [item1 item2])
?
personally, I thought you were just trying to apply a new set of arguments to the same function… method overloading
but you mean a collection
in the case of the single item, i'd be looking at (insert-multi! [item]), in the other (insert-multi! items)
personally I think that “overloading” your function signature like that is usually a bad idea
I agree… be careful with overloading, especially multi arity
in my opinion, multi arity is really object oriented style
I would prefer it to always operate on colls. I agree with Alex, I’ve done stuff like that before and usually come to regret it 😬
could you just put the single item into a collection?
so that the function always takes the same type
could just convert it, inside the function? no?
yup. but i would also like to avoid seq? item. which, again, is kind of a personal aesthetic preference, i guess.
run! is like map, but for applying the same side-effecting function
are you trying to do a bulk insert operation?
@alexmiller, that’s cool, didn’t know that. i would have reached for doseq.
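For completeness, a quick sketch of run! next to doseq (both are for side effects and return nil; run! takes a function, doseq a binding form):

```clojure
;; run! applies a side-effecting fn to each element, returns nil
(run! println [1 2 3])

;; doseq does the same as a macro with let-style bindings
(doseq [x [1 2 3]]
  (println x))
```

`run!` reads well when you already have a one-argument function; `doseq` wins when you need multiple bindings or `:when`/`:let` clauses.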
instead of inserting 1 by 1
well that's the assumption i would make if i saw something called insert-multi
what about looking for a batch insert? are you just trying to make the map more efficient?
in ruby, we would batch insert with map
since ruby is very slow
just in case you have millions of items in the collection, and don’t want to suck up all your JVM’s memory… since JVM can use a ton of memory
can I ask a question though? why would a type check be bad in this case?
one thing is i just find that, if i can completely avoid it, it reduces the number of code paths i could be looking at
ya, the type check is also coupling… still thinking
my thought, comparing this to OOP, is to use dependency injection
but only use that pattern if you find that you need lots of duplicate functions
build yourself a to_collection function, that can turn any other single data into collection type
but that’s only going to be good, if it’s readable, and 1 to_collection function can save you from writing dozens of duplicate code
so it’s a little premature to use the pattern
in general, Clojure rewards "thinking in collections". you have a huge library of seqable->seqable functions for doing so, threading macros like ->> built to create pipelines, etc
many functions tend to be a predicate (takes a single element and returns logical result) or a transformation (element -> element)
generally I find you should mostly do those, and use sequence functions to transform from collections of elements to collections of elements
when I've created functions that take 1 or N, it's usually because I haven't thought enough about how the function is going to be used. and when I have created it, I have ultimately had it break down because my single element actually turns out to be a collection itself and I can no longer distinguish
if the function you're writing is "insert-multi", I can't imagine it makes sense to take anything but a coll
which, imo, is false
much better aesthetically to have a single calling convention than 1 or N
Not being able to later distinguish between 1 vs. N case, because the 1 case is itself a collection, is certainly a bad place to end up.
I always end up there
one place that makes this stuff visible is creating specs
if writing the spec for the function args is hard, then that's a good sign that you have possibly introduced unnecessary complexity
On the comment earlier of "could be called as (do-join [thing]), but would prefer not to remember that", I don't want to sound too forceful, but if the doc of the function says that it always takes a sequence of 'thing', then in Clojure the syntactic overhead of putting thing into a sequence like a vector is very very light, and common in Clojure code.
I'll +1 Alex here. Early on in our Clojure development (2011), we wrote quite a few functions that could either take a collection or a single element -- and that code has been an albatross ever since, until we rewrote it to either always take a collection (and pass [item] in calls that had a single item) or to always take a single item (and call map on it for places that wanted a collection processed). In other words, Stuart Sierra's "heisenparameter" recommendation is what you should be doing 🙂 (or rather "don't").
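A sketch of that convention, with insert-item! as a hypothetical single-item operation (the names are illustrative, not from the discussion's actual codebase):

```clojure
;; hypothetical single-item op
(defn insert-item! [item]
  (println "inserting" item))

;; the coll-taking variant is just a lift over the single-item one
(defn insert-items! [items]
  (run! insert-item! items))

;; one calling convention for both cases:
(insert-items! [{:id 1}])          ; single item: caller wraps it in a vector
(insert-items! [{:id 1} {:id 2}])  ; many items: same shape
```

The cost of wrapping a single item in `[ ]` at the call site is tiny, and the function body never needs a `coll?` check.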
I still have angst about stest/instrument for example https://clojure.github.io/spec.alpha/clojure.spec.test.alpha-api.html#clojure.spec.test.alpha/instrument
arguably done in the sake of aesthetics, but it makes me deeply uncomfortable
arguably multi is more primary here, so I'd probably argue just doing instrument
that takes a coll
I am new. So it’s better to just convert the single item into a collection type? then pass it to the function. So the function always expects the same type (collection)?
In general, yes, but if you have a function that just calls map on its argument, it's probably better to name the function that is being applied, and to have callers use map on that instead.
and rely on function doc comments to know what to pass to it?
I'm definitely in favor of functions taking only 1 type and returning 1 type
Because functions calling other functions... if just 1 function in the chained calls, stack, messes up, you have to trace it, then possibly refactor code all over the place
func1.func2.func3.func4 oops... func4 wanted a collection, not a string
In ruby... they say... just don't use more than 3 chained calls :rolling_on_the_floor_laughing:
I started writing the return type into my function names, to make code readable...
func_coll.func_str.func_int
So its ok for Clojure functions to handle multiple types? Please correct me
it's possible. whether it's "ok" is entirely contextual
certainly there are many polymorphic functions in the core lib
for comparison, OCaml has different math functions for floating point vs. fixed point (calling + on a float is an error, calling +. on an int is an error)
clojure is pretty deeply polymorphic, it's one of my favorite things about the language
OCaml is pretty strict on types?
extremely so - ML set the paradigm for modern type-safe languages
Everyone keeps praising Haskell strict types.
But then people say opposite. That inferred types is bad
I have no idea. I do think a lack of polymorphic duck typing is annoying. With the + operator
type inference is just a convenience for strict typing - if your system is strict enough the compiler can know which type has to be used
OCaml has an object system that is 100% duck typed, but the percentage of people writing OO code in OCaml is small
I think interface dispatch / polymorphism is the good part of OO and I'm glad Clojure embraces it while avoiding concrete inheritance and enterprise design patterns
they also have different fold (aka reduce) for different sequential collection types - very annoying
yeah, F# was explicitly an attempt to copy OCaml for the CLR
wasn't it even a strict superset originally?
that sounds right but I'm not sure
but back to Clojure - polymorphism is normal, and it works well - especially the way of writing code to interfaces / protocols
Regarding earlier... I think he just needs an implicit type... e.g. single string converts into collection with 1 string element.
Can clojure do implicit conversions? Or is that bad idea?
Hmmm. Scala and C++ seem to be against implicit type conversions. Not sure about Clojure.
it’s basically a lack of type classes that makes this annoying (if I’m not mistaken?)
nor a string into a list of strings, nor vice versa. I certainly wouldn't want it to automatically do so.
Ya... (+ "a" "b") gives me a cast error
I kinda don’t like functions who deal with either 1 thing or a list of things. It begets a lot of checks like: am I dealing with a collection here?
I like to work with a single unit and find a way to arrange that function in a way that can process more volume if necessary. Sometimes just map of course 🙂
(defn f [])
(def g #())
Is there any reason why calling g returns a clojure.lang.PersistentList$EmptyList whereas calling f returns nil?
for #(+ 1 2) the following form is (+ 1 2), which is invoking the function + with the args 1 and 2
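You can see this at the reader level; quoting shows what the reader produces from the #() literal (outputs noted as comments):

```clojure
'#()        ; => (fn* [] ())      – the body is the empty list literal
'#(+ 1 2)   ; => (fn* [] (+ 1 2)) – the body is a call form

((fn []))   ; => nil – fn/defn with no body returns nil
(#())       ; => () – #()'s body is (), which evaluates to itself
```

So `#()` is not "a function with no body": the parens you wrote become the body, and an empty list evaluates to an empty list.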
coworker had a question about reduce-kv: is this purely convenience? Seems like a strange function for core (despite us using it a bunch)
clojure.lang.IPersistentMap
(kv-reduce
  [amap f init]
  (reduce (fn [ret [k v]] (f ret k v)) init amap))
is in core. isn't this the naive thing?
sorry for breaking the discussion but Stuart talked about the (if (coll? arg) arg [arg])
antipattern here https://stuartsierra.com/2015/06/10/clojure-donts-heisenparameter
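For reference on the reduce-kv question above, a small usage sketch (output noted as a comment):

```clojure
;; invert a map: the reducing fn gets the accumulator, key, and value
;; as three separate args, with no [k v] destructuring needed
(reduce-kv (fn [m k v] (assoc m v k))
           {}
           {:a 1 :b 2})
;; => {1 :a, 2 :b}
```

The convenience is exactly that three-argument shape: plain `reduce` over a map hands you `[k v]` pairs that you then have to destructure.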
so, I can start a rebel-readline like this:
clojure -Sdeps '{:deps {com.bhauman/rebel-readline {:mvn/version "0.1.4"}}}' \
-m rebel-readline.main
and an nrepl like this:
clj -Sdeps '{:deps {nrepl {:mvn/version "0.6.0"}}}' -m nrepl.cmdline --interactive
but, how do I do an nrepl + rebel-readline?
is there an inline way to express it without needing to make changes to deps.edn?
do you mean like clj -A:1.7:bench:test...
where 1.7, bench, and test are three separate aliases you have set up?
you just use the -A flag
in my case, I have a docker container with clojure/clj binaries in it and no access to the filesystem and can't mount things in ... if I want to string together multiple deps and two main entrypoints (e.g.: rebel-readline and nrepl mains), how would I do that?
I see what you mean now, you are referring to the command line argument for the entrypoint for each dep. what is your reason for trying to use two entrypoints at the same time?
Could you specify your own entry point and call into your dependencies' entry points from there?
well, I want the readline capabilities of rebel-readline and I want the nrepl port listening at the same time
so, I guess maybe that's where that fancy -e
work would need to come in
It's probably a subjective question, when and where to start nrepl/rebl - so you may just want to ask again in the main chat 🙂
there's probably something fancy you can do with -e
but it's not going to be fun to type
What is the idiomatic way of generating a list or map from a starting value that has some function applied to it iteratively? This would be the equivalent of an imperative for loop where the end condition is that the initial value has reached or dropped below 0.
For example, a credit card payment calculator
Ok, so I looked over the docs and they aren't very clear in how exactly it works. Specifically I have an issue where the examples use partial, but that seems to only work when the unknown argument is not the first argument. In a simple function where one number is subtracted from the other this won't work because it's the first number that is unknown, not the second.
For example, let's create a simple function for paying off a 0 interest balance, so the function would be (- balance payment). It's balance that needs to be replaced; not payment.
I just gave a specific example
@whoneedszzz how about:
user=> (let [payment 2] (take-while pos? (iterate #(- % payment) 11)))
(11 9 7 5 3 1)
?
Ended up with something very similar, but instead of pos? I'm using (>= 0 %)
Though it actually makes more sense to use pos?, thinking about it more, as depending on the last balance and payment amount it may be less than the usual monthly payment
user=> (doc iterate)
-------------------------
clojure.core/iterate
([f x])
Returns a lazy sequence of x, (f x), (f (f x)) etc. f must be free of side-effects
https://dev.clojure.org/jira/browse/CLJ-1906 is also a thing
> Is there a way to control the name of a reify class? I like to have that for profiling, to be able to see which implementation I have in the profiler.
@akiel have you found any workaround?
I think at that point you can just use deftype
right? or does it lack some advantage reify has?
(it doesn't capture scope like reify, but you can translate that into field(s))
I've never thought about it, but longish classnames generated by reify are indeed inconvenient. Yeah, no scope capturing would be a downside. Maybe one could also try overriding toString inside reify, but then it's probably impossible to defmethod print-method for such an instance..
and yeah, overriding toString does also work iirc
Is there a way to use partial when the first argument is the one you want to replace?
user=> (reify java.io.Closeable (close [this]) (toString [this] "my closable"))
#object[user$eval138$reify__139 0x37ddb69a "my closable"]
yeah, cool! not ideal, but better than the default
@whoneedszzz you can define it, the hard part is a good name
fair enough - a cute name for it that I've seen is qartial (since q is a backward p)
I can't take credit, and in practice I came back and found it used and forgot what it meant and expected it to be some kind of polynomial function
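A hedged sketch of such a helper (the name qartial is the joke from above, not a clojure.core function):

```clojure
;; like partial, but the pre-supplied args go LAST instead of first
(defn qartial [f & fixed]
  (fn [& leading]
    (apply f (concat leading fixed))))

;; with partial, (- 3 10); with qartial, the fixed arg trails:
((qartial - 3) 10)  ; => 7, i.e. (- 10 3)
```

For one-off cases an anonymous fn like `#(- % payment)` (as in the iterate example earlier) is usually clearer than introducing a helper.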
are they conceptually the same but different implementations, or cover entirely different use cases?
@borkdude maybe this helps - seems like nippy is trying to be faster https://github.com/ptaoussanis/nippy/blob/master/src/taoensso/nippy/benchmarks.clj
it tests and records speed of both
nippy is like: what if we made bytes out of things instead of printing them and gzipped those bytes
so fressian is better for reading/writing EDN to disk and nippy may be a bit faster than that?
fressian is like: lets take a survey of serialization formats, find one that matches what we want for clojure, and include some features like caching to improve performance
if I have a command line app that needs to read and write EDN data from disk and there is no long-running process, does fressian still make sense?
I have only used nippy in the context of using libraries that brought nippy along for the ride, and in every case formed an unfavorable opinion of those libraries, which has rubbed off on nippy
the max edn file I’ve seen right now is about 2MB. but I’m going to split it into multiple EDN files anyway, so probably smaller
but very likely somewhere between pr and transit I would switch to using apache derby
I think Stu's fressian talk from years ago did a pretty good job at highlighting the goals and tradeoffs of fressian (pre-dated transit, which has different tradeoffs)
Can I just say how much I love tap>? ❤️
I did a quick comparison between pr-str, fressian and transit. Reading: EDN 100ms, Fressian 20ms, Transit 20ms Writing: EDN 74ms, Fressian 50ms, Transit between 1000 and 2000 ms. So it seems that Fressian is the clear winner here, and I don’t know why Transit writing takes that long…
@ghadi https://github.com/borkdude/clj-kondo/blob/fressian/src/clj_kondo/main.clj#L14
clojure is not scheme, def always creates a global name, no matter where it is nested
it is, that’s the point of this tool: https://github.com/borkdude/clj-kondo/#rationale 🙂
I know that it’s doing that, I wanted to inspect the value after the run, this is throwaway code
I'd suggest also trying Jsonista
like, it is entirely possible to build and operate large complex clojure code bases without inline defs to debug things
the idea that you do it so often that you need a tool to make it easier is just, I dunno, what are we doing here
I don’t make it easier, I just don’t want to commit it if I’ve done it. I find it easy to capture e.g. requests and then inspect them. Works for me. You can also use an atom for this, or scope-capture, I don’t care.
I updated the benchmark so I can run it with either edn, fressian or transit from a cold JVM as a cmd line invocation. These are the numbers (code is above it): https://github.com/borkdude/clj-kondo/blob/fressian/src/clj_kondo/main.clj#L59 This is the test file: https://www.dropbox.com/s/qkhcm5x9fkhvw1f/test.edn?dl=0 I’m seeing the same thing with transit.