This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2019-08-27
Channels
- # announcements (10)
- # bangalore-clj (1)
- # beginners (130)
- # calva (8)
- # cider (66)
- # circleci (2)
- # clojure (197)
- # clojure-europe (2)
- # clojure-italy (8)
- # clojure-nl (5)
- # clojure-spec (14)
- # clojure-uk (35)
- # clojurescript (46)
- # code-reviews (5)
- # cursive (4)
- # datomic (88)
- # duct (1)
- # emacs (2)
- # figwheel-main (15)
- # fulcro (20)
- # graalvm (1)
- # graphql (3)
- # jackdaw (2)
- # leiningen (2)
- # off-topic (64)
- # pathom (53)
- # re-frame (52)
- # reagent (12)
- # reitit (43)
- # rewrite-clj (1)
- # shadow-cljs (38)
- # spacemacs (3)
- # sql (17)
- # tools-deps (6)
- # vim (30)
@m131 I have a tiny test coverage tool that does this with namespace finding and then alter-var-root. You could have a look.
@alfred.xiao cool - one small thing I notice reading - you could just use clojure.core/run! instead of your doseq-with function - it's designed for exactly that type of usage (though it takes the args in flipped order)
also, since nothing can access the return value of a finally clause, you could use run! there too (instead of the mapv in evaluate-test-coverage)
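A minimal sketch of the run! suggestion (the doseq-with and evaluate-test-coverage code under discussion isn't in this log, so println stands in for the side-effecting step):
;; run! eagerly applies a side-effecting fn to each element and returns nil.
;; Note the argument order compared to doseq: fn first, collection second.
(run! println [1 2 3])
;; roughly equivalent to
(doseq [x [1 2 3]]
  (println x))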
Thanks @noisesmith Didn’t know run! was there
yeah, IMHO it's underutilized
36 open issues on gh and last commit was 1/29/2016 - that looks close to it
flambo has been updated more recently https://github.com/sorenmacbeth/flambo
via interop
I don't know if onyx is still live, but that was a distributed processing system made in clojure
thanks for the link @alfred.xiao - looks interesting
https://github.com/ahungry/determinism - that's what I am thinking/working on
with results that are trackable (and reflective of runtime code paths) I think you could easily trace a path and see if a potential return value of some fn is incompatible with the input value of another fn to safeguard against regressions
@m131 somewhat related, i have been working on something to modify / transform source code in various ways. one motivator was to produce detailed execution traces (e.g. by running code after inserting tap> or some other logging form at the beginning of function bodies), save them, and analyze them for various purposes. source modification was the approach considered because it seemed likely to work for jvm / clr clojure as well as cljs.
This may be a dumb question, but keen to see how others view this: From a pure functional approach perspective, should a map over a structure, like a vector or a set, return the same structure? In Clojure, obviously (map inc #{1 2}) returns a seq rather than a set.
Well, mapv coerces the output to a vector. You could (def maps (comp set map)) if you wanted something that coerces to a set.
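For illustration, the three result shapes side by side (return values shown as comments):
(map inc [1 2 3])   ;=> (2 3 4), a lazy seq
(mapv inc [1 2 3])  ;=> [2 3 4], a vector
(def maps (comp set map))
(maps inc #{1 2 3}) ;=> #{2 3 4}, a set again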
In some languages (Haskell) a map returns the same type as the input. But in Clojure a sequence is returned, so this may be something to get used to. Sequences are lazy in Clojure, while sets, maps, vectors are not, so that may be part of the reason -- but I'm guessing.
That behavior (in Haskell) is more natural to me: map over a Tree and you still get a Tree, map over a Graph and you still get a Graph
I agree, especially when you're surprised by it initially, but I think Clojure's approach -- sometimes using sequences and laziness, and sometimes not -- is very practical and makes sense once you get used to it.
It is not necessarily a problem; for example, with a math function that maps from an int set to a bool set, you know you would not have an output set bigger than 2 elements.
Sure, it depends on what you need I suppose. If you need to answer “does this set contain even numbers?”, mapping over #{1 2 3 4 5} would tell you that (though map seems like the wrong way to answer that question). If you need to answer “which numbers are even?”, returning a set would strip that information. Of course, you could cast the set into something else before mapping over it to get around it.
Absolutely, there is no definite right or wrong here. The second question sounds more like a filter question rather than a map question though.
Yeah, true. Pointless unless you also need the odd numbers, and you need them in the same seq. I.e., (map #(vector % (even? %)) #{1 2 3 4 5})
Which incidentally gives groups of vectors, since the entire thing needs to be realized anyhow. Yeah, Clojure certainly isn’t about beauty or purity as much as pragmatic compromises.
@alfred.xiao Rich rephrased the "100 functions" quote to be about a data abstraction rather than a data structure. And in Clojure the core abstraction/interface is the seq interface. map is defined for seqable collections, and vectors, maps, lists, sets are all seqable. map immediately gets the seq view of the input before doing anything. So it effectively takes a seq and returns a seq. Most core functions are written so that you don't need to call seq manually on them, so you get the impression that you map over a set, but you actually map over a seq.
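A small illustration of that point (set element order is not guaranteed, so the outputs are described rather than shown):
(seq #{:a :b :c})                 ;=> a seq of the set's elements, in no particular order
(map name #{:a :b :c})            ;=> a seq of strings, not a set
(into #{} (map name #{:a :b :c})) ;=> pour it back into a set explicitly if you need one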
@U8J6W2SC8, that's a really clear way to look at it, thanks.
@U8J6W2SC8 thanks for pointing that out, and it is helpful in understanding it. However, this does not necessarily mean it is the best way of thinking of things. What I am trying to say is that how much abstraction we need is a balancing art and subjective. For example, in the OO/Java concept, everything is an object/class; is that good?
@alfred.xiao I'm sorry, I'm lost.
@U8J6W2SC8 Sorry but never mind, just wanna raise the question of whether the usefulness of seq is being exaggerated
I’ve been thinking about the design decisions in Clojure recently, and I realize I’m not experienced enough to judge which particulars of Clojure are due to how the underlying VM works, and which are there because they are a good decision regardless of platform constraints. Maybe everything would have been lazy if it had been possible? Maybe it’s better to have a separate collection type that encodes laziness (as it is right now)? I’m not sure.
As for the comparison to everything being an object (in Java, or Ruby), I think that’s the kind of idealism that Clojure specifically doesn’t strive for. I.e., everything is not a lazy sequence in Clojure.
@alfred.xiao How is it exaggerated (or by whom)?
@U06B8J0AJ To make everything lazy (not just sequences) would be a huge change. E.g. PureScript is not lazy because it would require a runtime on top of JS.
Yeah, definitely. It’s also hard to imagine how a lazy map would behave, if not returned as a sequence.
@U8J6W2SC8 The official Clojure guide says ‘Clojure defines many algorithms in terms of sequences (seqs).’ This implies most algos/funcs accept and return seqs, in other words are optimized for seq. E.g. it is not uncommon to see real world code that maps over a (hash) map and then pours it into a {} to make it a (hash) map again. Actually in this process, seq plays a key role and is pretty much unavoidable if we do map. Perhaps it is really unavoidable for cases where laziness is a must - e.g. an infinite range. I like the logical sense that everything can be viewed as a sequence, but real code always needs to concern itself with the data structure, hence many times a mix of seq and concrete data structures, like (hash) map, set, etc.
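The round trip being described looks roughly like this (a made-up example map):
(->> {:a 1 :b 2}
     (map (fn [[k v]] [k (inc v)]))  ; seq of [key new-value] pairs
     (into {}))                      ; back into a hash map
;=> {:a 2, :b 3}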
@U06B8J0AJ I guess it would require (?) everything to be lazy, as in Haskell.
@alfred.xiao If you want to avoid (into {} (map ...)) you can write (as many have) a map-vals function. Perhaps you'd want to use reduce-kv instead though, if you're using a unary function. (Works for vectors as well, although if you know it's a vector then you can use mapv.)
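A sketch of one such map-vals, built on reduce-kv so it never leaves the map (map-vals is a conventional name here, not something in clojure.core):
(defn map-vals
  "Apply the unary function f to every value of map m, keeping the keys."
  [f m]
  (reduce-kv (fn [acc k v] (assoc acc k (f v))) {} m))

(map-vals inc {:a 1 :b 2}) ;=> {:a 2, :b 3}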
@U8J6W2SC8 Yeah, map-vals is a solution. I guess what I am really thinking is that seq is sort of invasive in a process like this: given a (hash) map, I need to filter, add one entry, then map the values to something else, and this process should still give output as a (hash) map. By invasive I mean the programmer has to know where seq plays a part, and when to convert back to a concrete data structure, because many funcs/algos accept various data structures but return seq only. It is good that they accept various data structures as input, but they also strip some key information from the input.
@alfred.xiao I think the key here is that you see it as stripping key information from the input. map is a seq function. It's not a map/vector/set function.
Many times you want the return value from the function given to map to not be a map entry.
Filtering the entries of a map is rather unusual I would say. While it would be handy to have it stitched back into a map in the end, filtering is a sequential operation when you come down to it. For more advanced data structure manipulation, https://github.com/redplanetlabs/specter might come in handy. It’ll navigate your data and do transformations while leaving the structure intact, and will often do so in a more performant way than the core library functions.
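For example, a structure-preserving transform with Specter might look roughly like this (a sketch assuming the com.rpl.specter namespace and its MAP-VALS navigator):
(require '[com.rpl.specter :as sp])

;; increments every value of the map and returns a map, not a seq
(sp/transform [sp/MAP-VALS] inc {:a 1 :b 2}) ;=> {:a 2, :b 3}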
Out of curiosity, if you have something like,
{:hello :world
:some-things [{:a [1 2]}
{:b [3 4]}
{:c [1 2 3 4 5]}]}
In Haskell, will it update the stuff in say [:some-things 2 :c] in a lazy manner?
@U06B8J0AJ Filtering is not super uncommon though. I've done it several times, and in PureScript I've used https://pursuit.purescript.org/packages/purescript-ordered-collections/1.6.1/docs/Data.Map#v:update which is a combo of map and filter.
Regarding the Haskell question, I'm not sure exactly what you mean. Generally everything is lazy. For instance, if you have
then f will never be called. So it's lazy everywhere. So if you update something deep in a structure, then yes, it would typically be lazy.
Correction, I meant https://pursuit.purescript.org/packages/purescript-ordered-collections/1.6.1/docs/Data.Map#v:mapMaybe . The other one is for a specific key (that's useful too!).
I'm not sure what you mean by "From a pure functional approach perspective"? If you give Clojure map a pure function, then map's operation is also a pure function.
I was not referring to side effects in this context but more like from a math or category perspective. My thought for map was: given a collection of things, apply a change to each thing only. E.g. given a Tree of nodes, map over the Tree returns the same Tree structure but each node has changed somehow.
Are you asking whether other functional programming languages have an operation like map that returns the same collection type that it is given as input?
If I map over a set I’d expect to have one result per item in the set. For example, if I did (map (fn [x] (odd? x)) #{1 3 5 8}) I’m not sure I’d want the result #{true false} as opposed to (true true true false)
Good point on set. I was thinking a Tree kind of thing, where the mapped outcome is still a tree.
You can find fmap implementations that do this though
part of the deal with map is that it is lazy, which is extremely handy. For instance, you can map over an infinite sequence (useful for when you're dealing with input from a server/user) and for deferred computation. Most of the other datatypes are eager, so they can't be used this way.
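A tiny sketch of the infinite-sequence point (return value as a comment):
;; map over an infinite sequence; only the five realized elements are computed
(take 5 (map #(* % %) (range))) ;=> (0 1 4 9 16)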
@alfred.xiao I find it useful to think of the seq functions as having the “type” seqable? to seqable?. Seq functions (like map) call seq on their arguments and don’t bother converting the return value back into a concrete type, since that’s a waste when you are composing lots of seq functions
I've never used or seen transducers in the projects I've worked in; is there any other compelling reason to use them besides the reason mentioned above?
managing resources (laziness is often a problem when trying to control the scope of when a resource can be closed)
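A sketch of that resource-scoping point (the file path is a hypothetical argument): consuming eagerly via into with a transducer means everything happens while the reader is still open, whereas a lazy (map ...)/(filter ...) chain returned from the with-open block could be realized after the reader has been closed.
(require '[clojure.java.io :as io])

(defn long-lines
  "Return all lines longer than 80 characters, eagerly, before the reader closes."
  [path]
  (with-open [rdr (io/reader path)]
    (into [] (filter #(> (count %) 80)) (line-seq rdr))))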
Could spark be monkey patched to accept clojure forms on workers instead of serialized bytecode?
@emccue Probably worth checking out powderkeg https://github.com/HCADatalab/powderkeg
hi, if I repl into a running jar to execute a long-running command, would that continue to run if my ssh connection to the box dies, and thus the repl connection dies?
what stops on EOF is the procedure consuming your input (the read / eval / print loop), anything started in that session already exists and isn't auto-destroyed when that stream closes
rephrase command as in: just call a function, not something in a separate thread or go loop
I’m looking to add server sent events to an existing application that uses ring and good old blocking I/O… However, I clearly don’t want to serve the events from the existing server with long poll requests eating a thread per connection. I only need to send a few simple events from the app, so I’d like something pretty minimal. Ideally with relatively few dependencies (to minimise conflicts), and to be relatively lightweight in terms of overall size. I’d also like to run both the existing jetty server and the SSE server in the same JVM process. Last time I did SSE in clojure was many years ago; so not sure what people would recommend these days… http-kit, aleph, pedestal?
ahh interesting pedestal can work with a jetty backend… was wondering if I might be able to leverage the async capabilities of the current jetty server
@rickmoynihan My experience about 2 years ago with pedestal SSE was fantastic. Lots of things like replaying events to catch a client up after a dropped connection "Just Worked".
i have a strange question, is it possible for clojure to work on really low level architectures in the future? what's the minimum amount of architectural complexity required to provide the luxury of map, filter, and reduce?
having map, filter and reduce doesn’t mean you have clojure. The minimum required for the luxury of map, filter and reduce is essentially something like the 7 special forms and higher-order functions. So you probably want to look into minimal lisp dialects that target small machines.
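To make "the minimum required" concrete: with first-class functions and a fold, the other two are definable on top. Eager sketches, not the lazy clojure.core versions:
(defn my-map [f coll]
  (reduce (fn [acc x] (conj acc (f x))) [] coll))

(defn my-filter [pred coll]
  (reduce (fn [acc x] (if (pred x) (conj acc x) acc)) [] coll))

(my-map inc [1 2 3])     ;=> [2 3 4]
(my-filter odd? [1 2 3]) ;=> [1 3]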
https://clojure.org/about/rationale#_languages_and_platforms is a good kind of summary of the facilities that clojure rests on top of
There are some very low-level "almost-clojure" implementations though: https://ferret-lang.org/
And languages like carp also go in the direction of high performance, low level code from a high level lisp, didn't do too much with it to judge though
Ferret can run on 2KB of ram! what!
so cool
the jvm was originally created for embedded devices, and a computer with gigs of ram and multiple cores fits in your pocket today

My dev workflow involves hosting a socket repl and then piping code to the repl via my editor using ncat.
I get an exception if I run the following code twice in a row via my repl:
Is this a bug/known-issue?
(ns socket-test.core-test)
(do
  (require 'socket-test.core-test :reload)
  (run-tests))
Exception:
socket-test.core-test=> Execution error (SocketException) at java.net.SocketOutputStream/socketWrite (SocketOutputStream.java:118).
Socket closed
Minimal repro:
https://gist.github.com/djtango/f5b81183d269eaf5eaa73122796cc061
Essentially, it seems clojure.test/run-tests and/or (require 'ns :reload) aren't compatible with the socket-repl
that execution error means that nc is exiting while the java process is trying to write its output back right?
the socket repl starts up, which if I recall spins up the future threadpool, which can stop the jvm from exiting for a minute or two, but unless you have something keeping some main threads running it will exit
ahh, so you could fix it with the @(promise) trick or something
(.join (Thread/currentThread))
;)
but something else likely is, or something else is stopping the jvm from immediately exiting
and the threads the socket repl creates are specifically marked as daemon threads, so they don't stop the jvm from exiting if nothing else is going on
btw, there is a flag to choose whether they should be daemons or not
actually two flags, :server-daemon for whether the acceptor thread is a daemon and :client-daemon for whether the socket client threads are daemons
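For reference, a sketch of starting a socket REPL programmatically with those flags (the name and port here are arbitrary):
(require '[clojure.core.server :as server])

(server/start-server {:name          "repl"
                      :port          5555 ; arbitrary example port
                      :accept        'clojure.core.server/repl
                      :server-daemon false   ; acceptor thread keeps the JVM alive
                      :client-daemon false}) ; so do the per-connection threads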
I'm so sorry, I lie: a vanilla lein repl with socket enabled doesn't reproduce the issue
The issue seems to be an interaction with rebel-readline
lein repl starts a non-daemon nrepl server thread
What’s a “canonical” way to deal with a problem where there are functions [f1 f2 f3...] that need to be applied to [v1 v2 v3]? Assume that none of those f1… are in a list but you need the result of all those applications in order to compute a final result. Is it better to just stick a let [a (f1 v1) b (f2 v2)] or use a (map #(%1 %2) fs vs)?
This happens a lot. juxt for instance applies multiple functions to one argument (or set of arguments). What if I want a parallel set of functions applied to a parallel set of arguments, if you know what I mean…?
@srijayanth one approach is to use juxt
@noisesmith - juxt doesn’t solve this problem
nothing stops you from mapping juxt across args
But I want each of those functions applied to different args
Meaning f1 v1, f2 v2 etc
Not (f1 v1 v2 v3) etc
oh, then your map example is how I usually do that
@noisesmith - that’s what I resort to as well; however, is there a word we can give it? It’s a pattern that occurs too commonly
zip-apply might be a decent name, or zip-invoke
Yeah. I think some apply variant indeed. zip-apply sounds nice. I’ve dabbled with apply-against and apply-to
In fact, if we name the #(%1 %2) then we get a lot of benefits. The (map apply-to) would turn into a transducer that can be quite pliable
except we don't have many transducing contexts that zip together multiple collections
sequence is the only one I can think of
it is the only one
Which is still useful.
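A sketch of the zip-apply idea being discussed (zip-apply and apply-to are just the names floated above, not existing core functions):
(defn apply-to [f v] (f v))

(defn zip-apply
  "Apply each function to the value in the same position."
  [fs vs]
  (map apply-to fs vs))

(zip-apply [inc str odd?] [1 2 3]) ;=> (2 "2" true)

;; sequence is the transducing context that zips multiple collections:
(sequence (map apply-to) [inc str odd?] [1 2 3]) ;=> (2 "2" true)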
Any idea why I get an unbound Var when I def the result of a call?
(def _r (cci/concordion-run (org.concordion.api.Resource. "/UsageCharges.md") "./UsageCharges.md"))
=> #'charges-test/_r
charges-test/_r ;; eval the just created var
=> #object[clojure.lang.Var$Unbound ...
Clojure calls work (e.g. (def _x (range 3))). Thank you!
Update: This works: (def _r (bean (cci/concordion-run (org.concordion.api.Resource. "/UsageCharges.md") "./UsageCharges.md")))
=> #'charges-test/_r
_r
=> {:forExample false, ...}
If I replace bean with identity then evaluating _r fails as above.
the only thing I can think is that cci/concordion-run might be returning the value of an unbound var, but that seems like a strange behavior
or that you are evaluating _r before the cci/concordion-run call returns
Thanks, I haven't even thought of that. But it doesn't seem to be the case; I now waited a while and _r does evaluate, but calling e.g. bean on it fails:
_r
=> #object[org.concordion.internal.SummarizingResultRecorder 0x6fc7e8e2 "org.concordion.internal.SummarizingResultRecorder@6fc7e8e2"]
(bean _r)
=> {:class clojure.lang.Var$Unbound}
in order for it to change from a bound value to Var$Unbound something has to be explicitly setting it to that, or some auto-reload process is recreating the namespace in the background and you are catching it at an awkward moment
those are the only explanations I can think of
thank you!
(e.g. if using clojure via editor tooling)
I have a coll like [1 2 3] and I want to verify that every element in the vector is a pos-int?, so I am doing (every? pos-int? [1 2 3]). I noticed though that (every? pos-int? []) would also yield true. Is there a built in that does this, but would return false because the vector is empty? Just curious if I am missing something.
according to logic, every member of [] is pos-int?; you can just use (and (seq %) (every? pos-int? %))
no, don't do that
oh sorry, I read that as a spec, that's fine
(the every spec has built-in support for min-count)
(s/every pos-int? :min-count 1)
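Used as a check, that spec would behave like this (return values as comments):
(require '[clojure.spec.alpha :as s])

(s/valid? (s/every pos-int? :min-count 1) [1 2 3]) ;=> true
(s/valid? (s/every pos-int? :min-count 1) [])      ;=> false, too few elements
(s/valid? (s/every pos-int? :min-count 1) [1 -2])  ;=> false, -2 fails pos-int?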
that's pretty nice. it will be really interesting to see the new spec when it comes out.
Well that’s just spec 1, nothing new there
to be cute, it could be (and (pos-int? (count %)) (every? pos-int? %))
good catch
the every? vacuous truth
people don't like editing xml
people want to define build tasks using clojure
compiling to aot, test running, compiling cljs to js, linting
in boot or lein it's easy to pull in clojure code for those tasks (and that's how it's typically done via plugins or libraries)
oh yeah, the clojure versions are more compact too
you can use deps.edn in lein via a plugin to replace its regular dep management
lein still defines a lot of tasks / plugins with no cli equivalent (but that's changing fast as more teams adopt cli)
I wish there was a lein 3 that required the project.clj file to not be evaluable but to be more static, like deps.edn
do that many projects end up using evaluated inline code?
but that only matters at your current project - code in your deps isn't run
you just load their pom
doesn't cider-nrepl's project.clj only matter if you are building cider-nrepl? it shouldn't affect a project using it as a dep or plugin? - or is it injecting executed code into the project.clj via the plugin?
i think that's what's preventing them from being targets like that. the description of the project must be evaled
lein doesn't run code from your deps though - I must be misunderstanding you
oh, right, it can only use it via the pom / deployed artifact
You can always specify :deps/manifest :deps to "force" tools.deps to treat it as a deps.edn project with an empty list of dependencies, and then provide any transitive deps yourself manually.
(but cider-nrepl has a deps.edn file -- so what repo are you talking about?)
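In a consumer's deps.edn that would look something like this sketch (the git coordinates and the manually-added transitive dep are placeholders):
{:deps
 {example/lein-only-lib {:git/url       "https://github.com/example/lein-only-lib"
                         :sha           "0000000000000000000000000000000000000000"
                         :deps/manifest :deps}
  ;; a transitive dep the library's project.clj would normally supply,
  ;; declared manually:
  org.clojure/tools.logging {:mvn/version "0.5.0"}}}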
ah i forgot someone added that recently and the def inside the project.clj is gone. it was just an example. i think maybe my last job had some evaluation stuff in one of the project.cljs
Yeah, we used to do some calculations in project.clj back when we still used Leiningen... coming up on four years ago 🙂