This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2019-04-08
Can someone help me understand if I have found an error in Brave Clojure?
It gives an implementation of partial:
(defn my-partial
[partialized-fn & args]
(fn [& more-args]
(apply partialized-fn (into args more-args))))
I wrote this function to use with it:
(defn print-args
[& args]
(loop [index 0]
(println (format "arg #%d: %s" index (nth args index)))
(if (< index (dec (count args)))
(recur (inc index)))))
But it doesn’t seem to behave like the built-in partial and gives arguments out of order:
=> ((my-partial print-args "Today" "was" "a") "good" "day")
arg #0: day
arg #1: good
arg #2: Today
arg #3: was
arg #4: a
nil
As opposed to:
=> ((partial print-args "Today" "was" "a") "good" "day")
arg #0: Today
arg #1: was
arg #2: a
arg #3: good
arg #4: day
nil
I don’t understand why the arguments are given out of order; the implementation of my-partial seems correct to me.
The relevant line is (into args more-args). into uses the semantics of whatever collection it’s collecting into, and & args is a seq.
@lilactown And seqs have stuff pushed onto them?
@lilactown Hmm. So the implementation in the book is actually wrong?
@lilactown Here is the usage example in the book:
(def add20 (my-partial + 20))
(add20 3)
; => 23
This example works because addition is commutative, right? So either the author flubbed it, or he purposefully kept the implementation simple and used an example where order didn’t matter.
Yeah, I would hope so, because I’ve sunk way too much time into trying to understand why I don’t get it. It’s probably a mistake.
@lilactown I was about to submit a message to the book’s repository, but it has dozens of issues and pull requests that haven’t been addressed in years, so it doesn’t seem worth the time. Now I’m starting to doubt the quality of this resource. But it’s been helpful so far, so I guess I’ll stick with it a bit longer.
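A minimal REPL sketch of what into does here, plus a concat-based fix (a sketch of the idea, not the book's code; my-partial-fixed is a made-up name):

```clojure
;; into conj's each element onto the target; conj on a seq/list prepends,
;; so the "more" args end up reversed and in front:
(into '("Today" "was" "a") ["good" "day"])
;; => ("day" "good" "Today" "was" "a")

;; concat preserves order, so a corrected version could look like:
(defn my-partial-fixed
  [partialized-fn & args]
  (fn [& more-args]
    (apply partialized-fn (concat args more-args))))

((my-partial-fixed list "Today" "was" "a") "good" "day")
;; => ("Today" "was" "a" "good" "day")
```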
I’m trying to use print-stack-trace with GraalVM, but that doesn’t work since it has reflection issues (which can be resolved using type hints)
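For illustration, a reflection-free variant can be sketched by hinting the throwable and the stack-trace elements (print-stack-trace* is a hypothetical name, not part of clojure.stacktrace):

```clojure
;; Sketch: the ^Throwable and ^StackTraceElement hints avoid reflective
;; calls, which is what GraalVM native-image needs.
(defn print-stack-trace*
  [^Throwable t]
  (println (str t))
  (doseq [^StackTraceElement el (.getStackTrace t)]
    (println (str "\tat " el))))
```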
Does anybody know the motivation for having a single-argument arity of = in Clojure which always returns true? Is this a technical detail, or is there a mathematical reason for it?
I suppose it's so you can do things like (apply = coll) to mean "are all elements in coll equal?" without having to treat the trivial one-element case differently
However, you still have to handle the zero-element case explicitly, so I don't know if the victory is huge here.
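At the REPL, the arities behave like this:

```clojure
(= 1)             ;; => true  (single argument: trivially equal)
(apply = [42])    ;; => true  (one-element collection needs no special case)
(apply = [1 1 2]) ;; => false
;; (apply = [])   ;; throws ArityException: = has no 0-argument arity
```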
yeah, I was about to say that "mathematically" it makes sense to have a 0-argument arity that always returns true also
which should make even less sense unless you think of it as saying "are my arguments all in ascending order?"
What you describe is actually different, it's "Doesn't the collection contain distinct elements?"
There is a place for a function that behaves like that, but it's a stretch to link that and =
> are my arguments all in ascending order?
That interpretation of < makes sense, so I don't question it.
well, the alternative would be to throw an exception, which makes things harder to reason about
> I think it's also convention among other lisps to define = in this way
Yeah, that is a valid argument.
How do you spec a hashmap whose keys are not specs themselves? Say I need to provide a spec for a hashmap whose keys are string literals rather than keywords. Should I be using something other than spec/keys?
I am using a multimethod that dispatches based on deriving the spec of an incoming message.
Should I just roll my own spec (with contains?), or is there something in the standard library, or a simple way of working with spec/keys that I am not aware of?
Right now, we don’t have anything to specifically support keys as strings
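One common workaround is s/map-of plus a plain predicate for required keys (the spec name here is made up for illustration):

```clojure
(require '[clojure.spec.alpha :as s])

;; Sketch: validate key/value types with map-of, and check required
;; string keys with an ordinary contains? predicate.
(s/def ::string-keyed-msg
  (s/and (s/map-of string? any?)
         #(contains? % "type")))

(s/valid? ::string-keyed-msg {"type" "ping", "payload" 42}) ;; => true
(s/valid? ::string-keyed-msg {:type "ping"})                ;; => false
```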
> (-> '[^int? ^some? a] first meta)
{:tag int?}
Is ^some? swallowed (lost) here?
(I realise it is not expected to pass functions as metadata, but I'm developing a little something.)
Well, ^foo bar is the same as ^{:tag 'foo} bar, and ^foo ^bar baz is the same as ^{:tag 'foo} ^{:tag 'bar} baz, which is semantically (with-meta (with-meta baz {:tag 'bar}) {:tag 'foo})
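Since the outer with-meta replaces the whole metadata map, the outer :tag wins, which is why the inner ^some? tag is lost:

```clojure
;; The inner {:tag 'bar} is overwritten by the outer {:tag 'foo}:
(meta (with-meta (with-meta 'baz {:tag 'bar}) {:tag 'foo}))
;; => {:tag foo}
```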
I’m trying to pass a custom deftype over a core.async channel, and it seems to come out the other side as a seq, losing its type information and structure.
doesn't ring a bell
user=> (require '[clojure.core.async :as a])
nil
user=> (deftype Foo [x])
user.Foo
user=> (def c (a/chan 1))
#'user/c
user=> (a/>!! c (->Foo 1))
true
user=> (def c2 (a/<!! c))
#'user/c2
user=> (class c2)
user.Foo
user=> (ancestors (class c2))
#{clojure.lang.IType java.lang.Object}
user=> (.-x c2)
1
looks fine to me with a simple test
That seems to be enough for it to be treated as one; if I remove that implementation, it makes it through intact:
user=> (->Segment 0 :foo [1 2 3])
(1 2 3)
user=> (class (->Segment 0 :foo [1 2 3]))
user.Segment
print-dup, or whatever is responsible for printing here; I can never remember what does what
this actually cropped up because I’m trying to match on it (using core.match) on the other side of the channel
well you've told clojure that Segment is a seq so it's printing it using the seq printer
if the type is definitely getting lost maybe you have something that tries to pr-str/read-string the value?
I guess that explains why a lot of things implement a “to seq” style rather than directly implement seq
I remember a comment in core.match about destructuring problems, since the types implemented both the seq and assoc protocols
yes, the general advice is to make types ISeqable instead of ISeq, when you care about them being more than simple seqs
which doesn't really make the type less powerful since most collections operate on seqables instead of just on seqs
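A sketch of that advice, using a Segment type with hypothetical fields like the one above:

```clojure
;; Implementing Seqable (not ISeq) keeps the type's default printing and
;; identity, while still supporting (seq segment), map, reduce, etc.
(deftype Segment [idx tag items]
  clojure.lang.Seqable
  (seq [_] (seq items)))

(class (->Segment 0 :foo [1 2 3])) ;; => user.Segment, not a seq printout
(seq (->Segment 0 :foo [1 2 3]))   ;; => (1 2 3)
```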
Did anyone ever have a problem where the JVM doesn't flush SoftReferences even when the heap is 99% occupied, and everything is spinning in full GC most of the time? I'm using hot Java recompilation heavily, which inflates DynamicClassLoader's classCache, and it never clears.
well DCL only clears cache when you define a new class
due to Java recomp are you never defining new classes?
and thus never triggering a clear?
> due to Java recomp are you never defining new classes?
It's the opposite, I define new classes all the time with defineClass.
I'm using Virgil, here's how a class gets defined there: https://github.com/ztellman/virgil/blob/master/src/virgil/compile.clj#L129
In fact it might not even be related to Virgil/Java compilation, as I'm using tools.namespace too, and the main contributors to the inflated heap seem to be some big data objects that are referenced by Clojure function classes, which are in turn kept around by classCache.
I took a heap dump of the application, and MAT couldn't find any live paths to GC roots from those big objects. Full GC was doing nothing. However, as soon as I manually triggered an OOM, those soft references successfully disappeared.
when you say "explicitly triggering an outofmemoryerror" what mechanism are you using to do that?
@alexmiller Interestingly enough, right now the DynamicClassLoader.rq
is empty, but those SoftReferences are still there.
and on what thread are you raising that exception, and what happens if you raise a different exception on that thread?
I learned that trick here: https://stackoverflow.com/questions/3785713/how-to-make-the-java-system-release-soft-references/3810234#3810234
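That trick can be sketched in Clojure as allocating until an OutOfMemoryError; the JVM guarantees all SoftReferences are cleared before the error is thrown (flush-soft-refs! is a made-up name):

```clojure
(defn flush-soft-refs!
  "Allocate until OOME; the JVM clears all SoftReferences before
  throwing it, and the accumulated chunks become unreachable in the catch."
  []
  (try
    (loop [chunks []]
      (recur (conj chunks (byte-array (* 64 1024 1024)))))
    (catch OutOfMemoryError _
      :soft-refs-cleared)))
```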
it sounds like you are allocating small enough amounts of memory that each full GC recovers a very small amount, and that seems to be enough that the soft refs aren't cleared; then you allocate again, so full GC runs again, etc.
I doubt changing the heap size would work here; it's currently set to 7g for dev time, and the leak manifests itself very slowly: it takes hundreds of reloads before the heap becomes occupied enough to cause lags and hangups. And it never recovers past that point: I was at 100% OldGen utilization, and it still refused to give up those SoftRefs.
> (or fiddle with GC tuning params)
That one could possibly work, yet it is still strange to me that such behavior could be the default. Maybe switching GCs would help, but I don't want to move away from ParGC during development.
if it is what I described, the issue is in the relationship between the GC cycles and the mutator cycles; it isn't an issue of heap size per se, but changing the heap size will change the GC cycle and hopefully break the relationship
I prefer a predictable GC in development, but this is subjective and another story. Thanks for the help, anyway.
in java 12, I know the new ZGC actually has support for parallel class unloading, which might be helpful for you
in general though, I'd say this is a level of gc tuning that I am aware of, but not an expert in :)
Interesting, I didn't link this problem to class unloading; they could indeed be related. I'm going to study the GC logs and see if there's a mention of it there.