2022-02-19
Channels
- # announcements (1)
- # architecture (8)
- # babashka (8)
- # beginners (68)
- # biff (1)
- # calva (2)
- # clj-kondo (13)
- # cljs-dev (2)
- # clojure (71)
- # clojure-art (26)
- # clojure-europe (14)
- # clojure-nl (10)
- # clojure-uk (4)
- # clojurescript (96)
- # community-development (6)
- # conjure (1)
- # datalog (2)
- # emacs (6)
- # fulcro (20)
- # hugsql (7)
- # lsp (6)
- # nextjournal (13)
- # off-topic (7)
- # portal (1)
- # reagent (3)
- # reveal (8)
- # sci (50)
- # shadow-cljs (8)
- # spacemacs (2)
- # tools-deps (9)
- # xtdb (6)
There are a few different Clojure REPLs; the two main ones are clojure.main/repl and nREPL
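For context, a minimal sketch of starting each from a running process (the nREPL part assumes the nrepl dependency is on the classpath; port 7888 is arbitrary):
(require 'clojure.main)
(clojure.main/repl)                           ; starts a sub-REPL on the current stdin/stdout

(require '[nrepl.server :as nrepl])
(def server (nrepl/start-server :port 7888))  ; an nREPL server that editors/tools can connect to
;; (nrepl/stop-server server) when finished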
How do I convert "123" to int 123 in Clojure?
("123" "456") => [123 456]
(Integer. "123")
?
If you're using the new 1.11 beta or RC versions you can check (apropos "parse") and see that there are some handy new parsing functions for exactly this purpose: (parse-long "123"). But this is only available in the upcoming release, and not in the stable releases 1.10.3 or below.
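For reference, a quick sketch of the options (parse-long needs 1.11+; Long/parseLong works on any version but throws on bad input):
(Long/parseLong "123")            ;=> 123 (any Clojure version; throws NumberFormatException on bad input)
(parse-long "123")                ;=> 123 (Clojure 1.11+; returns nil on bad input)
(mapv parse-long '("123" "456"))  ;=> [123 456]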
(if a foo1 foo2)
hello, can clojure.tools.logging log the line number and the function name of the code?
Take a look at mulog, which can support that kind of feedback. Function name is one of the defaults, and line number could be added as a key/value in the log call: https://cljdoc.org/d/com.brunobonacci/mulog/0.8.1/doc/readme
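A minimal sketch of what that might look like, assuming the com.brunobonacci/mulog dependency (the event name and key/value pairs here are made up):
(require '[com.brunobonacci.mulog :as u])
(u/start-publisher! {:type :console})              ; print log events to stdout
(u/log ::order-processed :order-id 1234 :line 42)  ; arbitrary key/values, e.g. a hand-supplied :line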
@haiyuan.vinurs Logging can produce filename and line number but that adds an overhead -- see the docs for the particular (Java) logging implementation you are using. It won't be able to figure out the Clojure function name because logging generally extracts that information from stacktraces and they'll be formatted for the compiled-to-Java-bytecode names at best.
At work, we include the namespace in the log format and we try to ensure that messages in logging calls are unique enough within a namespace to be able to find them easily.
I've never written tests for functions returning floating point values, so I'm wondering how I should test that a value is close enough to 0 or N.
I wrote this simple function
(defn polar-to-cartesian
  "polar to cartesian coordinates"
  [radius angle]
  [(* radius (m/cos angle))
   (* radius (m/sin angle))])
And this test
(deftest polar-to-cartesian-test
  (testing "Points on a circle at various angles"
    (is (= [1.0 0.0]
           (polar-to-cartesian 1 0)))
    (is (= [0.0 1.0]
           (polar-to-cartesian 1 (/ m/PI 2))))))
The problem is with that last expression:
(polar-to-cartesian 1 (/ m/PI 2)) => [6.123233995736766E-17 1.0]
I get a value for the x coordinate that is close to, but not, zero. So I tried stealing the ulp= function from clojure.math, but I guess the value isn't close enough to zero?
(defn ulp=
  "Tests that y = x +/- m*ulp(x)
  Borrowed from: "
  [x y ^double m]
  (let [mu (* (m/ulp x) m)]
    (<= (- x mu) y (+ x mu))))
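A quick REPL check (assuming m is clojure.math, as in the code above) shows why this fails near zero: the ulp of 0.0 is the smallest positive double, so the allowed band around 0.0 is vastly smaller than the 6.1e-17 error.
(m/ulp 0.0)                            ;=> 4.9E-324
(ulp= 0.0 6.123233995736766E-17 2.0)   ;=> false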
The ulp bounds for math ops are specific to only one op - when you are combining ops, the differences could be larger
How close is the value you're getting?
I'm new to testing floating point stuff in general, so maybe a good question for you experts is "How close to a target value is conventionally considered close enough?"
In general, you define a delta value, and check for:
(> delta (abs (- expected actual)))
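A minimal sketch of that as a test helper (approx= is just an illustrative name; abs is in core from Clojure 1.11, use Math/abs on older versions):
(defn approx=
  "True when actual is within delta of expected."
  [expected actual delta]
  (> delta (abs (- expected actual))))

(deftest polar-to-cartesian-approx-test
  (let [[x y] (polar-to-cartesian 1 (/ m/PI 2))]
    (is (approx= 0.0 x 1e-12))
    (is (approx= 1.0 y 1e-12))))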
@ericdlaspe Expectations has an approximately predicate for things like that: https://github.com/clojure-expectations/clojure-test/blob/develop/src/expectations/clojure/test.cljc#L475-L480 so you could write something like:
(defexpect polar-to-cartesian-test
  (expecting "Points on a circle at various angles"
    (expect (more-of [x y] (approximately 1.0) x (approximately 0.0) y)
      (polar-to-cartesian 1 0))
    ...))
So, when I'm doing FP math generally, I need to think about how many operations are done at once and decide when and how to do rounding to keep the errors acceptable?
The value of delta can vary, depending on the operation. A single operation, and you can make delta very small. But as you increase how many operations are involved, then the value for delta will need to grow. A single cos or sin should just need something in the order of 1e-15
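Checking the value from earlier against that order of magnitude:
(< 6.123233995736766E-17 1e-15)   ;=> true, so a delta of 1e-15 would accept that x value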
There are entire courses on this subject - acceptable errors, numerical stability, etc. Just pointing out there's no simple "do this and you're done" answer
It depends a lot on the particular situation. For example, in accounting, there are specific rules for rounding.
I'm just playing around with graphics for now, but I'll keep that in mind and try to adhere to some kind of best practices... after I read more on the subject.
A general rule for rounding would be: "Use the maximum precision you can for most of the calculations, and round only on 'output'".
Rounding has its own issue, independent of the imprecision errors we were discussing earlier.
I'll study those a bit. Coming from C++, I've heard tales of various floating point oddities, but I was always safe with my area just being bit twiddling and integer math. I assume every language has similar but different FP challenges, so I'll just have to get familiar with Clojure's.
There may be some oddities to specific languages, but in general it’s the same. Once upon a time, every system did their own thing, but the IEEE-754 standard was created in 1985 to deal with this. Since then, hardware and software have all adopted this standard, so it’s reasonably consistent everywhere
Engineering and physics teaches students about “significant digits”, which tells you how many decimal places are useful. For instance, if I add 15.001 and 2.3 then the answer is NOT 17.301! Why? Because when I say “2.3” that could be 2.34 and rounded down, or 2.27 and rounded up. You don’t know. That level of rounding completely swamps the 0.001, so there’s no point in even mentioning it. It’s irrelevant information, and misleads people into thinking that you have accuracy that isn’t there.
This is why in engineering and physics we will often see numbers like: “2.300”. This tells us that we have 4 significant figures. It means that we have a number that can be as low as 2.29950000…., or as high as 2.300499999….
When you use numbers with different numbers of significant figures together, you always have to move to the one that has the lower number
So, if I’m adding 15.001 (5 significant figures, and 3 decimal places of accuracy) and 2.3 (2 significant figures, and 1 decimal place of accuracy), the answer is: 17.3
It’s often covered in https://courses.lumenlearning.com/physics/chapter/1-3-accuracy-precision-and-significant-figures/. It’s a different issue to precision in floating point math, but it’s also talking about the same thing, which is why I think it’s useful to know about.
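In code terms, this combines with the earlier "round only on output" advice (a sketch; %.1f keeps one decimal place to match 2.3's precision):
(def total (+ 15.001 2.3))   ; keep full double precision internally
(format "%.1f" total)        ;=> "17.3", rounded only when formatting for output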
Thanks for the explanation, quoll. I am familiar with significant figures from my HS and undergrad physics/math courses. I guess the thing I'll have to learn is how to determine how many significant figures these operations result in. E.g., if I do (def HALFPI (/ math.PI 2)), how many figures do I get? And the same for (math.cos HALFPI).
If you haven't already, read this canonical reference: https://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html
And this, for Clojure https://clojure.org/guides/equality#numbers
And with that, I'm off to read. Thank you all, @alexmiller @quoll @seancorfield @dpsutton and @dorab!
Hi. I’d like to take-while but include the first item that fails the predicate. What’s the right way to do that?
I’d hesitate to say “right” way. There are lots of ways to do it. Sometimes the simplest way is the best.
One way would be to use take-while into a collection that can be appended to, and then get the next item:
(conj (into [] (take-while pred collection)) (first (drop-while pred collection)))
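For example (pos? and the sample vector are just illustrative):
(conj (into [] (take-while pos? [1 2 -3 4])) (first (drop-while pos? [1 2 -3 4])))
;=> [1 2 -3]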
But maybe you don’t want to process the seq twice. So consider what you see when you do (source take-while) and just look at the 2-arity version:
=> (source take-while)
(defn take-while
  "Returns a lazy sequence of successive items from coll while
  (pred item) returns logical true. pred must be free of side-effects.
  Returns a transducer when no collection is provided."
  {:added "1.0"
   :static true}
  ;; ---- Deleted the transducer version ----
  ([pred coll]
   (lazy-seq
    (when-let [s (seq coll)]
      (when (pred (first s))
        (cons (first s) (take-while pred (rest s))))))))
That can be duplicated, except instead of a when to detect the end and return nil, you can use an if:
(defn take-until
  [pred coll]
  (lazy-seq
   (when-let [s (seq coll)]
     (if (pred (first s))
       (cons (first s) (take-until pred (rest s)))
       (list (first s))))))   ; wrap in a list: lazy-seq's body must return a seq (or nil)
That (first s) looks redundant, so I’d use a let block (or when-let in this case):
(defn take-until
  [pred coll]
  (lazy-seq
   (when-let [[first-s & rest-s] (seq coll)]
     (if (pred first-s)
       (cons first-s (take-until pred rest-s))
       (list first-s)))))     ; again, return a seq so lazy-seq can realize it
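A quick check at the REPL (odd? and the sample vector are just illustrative):
(take-until odd? [1 3 5 6 7 9])   ;=> (1 3 5 6), i.e. take-while odd? plus the first item that fails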
I was a bit surprised that split-with is literally just [(take-while pred coll) (drop-while pred coll)] rather than being something more efficient, but I guess that's required to keep both parts lazy...
(let [[p q] (split-with pred coll)]
  (conj (vec p) (first q)))
but still that double-walk. (And my example loses the laziness on the first part, so it's not equivalent anyway)
hmm, so nothing short of basically desugaring take-while, which is fine by me. I have a vague and maybe imaginary notion of a similar operator in some other language that optionally includes the first item to fail.
well, take-while is a convenience. You can see that it’s not a complex function at all. Rewriting it to do what you want is not a significant effort
After all, it doesn’t need to be in core. You could probably have written it yourself without much effort
indeed
Okay, sounds good. Mainly I didn’t want to find out that there’s a do-not-keep-taking-after that I could’ve used