This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
- # beginners (144)
- # boot (40)
- # cljsjs (1)
- # cljsrn (30)
- # clojure (190)
- # clojure-india (1)
- # clojure-poland (7)
- # clojure-russia (13)
- # clojure-spec (2)
- # clojurescript (2)
- # component (23)
- # css (6)
- # emacs (3)
- # events (5)
- # garden (4)
- # hoplon (2)
- # jobs-discuss (2)
- # klipse (1)
- # lein-figwheel (1)
- # off-topic (36)
- # re-frame (28)
- # reagent (2)
- # ring (7)
- # ring-swagger (2)
- # rum (3)
- # test-check (4)
- # untangled (4)
hey, I don't know much about Java and Leiningen. Can someone tell me how I can use one project I'm developing from another one that depends on it? I'd like to automate it rather than always creating a jar and copying it around. thanks!
you usually won't put a project into another, just depend on it like any other 3rd party lib.
you can install your dependent project into your local maven cache instead of deploying it to a repo server somewhere
(defproject my.project/foo "0.2.4" ... )
(defproject my.project/bar "0.1.1" :dependencies [[my.project/foo "0.2.4"]] ... )
you can install the dependent project's artifact into your local maven cache using
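(The command being referred to here is presumably `lein install`; a sketch assuming a standard Leiningen setup:)

```shell
# In the checkout of my.project/foo (the library):
lein install   # builds the jar and installs it into ~/.m2/repository

# my.project/bar can now resolve [my.project/foo "0.2.4"]
# from the local Maven cache with no repo server involved.
```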
depends on what project you're working on really.... running alpha-ware in production is probably not the best idea, there are still bugs being fixed here and there with 1.9
@akiroz In general this is true, in clojure's case, not so much, as it has a history of high quality alpha releases
there's exceptions - eg. 1.5.0 wasn't truly ready when it hit stable, and had to be replaced by 1.5.1 - but compared to other software it's remarkable that I can only think of that one serious example
hi, I've started reading a book called Clojure Programming and was trying to make the average function variadic. Also I want to be able to pass vectors, lists and single numbers. My current code throws a ClassCastException (LazySeq cannot be cast to Number)
here's the code
(defn sum [& args] (apply + args)) (defn average [& args] (let [args (flatten args)] (/ (sum args) (count args))))
(sum [1 2 3 4] 10) gives
((1 2 3 4 10)) as args to sum, I think this is the problem
there's a bit of blind-leading-the-blind here since I'm also a clj beginner, but I took a crack at it and ended up with this:
(defn sum [& args] (reduce #(if (sequential? %2) (+ %1 (apply sum %2)) (+ %1 %2)) 0 args)) (sum [0 1 2 3 4] 10) ;; => 20
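fwiw, a minimal sketch of a fix for the original average (assuming flattening nested collections is the desired behavior): flatten once in average, then everything downstream sees plain numbers, so the LazySeq cast error goes away:

```clojure
(defn average [& args]
  ;; flatten turns ([1 2 3 4] 10) into (1 2 3 4 10)
  (let [nums (flatten args)]
    (/ (reduce + nums) (count nums))))

(average [1 2 3 4] 10) ;=> 4
```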
so first off, I don't actually know what the proper check is supposed to be since Clojure has
I don't think the /clj bot will even let you do defn - you could use letfn or let to put an example in one form
you can get a StackOverflowError if you have too many nesting levels in your arguments (:
trying to write a function which calculates the euclidean distance of dimension n, but I am having some trouble
(defn euclidean-distance [v1 v2] (if (not (= (count v1) (count v2))) nil (let [v (map vector v1 v2)] (println v) (apply #(Math/pow (- (first %1) (last %1)) 2) v))))
I am passing in vector v1
[1 1 1 1 1] and vector v2
[0 0 0 0 0] and zipping it to
([1 0] [1 0] ...).
But isn't the argument to the anonymous function [1 0] or is apply collecting many arguments and giving it to the anonymous fn?
not very readable though. Maybe I'm not clojurian enough?
(defn euclidean-distance [v1 v2] (if (not (= (count v1) (count v2))) nil (let [v (map vector v1 v2)] (Math/sqrt (apply + (map #(apply - %1) v))))))
@emil.a.hammarstrom - can you share an example input/output so I can be sure I'm refactoring it correctly? trying to come up with a more idiomatic version
OK - it can't calculate negative distance (and neither can mine) but this does what yours does, as far as I can tell
(defn euclidean-distance-2 [v1 v2] (when (= (count v1) (count v2)) (Math/sqrt (apply + (map - v1 v2)))))
fixing it so it can calculate a negative distance, it gets more complex again:
(defn euclidean-distance-2 [v1 v2] (when (= (count v1) (count v2)) (let [diff (apply + (map - v1 v2)) abs-diff (Math/abs diff)] (* (Math/signum (double diff)) (Math/sqrt abs-diff)))))
but, if you accept the premise that having extra dimensions means we can assume those dimensions are identical in the item where they were not specified (arguable, if not totally wrong) we could eliminate the check for equal count and it would just not calculate a difference for any dimension missing in one of the vectors
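worth noting that the refactored versions dropped the squaring from the original `Math/pow` call; a sketch of the textbook Euclidean distance, which squares each coordinate difference (so the negative-distance question goes away on its own):

```clojure
(defn euclidean-distance [v1 v2]
  (when (= (count v1) (count v2))
    ;; map with two colls pairs up coordinates; square each difference
    (Math/sqrt (reduce + (map #(let [d (- %1 %2)] (* d d)) v1 v2)))))

(euclidean-distance [1 1 1 1 1] [0 0 0 0 0]) ;=> 2.23606... (sqrt of 5)
```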
for many varargs functions, reduce and apply give the same answer, but apply will typically perform better (clojure.core has arity optimizations that won't work with reduce)
@mobileink reduce applies the fn to an accumulator and each successive item; apply calls the fn once with the whole argument list
right - if you use clojure.repl/source you will see that apply usually ends up using reduce, for functions that take N args
with + it's a wash, but as a habit "when in doubt use apply" usually turns out best
any performance implications? (this is kinda embarrassing - i've spent a lot of time with clojure, but mainly metaprogramming stuff; i feel like i should instantly know the answer to this, but alas. heh)
as I said, clojure.core functions often have optimizations for specific lower arity counts, so that using apply will perform better than reduce in those cases
@mobileink a better example is str (apply str args) and (reduce str args) have the same result, but apply is much faster because all the concatenations can share a single StringBuilder object
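a quick sketch of that point: both forms give the same string, but the variadic arity of `str` builds the whole result in one pass, while `reduce` calls the two-arg arity repeatedly and allocates an intermediate string at each step:

```clojure
(def words ["foo" "bar" "baz"])

(apply str words)  ;=> "foobarbaz"  (one StringBuilder internally)
(reduce str words) ;=> "foobarbaz"  (a fresh string per step)
```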
I would say you should usually use reduce over apply, but really it depends and if it matters, you should test rather than guess
i guess this is like any language - if you want hardcore optimization, you'd better be on good terms with complexity.
clojure does a great job of ensuring the simple thing is more likely to be correct, but yeah, if you need to really know which thing is faster you need to start measuring things and you need to learn more of the messy details - optimization has an intrinsic complexity for sure (in the cases where the better performing thing is the simple thing, we don't need optimization as a process, it's often when optimization increases complexity that the need for it comes into play...)
Optimization requires measurement. Modern pipelined architectures are un-intuitive enough that just guessing at what will be faster, once you're down to the real micro-optimization level, will often lead you astray.
idle question: is optimization easier in clojure? as opposed to other langs targeting the jvm.
I spent quite a lot of time optimizing some expensive algorithms for computer animation written in C89 a couple of years ago and...
@mobileink often no, because of issues like boxing and reflection that are much simpler to deal with in plain java - but the time saved on everything else in clojure gives you the time to do that stuff right
One of the things I learned from that was that my intuitions were often wrong. Modern architectures are so fast at floating point math that caching computed values is often a pessimization, if it leads to branch mis-prediction.
i suppose all bets are off for a lang->jvm language once you need to get behind the surface.
@mobileink The first step in optimization is to make sure your algorithms are reasonable.
and clojure can help with that - the extra costs it applies tend to be the constant-time type (boxing, reflection, indirection)
@mobileink So if using Clojure makes your development easier to the extent that it frees you up to think harder about that, then in that case it might make optimization easier, IMHO.
Clojure would probably not be the correct choice. Nothing on the JVM would. But note that I'm talking about stuff that is not very common.
@tagore a good and maybe under-appreciated point. much easier to optimize if you have good reason to think what you're optimizing is correct.
It turns out that about 90% of Java projects that are slow turn out to be so because of bad string-handling....
but otoh, clojure is a language, and languages do not have a speed. for that it's all about the implementation. it's actually kinda silly to compare "language" performance
you can easily have languages with features that are intrinsically bad for performance
And that's why the software I have written that has to be really fast is written in C89.
eg. the ability to redefine fields and methods at runtime is terrible for caching and branch prediction
In theory you could get equal perfomance in other languages. In practice, you just do C or Fortran, if you need to truly micro-optimize.
on hardware that can exist in the real world, code that requires checking if your field still has the right value and if the method still exists / has the same code, with synchronization between threads, is expensive
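in Clojure's case this shows up as Var indirection: calls go through the Var, so redefinitions are visible immediately, and the JIT has to guard that indirection. a rough sketch of the behavior being described:

```clojure
(defn f [] 1)
(defn g [] (f))  ; g calls f through the #'f Var, not a direct link

(g)              ;=> 1
(defn f [] 2)    ; redefine at runtime
(g)              ;=> 2, g sees the new f with no recompilation
```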
I'm inclined to think that a good C compiler is smarter than me about certain optimizations.
Unless you bring the whole Von Neumann debate back: what would hardware designed around the lambda model of computation be like? Then maybe Lisps would be better tailored to optimize against that.
"Giving up on assembly language was the apple in our Garden of Eden: Languages whose use squanders machine cycles are sinful. The LISP machine now permits LISP programmers to abandon bra and fig-leaf."
One of the rare occasions on which Perlis turned out to be wrong, or at least a bit misguided.
Going back to optimization he also said "A LISP programmer knows the value of everything, but the cost of nothing." There's still some truth to that 😉.
Of course modern machines are so fast that we can do without fig-leaves 99% of the time.
@tagore At the time, they were going for optimizing instructions though, I think today, it's the memory model which hurts Lisp's performance the most.
@didibus Yep, one of the things I learned while micro-optimizing that software is that in a lot of cases only two things matter for performance: cache misses, and branch-misprediction.
You can do a lot of floating point math in the time it takes to go out to main memory and retrieve something.
Today I think it's more like you only have performance issues if you can't fit your data in cache.
@tagore Ya, I think that's true. I remember reading this blog post about how memory access in modern hardware isn't O(1), and should be seen more as O(√N)
Also, everything we call "optimisation" today, that's not research, is actually focused on optimizing the constant factor. In which case, reducing instruction count, switching to faster instructions, and limiting cache misses is all that can help. Some O(n) algorithms which make better use of caches can actually perform better than an O(log n) one that misses the cache at every turn, given a certain data set.
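a toy illustration of that last point (not a rigorous benchmark): both of these find the same element. the binary search does O(log n) comparisons but jumps around memory, while the scan reads it sequentially, which is why the O(n) version can win on small or cache-resident data:

```clojure
(let [arr (long-array (range 32))]
  [(java.util.Arrays/binarySearch arr 17)     ; O(log n), non-sequential access
   (count (take-while #(not= % 17) arr))])    ; O(n), cache-friendly scan
;=> [17 17]
```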
The first level is about not being dumb, and that's all that most applications need. Just don't do stupid things with strings and you'll likely be fine. At worst you might have to throw a bit more hardware at the problem, and in some cases that's cheaper than not being dumb.
Then there's the second level, where you're using good algorithms, not being dumb, and you still need to somehow find a 10x speedup.
That's where you start measuring things obsessively, worrying about different levels of cache, worrying about branch mis-prediction, etc.
@didibus Quite a while back I spent a year and some making a reasonably good living trouble-shooting systems for enterprises in NYC who were apparently unaware that profiling is a thing.
@didibus Occasionally they'd have a real problem that was hard, mainly around concurrency, etc.
@didibus But it was much more common for them to have performance problems, and about 90% of the time it was easily traceable to doing dumb things with strings. In quite a lot of cases it was literally just a matter of Schlemiel the Painter's algorithm.
@didibus I know that's hard to believe, but the truth is that in a lot of enterprise apps being dumb about strings is the main cause of performance problems. This was all quite a while ago, and I certainly hope people have gotten a bit smarter about this since.
@didibus At any rate, I can't complain- I was paid a significant amount to tell people to use StringBuffers... that's an oversimplification, but not much of one.
@tagore I'm just worried now I'm doing dumb things with strings that I don't know are dumb. Do you have any example? Do you mean like not using StringBuffers?
The implementations might be smarter now, but back in the day that required doing the same work over and over.
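the classic case, sketched in Clojure for concreteness: building a string by repeated concatenation re-copies everything accumulated so far on every step (Schlemiel the Painter, O(n²) overall), while a single variadic `str` pass uses one StringBuilder and is O(n):

```clojure
(defn slow-join [xs]
  (reduce #(str %1 %2) "" xs))  ; copies the whole accumulator each step

(defn fast-join [xs]
  (apply str xs))               ; one StringBuilder under the hood

(= (slow-join ["a" "b" "c"]) (fast-join ["a" "b" "c"])) ;=> true
```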
Or, for instance, hiding information in strings, so that you had to parse it out constantly...
"The string is a stark data structure and everywhere it is passed there is much duplication of process. It is a perfect vehicle for hiding information."