#beginners
2017-04-16
timo13:04:08

hey, I don't know too much about java and leiningen. Can someone tell me how I can use one project I'm developing inside another one that relies on it? I'd like to automate it rather than always creating a jar and copying it around. thanks!

akiroz13:04:46

you usually won't put a project into another, just depend on it like any other 3rd party lib.

akiroz13:04:43

you can install your dependent project into your local maven cache instead of deploying it to a repo server somewhere

akiroz13:04:14

dependent project:

(defproject my.project/foo "0.2.4"
  ...
  )
depending project:
(defproject my.project/bar "0.1.1"
  :dependencies [[my.project/foo "0.2.4"]]
  ...
  )

akiroz13:04:12

you can install the dependent project's artifact into your local maven cache using lein install.
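A minimal sketch of that workflow, assuming the two example projects above live in sibling directories foo/ and bar/ (hypothetical paths):

cd foo          # the dependent project, my.project/foo
lein install    # build the jar and put it in the local ~/.m2 cache
cd ../bar       # the depending project now resolves foo locally
lein test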

akiroz13:04:56

hope this helps @timok

timo13:04:34

thanks a lot! I will try that out.👍

timo16:04:30

would you recommend working with clojure 1.9 and spec? or stay with 1.8 without spec?

sveri16:04:26

👍 for working with 1.9 and spec
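For anyone weighing the upgrade, a minimal taste of what spec offers; note the namespace moved to clojure.spec.alpha during the 1.9 alpha cycle (it was clojure.spec in the earliest alphas):

(require '[clojure.spec.alpha :as s])

(s/def ::port (s/and int? #(< 0 % 65536)))

(s/valid? ::port 8080)  ;=> true
(s/explain ::port -1)   ; prints why -1 fails the spec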

akiroz16:04:27

depends on what project you're working on really.... running alpha-ware in production is probably not the best idea, there are still bugs being fixed here and there with 1.9

akiroz16:04:56

but if it's not a huge project then by all means go for it 👍

sveri16:04:18

@akiroz In general this is true, in clojure's case, not so much, as it has a history of high quality alpha releases

noisesmith16:04:09

there's exceptions - eg. 1.5.0 wasn't truly ready when it hit stable, and had to be replaced by 1.5.1 - but compared to other software it's remarkable that I can only think of that one serious example

emil.a.hammarstrom16:04:52

hi, I've started reading a book called Clojure Programming and was trying to make the average function variadic. Also I want to be able to pass vectors, lists and single numbers. My current code throws a LazySeq to Numbers cast error

emil.a.hammarstrom16:04:58

here's the code

(defn sum [& args]
  (apply + args))

(defn average [& args]
  (let [args (flatten args)]
    (/ (sum args) (count args))))

emil.a.hammarstrom16:04:40

(sum [1 2 3 4] 10) gives ((1 2 3 4 10)) as args to sum, I think this is the problem

noisesmith16:04:26

@emil.a.hammarstrom sum takes its args and turns them into a list

noisesmith16:04:29

that's what & does

noisesmith16:04:05

if you turned (sum args) into (apply sum args) it would work

noisesmith16:04:32

apply takes a collection and uses it as the positional args to a function

emil.a.hammarstrom16:04:07

guess I'll just (apply + args) and skip the sum function

noisesmith16:04:17

that works too!
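A minimal sketch of average with that fix, keeping flatten from the original so vectors, lists, and bare numbers all work:

(defn average [& args]
  (let [nums (flatten args)]
    (/ (apply + nums) (count nums))))

(average [1 2 3 4] 10) ;; => 4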

rgdelato17:04:46

there's a bit of blind-leading-the-blind here since I'm also a clj beginner, but I took a crack at it and ended up with this:

(defn sum [& args]
  (reduce
    #(if (sequential? %2)
        (+ %1 (apply sum %2))
        (+ %1 %2)) 
    0 args))

(sum [0 1 2 3 4] 10) ;; => 20

noisesmith17:04:26

that only works with one level of nesting though

noisesmith17:04:40

oh, wait, no, it does work with nesting that goes deeper
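A quick check that the reduce-based sum above really does recurse through deeper nesting:

(sum [1 [2 [3]]] 4) ;; => 10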

rgdelato17:04:00

so first off, I don't actually know what the proper check is supposed to be since Clojure has seq?, coll?, sequential?, etc.

rgdelato17:04:43

and secondly, I have no idea how to get /clj bot to run multiple expressions o.o

noisesmith17:04:00

I don't think the /clj bot will even let you do defn - you could use letfn or let to put an example in one form

featalion17:04:35

you can get a StackOverflowError if you have too many nesting levels in your arguments ( :

noisesmith17:04:37

flatten has the same problem

noisesmith17:04:29

it's remarkably easy to SO in clojure - see also concat-bombs
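A minimal sketch of a concat-bomb, assuming enough layers to exhaust the default stack:

;; each reduce step wraps one more lazy seq around the previous one;
;; nothing blows up until you realize an element and all the layers
;; have to unwind at once
(first (reduce concat (repeat 100000 [1]))) ;; => StackOverflowError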

jl17:04:07

Is there a better way to rewrite (defn one [] 1) to #()-syntax than #(identity 1)?

noisesmith17:04:48

(constantly 1) is best

noisesmith17:04:14

#(do 1) works but is silly
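constantly builds a function that returns the same value no matter what it's called with:

(def one (constantly 1))
(one)            ;; => 1
(one :whatever)  ;; => 1, arguments are accepted and ignored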

emil.a.hammarstrom20:04:32

trying to write a function which calculates the euclidean distance of dimension n, but I am having some trouble

emil.a.hammarstrom20:04:37

(defn euclidean-distance
  [v1 v2]
  (if (not (= (count v1) (count v2)))
    nil
    (let [v (map vector v1 v2)]
      (println v)
        (apply
          #(Math/pow (- (first %1) (last %1)) 2) v))))

emil.a.hammarstrom20:04:17

I think the apply is throwing the current ArityException (5) error

emil.a.hammarstrom20:04:17

I am passing in vector v1 [1 1 1 1 1] and vector v2 [0 0 0 0 0] and zipping it to ([1 0] [1 0] ...).

noisesmith20:04:28

if v has more than one element, the anonymous function will error

emil.a.hammarstrom20:04:14

But isn't the argument to the anonymous function [1 0] or is apply collecting many arguments and giving it to the anonymous fn?

noisesmith20:04:28

apply uses each element in v as an argument
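A small illustration of why that produced the ArityException (5):

(let [v (map vector [1 1 1 1 1] [0 0 0 0 0])]
  ;; (apply f v) expands to (f [1 0] [1 0] [1 0] [1 0] [1 0]),
  ;; five arguments to a one-argument fn -> ArityException (5).
  ;; mapping the fn over v is what was intended:
  (map #(Math/pow (- (first %) (last %)) 2) v))
;; => (1.0 1.0 1.0 1.0 1.0)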

emil.a.hammarstrom20:04:37

hmm, not really what I want

emil.a.hammarstrom20:04:38

did it!

(defn euclidean-distance
  [v1 v2]
  (if (not (= (count v1) (count v2)))
    nil
    (let [v (map vector v1 v2)]
      (Math/sqrt (apply + (map #(apply - %1) v))))))
not very readable though. Maybe I'm not clojurian enough?

noisesmith20:04:30

@emil.a.hammarstrom - can you share an example input/output so I can be sure I'm refactoring it correctly? trying to come up with a more idiomatic version

noisesmith20:04:37

OK - it can't calculate negative distance (nor can mine) but this does what yours does, as far as I can tell

(defn euclidean-distance-2
  [v1 v2]
  (when (= (count v1) (count v2))
    (Math/sqrt (apply + (map - v1 v2)))))

noisesmith20:04:27

fixing it so it can calculate a negative distance, it gets more complex again:

(defn euclidean-distance-2
  [v1 v2]
  (when (= (count v1) (count v2))
    (let [diff (apply + (map - v1 v2))
          abs-diff (Math/abs diff)]
      (* (Math/signum (double diff))
         (Math/sqrt abs-diff)))))

noisesmith20:04:31

but, if you accept the premise that having extra dimensions means we can assume those dimensions are identical in the item where they were not specified (arguable, if not totally wrong) we could eliminate the check for equal count and it would just not calculate a difference for any dimension missing in one of the vectors

mobileink20:04:01

reduce v. apply? whassa diff?

emil.a.hammarstrom20:04:23

(euclidean-distance [1 1 1 1 1] [0 0 0 0 0]) => sqrt(5)

noisesmith20:04:35

cool, I get the same answer
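For reference, the textbook Euclidean distance squares each coordinate difference before summing, which also sidesteps the negative-distance issue above, since squares are never negative; a minimal sketch:

(defn euclidean-distance
  [v1 v2]
  (when (= (count v1) (count v2))
    (Math/sqrt (apply + (map #(Math/pow (- %1 %2) 2) v1 v2)))))

(euclidean-distance [1 1 1 1 1] [0 0 0 0 0]) ;; => 2.23606797749979, i.e. (Math/sqrt 5)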

noisesmith20:04:24

for many varargs functions, reduce and apply give the same answer, but apply will typically perform better (clojure.core has arity optimizations that won't work with reduce)

emil.a.hammarstrom20:04:29

@mobileink reduce applies fn on 2 consecutive items, apply applies fn to an argument list

emil.a.hammarstrom20:04:45

according to my interp of docs

noisesmith20:04:09

right - if you use clojure.repl/source you will see that apply usually ends up using reduce, for functions that take N args
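For example (printed output elided):

(require '[clojure.repl :refer [source]])
(source +)  ; the printed defn has fast fixed arities plus a variadic
            ; case built on clojure.core's internal reduce1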

noisesmith20:04:03

with + it's a wash, but as a habit "when in doubt use apply" usually turns out best

mobileink20:04:30

any performance implications? (this is kinda embarrassing - i've spent a lot of time with clojure, but mainly metaprogramming stuff; i feel like i should instantly know the answer to this, but alas. heh)

noisesmith20:04:28

as I said, clojure.core functions often have optimizations for specific lower arity counts, so that using apply will perform better than reduce in those cases

mobileink20:04:07

but you said apply usually ends up using reduce. i missed sth.

noisesmith20:04:26

@mobileink usually - not always - and not in the simple way every time

noisesmith20:04:06

@mobileink a better example is str: (apply str args) and (reduce str args) have the same result, but apply is much faster because all the concatenations can share a single StringBuilder object
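A quick comparison (same result, different allocation behavior):

(apply str ["a" "b" "c"])   ;; => "abc", one StringBuilder for the whole job
(reduce str ["a" "b" "c"])  ;; => "abc", a fresh String allocated at every step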

mobileink20:04:12

heh, i luv clojure. supremely simple except when it isnt.

Alex Miller (Clojure team)20:04:21

I would say you should usually use reduce over apply, but really it depends and if it matters, you should test rather than guess

mobileink20:04:33

i guess this is like any language - if you want hardcore optimization, you'd better be on good terms with complexity.

mobileink20:04:27

reduce does have the advantage of conceptual clarity.

noisesmith21:04:15

clojure does a great job of ensuring the simple thing is more likely to be correct, but yeah, if you need to really know which thing is faster you need to start measuring things and you need to learn more of the messy details - optimization has an intrinsic complexity for sure (in the cases where the better performing thing is the simple thing, we don't need optimization as a process, it's often when optimization increases complexity that the need for it comes into play...)

tagore21:04:23

Optimization requires measurement. Modern pipelined architectures are un-intuitive enough that just guessing at what will be faster, once you're down to the real micro-optimization level, will often lead you astray.

mobileink22:04:38

idle question: is optimization easier in clojure? as opposed to other langs targeting the jvm.

tagore22:04:47

I spent quite a lot of time optimizing some expensive algorithms for computer animation written in C89 a couple of years ago and...

mobileink22:04:10

guessing they're all pretty hairy.

noisesmith22:04:13

@mobileink often no, because of issues like boxing and reflection that are much simpler to deal with in plain java - but the time saved on everything else in clojure gives you the time to do that stuff right

noisesmith22:04:26

that is, often not easier in clojure

tagore22:04:50

One of the things I learned from that was that my intuitions were often wrong. Modern architectures are so fast at floating point math that caching computed values is often a pessimization, if it leads to branch mis-prediction.

mobileink22:04:31

i suppose all bets are off for a lang->jvm language once you need to get behind the surface.

tagore22:04:41

@mobileink The first step in optimization is to make sure your algorithms are reasonable.

noisesmith22:04:29

and clojure can help with that - the extra costs it applies tend to be the constant-time type (boxing, reflection, indirection)

mobileink22:04:57

yeah, clojure should be good at that.

tagore22:04:03

@mobileink So if using Clojure makes your development easier to the extent that it frees you up to think harder about that, then in that case it might make optimization easier, IMHO.

tagore22:04:44

There is actually very little code in the world that really needs to be fast.

tagore22:04:06

But, having worked on things that did need to be so...

tagore22:04:37

Clojure would probably not be the correct choice. Nothing on the JVM would. But note that I'm talking about stuff that is not very common.

mobileink22:04:11

@tagore a good and maybe under-appreciated point. much easier to optimize if you have good reason to think what you're optimizing is correct.

tagore22:04:22

It turns out that about 90% of Java projects that are slow turn out to be so because of bad string-handling....

mobileink22:04:13

but otoh, clojure is a language, and languages do not have a speed. for that it's all about the implementation. it's actually kinda silly to compare "language" performance

tagore22:04:20

I'd guess that Clojure would eliminate quite a lot of that, hopefully.

tagore22:04:42

Right- it's about implementations.

tagore22:04:52

But languages often imply implementations.

mobileink22:04:34

hah, i believe it! but we can't blame java for that.

noisesmith22:04:41

you can easily have languages with features that are intrinsically bad for performance

noisesmith22:04:58

(not that I'd say clojure is egregious on that account by any means)

tagore22:04:10

And that's also true... some features are very hard to make performant.

mobileink22:04:31

intrinsically? not so sure about that.

tagore22:04:50

And that's why the software I have written that has to be really fast is written in C89.

noisesmith22:04:51

eg. the ability to redefine fields and methods at runtime is terrible for caching and branch prediction

mobileink22:04:21

only true until somebody figgers out how to make it go fast.

tagore22:04:37

In theory you could get equal performance in other languages. In practice, you just do C or Fortran, if you need to truly micro-optimize.

mobileink22:04:05

i always liked assembler. 😉

noisesmith22:04:07

on hardware that can exist in the real world, code that requires checking if your field still has the right value and if the method still exists / has the same code, with synchronization between threads, is expensive

tagore22:04:11

But like I said, very little software has to be that fast.

tagore22:04:43

I'm inclined to think that a good C compiler is smarter than me about certain optimizations.

mobileink22:04:45

totally agree.

mobileink22:04:35

then again things continue to change at a truly mind-boggling pace.

didibus22:04:54

Until you bring the whole von Neumann debate back: what would hardware designed around the lambda model of computation be like, and then maybe Lisps would be better tailored to optimize against it.

tagore22:04:38

@didibus Well, the Lisp chip was a thing at one point, but....

tagore22:04:08

At least at the time it turned out to be faster to just make general-purpose chips.

tagore22:04:29

There's a nice Perlisism about that though... let me find it....

tagore22:04:47

"Giving up on assembly language was the apple in our Garden of Eden: Languages whose use squanders machine cycles are sinful. The LISP machine now permits LISP programmers to abandon bra and fig-leaf."

tagore22:04:40

One of the rare occasions on which Perlis turned out to be wrong, or at least a bit misguided.

tagore22:04:36

Going back to optimization, he also said "A LISP programmer knows the value of everything, but the cost of nothing." There's still some truth to that 😉.

tagore22:04:09

Of course modern machines are so fast that we can do without fig-leaves 99% of the time.

didibus22:04:03

@tagore At the time, they were going for optimizing instructions though. I think today it's the memory model that hurts Lisp's performance the most.

tagore22:04:50

@didibus Yep, one of the things I learned while micro-optimizing that software is that in a lot of cases only two things matter for performance: cache misses and branch mis-prediction.

tagore22:04:48

So memory layout turns out to be very, very important.

tagore22:04:09

You can do a lot of floating point math in the time it takes to go out to main memory and retrieve something.

tagore22:04:04

I'm reminded of my time working with databases, back in the day of small memory.

tagore22:04:27

You only had performance issues if you couldn't fit your data in memory, pretty much.

tagore22:04:07

Today I think it's more like you only have performance issues if you can't fit your data in cache.

didibus22:04:24

@tagore Ya, I think that's true. I remember reading this blog post about how memory access in modern hardware isn't O(1), and should be seen more as O(sqrt(N))

didibus23:04:41

Also, everything we call "optimisation" today, that's not research, is actually focused on optimizing the constant factor. In which case, reducing instruction count, switching to faster instructions, and limiting cache misses is all that can help. Some O(n) algorithms that make better use of caches could actually perform better than an O(log n) algorithm that misses the cache at every turn, given a certain data set.

tagore23:04:55

@didibus Yep, that article is about exactly what I was getting at.

tagore23:04:30

And as far as optimization goes- there's two levels to it.

tagore23:04:46

The first level is about not being dumb, and that's all that most applications need. Just don't do stupid things with strings and you'll likely be fine. At worst you might have to throw a bit more hardware at the problem, and in some cases that's cheaper than not being dumb.

tagore23:04:43

Then there's the second level, where you're using good algorithms, not being dumb, and you still need to somehow find a 10x speedup.

tagore23:04:05

That can often be done, but....

tagore23:04:56

That's where you start measuring things obsessively, worrying about different levels of cache, worrying about branch mis-prediction, etc.

tagore23:04:05

The vast bulk of software only requires the first level.

didibus23:04:38

@tagore Ya, 100% agree. What do you mean by stupid things with strings?

tagore23:04:09

@didibus Quite a while back I spent a year and some making a reasonably good living trouble-shooting systems for enterprises in NYC who were apparently unaware that profiling is a thing.

tagore23:04:36

@didibus Occasionally they'd have a real problem that was hard, mainly around concurrency, etc.

tagore23:04:52

@didibus But it was much more common for them to have performance problems, and about 90% of the time it was easily traceable to doing dumb things with strings. In quite a lot of cases it was literally just a matter of Schlemiel the Painter's algorithm.

tagore23:04:19

@didibus I know that's hard to believe, but the truth is that in a lot of enterprise apps being dumb about strings is the main cause of performance problems. This was all quite a while ago, and I certainly hope people have gotten a bit smarter about this since.

tagore23:04:29

@didibus At any rate, I can't complain- I was paid a significant amount to tell people to use StringBuffers... that's an oversimplification, but not much of one.

didibus23:04:40

@tagore I'm just worried now I'm doing dumb things with strings that I don't know are dumb. Do you have any example? Do you mean like not using StringBuffers?

tagore23:04:23

I mean like in Java doing s1 + s2 + s3 + s4

tagore23:04:09

The implementations might be smarter now, but back in the day that required doing the same work over and over.
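The same trap is easy to reproduce in Clojure by accumulating with str step by step; a sketch of the quadratic pattern next to the linear one:

;; Schlemiel the Painter: every step copies the whole accumulated string -> O(n^2)
(reduce str (range 10000))

;; linear: apply gives str the whole collection and a single StringBuilder
(apply str (range 10000))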

tagore23:04:32

Or, for instance, hiding information in strings, so that you had to parse it out constantly...

tagore23:04:41

Speaking of Perlisisms...

tagore23:04:34

"The string is a stark data structure and everywhere it is passed there is much duplication of process. It is a perfect vehicle for hiding information."

tagore23:04:08

It just turns out that in many enterprise apps being sloppy in your handling and use of strings makes things far slower than they should be.