#adventofcode
2017-12-15
minikomi03:12:20

I started trying to read this as a solution to day14 and my brain started to hurt lol.. need more morning coffee

minikomi04:12:26

another nice use of tree-seq, @bhauman!

minikomi04:12:38

i went the loop/recur route again for my flood

grzm05:12:18

Is it wrong that I as someone who likes coding have an aversion to binary numbers? Two days in a row 😞

minikomi05:12:12

40,000,000.. lol

minikomi05:12:18

hmm .. xor for speedup maybe?

minikomi05:12:05

ok, not too slow.. I'm sure there's room for improvement though!

grzm05:12:29

Wow. I love clojure.

fellshard06:12:28

The binary isn't too bad here, thankfully.

borkdude08:12:48

today was easy, luckily

ihabunek09:12:02

i'm learning that lazy sequences are slow

ihabunek09:12:15

and recursion is preferred for this kind of problem

ihabunek09:12:27

but not as nice to look at

borkdude09:12:38

@ihabunek what kind of performance are you looking at?

borkdude09:12:42

with lazy seqs

erwin09:12:38

for me lazy-seq + count is in the 20 second range

borkdude09:12:54

I got 11 for part 1, 7 for part 2

borkdude09:12:00

I’m using loop recur + iterate
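A loop/recur shape for counting matches without realizing a lazy sequence might look like the sketch below (factors 16807/48271 and the modulus 2147483647 are from the day 15 puzzle; `count-matches` is a made-up name, not borkdude's linked code):

```clojure
;; Count how many of the first n generated pairs agree on their
;; lowest 16 bits, carrying both generator states through the loop.
(defn count-matches [n a0 b0]
  (loop [a a0, b b0, i 0, matches 0]
    (if (== i n)
      matches
      (let [a' (rem (* a 16807) 2147483647)
            b' (rem (* b 48271) 2147483647)]
        (recur a' b' (inc i)
               (if (== (bit-and a' 0xffff) (bit-and b' 0xffff))
                 (inc matches)
                 matches))))))

;; With the puzzle's example seeds, the third pair matches:
(count-matches 5 65 8921)
;; => 1
```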

erwin09:12:38

for reference: pypy loop with generators is instantaneous

borkdude09:12:05

like below 10ms?

erwin09:12:27

no, 0.7 seconds for part-2, with pypy day15.py so including startup time

ihabunek09:12:19

I'm getting around 30s for both parts. Using lazy-seq for generators

minikomi09:12:55

I'm getting 7s for first and 5s for 2nd using lazy; ~1s using recursion for first part

ihabunek09:12:41

Huh. Can I see your code?

ihabunek09:12:32

Have not used iterate before. Will look into it

minikomi09:12:54

good one to use for these kinds of "feedback" problems

minikomi09:12:24

oh, i like your lower-bits fn, stealing that 😛

erwin09:12:25

@minikomi your implementation is not correct

erwin09:12:54

try it on A: 512 and B: 191

erwin09:12:58

for part2

erwin09:12:24

should be 323, yours gives 301

minikomi09:12:15

what's gone wrong?

minikomi09:12:55

for part 1 - 567?

erwin09:12:49

I had the same problem in my code, took me some time to find the problem 😞 (and some attempts typing the wrong answer 8))

minikomi09:12:55

Can you explain conceptually what the difference is?

erwin09:12:26

first: part 1 is also not correct, but there it doesn't matter, because there is no filtering involved and the start values do not have the same 16 lower bits. Your (and my) code generates 277, (* 277 16807) and so on, but the start value should not be in the generated steps (check with the example in the problem). For part 2, the modulo 8 and 4 checks let the start value through for input 512, and the pairs are off by 1 for the complete range
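The fix erwin describes can be sketched like this: build the sequence with `iterate`, and drop the seed before any part-2 filtering so the filtered streams stay aligned (`gen` is a hypothetical helper, not anyone's posted code):

```clojure
;; A generator sequence that excludes the seed: iterate yields the
;; seed first, so rest gives only the generated values.
(defn gen [factor seed]
  (rest (iterate (fn [x] (rem (* x factor) 2147483647)) seed)))

;; First three values of generator A for the puzzle's example seed 65:
(take 3 (gen 16807 65))
;; => (1092455 1181022009 245556042)
```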

minikomi09:12:40

oh, i see, i want to drop the first generated value - since they're seeds and not generated, but by having the drop where i had it, it drops the first filtered generated value

erwin09:12:12

no it compares the wrong values for the complete range

minikomi09:12:36

Ah, because 512 gets grouped with the first generated b value.. right

borkdude09:12:29

Btw, here’s my code: https://github.com/borkdude/aoc2017/blob/master/src/day15.clj I think it could be optimized if I would loop/recurify the ranges as well

minikomi09:12:16

ok, moved the drops inside the filters -- fixes the part2 for the seeds you gave

karlis09:12:25

day 15 from me: https://github.com/skazhy/advent/blob/master/src/advent/2017/day15.clj currently ~ 18 seconds for 1st / 10 for 2nd.

minikomi09:12:32

just got lucky with my seeds i guess 😛

erwin09:12:30

I learned that iterate doesn't give (f x) (f (f x)) but x (f x) 🤓
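For reference, that seed-first behaviour of core `iterate`:

```clojure
;; iterate returns the seed itself first, then successive applications
(take 4 (iterate inc 0))
;; => (0 1 2 3)

;; so a sequence of only generated values needs (rest (iterate f seed))
(take 3 (rest (iterate inc 0)))
;; => (1 2 3)
```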

ihabunek11:12:31

@borkdude looking at your implementation of multiple-of, what's wrong with (zero? (mod x y)) ?

borkdude11:12:45

@ihabunek nothing, but this is slightly faster 🙂

ihabunek11:12:54

ok, i guessed as much 🙂

borkdude11:12:07

Could be even faster when you inline 4 and 8
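The usual trick behind a faster `multiple-of` for powers of two is a bit mask (a sketch with made-up names; borkdude's actual definition is in his linked repo):

```clojure
;; x is a multiple of 2^n exactly when its lowest n bits are all zero,
;; so the division hidden in mod can be replaced by a bit-and:
(defn multiple-of-4? [x] (zero? (bit-and x 3)))
(defn multiple-of-8? [x] (zero? (bit-and x 7)))

(multiple-of-4? 8)  ;; => true
(multiple-of-8? 12) ;; => false
```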

ihabunek11:12:12

hm, quick-bench sounds interesting

ihabunek11:12:27

would partial application be as fast as inlining?

borkdude11:12:39

no, it would still do the calculation when you call the partial

orestis11:12:12

My day14 part 2 (flood fill) is at 7 seconds, using mostly plain sets.

ihabunek11:12:25

i "cheated" on day14... used a transient set, and it's well under 1s 🙂

ihabunek11:12:42

i guess localized mutability is ok though

orestis11:12:05

For today, 5s and 7s.

borkdude11:12:43

@orestis cool. Wonder why my part 2 is faster than part 1, but it may be the input

orestis11:12:45

Ah, iterate was what I should use.

orestis11:12:16

I was googling “clojure reduce infinite sequence” 🙂

ihabunek11:12:27

iterate is my function of the day as well

borkdude11:12:32

@ihabunek Better example:

boot.user=> (defn inc* [x] (println "inc") (inc x))
#'boot.user/inc*
boot.user=> (defn f [x y] (let [x' (inc* x)] (+ x' y)))
#'boot.user/f
boot.user=> (f 1 1)
inc
3
boot.user=> (def g (partial f 1))
#'boot.user/g
boot.user=> (g 1)
inc
3
In other words: partial does nothing for inlining stuff

ihabunek11:12:57

yes, i understand

ihabunek11:12:05

makes sense

ihabunek11:12:33

basically it only binds one of the inputs

borkdude11:12:44

I wonder if there is a faster way of comparing the lowest 16 bits than (== (unchecked-short ..) (unchecked-short ..)), maybe something using xor, or maybe it already does that on a lower level
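The xor idea does work: two numbers agree on their lowest 16 bits exactly when the low 16 bits of their xor are all zero. A sketch of both forms (hypothetical names, unmeasured):

```clojure
;; mask both operands, then compare
(defn low16= [a b]
  (== (bit-and a 0xffff) (bit-and b 0xffff)))

;; xor first: any differing low bit survives the xor, so one mask suffices
(defn low16-xor= [a b]
  (zero? (bit-and (bit-xor a b) 0xffff)))
```

Whether the xor version is actually faster on the JVM is an open question; JIT compilers often reduce both to the same machine code.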

ihabunek11:12:42

i think my code is slow because of the way i use sequences more than comparing bits.

ihabunek11:12:12

does this slack have a bot which runs clojure code like the irc channel does?

borkdude11:12:27

@ihabunek how does that work again?

ihabunek11:12:05

/clj (println 1)

ihabunek11:12:46

takes a little while though

ihabunek11:12:42

and now let's try /clj (range) 🙂

ihabunek11:12:10

i'll be good and not do that 🙂

ihabunek11:12:22

using unchecked-short instead of (bit-and a 0xffff) does nothing for my performance

ihabunek11:12:40

actually, it's slightly worse 🙂

borkdude11:12:37

@ihabunek it’s probably ok

borkdude11:12:33

sorry… I can’t remove it…

ihabunek11:12:46

now you know how to DOS this slack

borkdude11:12:01

who is the admin of this channel? 😉 @pvinis can you remove the range…

ihabunek11:12:56

@borkdude what do you get by prefixing things with ^long

ihabunek11:12:08

just so it doesn't use int?

borkdude11:12:24

@ihabunek evaluate the settings from the comment section at the bottom, then you’ll see warnings about boxing

borkdude11:12:27

so it’s to prevent boxing
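What that looks like in practice (a sketch; the constant is day 15's modulus, and `step` is a hypothetical name):

```clojure
;; warn whenever arithmetic falls back to boxed Long math
(set! *unchecked-math* :warn-on-boxed)

;; the ^long hints keep the multiply and rem in primitive long
;; arithmetic instead of boxing each intermediate value
(defn step ^long [^long x ^long factor]
  (rem (* x factor) 2147483647))

(step 65 16807)
;; => 1092455
```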

borkdude11:12:34

Terribly sorry for the long output by range… didn’t know it would consume that much screen real estate… pinging an admin who can remove it.

ihabunek11:12:26

i discovered parinfer for sublime text and don't know how i ever managed to code clojure before that

ihabunek11:12:34

i'm guessing half the people here are on emacs

Miķelis Vindavs12:12:54

I’m using Cursive in IntelliJ and it also has parinfer

Miķelis Vindavs12:12:03

Although not the latest and greatest version 3

robert-stuttaford12:12:18

all cleaned up 👼

borkdude12:12:24

thanks rob 🙂

robert-stuttaford12:12:35

y’all have fun now - merry conjmas!

borkdude12:12:48

xmas… x for transducers you know 😉

ihabunek13:12:49

switching between advent of clojure in the morning and python for my actual job is messing with my brain

borkdude13:12:36

it’s a nerd snipe

bhauman14:12:36

just went for the straightforward approach today nothing special

mfikes17:12:10

I messed around with macros to get it down to around 236 ms for part 1 and 675 ms for part 2.

mfikes18:12:03

Ahh cool. I forgot about unchecked-math and boxing. Fixing that results in 165 ms for part 1 and 295 ms for part 2.

borkdude18:12:11

Very cool Mike!

borkdude18:12:31

Good idea to use macros for inlining.

borkdude23:12:15

@mfikes Am I right that the macro approach only helps for inlining var values, but otherwise doesn’t help very much?

mfikes10:12:51

My initial motivation was to inline the arguments. I think you had also mentioned that inlining the 4 and 8 resulted in faster performance than if they were passed as arguments. I was seeing a similar effect. But var inlining is probably another effect. One odd thing I never figured out was that if I macroexpanded the solution to part 1, the cleaned up expansion inexplicably ran slower.
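A minimal sketch of the argument-inlining idea (a hypothetical macro, not mfikes' actual code):

```clojure
;; the factor and the operand become literals in the expansion, so no
;; argument passing or var lookup happens at runtime:
;; (gen-step 16807 65) expands to (rem (* 16807 65) 2147483647)
(defmacro gen-step [factor x]
  `(rem (* ~factor ~x) 2147483647))

(gen-step 16807 65)
;; => 1092455, the first generator-A value for seed 65
```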