#adventofcode
2018-12-11
mfikes02:12:19

This is an interesting way to calculate the “convergence time”: https://blog.jle.im/entry/shifting-the-stars.html

potetm17:12:56

😂 it’s…. it’s beautiful

mattly05:12:57

eff, I had an off-by-one in my initial pass for part 1 of day 11

quoll06:12:21

I’m brute forcing part 2, which always makes me feel iffy. Maybe it’s because it’s late and I should have done this in the morning, but I’m not getting any sense of intuition that there’s a faster way (except for memoizing the power value for grid points)

baritonehands06:12:36

I just printed when the max changes, and eventually it sat at the same value for a while

baritonehands06:12:45

it was correct when I entered it

baritonehands06:12:57

Didn't bother to wait until it was all done

taylor06:12:00

I’m melting my laptop while trying to come up with a way to reuse calculated regions from previous sizes

quoll06:12:26

oh, I like that!

quoll06:12:11

so, size 10 at x,y is the same as size 9 at x,y, with another 19 values

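A minimal sketch of the reuse quoll describes, assuming `power` is a `[x y] -> int` map of cell powers and `sums` caches earlier results keyed by `[x y size]` (both names are illustrative, not from anyone's posted solution):

```clojure
;; Grow the cached (n-1)×(n-1) square sum at [x y] into the n×n sum by
;; adding the new right column (n cells) and bottom row (n-1 cells, the
;; corner is already counted) - 19 new values for n = 10, as noted above.
(defn grow-square [power sums x y n]
  (+ (sums [x y (dec n)])
     (reduce + (for [i (range n)]
                 (power [(+ x (dec n)) (+ y i)])))
     (reduce + (for [j (range (dec n))]
                 (power [(+ x j) (+ y (dec n))])))))
```
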
norman06:12:14

I just did brute force with shared computations too. As it was running, I did notice that after a certain size, the larger squares always decreased the power

norman06:12:43

So in retrospect, I could have used that to figure out the likely answer more quickly.

taylor06:12:46

yeah, I noticed the same when trying sizes in decreasing order

taylor06:12:59

I’m going to start using massive EC2 instances so I can brute force all the remaining problems 🙂

taylor06:12:46

you can rent big computers from AWS

Average-user06:12:15

I limited it to calculating squares of maximum size 20x20, and it worked

taylor06:12:43

I just got my answer using same approach as baritonehands, way faster than my brute force solution would’ve ever reached

Average-user06:12:58

How long does part 1 take?

norman06:12:33

“Elapsed time: 2553.747123 msecs”

taylor06:12:17

I’m getting ~350ms for part 1

taylor06:12:28

I pre-calculate the grid into a map where the key is the X coord, and the value is a vector of the cell powers indexed by Y coord, then do calcs in a loop and use max-key to find largest

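For context, the per-cell value everyone is precomputing comes straight from the day 11 puzzle statement (rack ID is x + 10; take the hundreds digit of (rack-id * y + serial) * rack-id, minus 5). A direct transcription:

```clojure
;; Cell power from the day 11 puzzle statement: the hundreds digit of
;; ((x + 10) * y + serial) * (x + 10), minus 5.
(defn cell-power [serial x y]
  (let [rack-id (+ x 10)]
    (- (mod (quot (* rack-id (+ serial (* rack-id y))) 100) 10) 5)))
```

`(cell-power 8 3 5)` gives 4, matching the worked example in the puzzle.
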
quoll07:12:55

memoizing the power function should be similar, right?

taylor07:12:15

yeah I think so

taylor07:12:35

just two different ways of storing the same info in memory I guess

taylor06:12:31

this problem felt way easier than the past few days’ problems, and I’m glad b/c now I can go to sleep 💤

norman06:12:59

I thought yesterday (stars) was much easier than today, but day 9 (marbles) was by far the slowest for me.

quoll07:12:43

yay… divide and conquer

quoll07:12:58

Thank you @taylor. That helped me a lot

👏 4
quoll07:12:12

I continue with squares of 3 or less as I was. But for anything larger, if it was an odd size, I recursed on the (dec size) and then added the numbers around the bottom and right edges, and if it was even, I split it into 4, and recursed on each of the quadrants, adding the results

💡 4
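A sketch of the scheme quoll describes (function and argument names are mine; `power` is assumed to be a `[x y] -> int` map): sizes up to 3 are summed directly, odd sizes grow the `(dec size)` square by its bottom/right edges, and even sizes sum four quadrants, with memoize supplying the smaller squares.

```clojure
;; sq returns the sum of the n×n square anchored at [x y].
(def sq
  (memoize
   (fn [power x y n]
     (cond
       ;; small squares: sum directly
       (<= n 3) (reduce + (for [i (range n) j (range n)]
                            (power [(+ x i) (+ y j)])))
       ;; odd size: (dec n) square plus new right column and bottom row
       (odd? n) (+ (sq power x y (dec n))
                   (reduce + (for [i (range n)]
                               (power [(+ x (dec n)) (+ y i)])))
                   (reduce + (for [j (range (dec n))]
                               (power [(+ x j) (+ y (dec n))]))))
       ;; even size: four (n/2) quadrants
       :else    (let [h (quot n 2)]
                  (+ (sq power x y h)
                     (sq power (+ x h) y h)
                     (sq power x (+ y h) h)
                     (sq power (+ x h) (+ y h) h)))))))
```
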
quoll07:12:24

everything is memoized, so it gets those smaller squares from earlier

quoll07:12:21

With memoization when calculating those smaller squares, it’s like they say on TV: “Here’s one we did earlier” https://www.youtube.com/watch?v=K_zpdyBz3VM

quoll07:12:15

part 1: 932.224412 msecs part 2: 389148.15998 msecs (6min 29sec)

quoll07:12:29

not brilliant, but it gets me to bed! 🙂

helios08:12:06

i'm also brute forcing it to start

helios08:12:14

and when i'll get the right answer, i'll optimize

helios08:12:23

(unless getting the answer takes longer than a few minutes 😄 )

fellshard08:12:55

Memory management is vital.

borkdude08:12:45

My CLJ solution works, but the CLJS one is crapping slow

pesterhazy09:12:08

Brute-forcing is really slow - 8s for square size 20 (never mind 100)

pesterhazy09:12:28

Adding up numbers is not Clojure's forte

ihabunek11:12:16

i also bruteforced the solution and later found this on reddit

borkdude12:12:31

I have an idea how to optimize. already brought it down significantly, but need some time to generalize it

magic_bloat12:12:25

This is my day 11; every square (except the first) is calculated from an adjacent (overlapping) neighbour. It's not fast, but did all 300 square sizes in less than the time it took me to eat lunch 🙂 https://github.com/bloat/aoc2018/blob/master/src/net/slothrop/aoc2018/day11.clj

helios14:12:19

@pesterhazy did you also set unchecked math when doing operations?

helios14:12:24

i understood that it has quite a big impact

helios14:12:03

ps: I remembered about the seldom-used pmap; I think it can be very helpful in this case 😄 (but my solution is still slow AF)

helios14:12:24

now i wish i was using a desktop with a nice AMD threadripper 😆

benoit14:12:38

Using summed-area tables (as suggested by @ihabunek) https://github.com/benfle/advent-of-code-2018/blob/master/day11.clj Got me to 8s for part two.

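For reference, the summed-area-table trick from that link, as a minimal sketch (the names and map-based representation are illustrative, not benoit's actual code): `sat` holds, for each `[x y]`, the sum of all cells at or above-left of it, so any square sum becomes four lookups.

```clojure
;; Build the summed-area table for a w×h grid given a [x y] -> int map.
;; sat[[x y]] = power + sat[[x-1 y]] + sat[[x y-1]] - sat[[x-1 y-1]]
(defn summed-area [power w h]
  (reduce (fn [sat [x y]]
            (assoc sat [x y]
                   (+ (power [x y])
                      (get sat [(dec x) y] 0)
                      (get sat [x (dec y)] 0)
                      (- (get sat [(dec x) (dec y)] 0)))))
          {}
          (for [y (range h) x (range w)] [x y])))

;; Sum of the n×n square at [x y], in O(1) via inclusion-exclusion.
(defn square-sum [sat x y n]
  (let [x2 (+ x n -1) y2 (+ y n -1)]
    (+ (get sat [x2 y2] 0)
       (get sat [(dec x) (dec y)] 0)
       (- (get sat [(dec x) y2] 0))
       (- (get sat [x2 (dec y)] 0)))))
```
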
benoit14:12:53

I tried first to improve the brute force approach with a dynamic programming algorithm but that was still very slow.

pesterhazy15:12:25

@helios yeah I did, no dice

borkdude15:12:10

I get this time now:

Testing aoc.y2018.d11.borkdude
part-2 took 75854.05 msecs
part-1 took 0.46 msecs
I’ll leave it at that

borkdude15:12:29

The approach I took was to memoize divisions and when you need a bigger area, you cut it in parts that you already calculated

misha15:12:59

yeah, adding rows to (dec n) for "prime" ns is slow as f

genmeblog15:12:30

finally done day 9 part 2 in 6-7 seconds 😕

misha15:12:38

but the "total sum stops growing at some point" feels like a guess to me. good enough to submit an answer, but not ok for "learning purposes", unless there is known math behind it

taylor15:12:50

yeah I think it's dependent on the distribution of negative powers

namenu16:12:21

https://github.com/namenu/advent-of-code-2018/blob/master/src/year2018/day11.clj#L17 I've used memoize for summarized-table, which closes over a binding (`grid-serial`), and now I have to reload my REPL every time I change it... 😓 Can anyone give me advice on how to cache my table without reloading?

borkdude16:12:20

@namenu when I want to refresh the memoized fn I simply re-evaluate the whole namespace

borkdude16:12:21

not sure what you mean actually

namenu16:12:23

What if I want to memoize a function like,

(def efficient-f
  (memoize (fn [x] (+ x y))))
with various y? Is it possible?

borkdude16:12:50

No, but you can memoize (fn foo [x y] (+ x y))

borkdude16:12:10

you have to pass down the argument that varies

namenu16:12:47

yes, i'll have to do that. maybe I can curry out y. thanks!

borkdude16:12:05

you can never curry out something from the right

borkdude16:12:23

so maybe a good reason to move the serial to the first position

borkdude16:12:27

I did exactly that

borkdude16:12:43

(although I didn’t make use of it eventually)

😅 4
Ben Grabow16:12:44

You guys thought of it while I was testing it out. Seems to work fine if you swap the arg order and use partial.

(def memo-test
  (memoize (fn [y x]
             (Thread/sleep 1000)
             (+ x y))))

(def par-y (partial memo-test 10))
(par-y 5) ; => (wait 1 sec) 15
(par-y 5) ; => (no wait) 15

namenu17:12:20

Okay, I found a super interesting idea called the Y combinator and switched to it. 😊 https://blog.klipse.tech/lambda/2016/08/07/pure-y-combinator-clojure.html

misha16:12:18

"Elapsed time: 2826829.717059 msecs"
tatatananana

potetm17:12:21

I know day 10 has come and gone

potetm17:12:47

So here’s the solution in clojure

potetm17:12:56

(defn centralize [pnts]
  (matrix/sub pnts
              (matrix/div (reduce matrix/add
                                  pnts)
                          (count pnts))))

(defn sum-of-dots [xs ys]
  (reduce +
          (map matrix/dot
               xs
               ys)))

(defn the-t [stars]
  (let [vs (centralize (map :vel stars))
        xs (centralize (map :pos stars))]
    (long (- (/ (sum-of-dots xs vs)
                (sum-of-dots vs vs))))))

potetm17:12:24

(expects a list of {:pos [x y], :vel [x y]})

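A sketch of where `the-t` comes from (my own derivation, not spelled out in the thread): it picks the time that minimizes the total squared spread of the centered positions x_i + t v_i, which has a closed form.

```latex
f(t)  = \sum_i \lVert x_i + t\,v_i \rVert^2
f'(t) = 2\sum_i x_i \cdot v_i + 2t \sum_i v_i \cdot v_i = 0
\quad\Longrightarrow\quad
t^\ast = -\,\frac{\sum_i x_i \cdot v_i}{\sum_i v_i \cdot v_i}
```

which is exactly `(- (/ (sum-of-dots xs vs) (sum-of-dots vs vs)))`.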
helios17:12:52

@misha only 50 minutes!? 😄

ccann17:12:57

would anyone be willing to take a look at my day 11 part 2 solution and tell me why it’s so SLOW? https://gist.github.com/ccann/fe69ba05140566e5a04855a5c96380ba

pesterhazy18:12:59

I ended up pre-calculating "hblocks", all blocks of size 1x1, 2x1, 3x1, etc.. up to 100x1

fellshard21:12:45

Oooh, that's another good way to break it down

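A sketch of the hblock idea (names illustrative): precompute the sum of every horizontal run of cells, then an n×n square is just n stacked width-n runs.

```clojure
;; hblock sums a horizontal run of width w starting at [x y], given a
;; [x y] -> int power map.
(defn hblock [power x y w]
  (reduce + (for [i (range w)] (power [(+ x i) y]))))

;; An n×n square is n width-n runs, looked up from a precomputed
;; [x y w] -> sum map of hblocks.
(defn square-from-hblocks [hblocks x y n]
  (reduce + (for [j (range n)] (hblocks [x (+ y j) n]))))
```
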
misha18:12:58

@helios kappa did not cache pairs, only rectangles, so the "prime"-width squares calculation was killing me. haven't bothered to rewrite it again yet.

misha18:12:57

@pesterhazy that's what I'd do next, or maybe precalculate row/col triplets. another idea is to use subtraction rather than only addition, but that requires thinking through the order of calculation, so when you calc 19x19 you have not only 18x18 + row + col, but 20x20 as well, from which you can subtract 18x18-and-change

pesterhazy18:12:30

@mfikes how long does your solution take with or without pmap?

mfikes18:12:13

With pmap about 80 seconds, but I have a dual hexacore. I haven’t done it without that, but am letting the ClojureScript version run now.

pesterhazy18:12:14

I'm running your code right now and my laptop is sounding like the Concorde

misha18:12:38

future is now

borkdude18:12:35

I have a version that does it in 76 seconds without pmap on a Macbook Pro, but it’s heavily memoized

borkdude18:12:01

oh yours computes all solutions, that’s impressive

pesterhazy18:12:52

the (->> (for []...) (apply max-key first)) idiom comes up quite often

pesterhazy18:12:05

that can't be efficient, but .. hm, maybe (reduce #(max-key first %1 %2)) is better

pesterhazy18:12:17

even better would be if I didn't have to create a vector on each iteration

mfikes18:12:43

I had a version that pre-calculated everything as vectors. It was only marginally faster for some reason.

pesterhazy18:12:20

my hblocks version is definitely faster than brute forcing

helios18:12:33

My pmap version just outputs "Davide, do we really need to spend the evening computing?"

:bmo: 4
Average-user19:12:31

@me1740 Uploaded a version using summed-area table

Average-user20:12:44

I'm gonna try to implement my own version now

pesterhazy20:12:08

i'd really like to see a comparison to Java

pesterhazy20:12:25

The 8s for the Clojure version seem way too slow

benoit20:12:24

Yeah, numerical methods in Clojure are not my forte 🙂 I would love to see improvements on this approach.

gklijs20:12:52

I'm not even going to try porting my current Java solution to Clojure; part 2 is done in 200ms now, and would probably be 20 seconds in Clojure..

Average-user20:12:31

And btw, your 8s solution takes 35s for me

Average-user20:12:59

adventofcode-clj-2018.day11> (time (part-1))
"Elapsed time: 645.790007 msecs"
[[[243 16] 3] 31]
adventofcode-clj-2018.day11> (time (part-2))
"Elapsed time: 36696.154969 msecs"
[[[231 227] 14] 129]
This is what I managed to do so far

Average-user20:12:22

which is about the same time @me1740's solution takes for me

fellshard23:12:00

Thinking back on my solution again, and I think you could optimize it by keeping track of only the last two 'layers' - so at size 10, you only need sizes 9 and 8.

Add: <x,y>, <x+1,y>, <x,y+1>, <x+1,y+1> in layer 9
Subtract: 3 * <x+1,y+1> in layer 8
Currently I hold onto way too many old layers, because I keep track of my unit-size squares and floor(size/2) squares on up, hence my memory management problem. (This is definitely a standard 'convolution' problem, and I can't help but wonder if there are tools to be drawn from that world...) Now thinking on this, it's pretty much one step removed from the summed-area table listed above, which uses a constant amount of memory... now I need to dig deeper!