This page is not created by, affiliated with, or supported by Slack Technologies, Inc.

## 2018-12-11

## Channels

- # adventofcode (116)
- # aleph (10)
- # announcements (2)
- # beginners (67)
- # boot (3)
- # calva (17)
- # cider (8)
- # cljdoc (27)
- # cljsrn (6)
- # clojure (144)
- # clojure-austin (3)
- # clojure-boston (1)
- # clojure-dev (25)
- # clojure-europe (4)
- # clojure-italy (26)
- # clojure-losangeles (4)
- # clojure-nl (28)
- # clojure-russia (1)
- # clojure-uk (34)
- # clojurescript (130)
- # cursive (20)
- # datomic (69)
- # emacs (14)
- # figwheel-main (2)
- # fulcro (31)
- # graphql (3)
- # hyperfiddle (3)
- # jobs (1)
- # jobs-discuss (1)
- # kaocha (1)
- # leiningen (2)
- # lumo (2)
- # nrepl (1)
- # off-topic (182)
- # onyx (5)
- # re-frame (88)
- # reagent (12)
- # reitit (2)
- # ring-swagger (13)
- # shadow-cljs (136)
- # tools-deps (28)
- # vim (4)

This is an interesting way to calculate the “convergence time”: https://blog.jle.im/entry/shifting-the-stars.html

I’m brute forcing part 2, which always makes me feel iffy. Maybe it’s because it’s late and I should have done this in the morning, but I’m not getting any sense of intuition that there’s a faster way (except for memoizing the power value for grid points)
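(For reference, a minimal sketch of the per-cell power rule from the Day 11 puzzle text; the `power-level` name is just illustrative, and `serial` is your puzzle input.)

```
;; Day 11 power rule: rack ID is x+10; multiply by y, add the serial,
;; multiply by the rack ID again, keep the hundreds digit, subtract 5.
(defn power-level [serial x y]
  (let [rack-id (+ x 10)]
    (- (mod (quot (* (+ (* rack-id y) serial) rack-id) 100) 10) 5)))

;; Example from the puzzle text: cell (3,5) with serial 8 has power 4.
;; (power-level 8 3 5) ;=> 4
```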

I just printed when the max changed, and eventually it sat at the same value for a while

I’m melting my laptop while trying to come up with a way to reuse calculated regions from previous sizes

I just did brute force with shared computations too. As it was running, I did notice that after a certain size, the larger squares always decreased the power

So in retrospect, I could have used that to figure out the likely answer more quickly.

I’m going to start using massive EC2 instances so I can brute force all the remaining problems 🙂

I just got my answer using the same approach as baritonehands; way faster than my brute force solution would ever have finished

I pre-calculate the grid into a map where the key is the X coord and the value is a vector of the cell powers indexed by Y coord, then do the calcs in a loop and use `max-key` to find the largest.
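(A rough sketch of that layout, assuming the `power-level` sketch above; names are illustrative, not the poster's actual code.)

```
;; Grid as a map: x -> vector of cell powers indexed by (y - 1).
(defn grid [serial]
  (into {} (for [x (range 1 301)]
             [x (mapv #(power-level serial x %) (range 1 301))])))

;; Sum an n x n square whose top-left corner is (x, y).
(defn square-power [g size x y]
  (reduce + (for [dx (range size) dy (range size)]
              (get-in g [(+ x dx) (+ y dy -1)]))))

;; Part 1: best 3x3 square, found with max-key over all top-left corners.
(defn best-3x3 [serial]
  (let [g (grid serial)]
    (apply max-key (fn [[x y]] (square-power g 3 x y))
           (for [x (range 1 299) y (range 1 299)] [x y]))))
```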

this problem felt way easier than the past few days’ problems, and I’m glad b/c now I can go to sleep 💤

I thought yesterday's (stars) was much easier than today's, but the day 9 marble game was by far the slowest for me.

I continued with squares of 3 or less as before. But for anything larger: if it was an odd size, I recursed on the `(dec size)` square and then added the numbers around the bottom and right edges; if it was even, I split it into 4 quadrants, recursed on each, and added the results.

With memoization when calculating those smaller squares, it’s like they say on TV: “Here’s one we did earlier” https://www.youtube.com/watch?v=K_zpdyBz3VM
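(A rough sketch of that odd/even recursion with memoization, assuming the `power-level` sketch above; this is an illustration of the idea, not the poster's actual code.)

```
(def square-sum
  (memoize
   (fn [serial size x y]
     (cond
       ;; small squares: just add the cells
       (<= size 3)
       (reduce + (for [dx (range size) dy (range size)]
                   (power-level serial (+ x dx) (+ y dy))))

       ;; odd size: the (size-1) square plus its bottom row and right column
       (odd? size)
       (+ (square-sum serial (dec size) x y)
          (reduce + (for [dx (range size)]
                      (power-level serial (+ x dx) (+ y size -1))))
          (reduce + (for [dy (range (dec size))]
                      (power-level serial (+ x size -1) (+ y dy)))))

       ;; even size: four non-overlapping quadrants of half the size
       :else
       (let [h (quot size 2)]
         (+ (square-sum serial h x y)
            (square-sum serial h (+ x h) y)
            (square-sum serial h x (+ y h))
            (square-sum serial h (+ x h) (+ y h))))))))
```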

My terrible attempt (but I'm too impatient): https://github.com/pesterhazy/advent2018/blob/master/src/advent/puzzle11.clj#L61

Added two test metadata options to advent-cljc: https://github.com/borkdude/advent-of-cljc/blob/master/README.md#tests

I have an idea for how to optimize. I've already brought it down significantly, but I need some time to generalize it.

This is my day 11: every square (except the first) is calculated from an adjacent (overlapping) neighbour. It's not fast, but it did all 300 square sizes in less than the time it took me to eat lunch 🙂 https://github.com/bloat/aoc2018/blob/master/src/net/slothrop/aoc2018/day11.clj

PS: I remembered the seldom-used `pmap`, which I think can be very helpful in this case 😄 (but my solution is still slow AF)

Using summed-area tables (as suggested by @ihabunek) https://github.com/benfle/advent-of-code-2018/blob/master/day11.clj Got me to 8s for part two.

I first tried to improve the brute force approach with a dynamic programming algorithm, but that was still very slow.
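(A minimal summed-area-table sketch in Clojure, assuming the `power-level` sketch above; `sat` maps `[x y]` to the sum of the rectangle from `(1,1)` to `(x,y)`, so any square becomes four lookups. Names are illustrative.)

```
(defn summed-area-table [serial n]
  (reduce (fn [sat [x y]]
            (assoc sat [x y]
                   (+ (power-level serial x y)
                      (get sat [(dec x) y] 0)
                      (get sat [x (dec y)] 0)
                      (- (get sat [(dec x) (dec y)] 0)))))
          {}
          (for [x (range 1 (inc n)) y (range 1 (inc n))] [x y])))

;; Sum of the size x size square with top-left corner (x, y).
(defn sat-square-sum [sat size x y]
  (let [x2 (+ x size -1) y2 (+ y size -1)]
    (+ (get sat [x2 y2])
       (- (get sat [(dec x) y2] 0))
       (- (get sat [x2 (dec y)] 0))
       (get sat [(dec x) (dec y)] 0))))
```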

I get this time now:

```
Testing aoc.y2018.d11.borkdude
part-2 took 75854.05 msecs
part-1 took 0.46 msecs
```

I’ll leave it at that.

The approach I took was to memoize divisions: when you need a bigger area, you cut it into parts that you have already calculated.

but the "total sum stops to grow at some point" feels like a guess to me. good enough to submit an answer, but not ok for "learning purposes", unless there is a known math behind it

https://github.com/namenu/advent-of-code-2018/blob/master/src/year2018/day11.clj#L17
I've used `memoize` for the summed table, which closes over a binding (`grid-serial`), and now I have to reload my REPL every time I change it... 😓
Can anyone give me advice on how to cache my table without reloading?

@namenu when I want to refresh the memoized fn, I simply evaluate the whole namespace

What if I want to memoize a function like,

```
(def efficient-f
  (memoize (fn [x] (+ x y))))
```

with various `y`?
Is it possible?

You guys thought of it while I was testing it out. Seems to work fine if you swap the arg order and use `partial`.

```
(def memo-test
  (memoize (fn [y x]
             (Thread/sleep 1000)
             (+ x y))))

(def par-y (partial memo-test 10))

(par-y 5) ; => (wait 1 sec) 15
(par-y 5) ; => (no wait) 15
```

Okay, I found a super interesting idea called the Y combinator and switched to it. 😊 https://blog.klipse.tech/lambda/2016/08/07/pure-y-combinator-clojure.html

but that link @mfikes posted has haunted me: https://blog.jle.im/entry/shifting-the-stars.html

```
;; assumes (:require [clojure.core.matrix :as matrix])
(defn centralize [pnts]
  (matrix/sub pnts
              (matrix/div (reduce matrix/add pnts)
                          (count pnts))))

(defn sum-of-dots [xs ys]
  (reduce + (map matrix/dot xs ys)))

(defn the-t [stars]
  (let [vs (centralize (map :vel stars))
        xs (centralize (map :pos stars))]
    (long (- (/ (sum-of-dots xs vs)
                (sum-of-dots vs vs))))))
```

would anyone be willing to take a look at my day 11 part 2 solution and tell me why it’s so SLOW? https://gist.github.com/ccann/fe69ba05140566e5a04855a5c96380ba

Here's my Day 11: https://github.com/pesterhazy/advent2018/blob/master/src/advent/puzzle11.clj#L40

`Elapsed time: 186939.868152 msecs`

I ended up pre-calculating "hblocks", all blocks of size 1x1, 2x1, 3x1, etc.. up to 100x1
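(One possible reading of the "hblock" idea, assuming the `power-level` sketch above; illustrative only, not pesterhazy's actual code. Precompute the sum of each horizontal run of cells, then an n×n square is just n stacked runs of width n.)

```
(def hblock
  ;; sum of the horizontal run of `width` cells starting at (x, y)
  (memoize
   (fn [serial width x y]
     (if (= width 1)
       (power-level serial x y)
       (+ (hblock serial (dec width) x y)
          (power-level serial (+ x width -1) y))))))

(defn square-via-hblocks [serial size x y]
  (reduce + (map #(hblock serial size x (+ y %)) (range size))))
```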

Day 11: https://github.com/mfikes/advent-of-code/blob/master/src/advent_2018/day_11.cljc

@helios I did not cache pairs, only rectangles, so the "prime"-width square calculation was killing me. I haven't bothered to rewrite it again yet.

@pesterhazy that's what I'd do next, or maybe precalculate row/col triplets. Another idea is to use subtraction rather than only addition, but that requires thinking through the order of calculation: when you calc 19x19, you have not only 18x18 + row + col, but 20x20 as well, from which you can subtract 18x18-and-change.

With `pmap`, about 80 seconds, but I have a dual hexacore. I haven't done it without pmap, but am letting the ClojureScript version run now.

I have a version that does it in 76 seconds without pmap on a Macbook Pro, but it’s heavily memoized

e.g. https://github.com/pesterhazy/advent2018/blob/master/src/advent/puzzle11.clj#L60

@pesterhazy https://github.com/mfikes/advent-of-code/blob/master/src/advent_2018/day_11.cljc#L8

I had a version that pre-calculated everything as vectors. It was only marginally faster for some reason.

My pmap version just outputs "*Davide, do we really need to spend the evening computing?*"

Once again Reddit knows the real solution: https://en.m.wikipedia.org/wiki/Summed-area_table

This C++ version runs in 58ms for me: https://www.reddit.com/r/adventofcode/comments/a53r6i/2018_day_11_solutions/ebjogd7/

Yeah numerical methods in Clojure is not my forte 🙂 I would love to see improvements on this approach.

I'm not even going to try porting my current Java solution to Clojure; part 2 is done in 200ms now, and it would probably be 20 seconds in Clojure.

```
adventofcode-clj-2018.day11> (time (part-1))
"Elapsed time: 645.790007 msecs"
[[[243 16] 3] 31]
adventofcode-clj-2018.day11> (time (part-2))
"Elapsed time: 36696.154969 msecs"
[[[231 227] 14] 129]
```

This is what I managed to do so far.

Thinking back on my solution again, I think you could optimize it by keeping track of the last two 'layers' only - so at size 10, you only need sizes 9 and 8.

```
Add: <x,y>, <x+1,y>, <x,y+1>, <x+1,y+1> in layer 9
Subtract: 3 * <x+1,y+1> in layer 8
```

Currently I hold onto way too many old layers, because I keep track of my unit-size squares and `floor(size/2)` squares on up, hence my memory management problem.
(This is definitely a standard 'convolution' problem, and I can't help but wonder if there are tools to be drawn from that world...)
And now, thinking on this, it is pretty much one step removed from the summed-area table listed above, which uses a constant amount of memory... now I need to dig deeper!
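(A rough sketch of that two-layer recurrence, assuming the `power-level` sketch above: a size-n square is two overlapping (n-1) squares at opposite corners, minus their (n-2) overlap, plus the two remaining corner cells. Names are illustrative.)

```
;; layer n maps [x y] -> sum of the n x n square with top-left corner (x, y).
(defn layer-1 [serial]
  (into {} (for [x (range 1 301) y (range 1 301)]
             [[x y] (power-level serial x y)])))

;; Build layer n from layers n-1 and n-2 only (pass {} as layer 0).
(defn next-layer [serial n prev prev-prev]
  (into {}
        (for [x (range 1 (- 302 n))
              y (range 1 (- 302 n))]
          [[x y]
           (-> (+ (prev [x y])
                  (prev [(inc x) (inc y)])
                  (power-level serial (+ x n -1) y)
                  (power-level serial x (+ y n -1)))
               (- (get prev-prev [(inc x) (inc y)] 0)))])))
```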