#adventofcode
2018-12-05
baritonehands04:12:32

I did this for the padded numbers:

(defn to-int [s]
  (Integer/parseInt
    (if (= (.charAt s 0) "0")
      (.substring s 1)
      s)))

baritonehands04:12:46

surprised that my char to string comparison worked

athos04:12:37

(= \0 "0") ;=> false So, your function is equivalent to (Integer/parseInt s)

baritonehands04:12:46

(Integer/parseInt "09")
=> 9

baritonehands04:12:52

accidentally on purpose

😉 4
potetm06:12:36

Anyone getting subsecond timings on part 1?

3Jane11:12:57

fwiw (since my result is incorrect) locally my solution works in ~100 msec

3Jane12:12:30

Two string builders acting as stacks. “Right” initialised to the original string, “left” empty. Action consists of examining the heads of the stacks and then:
- if reducible, remove both
- if non-reducible, move the char from “right” to “left”
Repeat until “right” is empty.
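
A rough sketch of that shape (assumed helper names, not 3Jane's actual code), operating at the tail of the "left" builder, which 3Jane notes a couple of messages below would be the faster variant:

(defn reacts? [a b]
  ;; two units react when they are the same letter in opposite case
  (and (not= a b)
       (= (Character/toLowerCase a) (Character/toLowerCase b))))

(defn collapse-2sb [^String polymer]
  (let [left  (StringBuilder.)
        right (StringBuilder. polymer)]
    (loop []
      (if (zero? (.length right))
        (.toString left)
        (let [r (.charAt right 0)]
          (if (and (pos? (.length left))
                   (reacts? (.charAt left (dec (.length left))) r))
            ;; reducible: remove the unit on each side
            (do (.deleteCharAt left (dec (.length left)))
                (.deleteCharAt right 0))
            ;; non-reducible: move the unit from "right" to "left"
            (do (.append left r)
                (.deleteCharAt right 0)))
          (recur))))))

;; (count (collapse-2sb "dabAcCaCBAcCcaDA")) ;=> 10, the puzzle's example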

3Jane12:12:53

…so yeah, I forgot to trim the input 🙂 now it works.

3Jane12:12:18

(it’d probably be even more performant if I was operating at the last element of the “left” sb)

potetm12:12:55

I finally got mine working.

potetm12:12:03

One string builder - 40ms.

3Jane12:12:28

deleting from the middle is efficient?

potetm12:12:33

StringBuilder is mutable, so it mostly doesn’t matter.

3Jane12:12:31

…right, well, it matters how the memory is actually allocated

3Jane12:12:39

but it sounds like it works just fine 😄 thanks!

potetm14:12:47

right, if they’re having to shift mem around

potetm14:12:21

So it’s backed by System.arraycopy — towards the start of the string it’s having to do a fair number of (native) copy operations
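
For reference, a delete-from-the-middle scan over a single StringBuilder might look like this (a sketch reusing the reacts? helper from the snippet above, not potetm's actual code); each deleteCharAt is the arraycopy being discussed:

(defn collapse-in-place [^String polymer]
  (let [sb (StringBuilder. polymer)]
    (loop [i 0]
      (if (>= (inc i) (.length sb))
        (.toString sb)
        (if (reacts? (.charAt sb i) (.charAt sb (inc i)))
          (do (.deleteCharAt sb i)                  ; shifts the tail left (System.arraycopy)
              (.deleteCharAt sb i)
              (recur (if (pos? i) (dec i) 0)))      ; step back to re-check the newly adjacent pair
          (recur (inc i)))))))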

potetm14:12:45

So… @U82DUDVMH you should look at @me1740’s solution! More in line with what you were saying.

potetm06:12:13

I’m basically bashing on a StringBuilder in a loop/recur and coming up w/ ~4 sec.

potetm06:12:40

seriously no idea how I could speed that up

potetm06:12:52

but I hear of others in the ms range

Mario C.06:12:36

Not having state is making these puzzles really difficult for me. 😅

potetm06:12:06

You should join the twitch stream! That’s part of why I’m doing it!

potetm06:12:31

alas, I’ve devolved into a PLOP animal to get sweet speed

Mario C.06:12:09

Are the videos saved?

potetm06:12:45

yep!

👍 4
ClashTheBunny06:12:55

I couldn't get speed this time at all: https://gitlab.com/randall.mason/advent-of-code/blob/master/src/advent_of_code_2018/day05.clj Ended up with 845 seconds for all tests on my Pixel Slate.

potetm06:12:38

that’s some cond 🙂

potetm06:12:03

ah atom bashing as well I see

ClashTheBunny06:12:27

Yeah, that shouldn't be needed, but I get a smaller answer without it.

potetm06:12:31

yeah I didn’t know what else to do for this one besides bash some memory around

ClashTheBunny06:12:40

I'll refine it on the train tomorrow.

rymndhng07:12:02

wow, I went through a lot of pain there. I had a newline at the end and was off by 1 for #5 facepalm. That said, I switched from a functional approach to using an ArrayList to make removal easier, and was able to run the algorithm in < 1s on my mac: https://github.com/rymndhng/advent-of-clojure/blob/master/src/advent_2018/05.clj

☝️ 4
👍 4
pesterhazy08:12:40

Not the fastest 🙂

pesterhazy08:12:45

Make it work, make it right, make it fast, right? Except I won't make it right or fast because it's just a kata...

rymndhng08:12:26

totally haha, it's pretty cool to see the different approaches folks went for

pesterhazy08:12:32

Honestly the main source of ugliness in my solution is the lack of cond-let in clojure.core

pesterhazy08:12:38

I went for a pure function, which is never going to be super fast for this type of problem

rymndhng08:12:40

i agree with you for the pure functional part -- I ended up going with an arraylist because removing elements by index via subvec + concat is too cumbersome

pesterhazy08:12:31

I ended up setting them to nil and skipping over nils when considering neighbors

helios08:12:05

it's not super fast but i'm happy i got to use partition-all 😛

3Jane11:12:54

guys, have you maybe got a regexp that I could check against my results?

3Jane11:12:31

something’s wrong in that I’ve got too many characters, but I can’t tell what. superficial examples seem fine.

rmprescott12:12:50

I really enjoyed reading @mfikes' solution for day 4. A few things that stuck out:
- the concision of naming a map in the domain space: guard-id->minutes
- learned a new trick: destructuring in a let block when extracting the key and value of interest (a small example follows below)
- how much this approach reads like a proof
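
For instance, the key/value destructuring trick looks roughly like this (made-up data, not the actual day 4 code):

(def guard-id->minutes {10 [5 24 24 30]
                        99 [40 36 45]})

;; Destructure the winning map entry straight into names for its key and value:
(let [[guard-id minutes] (apply max-key (comp count val) guard-id->minutes)]
  [guard-id (count minutes)])
;;=> [10 4]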

3Jane12:12:05

trim :woman-facepalming:

😱 4
☝️ 4
potetm12:12:44

So I did eventually find a way to get part 1 down to ~40ms.

athos12:12:50

The combination of sorted-map and subseq works very nicely 😃

potetm12:12:52

took a few minutes tho

borkdude12:12:15

Clojure:

Testing aoc.y2018.d05.borkdude
part-2 took 5546.88 msecs
part-1 took 160.69 msecs
CLJS:
Testing aoc.y2018.d05.borkdude
part-1 took 107.00 msecs
part-2 took 1896.00 msecs

borkdude12:12:28

funnily enough, Node is faster than the JVM here…

genmeblog12:12:48

@athos really fast solution... nice

borkdude12:12:57

@athos where can we see the time for your solution?

potetm12:12:56

til: subseq

athos12:12:59

On my MBP:

user=> (time (day05/solve1 (slurp (io/resource "input05.txt"))))
"Elapsed time: 414.086682 msecs"
9238
user=> (time (day05/solve2 (slurp (io/resource "input05.txt"))))
"Elapsed time: 1936.757214 msecs"
4052

borkdude12:12:51

oh you’re using pmap. no can do in cljs

borkdude12:12:01

with pmap I get on clj:

Testing aoc.y2018.d05.borkdude
part-2 took 2900.71 msecs
part-1 took 160.96 msecs

borkdude12:12:31

Hmm, I see one of my tests getting killed regularly on CircleCI now:

Testing aoc.y2018.d04.dfuenzalida
part-2 took 2706.55 msecs
Killed
Maybe they use some sort of timeout?

borkdude13:12:08

could be a memory issue.

ihabunek13:12:29

Advent of code 2018, day05
Part 1 "Elapsed time: 107.586455 msecs"
Part 2 "Elapsed time: 2453.459493 msecs"

ihabunek13:12:03

now to look what everyone else did, cause I'm certain this can be faster

ihabunek13:12:16

i'm half tempted to solve it using a transient vector but not sure i want to

borkdude13:12:05

@ihabunek cool, our solutions are almost identical

ihabunek13:12:12

yeah, just wanted to comment 🙂

ihabunek13:12:28

pop and peek would make mine less verbose

ihabunek13:12:05

i first did mine in elixir, and used regex search&replace, but it took 6 minutes for part 2 ^^;

ihabunek13:12:11

weird... I just tried to optimize by converting the input to a list of ints instead of the chars I had. My logic was that I wouldn't have to convert to int when checking whether two chars react, but it takes 4 times as long...

benoit13:12:29

17ms for part 1 and 350 ms for part 2.

benoit13:12:05

I think the key to a fast functional solution is to build the reduced polymer in reverse to make the backtracking faster.
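
Something like this shape (a sketch, not benoit's actual code; reacts? is the same-letter-opposite-case check from earlier in the thread): the reduced prefix is kept as a list with its newest unit first, so "backtracking" is just looking at the head.

(defn collapse-units [units]             ; units: a seq of characters
  (reduce (fn [acc c]
            (if (and (seq acc) (reacts? (first acc) c))
              (rest acc)                 ; the new unit annihilates the head
              (cons c acc)))             ; otherwise push it on the front
          ()
          units))

;; (count (collapse-units "dabAcCaCBAcCcaDA")) ;=> 10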

ihabunek13:12:14

will try it out

pesterhazy13:12:43

what the devil is subseq?

pesterhazy13:12:18

Clojure's API surface is positively huge

athos13:12:48

clojure.core/subseq
([sc test key] [sc start-test start-key end-test end-key])
  sc must be a sorted collection, test(s) one of <, <=, > or
  >=. Returns a seq of those entries with keys ek for
  which (test (.. sc comparator (compare ek key)) 0) is true

athos13:12:52

I used it to find the greatest key that is less than a certain key in a sorted map.
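
For example, with made-up data (not the actual day 4 solution):

(def m (sorted-map 1 :a 5 :b 9 :c))

(subseq m < 7)          ;=> ([1 :a] [5 :b]), in ascending order
(last (subseq m < 7))   ;=> [5 :b], the entry with the greatest key below 7
;; or, scanning from the top down:
(first (rsubseq m < 7)) ;=> [5 :b]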

genmeblog13:12:05

I took @me1740's idea and got 220ms for both using pmap

norman14:12:37

Hmm. 17ms/350ms is quite fast. My naive solution was way too slow (I left it running overnight, but it probably ended up finishing after an hour or so). I optimized while it was running and got it down to 500ms/20s. But I guess I still missed something to get those kinds of speeds.

genmeblog14:12:21

without pmap I have around 450ms

potetm14:12:40

@me1740 wtf is this black magic???

benoit14:12:58

divide and conquer + being more careful about data structure when the first approach was too slow

potetm14:12:36

I feel like I just got completely schooled.

potetm14:12:15

Me, a simpleton: “Linked lists aren’t fast.” @me1740: “Hold my beer.”

potetm14:12:54

So you can get similar performance w/o having to reverse if you use a vector as your “reduced queue.”

benoit14:12:05

Yes but I would have had to make sure all my internal operations return a vec 🙂

benoit14:12:31

But it's probably better all vec, yes.

potetm14:12:46

nah, pretty much the same

potetm14:12:15

perhaps even faster as a linked list because there’s never a re-allocation

potetm14:12:20

(guessing)

potetm14:12:53

You can pretty much change that reduced queue at will tho. It’s a local and only uses conj.

potetm14:12:19

criterium says that, of [], PersistentQueue/EMPTY, and '(), the '() is between 30-40% faster

potetm14:12:14

Thanks @me1740! A valuable lesson was taught today.

benoit14:12:16

But conj doesn't behave the same for all those types.

potetm14:12:37

right, so it’s a matter of whether you reverse at the end or not

Björn Ebbinghaus14:12:03

@me1740 Are you timing it multiple times?

benoit14:12:28

I think if you switch from list to vec, you will have a few things to change in the code to make sure all the internal ops return vecs for example.

potetm14:12:37

I ran criterium on it: getting an avg of 9ms on my sample in 66 calls

adammiller14:12:06

Just in case anyone else has this issue, there is an extra space (or possibly return char) at the end of their input file. All my tests were working but I was getting the wrong answer and starting to get a bit frustrated before I checked that!

potetm14:12:24

@me1740 ah I did one more thing!

adammiller14:12:27

Guess I should've verified my input!

benoit14:12:35

@mroerni No as soon as I get the result in a "reasonable" amount of time, I'm done.

potetm14:12:42

I changed rest and first to pop and peek respectively

potetm14:12:09

so everything returns the same data structure

potetm14:12:32

(just on operations on the reduced variable)

potetm14:12:20

@mroerni ah so, perhaps because you just return the count instead of allocating a new string?

potetm14:12:32

it’s just faster

potetm15:12:12

the quick-bench of yours is 3ms

Björn Ebbinghaus15:12:30

Using pmap for task 2, halved the time. (on a 2/4 Core machine)
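
A sketch of that (assumed names from the earlier snippets, not Björn's actual code): try each candidate unit in parallel and take the minimum.

(require '[clojure.string :as str])

(defn part-2 [input]
  (let [units (seq (str/trim input))]
    (->> (distinct (map #(Character/toLowerCase %) units))
         (pmap (fn [u]
                 (count (collapse-units
                          (remove #(= u (Character/toLowerCase %)) units)))))
         (apply min))))

;; (part-2 "dabAcCaCBAcCcaDA") ;=> 4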

potetm15:12:33

very clever btw

potetm15:12:23

pre-emptive conj on tail, then compare and drop — ensures a single pass across the input

potetm15:12:41

no backtracking
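
Rendered as code, that "conj first, then compare and drop" shape might look like this (a sketch using a vector with conj/peek/pop, per the discussion above; not Björn's actual code):

(defn collapse-units-v [units]
  (reduce (fn [acc c]
            (let [acc (conj acc c)]                    ; pre-emptive conj on the tail
              (if (and (>= (count acc) 2)
                       (reacts? (peek acc) (peek (pop acc))))
                (pop (pop acc))                        ; compare and drop the pair
                acc)))
          []
          units))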

potetm15:12:58

@me1740 and @mroerni I will be mentioning both of you on Friday’s stream. I’ll probably code Björn’s implementation actually.

benoit15:12:57

:thumbsup:

ihabunek15:12:24

pure recursion, no "stepping back", under 1 s including VM startup

potetm15:12:53

I’m surprised that Elixir solution doesn’t blow the stack

potetm15:12:34

I must not understand it. It looks like it recurs to the end of the string and unrolls.

potetm16:12:19

found a pythonista that did a reduction version of Björn’s algo

pesterhazy16:12:09

maybe it just has a large stack!

ihabunek16:12:34

tried doing the same in clojure, but blew the stack

ihabunek16:12:46

so yeah, large stack i guess

ClashTheBunny16:12:27

Tail recursion never blows the stack. Did you try to call the function, or did you recur?

potetm16:12:23

I don't think this could be TCO

☝️ 4
potetm16:12:35

each stack frame relies on the result of the next

potetm16:12:13

perhaps why^?

heyarne16:12:40

i'm solving day 5 and it's super slow at the moment (part 1 takes ~30 seconds) + it causes a stack overflow error, even though i'm not sure where i even build up such a large stack. if anybody wants to take a look at it I'd appreciate that https://github.com/heyarne/advent-of-code-2018/blob/86360a4/src/day_5.clj

genmeblog17:12:39

@arne-clojurians SO can be caused by concat in your case.

taylor17:12:05

yeah, b/c when you recursively construct sequences with concat, it keeps building up unrealized "thunks" of the lazy sequences. It's definitely not something that looks obviously problematic
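
A tiny repro of that pitfall (illustrative, not the day 5 code): each recursive step wraps the previous lazy seq in another unrealized concat, and realizing the result has to descend through every wrapper at once.

(def nested (reduce concat (repeat 10000 [1])))   ; 10k nested, unrealized concats

;; (first nested)   ; likely throws StackOverflowError when realized

;; Accumulating into a strict collection at each step avoids the build-up:
;; (reduce into [] (repeat 10000 [1]))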

mfikes17:12:49

Hah. Seeing all the great timings for this one in the backlog. I may revisit my solution (it is dog slow).

taylor17:12:30

my first solution used trampoline and part 1 took ~9s...

benoit18:12:56

@mfikes The fixed-point thingy is nice. And I found your solution the most readable so far 🙂

benoit18:12:14

But it means you do multiple pass I believe.

mfikes18:12:16

Thanks. I'm sitting here pondering the fundamental algorithm, trying to avoid gleaning anything from the above. But yeah, I'm zeroing in on the concept that I'm making way too many multiple passes, without even any ability to backtrack a little.

benoit18:12:38

Ok I shut up 🙂

mfikes18:12:01

Mostly, I want to get it to be fast enough to contribute to Advent of CLJC. (Otherwise, it could suck up 5 minutes of CI time.)

potetm18:12:04

Posting cause I’m not sure I saw anyone w/ that particular approach.

ClashTheBunny19:12:16

I'm much happier with my 2nd version of day 5, but it is still very slow and still has a bunch of cond: https://gitlab.com/randall.mason/advent-of-code/blob/master/src/advent_of_code_2018/day05.clj

pesterhazy19:12:13

@mfikes my solution ended up very slow as well (33s for part II) - I wouldn't have predicted it

borkdude20:12:55

Testing aoc.y2018.d05.bgrabow
part-1 took 33.00 msecs
part-2 took 359.00 msecs

Testing aoc.y2018.d05.borkdude
part-1 took 60.00 msecs
part-2 took 3392.00 msecs

Testing aoc.y2018.d05.mfikes
part-1 took 3505.00 msecs
part-2 took 75629.00 msecs

borkdude20:12:34

I haven’t looked at bgrabow’s solution yet, but it’s killing all other solutions 😉

borkdude20:12:11

on JVM:

Testing aoc.y2018.d05.bgrabow
part-2 took 110.05 msecs
part-1 took 12.13 msecs

Testing aoc.y2018.d05.borkdude
part-2 took 4697.16 msecs
part-1 took 166.39 msecs

Testing aoc.y2018.d05.mfikes
part-2 took 24970.50 msecs
part-1 took 1035.49 msecs

ClashTheBunny20:12:33

What are the specs on the machine that runs that?

Ben Grabow20:12:06

@borkdude Wait, I'm making it faster! Borrowing heavily from the discussion above though.

genmeblog20:12:25

faster than now? 🙂 wow

ClashTheBunny21:12:48

It just says 4GB of ram and two vCPU.

borkdude21:12:38

that’s all I know

Ben Grabow21:12:51

I'm trying to introduce parallelism to part 1, but nothing I'm trying is faster than the single-threaded approach. I'm guessing it's because there are some reducible polymer chains that are very long, so they stretch beyond the boundaries of a parallel chunk.

Ben Grabow21:12:29

So there is quite a bit of work to do in joining the chunks, which must be done serially.

fellshard21:12:33

I'd considered fork-join for this, but yeah, that'd only be suitable if there's a lot of local adjacent pairs throughout the entire polymer.

jduhamel21:12:57

@mfikes So I’m learning Clojure by reading the advent of code stuff. Had a question about the “:log” in your day 4. Also, why {:log []}? What does that do?

jduhamel21:12:37

if there is a place you can point me to, that would be great.

mfikes21:12:47

There's an accumulator in there which essentially holds several things. When it is running the accumulator will have something like:

{:log [{:guard-id 1 :start 5 :end 12} {:guard-id 7 :start 7 :end 13}]
 :guard-id 3
 :start 12}

mfikes21:12:18

So there's a vector under the :log key that grows

jduhamel21:12:44

got it, so the first (:log kicks off the key.

jduhamel21:12:00

the {:log [] } is the arg to the reducer. Thanks a bunch

mfikes21:12:09

Yep, it just sets it up with a vector in there
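
The overall shape is roughly this (made-up events and keys, not the actual day 4 code):

(def events
  [{:type :begin-shift  :id 10}
   {:type :falls-asleep :minute 5}
   {:type :wakes-up     :minute 25}])

(reduce (fn [{:keys [guard-id start] :as acc} {:keys [type id minute]}]
          (case type
            :begin-shift  (assoc acc :guard-id id)
            :falls-asleep (assoc acc :start minute)
            :wakes-up     (update acc :log conj {:guard-id guard-id
                                                 :start start
                                                 :end minute})))
        {:log []}          ; the initial accumulator: an empty log vector
        events)
;;=> {:log [{:guard-id 10, :start 5, :end 25}], :guard-id 10, :start 5}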

jduhamel21:12:03

Thanks, back to work on day 5.

adammiller22:12:53

I have part 1 around 8ms but part 2 is still up in the 500s... Have some ideas to improve that but not sure if I'll get around to it today.

borkdude22:12:26

@adammiller always welcome to make a PR with improvements, even next year

adammiller22:12:31

looks like on the latest test it ran part 1 in 7.53ms.... Should definitely be able to improve part 2 since i use part 1 in that.

mfikes22:12:44

Hammock time led me to the kernel of single pass idea. That took me a while to see. Presumably others in the backlog are using similar approach. (Will read solns.) https://github.com/mfikes/advent-of-code/blob/master/src/advent_2018/day_05.cljc

👌 4
metal 4
Ben Grabow22:12:12

Love the use of reduce in react/`add-unit`.

rymndhng00:12:41

single pass is a very elegant solution 😉 the right choice of data structure definitely influences a good solution

ClashTheBunny02:12:52

I was interested in how you did lower-case before the set, so I benchmarked it a bit, and it seems like it is significantly faster to build the set twice, with a lower-case in between:

advent-of-code-2018.day05=> (cr/with-progress-reporting (cr/bench (into #{} (map s/lower-case (into #{} (parse input)))) :verbose))
Warming up for JIT optimisations 10000000000 ...
  compilation occurred before 1 iterations
  compilation occurred before 390 iterations
  compilation occurred before 779 iterations
  compilation occurred before 1168 iterations
  compilation occurred before 1557 iterations
  compilation occurred before 1946 iterations
  compilation occurred before 2335 iterations
  compilation occurred before 2724 iterations
  compilation occurred before 4669 iterations
  compilation occurred before 5447 iterations
  compilation occurred before 5836 iterations
Estimating execution count ...
Sampling ...
Final GC...
Checking GC...
Finding outliers ...
Bootstrapping ...
Checking outlier significance
x86_64 Mac OS X 10.13.6 8 cpu(s)
Java HotSpot(TM) 64-Bit Server VM 25.192-b12
Runtime arguments: -Dfile.encoding=UTF-8 -Xmx8g -Dclojure.compile.path=/Users/ranmason/code/advent-of-code-2017/target/classes -Dadvent-of-code.version=0.1.0-SNAPSHOT -Dclojure.debug=false
Evaluation count : 29640 in 60 samples of 494 calls.
      Execution time sample mean : 2.123692 ms
             Execution time mean : 2.124454 ms
Execution time sample std-deviation : 96.420077 µs
    Execution time std-deviation : 96.989667 µs
   Execution time lower quantile : 1.940056 ms ( 2.5%)
   Execution time upper quantile : 2.234693 ms (97.5%)
                   Overhead used : 2.083859 ns
nil
advent-of-code-2018.day05=> (cr/with-progress-reporting (cr/bench (into #{} (map s/lower-case (parse input))) :verbose))
Warming up for JIT optimisations 10000000000 ...
  compilation occurred before 154 iterations
  compilation occurred before 307 iterations
  compilation occurred before 460 iterations
  compilation occurred before 1072 iterations
Estimating execution count ...
Sampling ...
Final GC...
Checking GC...
Finding outliers ...
Bootstrapping ...
Checking outlier significance
x86_64 Mac OS X 10.13.6 8 cpu(s)
Java HotSpot(TM) 64-Bit Server VM 25.192-b12
Runtime arguments: -Dfile.encoding=UTF-8 -Xmx8g -Dclojure.compile.path=/Users/ranmason/code/advent-of-code-2017/target/classes -Dadvent-of-code.version=0.1.0-SNAPSHOT -Dclojure.debug=false
Evaluation count : 5040 in 60 samples of 84 calls.
      Execution time sample mean : 12.258769 ms
             Execution time mean : 12.263469 ms
Execution time sample std-deviation : 437.470024 µs
    Execution time std-deviation : 443.078234 µs
   Execution time lower quantile : 11.783289 ms ( 2.5%)
   Execution time upper quantile : 13.224535 ms (97.5%)
                   Overhead used : 2.083859 ns

Found 3 outliers in 60 samples (5.0000 %)
	low-severe	 3 (5.0000 %)
 Variance from outliers : 22.2472 % Variance is moderately inflated by outliers
nil

adammiller22:12:04

interesting that mine was significantly slower than @borkdude and @ben.grabow on the cljs side though

adammiller22:12:59

that looks pretty similar to what I had done on part 1 @mfikes

mfikes22:12:22

Cool. My part 2 is naive reapplication of part 1.

borkdude22:12:57

@mfikes clean solution!

adammiller22:12:01

also, used this example to play with the bit- fns

mfikes22:12:14

@ben.grabow If you end up optimizing for ClojureScript, == is much faster than = for numerics. (It could help numeric-anti-pair? perf.)

mfikes22:12:59

I wonder if Clojure is also faster if using == on numerics... Hrm.

ihabunek22:12:12

i'm not seeing any speedups if i change my solution to use ==

ihabunek22:12:58

~120ms for part 1 on my laptop

Ben Grabow22:12:05

Thanks @mfikes. I see about 30% speed up across the board with == but who knows how reliable that is across iterations.

mfikes22:12:13

Nice trick Ben employed, pre-purging for part 2; makes sense 🙂

fellshard22:12:54

Oh wow, you're right, you can leapfrog from part 1's solution into part 2
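
The leapfrog, sketched (assumed names from the earlier snippets): collapse once, then strip each candidate unit from the already reduced polymer instead of the raw input, so every per-unit pass starts from a much shorter string.

(defn part-2-prepurged [collapsed]        ; collapsed = part 1's reduced polymer
  (->> (distinct (map #(Character/toLowerCase %) collapsed))
       (map (fn [u]
              (count (collapse-units
                       (remove #(= u (Character/toLowerCase %)) collapsed)))))
       (apply min)))

;; (part-2-prepurged (collapse-units "dabAcCaCBAcCcaDA")) ;=> 4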

fellshard22:12:31

Now I'm hyped to head home and tinker more, that's the biggest macro-level optimization I can think of for what I've got right now

mfikes22:12:03

The reason == is so much faster in ClojureScript is that it is essentially a macro, letting things compile straight down to direct numeric comparisons in JavaScript.
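
A tiny illustration (hypothetical, not Ben's numeric-anti-pair?): with char codes in hand, a two-argument == lets the comparison compile straight to a numeric test in the emitted JavaScript. The 32 is the ASCII offset between upper- and lower-case letters.

(defn anti-pair? [code-a code-b]
  ;; same letter, opposite case <=> char codes differ by exactly 32
  (== 32 (Math/abs (- code-a code-b))))

;; (anti-pair? 97 65) ;=> true   (\a vs \A)
;; (anti-pair? 97 98) ;=> false  (\a vs \b)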

adammiller22:12:14

made pretty big difference in my cljs times....didn't affect my clj times much.

mfikes22:12:24

Wow, that pre-purging trick leads to a speedup of 5.6 if applied to my naive approach. 🙂

ihabunek22:12:15

i'll be stealing that as well 😄

adammiller22:12:47

yes, going to modify mine for that later this evening

fellshard22:12:07

I'm sure you could map that onto category theory somehow...

mfikes22:12:34

Hah. Yeah, a proof that that is a legitimate optimization is right at the edge of where my mind is.

fellshard22:12:11

collapse . filter-unit = collapse . filter-unit . collapse ...

fellshard22:12:07

A. collapse is already fixed-point
B. filter-unit is structure-preserving, to the extent that you can run it at any point in the process without changing the end result

fellshard22:12:42

But I don't have anywhere near the terminology to express what that actually means XD

mfikes22:12:14

The single pass idea also could really use a proof. It felt a little sketchy, but I was convinced.

fellshard22:12:11

Is that doing a sort of combinatoric split each time you ingest a letter, one with the letter in-place, and the other with it removed?

fellshard22:12:43

Starts to remind me of... wait. Pumping lemma? Stack-based automata?

mfikes22:12:09

To me the single pass idea feels like adding a stream of single units to the end of a growing/shrinking polymer, doing the reaction as you add each individual unit.

fellshard22:12:47

Yeah, it's just a stack-based automata at that point, if even that, given it'd just be one node

fellshard22:12:59

Which is what your solution looks like

mfikes22:12:22

Yeah, Ben's does some interesting things with two ends and a reverse mixed in there 🙂

mfikes22:12:19

Or something with a left and right. I haven't grokked it yet.

fellshard22:12:05

Have a link to Ben's?

fellshard22:12:40

Ah nvm, I see it in the cljc

Ben Grabow22:12:51

@mfikes It's the same thing you're doing but more opaque! My left is your reduce's accumulator.

fellshard22:12:46

Yeah; left is the accumulating stack, right is the remaining string stack

fellshard22:12:07

Though I don't think the reverse at the end earns you anything, since the answer doesn't call for preserving the order

mfikes22:12:32

You could probably prove the single pass leads to a fully reacted polymer by using proof by induction

Ben Grabow22:12:27

True! Reverses are not necessary since a polymer reacts the same forwards and backwards. Feels dirty to me to remove them though.

fellshard22:12:54

Intuitively, the only information adding a new unit to the end can add is that A. a new element exists, or B. the top element is removed

fellshard23:12:40

Even if a new element is re-exposed from that interaction, it's inert still, because it cannot interact with the thing below it on the stack (having already been tested in a prior recursive step)

fellshard23:12:55

therefore, it can only react with the following unused element in the remaining input string

fellshard23:12:23

The key is that the relative ordering of the 'seen' portion of the string remains the same, and is stable while new elements are added to the end

mfikes23:12:55

Yeah, base case (empty polymer) is inert, and each inductive step leads to a new inert polymer.

fellshard23:12:56

This is very similar to the type of thing you'd see for validly matching nested bracketed statements 🙂

fellshard23:12:09

That's the best way to put it, yeah

Ben Grabow23:12:06

I refactored to use reduce like @mfikes and got another 20-40% speed up

mfikes23:12:33

If we were competing with other languages on perf, Ben’s would be a good candidate :)

Ben Grabow23:12:45

It's like the anticipation of Christmas when waiting for CI perf test results to come back 🎄