#powderkeg
2017-03-28
cgrand 10:03:11

ah ok, I’ve “fixed” it locally anyway

viesti 10:03:12

was thinking about with-resources: if setup throws, the body isn’t run and the setup itself would need to clean up any incomplete state

viesti 10:03:27

try/finally in with-resources guards the body so that teardown is run
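A minimal sketch of the shape being described (not powderkeg’s actual code; `with-resource` is an invented name): setup runs outside the try, so a throwing setup skips both the body and the teardown, while a throwing body still reaches teardown via finally.

```clojure
(defmacro with-resource
  "Binds `name` to the result of `setup`, runs `body`, and always calls
  `teardown` on the resource afterwards. If `setup` itself throws, we never
  reach the try/finally, so setup must clean up its own partial state."
  [[name setup teardown] & body]
  `(let [~name ~setup]           ; setup throws => teardown never runs
     (try
       ~@body
       (finally (~teardown ~name)))))
```

Usage would look like `(with-resource [conn (open-conn) close-conn] (do-stuff conn))`, with `open-conn`/`close-conn` standing in for whatever resource pair is at hand.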

viesti 10:03:37

was thinking about #26

viesti 10:03:15

now that testing was mentioned 🙂

viesti 10:03:21

rearranged things a bit, but overall would be neat to be able to run all tests against local and docker, both 1.5 and 2.1

cgrand 11:03:26

I broke the build 😕

viesti 11:03:01

hmm, didn’t find a way to see the build log

viesti 11:03:04

ah, had the “my builds” button ticked so I didn’t see any at https://circleci.com/gh/HCADatalab/powderkeg

viesti 11:03:28

gah, these test fixtures

viesti 11:03:10

something like “lein run-tests-in-docker”, but when working locally in the REPL, use local-spark
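One hedged way to wire that up: read the master URL from an environment variable that the docker task would export, falling back to local Spark in the REPL. Everything below is invented for illustration (`POWDERKEG_MASTER`, both helpers); only the fixture shape is standard clojure.test.

```clojure
(defn spark-master
  "Master URL for the test run: env override for the dockerized job,
  in-process local Spark otherwise. POWDERKEG_MASTER is a made-up name."
  []
  (or (System/getenv "POWDERKEG_MASTER") "local[2]"))

(defn spark-fixture
  "Builds a clojure.test fixture from a connect!/disconnect! pair,
  so the caller decides how the Spark connection is opened and closed."
  [connect! disconnect!]
  (fn [f]
    (connect! (spark-master))
    (try (f)
         (finally (disconnect!)))))
```

It could then be registered with something like `(use-fixtures :once (spark-fixture keg/connect! keg/disconnect!))`, assuming powderkeg exposes a `connect!`/`disconnect!` pair with that shape.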

viesti 13:03:04

realizing that this would fail on a remote cluster https://github.com/viesti/powderkeg/blob/sql/test/powderkeg/sql_test.clj; either two deftests with different names, one with ^:integration meta and a different setup, or another way of saying the same thing

viesti 13:03:23

to actually run remotely that is
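The `^:integration` route could look like this sketch (test names and bodies are placeholders; the `:test-selectors` map is standard Leiningen functionality for filtering tests by var metadata):

```clojure
(ns powderkeg.sql-test-sketch
  (:require [clojure.test :refer [deftest is]]))

;; Runs everywhere, local Spark included.
(deftest local-sql-roundtrip
  (is (= 1 1)))   ; placeholder body

;; Only meaningful against a real cluster; selected or skipped via metadata.
(deftest ^:integration remote-sql-roundtrip
  (is true))      ; placeholder body

;; project.clj -- Leiningen selectors to split the two suites:
;; :test-selectors {:default     (complement :integration)
;;                  :integration :integration}
```

`lein test` would then run only the default suite, and `lein test :integration` only the cluster-backed one.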

cgrand 13:03:18

can you provide more context on why it would fail?

cgrand 13:03:40

this .collect is begging for into support 🙂
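A sketch of what `into` support could mean here: make the dataset reducible via `CollReduce`, so `(into [] ds)` replaces `.collect`. Spark isn’t on this classpath, so `DummyDataset` below stands in for `org.apache.spark.sql.Dataset`; against the real class one would presumably extend the protocol to it and reduce over `.collectAsList`.

```clojure
;; Stand-in for org.apache.spark.sql.Dataset: just holds its rows.
(deftype DummyDataset [rows])

;; Teach reduce (and therefore into, transduce, ...) to walk the dataset.
(extend-protocol clojure.core.protocols/CollReduce
  DummyDataset
  (coll-reduce
    ([ds f]      (reduce f (.rows ^DummyDataset ds)))
    ([ds f init] (reduce f init (.rows ^DummyDataset ds)))))

;; (into [] (->DummyDataset [1 2 3])) now works where .collect was needed.
```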

viesti 13:03:42

the spec registry

cgrand 13:03:03

ah stupid me

cgrand 13:03:44

Several suggestions: • keep an eye on all transferred atoms and, if changed at the next barrier, update them (WeakRef ftw). Is it overkill? Is it going to create more bugs than it fixes?

cgrand 13:03:25

• have a whitelist, initially populated with common atoms to migrate

cgrand 13:03:43

• no more ideas

viesti 13:03:57

last one :D

viesti 13:03:53

second one sounds reasonable

cgrand 13:03:00

The first suggestion is my plan for multimethods

cgrand 13:03:33

usually a worker is not going to change a multimethod

cgrand 13:03:38

while it may change an atom

cgrand 13:03:53

and ruining caches stored in atoms at each barrier sounds mean (“hey, replace your nice cache that you worked hard to populate with this empty one from this lazy driver”)
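The first suggestion could be sketched like this (all names invented): hold each transferred atom through a `WeakReference` so the registry doesn’t pin otherwise-garbage atoms, remember the value last shipped, and at each barrier re-ship only the atoms whose value changed since.

```clojure
(import 'java.lang.ref.WeakReference)

;; Each entry: {:ref WeakReference-to-atom, :sent value-at-ship-time}.
(defonce transferred (atom []))

(defn track!
  "Remember an atom that was shipped to the workers, with its value at ship time."
  [a]
  (swap! transferred conj {:ref (WeakReference. a) :sent @a})
  a)

(defn changed-atoms
  "Atoms still reachable whose value differs from what was last shipped;
  these are the ones worth re-sending at the next barrier."
  []
  (for [{:keys [ref sent]} @transferred
        :let [a (.get ^WeakReference ref)]
        :when (and a (not= sent @a))]
    a))
```

This deliberately sides with the cache argument above: an unchanged (or no longer referenced) atom is never re-shipped, so worker-side caches aren’t clobbered at every barrier.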

viesti 13:03:38

keeping distributed execution obvious but simple would be neat

cgrand 13:03:18

huh? what do you have in mind?

viesti 13:03:04

just that I've made similar mistakes in Spark with Scala without realizing where code is executed :)

viesti 19:03:49

hmm, actually the spec registry might not be a problem with DataSet, at least in the one that I made, since a DataSet is returned to the driver, so the specs themselves aren’t used by the workers