#clojure
2017-05-13
lincpa11:05:02

Welcome to #notepad_plus_plus Channel (notepad++ for Clojure)

Drew Verlee18:05:21

@ghadi that's great, bookmarked!

qqq19:05:11

anyone here using thinktopic/cortex? I'm looking for a deep learning library that I can train in java land, and this looks the closest

john19:05:28

I played with it a while back. Likely still pretty alpha. Pretty awesome though

john19:05:56

You'll have more tested/documented solutions in java though. I believe there are Java bindings for TensorFlow now.

john19:05:58

aaaaand, it looks like there's a wrapper already: https://github.com/kieranbrowne/clojure-tensorflow-interop

qqq19:05:30

@john: are those bindings for training + inference, or inference only? last I checked, java was only capable of inference and not training, because for training we need to compute the gradient via backprop... but doing so in TF requires some hack of attaching python code to nodes or something, so it ends up that the java bindings can only eval ^^ this is just random stuff I read on the internet, so it may be blatantly false

john19:05:07

Not sure, but yea, I heard the java bindings aren't feature complete yet either

qqq19:05:23

supporting this theory is the fact that the java example https://www.tensorflow.org/versions/master/install/install_java only does inference, not training (it loads a model someone else trained)

john19:05:20

I'd definitely play around with cortex. My preference is always pure clj/cljs when it's an option.

qqq19:05:23

come to think of it, all I really need is a fast tensor library on cuda with splicing support

qqq19:05:33

yeah, I tried tensorflow in python, missed lisp

qqq19:05:43

then I tried tensorflow + Hy ... which was nice, but just didn't quite feel like clojure

john19:05:58

right. Not sure if it's possible, but it'd be neat if Dragan Djuric's Neanderthal (http://dragan.rocks/articles/17/Neanderthal-090-released-Clojure-high-performance-computing) could be helpful to cortex. Seems like some pretty interesting stuff he's doing there.

john19:05:12

They're already using cuda, but my understanding is that Dragan has really smoothed over all the hardware/software platforms.

john20:05:01

When (swap! some-map-atom assoc ... is run in another thread, is the new structure stored local to that thread, with a pointer back to the structure in the main thread? Or is all the structure stored back in the main thread?

noisesmith20:05:39

threads don't store data

john20:05:55

I was wondering that. "thread local storage" was throwing me off

noisesmith20:05:56

well, not atoms at least - the jvm has no stack storage of objects

john20:05:32

I was reading that vars have something called "thread local storage"

noisesmith20:05:39

atoms are not vars

noisesmith20:05:00

vars can have thread local bindings, in that case it's possible to have thread local binding of a var containing an atom

noisesmith20:05:10

but the atom is never thread-local, it's the var that is

noisesmith20:05:42

a var (in clj on the jvm) is a mutable container owned by a namespace

noisesmith20:05:07

if a namespace defines an atom, the atom is in the heap, referenced in the var in the namespace - changes to the var (eg. giving it a different atom) can happen thread locally, changes to the atom are synchronized across all threads
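
For example, a small REPL sketch of that distinction (the dynamic var *state* below is a hypothetical example):

;; var `*state*` is a hypothetical example
(def ^:dynamic *state* (atom 0))   ; the var lives in the namespace, the atom on the heap

(swap! *state* inc)                ; visible to every thread that holds this atom

(binding [*state* (atom 100)]      ; thread-local rebinding of the *var*...
  (swap! *state* inc))             ; ...only this thread sees the replacement atom

@*state*                           ;=> 1 — the original atom was never thread-local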

john20:05:22

Ahhh, okay. I'm ruminating over the possibility of a distributed persistent data structure. It would end up being a structure with pointers to potentially many different locations. Sounds highly inefficient.

john20:05:32

Resolving any particular mutation would potentially update multiple locations. And coordinating that sounds hairy.

john20:05:53

But it definitely could be more efficient than distributed non-persistent data structures.

john20:05:32

But then computations over that structure would also have to be performed in a distributed fashion. Otherwise you'd have to materialize the whole structure locally. Or at least all the nodes you plan on mutating.

john20:05:15

Which I guess would be the way to do it. Lazily materialize only the nodes you need. Then submit yourself as the new owner of the head to whoever the root owner is, coordinating the pointers.

noisesmith20:05:15

you could look at e.g. zookeeper (not only how it does things, but its features and limitations due to being distributed)

noisesmith20:05:32

there's avout, which uses zookeeper, and almost acts like clojure atoms / refs / agents

john20:05:44

yessss. This is what I was wanting.

qqq21:05:38

I'm writing a DSL that compiles to cuda.

qqq21:05:44

I need to come up with a good name for my DSL.

mobileink19:05:20

qqq: i suppose you've considered and rejected the obvious: barracuda.

qqq21:05:11

(the relation to #clojure is that the compiler for my DSL is written in clojure and I'm using jcuda)

qqq21:05:21

and the entire compiler is written as a series of nested core.logic/match

qqq21:05:14

phalanx = good name, all threads act in unison, like soldiers in a phalanx 🙂

john21:05:54

hhhm, looks like avout does distributed MVCC, but I think each version is a full copy. I don't think avout is splitting the structure's internal nodes across the net. Still reviewing.

john21:05:04

@qqq what other names are you thinking of?

devn21:05:52

best way to take something like this: ["a" [1 2] [3 4] [5 6] "b" [99 1] "c" [32 43] [89 76] [12 13]] and get this: {"a" [[1 2] [3 4] [5 6]], "b" [[99 1]], "c" [[32 43] [89 76] [12 13]]}?

devn21:05:03

IOW, if i encounter a string, I want to assoc it with all the vectors following it in the sequence

devn21:05:47

nevermind, figured it out, but i would be open to other suggestions

devn21:05:53

i'm rusty!

kwladyka21:05:06

i am trying to build a queue with async, but it confuses me. I have a queue and i want to take at most 20 items from it on each iteration (when there are only 10 items, i want to take those 10 and do the job, not wait for 20). What's the simplest way to do this?

devn21:05:09

wound up just using something like (first (reduce (fn [[m current-string] e] (if (string? e) [(assoc m e []) e] [(update m current-string conj e) current-string])) [{} nil] ["a" [1 2] [3 4] "b" [89 23] [43 54] [23 12]]))

john21:05:55

(map #(if (string? (first %)) (first %) %) (map vec (partition-by type ["a" [1 2] [3 4] "b" [99 1]])))

devn21:05:55

@john oh, right, good call on partition-by

john21:05:43

np. Might be an interesting exercise for a transducer too

devn21:05:03

@john I wonder if there's a cleaner way to handle those multiple calls to first

john21:05:03

that'd return a boolean

devn21:05:04

(map (fn [[x :as e]]
       (if (string? x)
         x
         e))
     (partition-by type ["a" [1 2] [3 4] "b" [89 23] [43 54] [23 12]]))

john21:05:05

haven't messed with transducers much, but this might be a transducer: (into [] (map (fn [[x :as e]] (if (string? e) x e))) (map vec (partition-by type ["a" [1 2] [3 4] "b" [99 1]])))

john21:05:55

think you still need a first in there

john21:05:52

Yeah, I'm not sure if that binding form is doing what you wanted

john22:05:14

Better transducer version: (into [] (comp (map vec) (map #(if (string? (first %)) (first %) %))) (partition-by type ["a" [1 2] [3 4] "b" [99 1]]))

john22:05:14

Even betterer: (into [] (comp (partition-by type) (map vec) (map #(if (string? (first %)) (first %) %))) ["a" [1 2] [3 4] "b" [99 1]])

john22:05:02

I need to start using transducers 🙂

devn22:05:15

@john cool, tbh, i have never really used them, and i've been using clojure for something like 7 years?

devn22:05:03

i've found that i use about 1/3rd of clojure 90% of the time

john22:05:55

same here. But that last version ought to perform better, for speed and memory. And it doesn't look unreadable either.

devn22:05:28

@john i was doing some dumb hobby stuff, so i'm not focused on perf. though I do often focus (sometimes unnecessarily) on that kind of thing at work

devn22:05:01

i'm moving back to an individual contributor role from a management role in a week's time, and i am trying to get back into writing some more code

john22:05:28

get your hands dirty 🙂

devn22:05:46

@john indeed. i forgot how much i missed it until a couple weeks ago

devn22:05:33

that said, management pain or programming pain, both are inevitable in my experience 🙂

qqq22:05:42

@john : no other names, I think it kind of makes sense since cuda is all about multiple threads doing the same thing on different data

john22:05:44

I prefer programming pain. But maybe that's because I've never worked in a dev shop 🙂

john22:05:28

I've done mgmt though. Computers are far more predictable than people.

john22:05:26

@qqq yeah, I don't see any other clojure phalanx projects out there. Good name.

devn22:05:04

@john so here's a query for you: which of the above do you think is easiest to read?

devn22:05:36

for my money i sort of prefer the explicit reduce, if for nothing else the names

devn22:05:31

i wonder if there's a way to do this with reductions...

john22:05:14

I prefer the map version, but a reductions version would probably be pretty slick

kwladyka22:05:33

I have a queue and i want to take at most 20 items from it on each iteration (when there are only 10 items, i want to take those 10 and do the job, not wait for 20). What's the simplest way to do this? After each iteration i want an X-second break. What would you use to code it? I was trying with async, but maybe it is not the right choice.

john22:05:35

@kwladyka I'd partition it

kwladyka22:05:37

@john if you use partition-all with a chan it works almost like that. It iterates after 20 items, but i want to do the job even with 10.

kwladyka22:05:39

it is harder than it looks 🙂

john22:05:03

eagerly take as many items as possible first. Then partition it. The last partition should be (could be) less than 20.

john22:05:27

You need some kind of way to detect that the queue is done or it's waiting

john22:05:32

maybe a nil, or close the channel

john22:05:17

what's the source? do you need a chan?

kwladyka22:05:35

i need a queue with the conditions i described

kwladyka22:05:49

i tried with async, but i failed

john22:05:09

but you say you might not need a chan. can you control the source?

kwladyka22:05:53

whatever solution i end up with, as long as it meets these conditions

kwladyka22:05:04

i can use something other than async

john22:05:21

Why not use partition-all without the chan? I missed the conditions you were talking about.

kwladyka22:05:29

in what way? <! will freeze it when there are fewer than 20 items

kwladyka22:05:06

or do you mean without async? But then how will i add things to the queue and take from the queue

kwladyka22:05:53

to avoid additional side-effects

john22:05:26

well, in clj, you could have a thread constantly consuming the queue

john22:05:43

not such a great idea in cljs

kwladyka22:05:47

sure, that is what async does

kwladyka22:05:03

and it is easy to take from the queue in batches of 20

kwladyka22:05:17

but hard to take 20 or fewer - i don’t know how to do it

john22:05:46

My core.async-foo is not that great. I'd look for a function that allows you to detect whether you're in a waiting condition. Perhaps checking for a nil on the take would be enough.

kwladyka22:05:19

nil is when the channel is closed, but i don’t want to close the queue 🙂

kwladyka22:05:53

i was trying with a channel consuming another channel, but i still failed

john22:05:44

Yeah, you tried sending a chan in the chan, and then closing the inner chan after each put?

kwladyka22:05:15

(def queue (chan))
(def queue-portion (async/chan 100 (partition-all 20)))
(async/pipe queue queue-portion)
i tried this one

kwladyka22:05:29

and it is almost what i want, except it takes exactly 20, not 20 or fewer

john22:05:23

You mean it hangs on less than 20?

john22:05:11

Are you closing queue-portion after each set of puts?

kwladyka22:05:33

i have to close the queue to “unhang” it

kwladyka23:05:19

in what way? I can close it only after it gets 20 items, because only then will it unfreeze

kwladyka23:05:37

so it doesn’t solve the problem

kwladyka23:05:13

maybe i can add another chan with a timeout

kwladyka23:05:19

so chan > chan > chan heh

john23:05:40

I'm saying, make a new queue-portion for each new set of items you want to send. Have a receiver that receives new queue-portions (chan). On the producer side, dump your data into the newly created and sent queue-portion, then close it. On the receive side, pass the new queue-portion to a function that consumes it. Which would close when the producer closes it.
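
A rough sketch of that pattern, assuming plain core.async on the JVM (the names batches and portion, and the sample data, are made up for illustration):

(require '[clojure.core.async :as async :refer [chan go go-loop <! >! close!]])

(def batches (chan))                          ; carries one fresh channel per batch

;; producer side: make a new channel per batch, announce it, fill it, close it
(go
  (doseq [batch [[1 2 3] [4 5]]]              ; stand-in for the real work items
    (let [portion (chan (count batch))]
      (>! batches portion)
      (doseq [item batch] (>! portion item))
      (close! portion)))
  (close! batches))

;; consumer side: drain each portion until its producer closes it
(go-loop []
  (when-let [portion (<! batches)]
    (loop [items []]
      (if-let [item (<! portion)]
        (recur (conj items item))
        (println "processing" items)))        ; stand-in for the real job
    (recur)))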

john23:05:57

Though, again, my core.async-foo is not great, so that may not be the idiomatic solution. Sounds like a really simple problem that probably has a really simple solution in core.async.

john23:05:08

Might want to wait a bit and ask again when more folks are around.

kwladyka23:05:24

it sounds easy, but coding it is not so obvious 🙂

kwladyka23:05:06

but probably it has to be solved by chan > chan with timeout > chan with partition-all

kwladyka23:05:10

it could work

kwladyka23:05:22

but it is a little complex i would say

john23:05:28

I really don't think you need a timeout

kwladyka23:05:50

i need some kind of timeout so it doesn’t wait for 20 items to fill up

john23:05:02

It'd probably work. But it feels wrong

john23:05:29

Unless you don't care about potentially unnecessary wait times.

john23:05:54

Do that a few thousand times though and those timeouts may get nasty.

kwladyka23:05:55

waiting between iterations is my intention

john23:05:15

Yeah, if you're doing pauses anyway...

kwladyka23:05:19

like i wrote at the beginning:

I have a queue and i want to take at most 20 items from it on each iteration (when there are only 10 items, i want to take those 10 and do the job, not wait for 20). What's the simplest way to do this? After each iteration i want an X-second break.
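
One way this might be sketched without extra channels or timeouts is to block for the first item and then poll! for whatever is already waiting, up to the cap; the X-second break then becomes a plain sleep. A sketch only, assuming core.async's poll! and thread are available (take-batch! is a made-up helper name), not necessarily the idiomatic answer:

(require '[clojure.core.async :as async :refer [chan <!! poll! thread]])

(def queue (chan 100))

(defn take-batch!
  "Blocks until at least one item is available, then grabs whatever else is
   already waiting, up to max-n items total."
  [ch max-n]
  (loop [batch [(<!! ch)]]
    (if-let [item (and (< (count batch) max-n) (poll! ch))]
      (recur (conj batch item))
      batch)))

(thread
  (loop []
    (let [batch (take-batch! queue 20)]
      (println "doing the job with" (count batch) "items"))   ; stand-in for the real job
    (Thread/sleep 5000)                                        ; the X-second break
    (recur)))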

kwladyka23:05:16

but thanks so much for trying to help 🙂

john23:05:42

np. Good luck!

sekao23:05:33

hey y'all! i just launched a website for building git-based programming tutorials. the free one is on Clojure basics, so if anyone would like to try it out i'd love to hear feedback: https://gitorials.com/id/1/

devn23:05:33

Does mapv force lazy seqs? I forget.

devn23:05:02

Kinda wish that the (filter|map)v fns said so in the docstring

devn23:05:27

in the same way that do* fns are explicit about it

devn23:05:01

err, maybe those docstrings don't after all

john23:05:39

Well, the map docstring explicitly states it returns a lazyseq

john23:05:52

and mapv says vector.

john23:05:25

And you don't get a vec out of seq without realizing it, afaik

john23:05:08

I suppose it could be implemented lazily though :thinking_face:
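
For what it's worth, a contrived REPL check (the calls atom is just for illustration) consistent with that reading: mapv realizes its result eagerly, while map defers the work until the seq is consumed:

(def calls (atom 0))

(def lazy-result (map (fn [x] (swap! calls inc) x) [1 2 3]))
@calls                             ;=> 0 — map returned a lazy seq, nothing realized yet

(def eager-result (mapv (fn [x] (swap! calls inc) x) [1 2 3]))
@calls                             ;=> 3 — mapv built the vector immediately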

john23:05:27

@sekao looks awesome!

john23:05:53

oh, nice


taylor23:05:25

I want to specify some function(s) via configuration/EDN for my program to use. The functions would be defined in the program. Could I define them as symbols in the EDN and somehow resolve/invoke them at runtime?
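
One way this could work, sketched under the assumption that the functions appear as fully qualified symbols in the EDN (my.app.handlers/log-event below is a hypothetical function): read the config with clojure.edn, require the symbol's namespace, and resolve it to a var, which can be invoked directly:

(require '[clojure.edn :as edn])

;; config.edn content, inlined as a string for the example
(def config (edn/read-string "{:handler my.app.handlers/log-event}"))

(let [sym (:handler config)]
  (require (symbol (namespace sym)))   ; make sure the defining namespace is loaded
  (let [f (resolve sym)]               ; resolves to the var, which is invokable
    (f {:event :started})))            ; call it like any other function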