#clojure
2016-02-22
alexisgallagher00:02:39

A question for the channel: What's the easiest way to call a side-effecting function only when a particular atom's value changes? I put one alternative on SO, but I think there must be a simpler way: http://stackoverflow.com/questions/35544113/call-a-side-effecting-function-only-when-atom-value-changes
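One sketch of "fire the side effect at most once, and only on a real change": loop with compare-and-set! so you know exactly whether your update took effect. The names here (`swap-and-report!`, `notify` logic) are made up for illustration and may differ from the answers on the SO thread.

```clojure
(defn swap-and-report!
  "Like swap!, but returns true when the swap changed the atom's value."
  [a f & args]
  (loop []
    (let [old-val @a
          new-val (apply f old-val args)]
      (if (compare-and-set! a old-val new-val)
        (not= old-val new-val)   ; we know exactly which old we replaced
        (recur)))))              ; someone else won the race; retry

(def state (atom {:n 0}))

;; Side effect runs at most once, and only if the value actually changed:
(when (swap-and-report! state update :n inc)
  (println "state changed, sending message"))
```

Because compare-and-set! only succeeds against the exact old value we computed from, there is no ambiguity about what "previous value" means for this particular update.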

jethroksy01:02:36

Functions passed to swap! must be free of side effects, since they may be retried

jethroksy01:02:49

In general Clojure's reference types might not be a good idea if you need side effects

jethroksy01:02:32

If you could explain your use case, maybe we could come up with something that would be more Clojure-ish

alexisgallagher01:02:53

I'm receiving messages from outside the process and sometimes need to update in-memory state depending on the message. When I do, I then need to send a message on the network (side effect).

alexisgallagher01:02:16

It would be an error to send the message when no update was actually performed, or to send it more than once when a single update was performed.

jethroksy02:02:03

I like the answer by Timothy

jonahbenton02:02:04

hey @alexisgallagher for that use case, perhaps consider using an agent instead of an atom or ref. On message receipt, send-off a function that consumes the message, updates agent state, and determines whether to announce a change to other consumers. Agents ensure messages will be processed serially, saving you the trouble of managing subtle "did it change" semantics in a concurrent context.
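A rough sketch of the agent approach jonahbenton describes. `announce!` and the `:delta` message shape are assumptions standing in for whatever network send and message format the real system uses.

```clojure
(defn announce!
  "Stand-in for the real network side effect."
  [new-state]
  (println "announcing" new-state))

(def app-state (agent {:n 0}))

(defn handle-message
  "Runs on the agent's thread, one message at a time, so comparing
  old and new state is race-free."
  [state msg]
  (let [state' (update state :n + (:delta msg))]
    (when (not= state state')
      (announce! state'))   ; fired at most once per actual change
    state'))

;; On message receipt:
(send-off app-state handle-message {:delta 1})
```

Agents are also the reference type where side effects inside the update function are sanctioned, since the function is never retried.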

alexisgallagher03:02:29

@jonahbenton: agents are asynchronous. I need synchronous. I need code after the change to see the new state value immediately, not for that new state value to be computed and realized at some point later. Or am I misunderstanding agents?

jonahbenton05:02:00

@alexisgallagher: certainly - the point was really just about architecture. questions around "previous state" and "did something change" come up often with atoms and refs, and they're tricky not just because of the seeming omission that the functions operating on refs and atoms don't return the previous value - that's the case because defining "previous value" can itself be complicated and expensive in a concurrent environment. the fact that the previous value isn't returned is a signal that atoms + refs may not be the right tool for that problem. if it's most important to know whether a change occurred, the suggestion was to consider imposing a total order, e.g. by putting all messages on a queue (which is what happens behind the scenes with agents). then you know when a change occurred, and you save yourself some of the cost of the concurrency machinery, with the tradeoff of some potential latency/asynchrony from letting another thread service the queue.

alexisgallagher05:02:29

"the fact that the previous value isn't returned is a signal that that atoms + refs may not be the right tool to solve that problem" hmm, interesting. I'll have to think on that one...

arrdem05:02:13

Grimoire is down for a bit while I set up SSL, sorry for the interruption.

arrdem06:02:07

And we're back! g'night.

mx200008:02:32

I just open sourced my action rpg game that I wrote a few years ago : https://github.com/damn/cdq

mx200008:02:46

Try it out!

jarppe09:02:20

The one with buffer size 0 fails with (not (= 42 41))

rauh10:02:25

@jarppe: That's intentional, you need a buffer when using transducers.

rauh10:02:47

That's been discussed a few times over at the core-async slack channel

jarppe10:02:09

I noticed that chan has this assert: (when xform (assert buf-or-n "buffer must be supplied when transducer is"))

jarppe10:02:54

that should probably also fail if buf-or-n is zero, then?
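The gap jarppe is pointing at, sketched out: a nil buffer trips the quoted assert immediately, but a zero-sized buffer is truthy, so (at the time of this discussion, per ASYNC-143) it slipped past the same assert even though the transducer has nowhere to park its output.

```clojure
(require '[clojure.core.async :refer [chan]])

;; nil buffer + transducer trips the assert right away:
;; (chan nil (map inc))  ; AssertionError: buffer must be supplied
;;                       ; when transducer is

;; a zero-sized buffer is truthy, so it passes that same assert,
;; and the problem only surfaces later at put time:
(chan 0 (map inc))
```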

jarppe10:02:05

ah, thanks for the link, it seems that there is a ticket for that in here http://dev.clojure.org/jira/browse/ASYNC-143

cristobal.garcia13:02:52

A question for the channel, if you don't mind: I am creating a series of namespaces which will offer alternate implementations of a protocol. Is there any way I could replicate the tests among all of them? At the end of the day, the results should be equivalent. Ability to easily add future implementations would be good as well. Thanks in advance!

jonahbenton14:02:47

hi @cristobal.garcia were you planning on using clojure.test?

cristobal.garcia14:02:20

hi jonahbenton, yes

jonahbenton15:02:39

and you probably have constructor-like functions in your namespaces to allow a user to specify which implementation they wish to use?

blueberry15:02:10

@cristobal.garcia: you can see an example of such testing setup for a rather demanding use case (protocols + native libs + GPU computing) at http://github.com/uncomplicate/neanderthal

cristobal.garcia15:02:56

@jonahbenton, Thanks. I was planning to use a default implementation and the ability to select non-default ones with bindings. The non-default ones would be useful for mocking. Constructor-like functions will be there, yes.
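One common pattern for this (a sketch; the `Store` protocol, `store-contract`, and `atom-store` names are all invented for illustration): write the assertions once against the protocol, parameterized by a constructor, then instantiate a deftest per implementation.

```clojure
(require '[clojure.test :refer [deftest is]])

(defprotocol Store
  (put-item [this k v])
  (get-item [this k]))

(defn store-contract
  "Assertions every Store implementation must satisfy."
  [make-store]
  (let [s (make-store)]
    (put-item s :a 1)
    (is (= 1 (get-item s :a)))
    (is (nil? (get-item s :missing)))))

;; a hypothetical in-memory implementation
(defn atom-store []
  (let [m (atom {})]
    (reify Store
      (put-item [_ k v] (swap! m assoc k v))
      (get-item [_ k] (get @m k)))))

(deftest atom-store-test (store-contract atom-store))
```

Adding a future implementation is then one constructor plus a one-line deftest.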

cristobal.garcia15:02:27

@blueberry: I am having a look right now, thanks.

cristobal.garcia15:02:34

@blueberry: exactly what I was looking for, thanks :simple_smile:. It might be good to switch to midje.

fenton19:02:44

does anyone have the opinion that using let is a code smell? I've heard someone say it was but didn't understand their rationale.

ghadi19:02:19

let is certainly not a code smell

ghadi19:02:37

if it's 900 lines long, then maybe

tolitius19:02:07

@fenton: do you have an example, or this is more of a philosophical question?

tolitius19:02:59

I find code smells are usually not caused by Clojure forms or built-in functions, but rather by their wrong usage

fenton19:02:03

@tolitius: both. i can demo an example... gimme a sec.

fenton19:02:41

not sure if i should aim to not have the temp-uri state somehow...

ghadi19:02:10

no problem with let in that snippet

ghadi19:02:16

there is however a problem in factoring

ghadi19:02:28

the uri is clearly an argument

ghadi19:02:11

Rather than generating temp-uri in the body, make a third function that sets up the arguments.
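The factoring ghadi suggests, in sketch form. The function names and the Datomic uri scheme here are illustrative, not taken from fenton's gist: the worker takes the uri as an argument, and a thin wrapper owns the temp-uri generation.

```clojure
(defn load-edn!
  "Reusable worker: connects to uri and loads the given edn file.
  Body elided; returns the connection."
  [uri edn-file]
  ;; (d/create-database uri) ... (d/connect uri) ... transact edn-file
  )

(defn load-edn-into-temp!
  "Argument-setup wrapper: generates a throwaway uri, then delegates."
  [edn-file]
  (let [temp-uri (str "datomic:mem://" (gensym "db"))]
    (load-edn! temp-uri edn-file)))
```

With the uri as a parameter, the same worker serves both the scratch/temp case and any real database.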

ghadi19:02:44

I made a comment on the gist

ghadi19:02:24

with an argument the main function is now somewhat reusable

fenton19:02:42

@ghadi: ok. i'm using this snip to test out my *.edn files to load into datomic... just learning how to use datomic... so the URI really is temporary... but i do return a conn, which can be used to connect to the db... but i understand your point that this isn't a reusable function

fenton19:02:14

@ghadi: thanks for having a look tho.

ghadi19:02:34

(np. i realize it's scratch code)

fenton19:02:04

just reading a reddit on this question, and people suggest that let is sugar over a lambda.

fenton19:02:46

others say the added readability of let, documenting the code, beats a snarl of nested function calls.

ghadi19:02:50

that is how let is implemented in some languages, but not in clojure

ghadi19:02:16

but yes in lambda calculus an immediately applied one-arg function is similar to a let
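The equivalence ghadi mentions, made concrete: these two forms compute the same thing, though Clojure compiles let directly rather than expanding it into a function call.

```clojure
;; let binding:
(let [x 2] (* x x))        ;=> 4

;; immediately applied one-arg function, the lambda-calculus analogue:
((fn [x] (* x x)) 2)       ;=> 4
```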

fenton19:02:28

@ghadi: thanks for the clarification! :simple_smile:

mping20:02:25

do you know if jetty supports http pipelining?

hiredman20:02:57

according to google it does

mping20:02:53

dang I'm having problems with core.async then

mping20:02:05

doing a sample http server but requests are "sequential"

mping20:02:37

If I timeout on a go block it shouldn't block the current thread, right?

hiredman20:02:52

depends what you mean

hiredman20:02:33

the timeout function from core.async returns a channel, which on its own does nothing
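Sketching hiredman's point: timeout just returns a channel that closes after the interval; nothing waits until something takes from it, and inside a go block the take parks only that go block, not the thread.

```clojure
(require '[clojure.core.async :refer [go timeout <!]])

(def t (timeout 2000))   ; returns immediately; no waiting happens here

(go
  (<! t)                 ; parks this go block only, not a real thread
  (println "2 seconds elapsed"))
```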

mping20:02:55

I'm trying to replicate an experiment my colleague did on nodejs

mping20:02:00

but my reqs are sequential

mping20:02:51

I would expect two immediate "accpt" prints, but they have a delay of 2 seconds

mping20:02:19

with nodejs I can pipeline two reqs and the "accpt" print is immediate

hiredman20:02:38

my guess would be either some kind of issue with the jetty async adapter, or some kind of issue with your client

hiredman20:02:12

oh, I take that back

hiredman20:02:33

of course, if you are pipelining you are not making sequential requests

hiredman20:02:58

err, concurrent requests

hiredman20:02:09

pipelined requests, are one after another on the same tcp socket

mping20:02:33

yes, but what should I do to make the "accept" immediate?

mping20:02:56

I should not need to wait for the 1st request to finish

hiredman20:02:47

the http requests are in order on the tcp socket, so you have to process them, at least to some degree, in order

mping20:02:53

no biggie with that, I hope that jetty takes care of it

mping20:02:29

I just dont know why the 2nd accept "waits" for the first

hiredman20:02:16

because of pipelining you have a single stream of requests and a single stream of responses, so in order to get to the "next" request in the stream, the first needs to be handled

mping20:02:20

That's true, but I would expect not to need to close it, and to have the ability to do out-of-order writes. So you are saying that until I respond to the first, the second is not handled, right?

mping20:02:38

Maybe its jetty's fault :p

hiredman21:02:42

in order for responses to be sent back in order, either jetty has to buffer responses until the "previous" response is available, or process everything one at a time

hiredman21:02:16

like a pipeline

mping21:02:38

node is slightly diff because it has an internal queue

mping21:02:41

yep, Jetty is reusing the same HttpInput for the request body

hiredman21:02:52

assuming you posted this email to the mailing list about the same issue, if you look at the blog post you linked, it discusses the drawbacks of the buffering approach that nodejs takes

hiredman21:02:17

buffering the results uses more memory and is a possible ddos vector

hiredman21:02:35

jetty is entirely capable of handling actually concurrent http requests too, I think its default threadpool is limited to 50 threads, so if you want concurrently handled requests, you can just make them instead of trying to rely on how the server implements pipelining

alexisgallagher21:02:53

speaking of jetty, has anyone ever found an easy way to get jetty to use HTTPS with client certificate authentication? Every time I try I bang my head into the wall for about an hour and then go back to nginx as a reverse proxy.

hiredman21:02:20

you need a newish version (or at least, newish several years ago); it exposes an option you need to set

alexisgallagher21:02:01

I can see two options on the jetty wrapper for a keystore and truststore. But the process of turning plain old certs into whatever file or object can be used to populate those seems quite painful, even with a helper library like https://github.com/aphyr/less-awful-ssl

hiredman21:02:33

oh, actually, I was using a custom version of the ring jetty adapter that exposed some more stuff, not sure if the vanilla one does it

hiredman21:02:49

it is painful

alexisgallagher21:02:56

I find openssl hard enough to use. Trying to learn keytool as well -- too painful.

alexisgallagher21:02:55

I wish there were some command line tool, like file, but which was super smart about the eight kajillion types of cryptographic assets one has to handle. Is this a key? a cert? In PEM? In DER? In PKCS12? PKCS8? armored or unarmored? It's very hard to keep straight unless you do it routinely.

mping22:02:41

I keep that stuff in gists

alexisgallagher22:02:50

I wonder how many security breaches around the world every year are fundamentally results of how hard it is to use openssl? 1000? 10,000?

hiredman22:02:59

openssl is hard to use, but it can actually be used, it may take longer than it should, but it can be done. I suspect more breaches are the result of that kind of stuff getting shuffled to the bottom of the backlog

mping22:02:00

@hiredman thanks, I posted it. I'm actually not going to use this stuff in prod, just wanted to see if I can implement the same behaviour

mping22:02:30

And ofc the devil is in the details

mping22:02:20

I also suspect pipelining is not worth it in terms of performance

hiredman22:02:30

the way http2 is multiplexed over a connection (no surprise) seems like it would avoid the issues with http 1.1 pipelining