#clojure-uk
2019-01-23
agile_geek08:01:01

Bore da pawb welsh_flag (good morning, everyone)

agile_geek08:01:28

Don't forget: the CfP for the Build IT Right conference in Newcastle on 4th April closes at midnight on Friday, and early bird tickets stop after 31st January https://bitrconf.org/

alexlynham10:01:11

kibit doesn't like threading macros eh?

alexlynham10:01:24

(I vaguely remember this discussion already coming up before re: linting)

alexlynham10:01:38

thread all the things imo

agile_geek11:01:30

I tend to only use threading macros once I get beyond a fn wrapping the result of one other function, but I sometimes like using let to provide intermediate names when it helps readability.
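
A minimal sketch of that trade-off, using hypothetical parse/validate/summarise steps (the names and stub bodies are illustrative, not from the discussion):

;; hypothetical steps, stubbed so the example evaluates
(defn parse     [raw]    {:value (count raw)})
(defn validate  [parsed] (assoc parsed :valid? true))
(defn summarise [m]      (select-keys m [:value :valid?]))

;; once more than one call wraps the result, threading reads linearly
(defn report-> [raw]
  (-> raw parse validate summarise))

;; the same pipeline with let, when intermediate names aid readability
(defn report-let [raw]
  (let [parsed    (parse raw)
        validated (validate parsed)]
    (summarise validated)))

(report-> "abc")   ;=> {:value 3, :valid? true}
(report-let "abc") ;=> {:value 3, :valid? true}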

otfrom13:01:12

I think kibit has a number below which it doesn't like threading

otfrom13:01:16

I don't agree w/it either

mccraigmccraig14:01:55

i don't think you should use threading below a single form encapsulation

👍 5
alexlynham14:01:31

I think I'd generally start threading at 2 items, but the one I saw was two long function names from a lib, and using the threading macro inlined them

alexlynham14:01:36

which made it super more readable

alexlynham14:01:58

(even if the way they were being called was simple)

mccraigmccraig15:01:41

the other thing the threading macros do is give you some additional info about which argument is the focus, which i find useful for code comprehension - sometimes even with just a single call

✔️ 20
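
A minimal illustration of that first-argument vs last-argument signal (illustrative data only): -> threads the value into the first position, so the focus is the map being built up; ->> threads it into the last position, so the focus is the collection being transformed.

(-> {:a 1}
    (assoc :b 2)
    (update :a inc))
;=> {:a 2, :b 2}

(->> (range 5)
     (map inc)
     (filter even?))
;=> (2 4)
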
nha15:01:31

I agree 🙂 (I wrote a while back about it https://stackoverflow.com/a/32167270/1327651 )

💯 10
danm16:01:47

Threading can make it less obvious when the output structure of each form changes dramatically though. Like if one function outputs a vector and the next takes that in and outputs a map, or even something that takes in a map and outputs another map with completely different keys

danm16:01:52

We tend to restrict threads to where we are passing in a map and associng/updating/dissocing keys, or similar actions where it is a modification of the same data structure. That is like 90% of what our functions do though, so we do tend to thread a lot

👍 5
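
A small sketch of that pattern, assuming a hypothetical order map; every step takes a map and returns a map, so the thread is a sequence of modifications to the same data structure:

(defn price-order [order]
  (-> order
      (assoc :status :priced)
      (update :total + 2.50)   ; hypothetical delivery charge
      (dissoc :draft?)))

(price-order {:id 1 :total 10.00 :draft? true})
;=> {:id 1, :total 12.5, :status :priced}
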
alexlynham17:01:32

ooh I have thoughts... but I need to cycle home before I get killed by ice

alexlynham17:01:35

I think your pattern is probably pretty good there, but there's something you could do with schemas on reads or types I think (?)

alexlynham17:01:47

need to think about it more

alexlynham17:01:52

wish me luck and further life

alexlynham17:01:03

my eyes frosted up on the cycle in lol

Ben Hammond17:01:57

I'd agree there's an assumption that a thread represents a pipeline; I want to be able to comment out one line of the thread and not automatically break the code, which implies that the threaded elements all input/output the same datatypes
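
One way to picture that, assuming a map-to-map pipeline: because every step has the same input/output shape, any single step can be discarded with #_ without breaking the surrounding forms.

(-> {:id 1}
    (assoc :a 1)
    #_(assoc :b 2)   ; temporarily disabled; the thread still compiles
    (assoc :c 3))
;=> {:id 1, :a 1, :c 3}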

rickmoynihan17:01:56

Only thing I disagree with is not mixing -> and ->>. I think it’s ok iff you do it over the first arg only, e.g.

(-> (->> col
         (map foo)
         (map bar)
         (zipmap [:a :b :c]))
    (assoc :d :e))
Ultimately you’re probably better off using a let, but I don’t find the above hard to read.
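
For comparison, the let version of the same snippet (keeping the placeholder names col, foo and bar from above) names the intermediate collection instead of nesting the two macros:

(let [keyed (->> col
                 (map foo)
                 (map bar)
                 (zipmap [:a :b :c]))]
  (assoc keyed :d :e))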

thomas20:01:22

at this very moment I am running 3 test clients against my own MQTT broker, and each client is sending and receiving 1M messages.

thomas20:01:08

and the clients publish with QoS 0, 1 or 2, all driven by spec with random data etc.

thomas20:01:36

again this is only good path control flow.... no errors (I hope).

thomas21:01:21

One thing brokers tend to do is keep a count of the number of messages that have been processed... but just having an atom with an int in it and calling inc on it doesn't feel right somehow (I've heard from other people that you get too many retries if the rate of updates is very fast, and that would be the case here). Would an agent make sense? I don't mind it being a bit behind, after all

thomas21:01:44

hmm, about 80 seconds for 1M messages, not bad I think

thomas21:01:28

and over 5M test assertions. spec is great.

👍 5
thomas21:01:38

good night and ttfn

mccraigmccraig21:01:40

@thomas i've successfully used agents when i wanted to update a value from many threads without retries
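
A minimal sketch of that agent approach (the names are illustrative): send queues the inc onto the agent's action queue, so writers never retry the way swap! on an atom can under heavy contention, and await flushes pending sends before the count is read.

(def message-count (agent 0))

(defn record-message! []
  (send message-count inc))

(defn current-count []
  (await message-count)
  @message-count)

(dotimes [_ 1000] (record-message!))
(current-count) ;=> 1000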