#clojure-uk
2017-04-11
Rachel Westmacott05:04:29

I’m not aware of standard naming conventions, but as you can define your own predicate functions for the test-selectors you can pretend that there is one, e.g.

:test-selectors {:default    (constantly true)
                 :acceptance #(.startsWith (str (:ns %)) "acceptance")
                 ... }
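(For reference: with selectors like that in project.clj, Leiningen lets you pick a subset from the command line, e.g.

lein test              # runs the :default selector
lein test :acceptance  # runs only namespaces starting with "acceptance"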

agile_geek07:04:03

Anyone looked at Urania? It looks interesting. I see lots of code that doesn't properly separate 'fetching remote data' from 'business logic', and loads of boilerplate around concurrency. https://funcool.github.io/urania/latest/

Rachel Westmacott07:04:41

@U05390U2P my initial thoughts on urania are that the rationale probably doesn’t justify the extra complexity - but there might be cases where it made sense...?

Rachel Westmacott07:04:33

I generally find that things that abstract over data sources end up limiting control/flexibility and end up frustrating me. Data sources (especially remote ones or large ones) are one of the few places where I ever seem to end up worrying about performance - and then I want full manual control.

agile_geek07:04:53

I can see that concern. However, I find that working in teams of greater than one I want constraints to enforce at least some separation, and sensible defaults for dealing with the boilerplate. I also find, personally, that in 28 years of professional development I've only had about a dozen really gnarly performance problems...given sensible defaults. In other words almost all the problems I solve are caused by people, not the technology, and having frameworks, libraries, idiomatic approaches and patterns reduces the amount of sh*t I have to deal with. That and the fact that I'm not a great programmer so I need to stand on the shoulders of giants!

mccraigmccraig08:04:50

i've looked at it - it looks pretty handy, and i plan to move my ad-hoc client-side caching over to urania at some point. its fundamental abstraction of composing promises is great, and promesa (or cats) both help you make your code easily comprehensible with mlet and alet
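(For anyone curious, a minimal sketch of Urania's documented DataSource protocol; UserById and fetch-user! are made-up names, and fetch-user! is a stub standing in for a real remote call:)

(require '[urania.core :as u]
         '[promesa.core :as prom])

;; Stub standing in for a real remote call
(defn fetch-user! [id]
  {:id id :name (str "user-" id)})

;; A data source: -identity keys the caching/deduplication,
;; -fetch must return a promise of the result.
(defrecord UserById [id]
  u/DataSource
  (-identity [_] id)
  (-fetch [_ _env]
    (prom/resolved (fetch-user! id))))

;; Compose fetches as data; duplicate ids are fetched only once.
(def user-names
  (u/map #(mapv :name %)
         (u/collect [(->UserById 1) (->UserById 2) (->UserById 1)])))

(u/run!! user-names) ;;=> ["user-1" "user-2" "user-1"]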

Rachel Westmacott07:04:02

random core function of the day:

clojure.core/reduce-kv
([f init coll])
  Reduces an associative collection. f should be a function of 3
  arguments. Returns the result of applying f to init, the first key
  and the first value in coll, then applying f to that result and the
  2nd key and value, etc. If coll contains no entries, returns init
  and f is not called. Note that reduce-kv is supported on vectors,
  where the keys will be the ordinals.
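A couple of quick examples of what that looks like:

;; Swap keys and values of a map in a single pass
(reduce-kv (fn [acc k v] (assoc acc v k)) {} {:a 1 :b 2})
;;=> {1 :a, 2 :b}

;; On a vector the "keys" are the indices
(reduce-kv (fn [acc i x] (conj acc [i x])) [] [:x :y])
;;=> [[0 :x] [1 :y]]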

agile_geek07:04:17

👍 one of my faves!

Rachel Westmacott08:04:59

I don’t think I’ve ever used it.

korny08:04:09

I’ve used (reduce (fn [acc [k v]] ...) ...) quite often… I guess I could use reduce-kv instead, but the expanded version is pretty easy to remember.

Rachel Westmacott08:04:18

reduce-kv feels less composable

dominicm08:04:00

I'd bet reduce-kv is quite fast. Using destructuring on a mapentry is slower than using key and val on it.

dominicm08:04:18

Not sure how significant the difference is, but significant enough to be mentioned.
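(Sketching the three styles being compared; m here is just an example map:)

(def m {:a 1 :b 2})

;; sequential destructuring of each map entry goes through nth:
(reduce (fn [acc [k v]] (assoc acc k (inc v))) {} m)

;; key/val call the MapEntry accessors directly:
(reduce (fn [acc e] (assoc acc (key e) (inc (val e)))) {} m)

;; reduce-kv passes k and v straight in, no entry destructuring:
(reduce-kv (fn [acc k v] (assoc acc k (inc v))) {} m)
;; all three => {:a 2, :b 3}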

bronsa08:04:04

it's pretty insignificant 98.5% of the time :P

korny08:04:44

good point - though I so rarely have to care about performance at that level when doing clojure, I often forget to think that way.

dominicm08:04:24

No, absolutely.

glenjamin08:04:02

reduce-kv is generally how i implement map-values
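(i.e. something along these lines - a sketch; the name and argument order mirror common map-values helpers rather than any particular lib:)

(defn map-values
  "Apply f to every value of map m, keeping the keys unchanged."
  [f m]
  (reduce-kv (fn [acc k v] (assoc acc k (f v))) {} m))

(map-values inc {:a 1 :b 2}) ;;=> {:a 2, :b 3}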

glenjamin08:04:20

although i’d generally try and pull in a lib for that rather than write it inline

korny08:04:23

yeah, I’d prefer to ignore perf issues - premature optimisation and all that. But I have once had to diagnose a really slow clojure app, and it can be … exciting trying to work out where the bottlenecks are.

korny08:04:10

Nothing quite as fun as trying to work out in a profiler which of your lazy sequences is chewing through cpu…

glenjamin08:04:25

although (into) is usually faster because it handles transients for you
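(e.g. the same helper via into, which builds the result on a transient map; this sketch assumes Clojure 1.7+ for the transducer arity:)

(defn map-values
  [f m]
  (into {} (map (fn [[k v]] [k (f v)])) m))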

agile_geek08:04:54

IMHO it's just not worth worrying about performance until you prove you have a problem. I optimise for change (readability/maintainability/simplicity). But spend the time measuring performance so you know when/where you have a problem!

glenjamin09:04:18

I agree, but with the caveat that one of the best ways to optimise for change is to use functions which express intent well - and if you use widely used implementations of such functions then they can be optimised behind the interface

glenjamin09:04:46

which is why the common clojure attitude of just writing things inline or in isolated util namespaces makes me sad

Rachel Westmacott09:04:23

is there a useful util library we should be using instead?

glenjamin09:04:33

not really 😞

glenjamin09:04:41

there’s a few, that probably overlap

glenjamin09:04:59

and the JVM dependency model makes them hard to use in libs

glenjamin09:04:28

I was hoping focused single-function packages would catch on, but it seems unlikely https://github.com/glenjamin/map-values

Rachel Westmacott10:04:12

I like this idea (single-function libraries). I've recently been thinking (a very little bit) about trying to implement function-level dependencies.

korny09:04:47

“it’s just not worth worrying about performance until you prove you have a problem” - I agree, up to a point. People far more often optimise prematurely. But when I meet devs who are writing a view that queries a SQL database, without considering indexes, because “that would be premature optimisation” - well, then someone needs slapping.

tcoupland09:04:15

Couldn't agree more, the "premature optimisation" quote is badly used most times I hear it 🙂

korny09:04:29

Also, some problems are dramatically harder to identify near the end of a project than near the start. The real answer I like for all this is to get a pseudo-performance test into CI early on - it doesn’t matter if it’s “real world”, I just want it to load your web page with a decent amount of fake data behind it, and check it’s not terrible.

korny09:04:52

and ideally, graph performance per check-in, so you know which commit made it get 3x slower.

dominicm09:04:19

SO (Stack Overflow) do something really interesting around that

korny10:04:00

I’ve managed that on a couple of projects. Sadly people seem to always get caught up in time-wasting discussions like “we can’t do perf tests until Management tell us what the performance should be” or “this isn’t realistic because it doesn’t have a load balancer” or “the fake test data isn’t like our guesses about production use”. Sigh. Perfect is the enemy of Good.

glenjamin10:04:59

One system I’m currently working on has a 40GB production database with millions of records that was imported from a mainframe system and has a horrible schema

glenjamin10:04:12

it’s also full of sensitive data

glenjamin10:04:28

so perf testing is tricky, and the query plans are almost impossible for me to decipher

korny10:04:28

The core problem in the slow clojure app I was digging into (and this was a couple of years ago) was over-zealous use of prismatic schema coercion - turns out if you just throw coercions all over the place, then try to show 1000 rows of data on a page (“it’ll only be 1000 rows at go-live, no need to paginate yet”) and then each row has to traverse several children, each of which gets coerced … you get a slow web page.

agile_geek15:04:10

@korny I was being slightly trite and thx for calling me out on it. I agree some things are just normal 'hygiene' and cost next to nothing. However, I did caveat with > spend the time measuring performance so you know when/where you have a problem

agile_geek15:04:08

This comes back to my statement in a thread earlier this morning that almost all the issues I see in code bases are induced by differences in understanding. Introducing 'constraints' in the form of frameworks/idioms/patterns doesn't remove those problems, but it does narrow their scale to 'problems in the frameworks/idioms/patterns', which at least gives you a fighting chance of solving them.