Rachel Westmacott05:04:29

I’m not aware of standard naming conventions, but as you can define your own predicate functions for the test-selectors you can pretend that there is one, eg.

:test-selectors {:default    (constantly true)
                 :acceptance #(.startsWith (str (:ns %)) "acceptance")
                 ...}


Anyone looked at Urania? It looks interesting. I see lots of code that doesn't properly separate 'fetching remote data' from 'business logic', and loads of boilerplate around concurrency.

Rachel Westmacott07:04:41

@U05390U2P my initial thoughts on urania are that the rationale probably doesn’t justify the extra complexity - but there might be cases where it made sense...?

Rachel Westmacott07:04:33

I generally find that things that abstract over data sources end up limiting control/flexibility and end up frustrating me. Data sources (especially remote ones or large ones) are one of the few places where I ever seem to end up worrying about performance - and then I want full manual control.


I can see that concern. However, I find that when working in teams of greater than one, I want constraints that enforce at least some separation, and sensible defaults for dealing with the boilerplate. I also find, personally, that in 28 years of professional development I've only had about a dozen really gnarly performance problems... given sensible defaults. In other words, almost all the problems I solve are caused by people, not the technology, and having frameworks, libraries, idiomatic approaches and patterns reduces the amount of sh*t I have to deal with. That, and the fact that I'm not a great programmer, so I need to stand on the shoulders of giants!


i've looked at it - it looks pretty handy, and i plan to move my ad-hoc client-side caching over to urania at some point. its fundamental abstraction of composing promises is great, and promesa (or cats) both help you make your code easily comprehensible with mlet and alet

Rachel Westmacott07:04:02

random core function of the day:

reduce-kv
([f init coll])
  Reduces an associative collection. f should be a function of 3
  arguments. Returns the result of applying f to init, the first key
  and the first value in coll, then applying f to that result and the
  2nd key and value, etc. If coll contains no entries, returns init
  and f is not called. Note that reduce-kv is supported on vectors,
  where the keys will be the ordinals.
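A quick sketch of that docstring in action (the example maps and accumulators here are my own, not from the chat):

```clojure
;; f receives the accumulator, a key, and a value on each step
(reduce-kv (fn [acc k v] (assoc acc k (inc v)))
           {}
           {:a 1 :b 2})
;; => {:a 2, :b 3}

;; on a vector, the "keys" are the ordinal indices
(reduce-kv (fn [acc i x] (conj acc [i x]))
           []
           [:x :y])
;; => [[0 :x] [1 :y]]
```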


👍 one of my faves!

Rachel Westmacott08:04:59

I don’t think I’ve ever used it.


I’ve used (reduce (fn [acc [k v]] ...) ...) quite often… I guess I could use reduce-kv instead, but the expanded version is pretty easy to remember.
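The two shapes being compared, side by side (a made-up example, assuming you're building a map back up):

```clojure
;; plain reduce over the map's entries, destructuring each entry
(reduce (fn [acc [k v]] (assoc acc k (inc v)))
        {}
        {:a 1 :b 2})

;; the reduce-kv equivalent: the fn takes the key and value as separate args
(reduce-kv (fn [acc k v] (assoc acc k (inc v)))
           {}
           {:a 1 :b 2})
;; both => {:a 2, :b 3}
```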

Rachel Westmacott08:04:18

reduce-kv feels less composable


I'd bet reduce-kv is quite fast. Using destructuring on a mapentry is slower than using key and val on it.
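For reference, the key/val version mentioned here, which skips the destructuring step (example is my own):

```clojure
;; call key and val directly on each map entry instead of destructuring it
(reduce (fn [acc e] (assoc acc (key e) (inc (val e))))
        {}
        {:a 1 :b 2})
;; => {:a 2, :b 3}
```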


Not sure how significant the difference is, but significant enough to be mentioned.


it's pretty insignificant 98.5% of the time :P


good point - though I so rarely have to care about performance at that level when doing clojure, I often forget to think that way.


No, absolutely.


reduce-kv is generally how i implement map-values
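A minimal sketch of what that might look like (the name map-values and its argument order are my own assumptions, not from a particular lib):

```clojure
;; apply f to every value in a map, keeping the keys
(defn map-values [f m]
  (reduce-kv (fn [acc k v] (assoc acc k (f v))) {} m))

(map-values inc {:a 1 :b 2})
;; => {:a 2, :b 3}
```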


although i’d generally try and pull in a lib for that rather than write it inline


yeah, I’d prefer to ignore perf issues - premature optimisation and all that. But I have once had to diagnose a really slow clojure app, and it can be … exciting trying to work out where the bottlenecks are.


Nothing quite as fun as trying to work out in a profiler which of your lazy sequences is chewing through cpu…


although (into) is usually faster because it handles transients for you
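An into-based version of the same idea, as a hypothetical alternative to a reduce-kv implementation - into uses transients internally, and with a transducer it rebuilds the map in one pass:

```clojure
;; map-values (name is my own) built on into plus a map transducer
(defn map-values [f m]
  (into {} (map (fn [[k v]] [k (f v)])) m))

(map-values inc {:a 1 :b 2})
;; => {:a 2, :b 3}
```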


IMHO it's just not worth worrying about performance until you prove you have a problem. I optimise for change (readability/maintainability/simplicity). But spend the time measuring performance so you know when/where you have a problem!


I agree, but with the caveat that one of the best ways to optimise for change is to use functions which express intent well - and if you use widely used implementations of such functions then they can be optimised behind the interface


which is why the common clojure attitude of just writing things inline or in isolated util namespaces makes me sad

Rachel Westmacott09:04:23

is there a useful util library we should be using instead?


not really 😞


there’s a few, that probably overlap


and the JVM dependency model makes them hard to use in libs


I was hoping focused single-function packages would catch on, but it seems unlikely

Rachel Westmacott10:04:12

I like this idea (single-function libraries). I've been recently thinking (a very little bit) about trying to implement function-level dependencies.


“it’s just not worth worrying about performance until you prove you have a problem” - I agree, up to a point. People far more often optimise prematurely. But when I meet devs who are writing a view that queries a SQL database, without considering indexes, because “that would be premature optimisation” - well, then someone needs slapping.


Couldn't agree more, the "premature optimisation" quote is badly used most times I hear it 🙂


Also, some problems are dramatically harder to identify near the end of a project than near the start. The real answer for all this that I like is to get a pseudo-performance test into CI early on - it doesn’t matter if it’s “real world”, I just want it to load your web page with a decent amount of fake data behind it, and check it’s not terrible.


and ideally, graph performance per check-in, so you know which commit made it get 3x slower.


SO do something really interesting around that


I’ve managed that on a couple of projects. Sadly people seem to always get caught up in time-wasting discussions like “we can’t do perf tests until Management tell us what the performance should be” or “this isn’t realistic because it doesn’t have a load balancer” or “the fake test data isn’t like our guesses about production use”. Sigh. Perfect is the enemy of Good.


One system I’m currently working on has a 40GB production database with millions of records that was imported from a mainframe system and has a horrible schema


it’s also full of sensitive data


so perf testing is tricky, and the query plans are almost impossible for me to decipher


The core problem in the slow clojure app I was digging into (and this was a couple of years ago) was over-zealous use of prismatic schema coercion - turns out if you just throw coercions all over the place, then try to show 1000 rows of data on a page (“it’ll only be 1000 rows at go-live, no need to paginate yet”) and then each row has to traverse several children, each of which gets coerced … you get a slow web page.


@korny I was being slightly trite, and thx for calling me out on it. I agree some things are just normal 'hygiene' and cost next to nothing. However, I did caveat with > spend the time measuring performance so you know when/where you have a problem


This comes back to my statement in a thread earlier this morning: almost all the issues I see in code bases are induced by differences in understanding, and introducing 'constraints' in the form of frameworks/idioms/patterns doesn't remove those problems, but it does narrow their scale to 'problems in the frameworks/idioms/patterns', which at least gives you a fighting chance of solving them.