
A piece of advice from “A Philosophy of Software Design” is that increments should be abstractions, not features. The author (edit: John Ousterhout) warns of the “tactical programming” that can emerge from, for example, agile development (which he neither fully rejects nor praises). He similarly criticises TDD as encouraging tactical programming when it is relied upon too much. Instead, more thinking and possibly experimentation (“design it twice”) should happen before writing code; comments and documentation could even precede code, for example. A guiding question should be: which abstraction enables this feature? Rather than tunnel-visioning on the feature alone. I think this is an interesting take. In many ways it goes against some of the agile/OO advice (refactoring, TDD, sprints etc.) and I see some philosophical parallels with Clojure (hammock time, system design etc.). I think there is value in both approaches. Sometimes you kind of need to bang on a piece of code before you understand what the right abstractions are, but sometimes a clear up-front design can enable better long-term growth. Usually it is a combination of the two, as long as one is disciplined enough to actually take a step back and redesign and redo the thing at the right point in time.


I like the sound of this book. I agree with much of that. What’s the name of the author?


I think you can have your cake and eat it. You can strategize about how you’ll develop your libraries but also do just enough with them to get them working for the feature you’re working on at the moment. As you complete each feature you can re-evaluate and refine (or alter) your design.


the author is John Ousterhout


@UVDD67FFX I work in a very small team of just two devs; one of my responsibilities is to make re-usable libs when they emerge from projects. Libraries are hard, because they imply more than working features. I don’t always see what their shape should be, and I typically don’t get them right the first time. Sometimes they end up being smaller than I initially hoped (only providing a subset of the functionality). This is why I’ve been reading a ton of these kinds of books and papers lately. The re-evaluation of how something was implemented is very communication heavy. And there is pragmatic pressure (do we have time to do this library/abstraction right?) as well. So I tend to agree with what you said, almost purely because of how it turns out in practice. I’m starting to think that there are two almost distinct modes of writing code: one is experimental and value oriented, as in do the minimal thing that provides a function/feature, and the other is about refinement, robustness, and future understanding and re-use. Maybe one cannot exist without the other?

💯 2

#nbb is currently on the front page of Hacker News 🎉

💪 30
🚀 6
catjam 4

I have a program running on a server that produces a lot of messages, and I want to enable pub-sub across other machines. How do I evaluate options here? I've looked a little bit at redis, zeroMQ, amqp. But I'm not sure at this point what factors should motivate the decision

Ben Sless 14:09:14

latency, persistence, resiliency, topology (single region, multi region, etc)

👍 4

multi region is required, hmm, I think I have to think through my future usage a bit more to rank the others.


I have experience both with RabbitMQ and Kafka, so maybe I can help a little bit more here. Redis is simpler, and you will probably need to write more code to support multiple scenarios, or use some tool over Redis; RabbitMQ is probably the "sweet spot"; Kafka is faster but also lower level, and I would avoid it if you can. Maybe Apache Pulsar (I have no experience with it, but people who moved from Kafka to Pulsar usually like the experience)

👍 2

yeah, it's all about your requirements, all these suggestions are very different, before perf make sure those are met. Like: can you afford to lose messages, what about ordering, partitioning, HA, etc etc
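To make the pattern concrete, here is a minimal in-memory sketch (in Ruby, matching the Rails discussion below) of the pub-sub model all these brokers provide: subscribers register interest in a topic, and publishers fan messages out. The `Bus` class and topic names are invented for illustration; real brokers add exactly the hard parts listed above — persistence, ordering, partitioning, HA, and delivery across machines.

```ruby
# Minimal illustration of topic-based pub-sub (names are hypothetical).
class Bus
  def initialize
    @subscribers = Hash.new { |h, k| h[k] = [] }
  end

  # Register a handler for a topic.
  def subscribe(topic, &handler)
    @subscribers[topic] << handler
  end

  # Fan a message out to every current subscriber of the topic.
  # Note the trade-off raised above: with no subscriber (and no
  # persistence), the message is simply lost.
  def publish(topic, message)
    @subscribers[topic].each { |h| h.call(message) }
  end
end

bus = Bus.new
received = []
bus.subscribe('metrics') { |m| received << m }

bus.publish('metrics', cpu: 0.42)              # delivered
bus.publish('logs', line: 'nobody listening')  # silently dropped

puts received.length # => 1
```

Everything a real broker adds (acknowledgements, durable queues, consumer groups) exists precisely because this naive in-process version drops messages and forgets them on restart.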


there are also plenty of managed solutions for this: AWS SQS, Kinesis for bigger loads, and GCP Pub/Sub in GCP land


A really off-topic subject, but one I think is worth mentioning. I have been working on a lot of projects that are not much more than integration middleware with a custom rules engine in the middle, and what’s more curious is that a big part of those projects were built with Ruby on Rails, just because it’s a full batteries-included solution with everything you could need to set up these integrations fast. But as generally happens with projects of this kind, things start getting complicated. I have a sort of intuition/gut feeling that it would be really wonderful to have the power of Clojure available in such projects, as a way to start really fast with Ruby and rely on Clojure as needed, and I think it would not feel unnatural, given that Ruby has a LISP heritage. So I bring this here, because it’s something I have been observing.


@marciol It's maybe a bit of a stretch, but you could try running Ruby and Clojure in the same VM. E.g. JRuby + calling Clojure should work, I think, within a JVM.
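A sketch of what that could look like, assuming you run under JRuby with `clojure.jar` on the JVM classpath (this will not run under plain MRI Ruby; `clojure.java.api.Clojure` is Clojure's official Java API for this kind of embedding):

```ruby
# JRuby only: pull in Clojure's public Java API from the classpath.
require 'java'
java_import 'clojure.java.api.Clojure'

# Look up vars through the Java API and invoke them as functions.
plus = Clojure.var('clojure.core', '+')
puts plus.invoke(1, 2) # => 3

# Reading Clojure data works the same way.
count_fn = Clojure.var('clojure.core', 'count')
puts count_fn.invoke(Clojure.read('[1 2 3]')) # => 3
```

Since both languages share one JVM, values cross the boundary as plain Java objects, with no serialization layer in between.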


Another possibility is using GraalVM with TruffleRuby but this is probably harder to get off the ground, since Ruby is a guest language and Clojure would run in the host environment.


Another possibility would be to implement your "front-end" business app in Ruby and delegate some things to another service written in Clojure. That's not even that uncommon, I think.
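On the Ruby side that delegation is usually just JSON over HTTP. The sketch below fakes the Clojure service with an in-process thread so it is self-contained; the `/score` endpoint and payload shape are made up for illustration.

```ruby
require 'json'
require 'net/http'
require 'socket'

# Stand-in for the Clojure service: a thread answering one HTTP request.
server = TCPServer.new('127.0.0.1', 0)
port   = server.addr[1]

backend = Thread.new do
  client = server.accept
  # Read the full request: headers, then Content-Length bytes of body.
  req = +''
  req << client.readpartial(4096) until req.include?("\r\n\r\n")
  body   = req.split("\r\n\r\n", 2)[1] || ''
  length = req[/Content-Length: (\d+)/i, 1].to_i
  body << client.readpartial(4096) while body.bytesize < length

  reply = JSON.generate('score' => 42)
  client.write("HTTP/1.1 200 OK\r\nContent-Type: application/json\r\n" \
               "Content-Length: #{reply.bytesize}\r\n\r\n#{reply}")
  client.close
end

# The Rails-app side: an ordinary JSON POST, no special sauce.
uri = URI("http://127.0.0.1:#{port}/score")
res = Net::HTTP.post(uri, JSON.generate('user' => 'alice'),
                     'Content-Type' => 'application/json')
backend.join
puts JSON.parse(res.body)['score'] # => 42
```

The appeal of this split is that each side stays idiomatic: Rails keeps its conventions, and the Clojure service only has to speak JSON.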


it was very common in the early days of clojure, a rails website with a clojure data processing backend. A former coworker used to call it the mullet (business in the front, party in the back), amusing to refer to ruby as business

metal 4
😆 2

when I worked at a place like that we didn't have any kind of special sauce for communicating between clojure and the rails app, just http apis. not very different from what is in vogue these days (a react frontend talking over http apis to some backend)


My only memory of Ruby/Rails is that I had to learn it very quickly for a client project at a consulting company, and it turned out to be a non-standard Rails project: we needed to disable the ORM stuff and at the same time deal with a real database. Because we didn't manage that database, and to reduce going back and forth with their sysadmins, we asked for just one text column we could write to, in which we just wrote JSON. Halfway into the project the client imposed that we needed to run in their JBoss server, so we switched to JRuby and only needed to port some ImageMagick code to AWT, which surprisingly all worked. Debugging prod problems was very hard since we could not access their machines for security reasons. After that, I never used Ruby/Rails again.
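The one-text-column trick described above amounts to serializing the whole record as JSON and treating the database as a dumb store. A minimal sketch of the round-trip (the record fields are invented for illustration):

```ruby
require 'json'

# Instead of mapping tables through the ORM, serialize the whole
# record into a single text column.
record = { 'order_id' => 123,
           'items'    => %w[widget gadget],
           'total'    => 19.95 }

column_value = JSON.generate(record)   # what gets written to the text column
restored     = JSON.parse(column_value) # what the app reads back later

puts restored['order_id'] # => 123
```

You lose queryability inside the blob, but when you don't control the database and only need to persist and retrieve whole records, it keeps the sysadmin coordination to a single column.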


Yes @borkdude, I was wondering how common/uncommon this approach is, and hoped to gather other opinions


(That project I was referring to would have been much easier (yes, easier!) in Clojure instead of working around Rails.)


Yes, @deleted-user Sean did awesome work on Coast


It’s another Sean


Ah nice, so you know him


I follow his work with Janet as well


The joy framework


I think I saw a talk about this on ClojureD once


Yes, I think that he presented there


> (That project I was referring to would have been much easier (yes, easier!) in Clojure instead of working around Rails.) Definitely!


I think the last project I heard about is the classic FlightCaster case:


@borkdude Would it be too hard to use libSCI to run some Clojure code inside Ruby? Or am I deep-dreaming too much? 😄


yeah, you could prepare yourself a native shared lib like that and then use that within Ruby, no problem ;)


you don't even need SCI if you don't need evaluation. just GraalVM native-image and compile as shared lib of what you want to expose.
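On the Ruby side, calling into such a shared lib can go through the stdlib `fiddle` FFI. A GraalVM native-image lib isn't available here, so as a stand-in the sketch resolves `sqrt` from the C math library already linked into the process; loading a native-image `.so` via `Fiddle.dlopen` works the same way (with the caveat that a real native-image lib first needs an isolate created via the functions declared in its generated header, e.g. `graal_create_isolate`).

```ruby
require 'fiddle'

# Resolve a symbol from libraries already loaded into this process.
# For a native-image lib you would instead do:
#   handle = Fiddle.dlopen('libmyclj.so')   # hypothetical lib name
handle = Fiddle::Handle::DEFAULT

sqrt = Fiddle::Function.new(handle['sqrt'],
                            [Fiddle::TYPE_DOUBLE], # arg types
                            Fiddle::TYPE_DOUBLE)   # return type

puts sqrt.call(81.0) # => 9.0
```

The mechanism is the same either way: once native-image has produced a C-ABI shared library, Ruby only sees ordinary exported C functions.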


I’ll play with it