Dimitar Uzunov 18:01:45

This went to the top page on the orange link-share site: I think he was on to something when he described classes and how badly they are often explained, but then reached a totally wrong conclusion. Regardless, this feels like the majority view, if there ever was one, of most programmers I've spoken to.

Lennart Buit 20:01:15

aren't they both kinda orange, the two top link-sharing sites 😛

🍊 1

Agree, it really went nowhere after his initial "why not use a function instead?"


He also seems to be confused between a struct and a class for some reason.


I had some trouble understanding what he was arguing for. Classes in Python are nice and should not be removed. And... Zig and Go should get classes? Also interesting to see "without classes people use maps" interpreted as bad, where Clojurians tend to think of that as good - generic information structures.


I think immutability is key though. It is easy to see why someone might think: if a struct is mutable, what will guarantee its invariants and guard against data corruption? Classes solve this by encapsulating the data and restricting which methods are allowed to change it, so others no longer have direct access to the data, but must go indirectly through the methods, which can then enforce the invariants.


And this is reasonable, and it's why Clojure does the same when you have mutable data: it hides it either behind an Atom, Ref, Agent, or Var, or behind a type (deftype).
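A minimal Clojure sketch of that idea (the names and the invariant are illustrative, not from the discussion): an atom's `:validator` plays the role a class's methods would, rejecting any state transition that breaks the invariant.

```clojure
;; A balance that must never go negative: the validator enforces
;; the invariant on every swap!, much like a class's methods
;; would guard a private field.
(def balance (atom 100 :validator (fn [b] (>= b 0))))

(swap! balance - 30)       ;; ok, balance is now 70
;; (swap! balance - 1000)  ;; would throw: validator rejects -930

@balance                   ;; => 70
```

Callers can only get at the state through `swap!`/`reset!`, so every change goes through the validator, which is exactly the indirection-for-invariants argument above.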


But if you have immutable data structures, the interface to the data structure is enough to guard against corruption, as each user effectively gets their own copy and is free to twist the data however they prefer.


And from that you gain flexibility, like you said: each user can shape the data into their preferred structure/arrangement, modify it as they please, it becomes bound to them, they can add to it whatever they want, etc.
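A quick sketch of the point about immutable structures (plain Clojure; the map and keys are made up for illustration): every "modification" returns a new value, so no caller can corrupt the data another caller holds.

```clojure
;; A shared persistent map.
(def config {:retries 3 :timeout 500})

;; Each user shapes their own version...
(def mine  (assoc config :timeout 1000))
(def yours (dissoc config :retries))

;; ...while the original stays untouched.
config ;; => {:retries 3, :timeout 500}
mine   ;; => {:retries 3, :timeout 1000}
yours  ;; => {:timeout 500}
```

Under the hood these versions share structure rather than being full copies, but observably each user has their own value, which is why the interface alone is enough to guard against corruption.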


I'm curious to hear about other people's Sprint process. I've found that in 2 weeks, frankly, it's impossible to deliver value from start to finish. Apart from minor bug fixes, almost no feature or improvement can be started and shipped to production in 2 weeks. Even if the sprint were extended to 4 weeks, I think you'd still be looking at mostly trivial features and improvements. And realistically, this can't just be my team: what piece of software actually gains a feature or improvement every 2 weeks? Not only does that sound unrealistic, I don't even know if it's desirable. That would mean 26 features a year; 4 years down the line you'd have over 100 features. What product or service actually gets better when bloated with over 100 features? And at that pace, I just can't imagine the code base and architecture being able to truly sustain 100+ features and still let people add more in a 2-week sprint.

In our case, features and improvements take between 8 and 12 weeks, with some more ambitious things easily going to 20, 30, and even 42 weeks from start to finish. But I can't really imagine moving to a 12-week sprint. So what has happened in our case is that our Stories have really just become tasks: partial work that either can't be released to prod, or can't be released in a usable way (behind a feature flag, or a code path not exercised yet), so there is no gained value to the users or the business. The tasks look like: set up the hosts for X, configure the Y for Z, modify package H to take in new field K, etc. Basically they're all technical tasks that mean nothing to a user or stakeholder. Demos are like: here's the code I changed, or here's a log statement, or frankly just: yeah, we started setting up the environment, which is a necessary step. Note that we have full CI and CD, with automated unit, functional, integration, and end-to-end tests, so none of this is a symptom of a lack of CI/CD.

And so our Sprints have become kind of a touch point. Every 2 weeks we assess: ok, did we complete tasks X, Y, Z towards project FOO? Yes? Ok, what are the next 2-3 tasks for it? Let's plan them into the sprint. Alright, what about project BAR? Oh, we couldn't complete task K for it? How come, blocked on something? Ok, what do we need to get unblocked? Plan a task in the coming sprint for that, and carry over the task. It works alright, but it also seems to not be within the spirit of Sprints. It also causes a bit of a disconnect from the real vertical user/business deliverable, and could be said to add a bit of overhead. So I'm curious what others have done with these challenges.


In my experience, teams that are able to focus on outcomes for customers rather than features are able to iterate faster. Continuously defining new tasks, new features becomes a chore. And as you say - a product with 50 new features after 100 sprints might not be better at all, just bloated.


Right, but the question then is: what process do you use to manage the "focus on outcomes"? How do you stay focused on that, make sure progress is made, and make sure you're focusing on the right outcomes?


I think it also depends on the tech, and then structure and approach. If you have a system with real modularity and very minimal code entanglement (referring to dependency issues, shared state, etc.), then you can probably make quick features and changes in such a span. When you don't have that (where I work), you might find yourself with something ready that is too dependent on something else to release yet, or even too risky. I work on a large team in an organization where this is just a hurdle we have to work around, although I wish it wasn't. We are trying to get to a place where we are truly 'agile', with scrums and 2-week sprints that mean something, but in my limited 6 years in software I have never experienced anything that works as perfectly as the book describes. Not to say it can't exist, of course.


@didibus I'm a bit surprised to hear that "almost no feature or improvement can be started and done to production in 2 weeks" -- I think a lot of agile shops would disagree 🙂


What sort of customer value are you building that it takes 8 to 12 weeks to deliver even an initial version of it? We're generally rolling out new features in 2-4 weeks, even for some complex stuff.


That's valuable info as well. What kind of features do you deliver start to finish so quickly? For us, for example, one feature was adding a diff mechanism to processed ingested files, which detects changes and automatically sends the user a notification with a report of the change attached. That took us around 8 weeks I think, and it was one of our most straightforward features. And I don't count the weeks prior spent discussing the feature and coming up with a design for what to do to solve the problem and how to build it, which was probably another 4 weeks of work before we started implementing. It was mostly carried out by one person, with an extra person here and there along the way.


I'd have to dig through our releases in JIRA to come up with specifics. Looks like two engineers get through 40-60 JIRA tickets every month on average between them, looking at the last roughly six months of releases. Lately, quite a few of those tickets have been infrastructure-related and/or preparing the ground for other initiatives, but there is also a steady stream of bug-fix tickets in each release.


For a lot of features, we look at what "steps" add value along the way, that could be built and deployed on a regular cadence as a way to get a complete epic built and deployed over several releases. That's in line with what @U3X7174KS was saying.


I'm discounting everything on the maintenance and operations side. Things like upgrades, security patches, bug fixes, infrastructure work, answering questions, etc. Those often do fit in a Sprint or two. So does that mean the deliverables of value to users and the business in your case also tend to span many sprints and be longer-term Epics? If so, how do you break down the work each Sprint?


I'm also unsure about the "steps that add value along the way" part. Are you saying that for every feature you manage to find smaller 2-week features that eventually add up to a mega feature, but can still be used immediately by users or benefit the business right away, even prior to the whole thing being done?


Yeah, normally, we can find pieces that easily fit in a single "sprint" that still provide some value to some "user".


A lot of our features break down into stuff the internal users need (editing, managing, reporting, analyzing, etc) and stuff the external users will trigger and/or take advantage of, and we can usually see a natural order for getting these pieces to production at least one piece per "sprint".


(we're not using Scrum these days -- we used that when we were doing our initial build -- but once we had a stable product we switched to Kanban... just implementing stuff as it comes up and releasing it to production in regular chunks)


In all honesty, I never really liked sprints. It feels forced to time box "features" and the whole process seems to value getting features done more than delivering actual value. I guess a lot may depend on the product/app you are developing, and the phase you are in. My job basically comes down to delivering the most value possible to the end user. For us it turns out, simplicity is the key to almost every decision we make. If simplicity is at the core, then a process that adds features every two weeks does not make sense. If I can spend an extra few days figuring out if we can make a specific feature simpler, and validate it before we build, then those days will be well spent.


Beliefs based on life lived so far...

Pros: I find 1 or 2 week time-boxes are good for free-flowing communication (with optional daily check-ins). The 1 or 2 week timebox can also be a useful time resolution for historical analysis.

Cons: Such arbitrary boxing fails as a constraint about the future (for planning, estimation, and goal-setting). One, because humans tend to be terrible at estimates, and two, because the future is a function of workflows, deeply influenced by feedback loops and emergent behaviours (life happens). It is not a function of how we would like it to unfold (which is what an estimate is saying).

Why? A time box sets up an artificial play pen which we can game ourselves into subverting. What does it even mean to get N tickets "done" in that time frame? Sure, it has some correspondence to value delivered, but really, to what extent? Measuring value is pretty sticky. It lies in the eye of the beholder, and is a subjective interpretation-cum-extrapolation derived from whatever past we see.

For planning, it's important to think in terms of priorities and uncertainties. One team I worked in had this 3-tier notion of work:
• One is reactionary stuff that needs attention now (a service is failing, an outage happened, we found a severe bug).
• Second is tactical stuff we understand well and definitely want to get done by a certain time (migrate a service to new boxes because AWS has told us they will nuke some class of instances by so-and-so date, execute a data-at-rest encryption plan because of new compliance needs, ship a particular feature because it unblocks a 6-figure deal).
• Third is strategic stuff we need to do to satisfy whatever we think will be true about the near future (a year or two).

Each of these demands a completely different way of working and thinking --- as an individual and as a team. For example, type 3 activity must have started last year to make sure our MTTR is still ~45 mins even though we've scaled 10x by now. The way to keep MTTR at that level is totally different now for <reasons>. We can react and respond fast today because we gradually chipped away at the problem over months, including multiple iterations of design and prototyping to readjust strategy to our changing model of how we wanted to run our systems, a model which was in turn influenced by type 1 and type 2 priorities.

Our team would make tickets to track type 1 and type 2 work breakdowns as a means of communication, but we never did story points or metrics-oriented estimates. Communicating frequently, on a regular cadence, helped us keep all three types of work sorted in our heads. That's how we shipped a tremendous amount of value, while keeping a fairly large and mixed-bag collection of systems up and running. We weren't perfect of course, but I like to think we weren't too bad either :) I guess in hindsight, that model is basically the famous 2 x 2 box of Important (yes/no) vs. Urgent (yes/no).


> • One is reactionary stuff that needs attention now (a service is failing, an outage happened, we found a severe bug).
> • Second is tactical stuff we understand well and definitely want to get done by a certain time (migrate a service to new boxes - AWS has told us they will nuke some class of instances by so-and-so date, execute a data-at-rest encryption plan because of new compliance needs, ship a particular feature because it unblocks a 6 figure deal)
> • Third is strategic stuff we need to do to satisfy whatever we think will be true about the near future (year or two).
@U051MHSEK This separation makes a ton of sense to me - thanks for sharing. How did you prioritize work on the team between people and reactionary/tactical/strategic? Did everyone work on all tracks? How do you prioritize between strategic and tactical work? What kind of work did you do personally?


Well, it all evolved over time. And my explanation is how it happened, not necessarily articulated in those exact words when we were in flight.

I was attached to what we called "production engineering", a sort of devops + dev tools group. I did both kinds of work (e.g. things like executing zero-downtime production db switches, and making internal infra for use by dev teams). We were a team of 5-to-10 people over time, and lucky to have a terrific team lead who created the context and culture. We operated with a shared understanding, rather than strict role separation.

Prod uptime was priority no. 1. So, designated OnCall (primary/secondary) would catch prod issues and triage with the respective service owners. And if it was severe, we were fine dropping everything else and going all hands on deck. The oncall persons would also pick up the quick bug fixes and anything else if it was a quiet day in server-land. We'd discuss this daily.

Beyond that, we would pick tactical stuff on a shared basis. Not pairing, but not alone either, which meant writing things down in code and wiki. The goal here was to get multiple people familiar with the details of our systems. We were expected to communicate bad news, obstacles, or SNAFUs as soon as we encountered them. Pretty much everyone saw all changes at some point, because we had a single shared devops codebase, with trunk-based development. Shared operating context is critical: graphs, metrics, and our public slack channel were the daily bread and butter.

Our team lead was hands-on with prioritising the long-range stuff and he would talk to us regularly about those. This included designing + coding workflows that we used to deliver on type 1 and 2 work. He would firewall us from out-of-band requests, sort out inter-team conflicts, coach us, and work out organizational priorities with the exec team.

Work was intense, but I really liked my time on that team. It was quite a formative experience.

❤️ 3

We followed a fairly strict time-boxed, sprint-based process when we were building the platform because we had a pretty good roadmap of features and found that time-boxing gave us near-future goals and it felt good to see the backlog of features steadily reduce over time. Once we had the platform live, we switched to more of a Kanban model where we pick tickets that are ready to work on, and just work continuously, releasing the trunk to production every two to four weeks depending on what the business folks want (since nearly all our stuff is automatically promotable to production -- by the business team, if they want, or they can ask us to do it -- we can release multiple times a day if we have anything urgent that needs to be done).

gratitude 4

P.S. I'm all for timeboxing. We did that very aggressively in fact, at multiple levels.
A) If anyone was stuck for more than an hour on type 2 stuff, we would bang out a message and ask for help.
B) Our lead coached us to get one thing done per day --- anything at all, but at least get something done, mainly for our own sense of satisfaction. But what do you know, little bits and bobs add up over a week, and compound over a month.
C) For the longer-term stuff we would let our collective sense of priority determine what to re-scope, what to cut, what to double down on, typically on a weekly cadence --- planning meetings and retros.

👍 1

☝️ I think B) here is often overlooked -- and it is extremely important, IMO.


> a fairly strict time-boxed, sprint-based process when we were building the platform because we had a pretty good roadmap of features
I do agree that this is one place where an engine-like system works well: fixed-scope, solved-design-problem type work. There's a certain joy in the momentum of shipping day after day that's hard to describe. I haven't seen that scenario play out in my work (chaotic high-growth startups and/or greenfield projects where I'm an individual contributor/sole owner). I also think it works well only if there's mature leadership/teamwork to keep the scope the scope, and say no to The Feature Creep.

👍 1

Thanks for sharing, Adi and Sean 🙌 The "do at least one thing per day" really resonates with me. There are productive days and unproductive days, but keeping the baseline solid -- even on the bad days -- gives you a ground floor. And small incremental improvements add up, especially when those small improvements let us work more smoothly.


I find the best part of "do one thing a day" isn't so much getting it done, but that it forces you to think each day about what to do next. It's surprising how often you're stalled simply from a lack of clarity on the next step. Actively asking "what is the next thing I can possibly get done today?" can in itself keep you focused on delivering and help you get the right things done.


I recently finally read the "GTD" thing (Getting Things Done), which seems popular with lots of developers. It struck me that a lot of the practices I've seen over the years look much like the practices GTD prescribes for personal time management, just applied at the team level.


> In our case, features and improvements take more between 8 to 12 weeks, @didibus sounds pretty close to Shape Up's 6-week cycles, which of course can be tweaked


Wow, never heard of this. That's a looong read haha. But I will look into it, seems interesting.


My company is also on an 8 week cycle for final deliverables, despite maintaining 2 week sprints in JIRA. Odd, but somehow we have made it this far...


8 weeks makes more sense. But still, say there's something they want and it's just not going to fit in 8 weeks? Then what? Can you really downsize every requirement to fit under that? In a way, for me it's a similar issue as with 2 weeks. In 8 weeks it would probably work a lot better: for most things we'd find a way to size them down to an 8-week deliverable, but I think some things wouldn't fit. Those would be rare, most likely the more ambitious things, or things that touch a lot of moving parts. And that's where I still find it all a bit strange: would that 8 weeks just become a bunch of partial work towards, say, the 12 weeks it truly takes? At some point I just wonder what the benefit of these time slices is anyways.


Derisking is part of the usual reasoning, i.e. there might be a developer<->requirements mismatch, so the longer one works without delivering something that can be factually assessed, the more risk accrues.

Richard Bowen 23:01:56

Hey, what companies do you know of in the automotive industry that use Clojure?


One large car producer from the south of Germany uses Clojure. :)

Richard Bowen 00:01:12

Yea, which one? My guess is Porsche.


I think they're in the automotive space, and they are users of Clojure/Script.

Richard Bowen 23:01:10

@dpsutton, interesting, thanks.