Benjamin C 00:05:16

I'm curious if there are other full-time freelancers / sole proprietors in here. If so, do you have any recommendations for tools/resources to simplify the "business" side of things? Invoices/Policies/Legal etc.


I ejected from my consulting practice and went back to school about 18 months ago. I can't speak to any specific tool recommendations, but one thing I would try differently if I ever go back to it is paying a virtual admin/exec assistant for a few hours a week to handle some of the tedium of those types of tasks, timekeeping, sending routine emails that I way overthink, etc. There are some things I constantly avoid regardless of how easy a tool makes it, so I would rather pay to offload some of that mental/emotional toil onto someone with a different set of neuroses.

πŸ˜… 2
βž• 2
Benjamin C 01:05:07

Hmm, that makes sense. Thank you!


Yep, peace of mind with regards to this stuff is important. Also it would cost me more to do it myself than offloading the work to a third party.


When people do a greenfield project, like building something from scratch, do you bother with code reviews and unit tests at first? Or do you kind of push out a working skeleton/prototype first, and then, when you've got the structure you like and are done with your exploration and refactoring, retrofit tests and get it all code reviewed?


I consider TDD/unit testing and code review to be essential, even at the start of a new project.

❀️ 1

Interesting, I find it slows down figuring out the structure for the code. You get something code reviewed and tested, only to tear it all down and refactor things, so all the code review and tests were wasted. How do you deal with that? Just take the hit?


If it ends up being torn down, yes, that's fine.


And refactoring is a natural part of maintenance so it's going to be in code reviews all the time.


If an engineer decides to build and throw away several possible solutions before they create a pull request to review, that's up to them, but that would be fairly unusual...


Ya, but normally in maintenance mode each iteration goes to prod. But on a greenfield project that's not the case. So, like, your tests for that code that never makes it to prod... I feel that's wasted. And kind of similar for code reviews. Feels silly to review a version that won't make it to prod.


But why aren't those first greenfield steps leading to some sort of production code?


If you're building an MVP, it should be a production candidate, even if its users are only a small, private segment of the target audience.


Because there's an initial chunk needed for end-to-end user behavior, and while you're still exploring the best way to model the problem/APIs and the semantics around them, you're not ready to commit to a client and backward compatibility on them yet.


You still need that tested and reviewed -- because it could stay around and get baked into production... you're not guaranteed it will all be thrown away and new code built from scratch "properly".


Ya, an MVP is what I'm talking about. But it's not like 1 person builds the entire MVP solo on their local. The team will all work on it until it's ready to be released. So I'm talking about all those pre-release commits.


Nearly every startup makes that mistake πŸ™‚


Ya, the old prototype is now production switcheroo 😝


There's no such thing as a "pre-release" commit. Everything is committed. Everything is reviewed. Everything is tested. Otherwise you're building on sand.

βœ… 1

Those tests and reviews are important to either validate the approach or highlight problems that might need a different approach.

βœ… 1

Generally I feel I like to see something working first, then I go and get it reviewed and add a test suite.


Everything about that rewrite was greenfield: new branding, new UI/UX, new requirements (based on lessons the business had learned from their first near-decade of operation), new platform/stack. Everything was new.


And you were writing tests and doing CRs on all commits even prior to MVP release?


Since that project started, we've switched bug tracker and git hosting, we've switched CI, we've switched search engines, we've switched languages completely, we've switched web servers. And we just migrated it all to new infrastructure in a new data center. And it was all tested and reviewed at every step of the way, even though none of the original code exists now and most of the infrastructure has changed too.


As it says in that article: step 7 was CI, so unit testing and deployment to a staging server were fully automated, and step 8 was to write (and test and review) the first feature as an MVP.


So that first feature was written by one developer from start to finish?


We were putting a new team together for that project, with a new management structure, so a lot of "people" stuff and communication had to be hammered out, which wouldn't be needed for any new projects...


No, that first feature was developed by a team of three or four developers as I recall.


How did they work together on it without committing?


They didn't. That's exactly what I'm saying.


Ok, so they'd write some code, unit test it, get it code reviewed, and push it even though it didn't deliver on the MVP feature yet. And iterated?


From "day one" they worked the way they continued to work, collaborating to produce new features for deployment to "production" (which was just a staging server at first since the only "customers" were internal for the first several months), with everything getting tests and reviews as we went along.


It didn't take very long to build that first feature (we deliberately picked something simple) but everyone was involved in some part of the machinery that supported it.


Sometimes one developer built one feature -- which would be reviewed with its automated tests before merging, to trigger CI to integrate/test/deploy it -- sometimes multiple developers worked on a feature branch together.


Would your feature branch commits also go through code review and full test coverage?


Before merging, yes. And developers often provided feedback on shared code on a branch before that point too.


Okay, well good to know. Sorry for sounding challenging haha, I've always done it the other way: get the MVP working, then once happy with it and with the code design, add tests and have it all code reviewed. But I'm thinking of trying it with code reviews and tests throughout, because I find junior developers are thrown off by the freedom of not having them, and also sometimes they need the constant guidance. And this project will have a lot of juniors. But my worry is that it delays the delivery and results in worse initial code design, because of the overhead to experimentation.


Yeah, I would imagine it could be very disorienting being thrown into a project that has no "structure" compared to other projects and then for someone to declare "OK, playtime's over! Time to start working the same way as other projects do! Oh, and by the way, all that prototype code you've written? I want you to spend several days writing tests for it and doing code reviews and fixing all the issues! What? It's hard to write tests for this code because it was written without testing in mind? Well, that's your problem now..."


Ya true. My prior greenfields were often like 2 or 3 seniors, with everyone knowing that we'd be adding tests, so our code would already be written with those expectations; we just wouldn't commit to tests until we were like, ok, this piece is good now, we can freeze the APIs and the general design for it, let's add tests.


But honestly, that phase of "we have a lot of tests to write now, and a lot of code to review" isn't fun, and takes a while as well. And I'm not sure that doing it as you go would have ended up taking any longer.


Interestingly, the internet seems to be divided on the topic.


Quelle surprise! I'm shocked, shocked I tell you, that the Internet is divided on something 🀣

πŸ˜‚ 2

There's a big mindset in SV startups that you should just "go fast and break stuff" and that means building any old cr*p as your MVP as fast as possible to get it in front of "customers" -- and I just hate that mindset in so many ways.


(I hate startup culture in so many different ways, TBH)

βž• 1

I've never done a startup, so I can't say for sure haha. It intrigues me though; might be my next gig, so I'll know how it is for real. I definitely do tend to enjoy this: even though it might be a bit foul, so NSFW warning haha. Mostly I just find it funny, but also, I feel there's too much in the way of just having people program (at least at the jobs I've had). Anyways, what's funny for me is I actually worry the code will be worse, because I'm afraid the tests will get in the way of experimentation and finding the ideal design. But I think that can be mitigated by just being willing to refactor even if it breaks all the tests.


Oh dear, Zed Shaw... he is just... so toxic and awful... 😞


I don't think there's any one size fits all. It depends on the project, the team, your working style, etc.

πŸ‘ 1

> But honestly, that phase of: we have a lot of tests to write now, and a lot of code to review, isn't fun, and takes a while as well. And I'm not sure that doing it as you go would have ended up taking any longer.

Personally, I do self-reviews, writing, and tons of testing even as a single person on my own projects. I have at least as much fun thinking about problems as I do writing code to solve them and verifying that what I've made works well. Adjusting my attitude to understand reviews as teaching and learning opportunities helped me a lot. Mature engineering teams do this from day zero, for everything, all the time, because it keeps the bus factor high, velocity and throughput high, and, crucially, because it makes "day one" a little bit easier. Most of the cost of software is in maintenance. It's good to assume everything is brownfield by default.

☝️ 1

Some of the most valuable aspects of today's version of software are the earlier versions of the warez, starting from day zero. Because day zero was full of faulty, untested assumptions, and today we may have addressed them or punted on them for <reasons>. If we don't know the past, we don't know why we are where we are today. The ADRs, design docs, well-written commit messages, and code reviews left behind by former selves remain useful long after they go obsolete.


(Pet peeve: a mechanical reason why code reviews seem onerous is that popular git forges are absolute garbage for reviews. Patch-based diffing workflows Γ  la Gerrit, sourcehut, and even format-patch are infinitely better and faster for everyone concerned.)

☝️ 1

> Mature engineering teams do this from day zero for everything all the time

It's really hard to go back to just slingin' code after one gets a taste of this.


@U04V70XH6 I never followed him. But I would not be surprised that, when you make a website like that, you're not the most pleasant person in the world haha.


@U051MHSEK I don't find code reviews via GitHub/BitBucket pull requests to be onerous, but the ability to switch from diff view to side-by-side view is critical functionality in my experience there.


I feel maybe some of you assume that I'm suggesting you don't test the code, but you do; what you delay is the implementation of a regression suite. In Clojure, you'd test the code as you write it in the REPL, for example. And you'd run the MVP, have UAT on it, iterate on feedback, and rework whatever you need. And once you've got something you're ready to commit to long term, you add a regression suite, because you'll now have real users that you don't want to accidentally break. But I think I'll try writing and refactoring the regression test suite even during the MVP phase and see if it really slows things down or not. Knowing others have been successful that way, and don't seem to mention it costing them time or arriving at a worse solution because of it, is reassuring. Thanks all!
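For concreteness, here's a sketch of that REPL-first workflow: the same expressions you'd eval interactively while exploring become the regression suite once the API settles. The `slugify` function is a made-up example for illustration, not anything from this discussion.

```clojure
(ns example.slugify-test
  (:require [clojure.string :as str]
            [clojure.test :refer [deftest is]]))

;; During exploration you'd just eval calls to this in the REPL,
;; e.g. (slugify "Hello, World!") and eyeball the result.
(defn slugify
  "Lower-cases s and collapses runs of non-alphanumerics into single hyphens."
  [s]
  (-> s
      str/lower-case
      (str/replace #"[^a-z0-9]+" "-")
      (str/replace #"(^-+|-+$)" "")))

;; Once you're ready to commit to the API, those throwaway REPL checks
;; get captured as a regression test so real users don't get broken later.
(deftest slugify-test
  (is (= "hello-world" (slugify "Hello, World!")))
  (is (= "a-b-c" (slugify "a  b  c"))))
```

The point of the sketch is just that the "retrofit" cost is low if the REPL experiments are written as plain expressions that can be pasted under `is` forms.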


I really do find that writing tests helps me reason about APIs at all levels in the code and it helps me think about edge cases, which in turn all helps with design and code structure.


The less sure I am about how to do something, the more tests I tend to write upfront. That's how I figure out "how should this work and how should it be designed?"


Tests with assertions, multiple test cases, happy paths, and error paths? Or just a test like a playground where you can try calling your function/unit to get a feel for it?


"It depends". I usually have at least one case I expect should not work, as well as one or two happy-path cases. When I'm building out a new API, I often think about what input should be disallowed right from the get-go, and write tests to verify those cases are correctly rejected. It's good "documentation" for the design thinking too, showing what you expect the code to not handle.
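A minimal `clojure.test` sketch of that shape: a couple of happy-path cases plus inputs deliberately rejected up front. The `parse-port` function and its nil-on-invalid contract are assumptions made up for this example.

```clojure
(ns example.parse-port-test
  (:require [clojure.test :refer [deftest is]]))

;; Hypothetical function under test: parses a TCP port from a string,
;; returning a number for valid input and nil for anything out of range
;; or unparseable.
(defn parse-port [s]
  (try
    (let [n (Long/parseLong s)]
      (when (<= 1 n 65535) n))
    (catch NumberFormatException _ nil)))

(deftest parse-port-test
  ;; happy paths
  (is (= 80 (parse-port "80")))
  (is (= 65535 (parse-port "65535")))
  ;; inputs we decided from the get-go to reject; these tests document
  ;; what the API is not expected to handle
  (is (nil? (parse-port "0")))
  (is (nil? (parse-port "not-a-port"))))
```

The rejection cases are the "documentation" part: a reader can see at a glance what the design considers out of scope.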


I never get my APIs right the first time haha, like I dramatically end up changing them: sometimes breaking one API into two or three, sometimes merging two APIs back into one, changing the number of arguments, their shape, etc.


Code reviews and tests are tools. I think their utility makes them worth trying and learning about, but they're not the kind of tool that is required everywhere, all the time. This is my personal take, but I think different people have different styles for learning and exploring. Part of the task isn't just learning what works, but what works for you and your team. The type of project will also have a big influence on how effective code reviews and tests are. Tests can be more effective if you're building your 100th web API that reads and writes to a database. They might be less effective if you're writing a program that addresses a problem you're unfamiliar with.


> I don't find code reviews via GitHub/BitBucket pull requests to be onerous but the ability to switch from diff view to side-by-side view is critical functionality in my experience there.

@U04V70XH6 I have a huge rant about this πŸ˜… so in the interest of world peace, let's just say that I'm firmly in camp Gerrit!

πŸ˜† 1
Amit Rathore 04:05:29

πŸ’― code review process and architecture review and design documents ftw


You've never wasted code reviews and tests though? As you're finalizing the structure, design, and APIs?

Amit Rathore 13:05:14

Plenty - but always with the struts to support change

πŸ‘ 1

Thanks for the feedback


I'm in London for a few more days. Any London clojurians have recommendations?


How many more days?


Anyone else having GitHub remember tags and settings between issues and PRs? It's super annoying and I can't find a setting where I can switch the behaviour off.