#off-topic
2022-05-06
Benjamin C00:05:16

I'm curious if there are other full-time freelancers / sole proprietors in here. If so, do you have any recommendations for tools/resources to simplify the "business" side of things? Invoices/Policies/Legal etc.

chucklehead00:05:18

I ejected from my consulting practice and went back to school about 18 months ago. I can't speak to any specific tool recommendations, but one thing I would try differently if I ever go back to it is paying a virtual admin/exec assistant for a few hours a week to handle some of the tedium of those types of tasks, timekeeping, sending routine emails that I way overthink, etc. There are some things I constantly avoid regardless of how easy a tool makes it, so I would rather pay to offload some of that mental/emotional toil onto someone with a different set of neuroses.

πŸ˜… 2
βž• 2
Benjamin C01:05:07

Hmm, that makes sense. Thank you!

mpenet05:05:25

Yep, peace of mind with regards to this stuff is important. Also it would cost me more to do it myself than offloading the work to a third party.

didibus04:05:12

When people do a greenfield project, like building something from scratch, do you bother with code reviews and unit tests at first? Or do you kind of push out a working skeleton/prototype first, and then, when you've got the structure you like and are done with your exploration and refactoring, retrofit tests and get it all code reviewed?

seancorfield04:05:33

I consider TDD/unit testing and code review to be essential, even at the start of a new project.

❀️ 1
didibus04:05:04

Interesting, I find it slows down figuring out the structure for the code. You get something code reviewed and tested, only to tear it all down and refactor things, so all the code review and tests were wasted. How do you deal with that? Just take the hit?

seancorfield04:05:19

If it ends up being torn down, yes, that's fine.

seancorfield04:05:57

And refactoring is a natural part of maintenance so it's going to be in code reviews all the time.

seancorfield04:05:09

If an engineer decides to build and throw away several possible solutions before they create a pull request to review, that's up to them, but that would be fairly unusual...

didibus04:05:30

Ya, but normally in maintenance mode each iteration goes to prod. But on greenfield that's not the case. So the tests for code that never makes it to prod, I feel that's wasted. And kind of similar for code reviews. Feels silly to review a version that won't make it to prod.

seancorfield04:05:38

But why aren't those first greenfield steps leading to some sort of production code?

seancorfield04:05:37

If you're building an MVP, it should be a production candidate, even if its users are only a small, private segment of the target audience.

didibus04:05:49

Because there's an initial chunk needed for end-to-end user behavior, and you're still exploring the best way to model the problem/APIs and the semantics around them, so you're not ready to commit to a client and backward compatibility on them yet.

seancorfield05:05:09

You still need that tested and reviewed -- because it could stay around and get baked into production... you're not guaranteed it will all be thrown away and new code built from scratch "properly".

didibus05:05:19

Ya, an MVP is what I'm talking about. But it's not like one person builds the entire MVP solo on their local machine. The team will all work on it until it's ready to be released. So I'm talking about all those pre-release commits.

seancorfield05:05:22

Nearly every startup makes that mistake πŸ™‚

didibus05:05:49

Ya, the old "prototype is now production" switcheroo 😝

seancorfield05:05:01

There's no such thing as a "pre-release" commit. Everything is committed. Everything is reviewed. Everything is tested. Otherwise you're building on sand.

βœ… 1
seancorfield05:05:34

Those tests and reviews are important to either validate the approach or highlight problems that might need a different approach.

βœ… 1
didibus05:05:14

Generally I feel I like to see something working first, then I go and get it reviewed and add a test suite.

seancorfield05:05:46

Everything about that rewrite was greenfield: new branding, new UI/UX, new requirements (based on lessons the business had learned from their first near-decade of operation), new platform/stack. Everything was new.

didibus05:05:19

And you were writing tests and doing CRs on all commits even prior to MVP release?

seancorfield05:05:44

Since that project started, we've switched bug tracker and git hosting, we've switched CI, we've switched search engines, we've switched languages completely, we've switched web servers. And we just migrated it all to new infrastructure in a new data center. And it was all tested and reviewed at every step of the way, even though none of the original code exists now and most of the infrastructure has changed too.

seancorfield05:05:51

As it says in that article: step 7 was CI, so unit testing and deployment to a staging server were fully automated, and step 8 was to write (and test and review) the first feature as an MVP.

didibus05:05:00

So that first feature was written by one developer from start to finish?

seancorfield05:05:09

We were putting a new team together for that project, with a new management structure, so a lot of "people" stuff and communication had to be hammered out, which wouldn't be needed for any new projects...

seancorfield05:05:39

No, that first feature was developed by a team of three or four developers as I recall.

didibus05:05:01

How did they work together on it without committing?

seancorfield05:05:13

They didn't. That's exactly what I'm saying.

didibus05:05:18

Ok, so they'd write some code, unit test it, get it code reviewed, and push it even though it didn't deliver on the MVP feature yet. And iterate?

seancorfield05:05:26

From "day one" they worked the way they continued to work, collaborating to produce new features for deployment to "production" (which was just a staging server at first since the only "customers" were internal for the first several months), with everything getting tests and reviews as we went along.

seancorfield05:05:29

It didn't take very long to build that first feature (we deliberately picked something simple) but everyone was involved in some part of the machinery that supported it.

seancorfield05:05:47

Sometimes one developer built one feature -- which would be reviewed with its automated tests before merging, to trigger CI to integrate/test/deploy it -- sometimes multiple developers worked on a feature branch together.

didibus05:05:38

Would your feature branch commits also go through code review and full test coverage?

seancorfield05:05:34

Before merging, yes. And developers often provided feedback on shared code on a branch before that point too.

didibus05:05:12

Okay, well good to know. Sorry for sounding challenging haha, I've always done it the other way: get the MVP working, then once happy with it and with the code design, add tests and have it all code reviewed. But I'm thinking of trying it with code reviews and tests throughout, because I find junior developers are thrown off by the freedom of not having them, and also sometimes they need the constant guidance. And this project will have a lot of juniors. But my worry is that it delays the delivery and results in worse initial code design, because of the overhead it puts on experimentation.

seancorfield05:05:50

Yeah, I would imagine it could be very disorienting being thrown into a project that has no "structure" compared to other projects and then for someone to declare "OK, playtime's over! Time to start working the same way as other projects do! Oh, and by the way, all that prototype code you've written? I want you to spend several days writing tests for it and doing code reviews and fixing all the issues! What? It's hard to write tests for this code because it was written without testing in mind? Well, that's your problem now..."

didibus05:05:14

Ya true. My prior greenfields were often like 2 or 3 seniors, with everyone knowing that we'd be adding tests, so our code was already written with those expectations; we just wouldn't commit to tests until we were like, ok, this piece is good now, we can freeze the APIs and the general design for it, let's add tests.

didibus05:05:47

But honestly, that phase of: we have a lot of tests to write now, and a lot of code to review, isn't fun, and takes a while as well. And I'm not sure that if you had just done it as you went, it would have ended up taking any longer.

didibus05:05:32

Interestingly, the internet seems to be divided on the topic.

seancorfield05:05:18

Quelle surprise! I'm shocked, shocked I tell you, that the Internet is divided on something 🤣

πŸ˜‚ 2
seancorfield05:05:24

There's a big mindset in SV startups that you should just "go fast and break stuff" and that means building any old cr*p as your MVP as fast as possible to get it in front of "customers" -- and I just hate that mindset in so many ways.

seancorfield05:05:53

(I hate startup culture in so many different ways, TBH)

βž• 1
didibus06:05:39

I've never done a startup, so I can't say for sure haha. It intrigues me though; it might be my next gig, so I'd know how it is for real. I definitely do tend to enjoy this: http://programming-motherfucker.com/ even though it might be a bit foul, so NSFW warning haha. Mostly I just find it funny, but also, I feel there's too much in the way of just having people program (at least at the jobs I've had). Anyways, what's funny for me is I actually worry the code will be worse, because I'm afraid the tests will get in the way of experimentation and of finding the ideal design. But I think that can be mitigated by just being willing to refactor even if it breaks all the tests.

seancorfield06:05:33

Oh dear, Zed Shaw... he is just... so toxic and awful... 😞

phronmophobic06:05:33

I don't think there's any one size fits all. It depends on the project, the team, your working style, etc.

πŸ‘ 1
adi10:05:45

> But honestly, that phase of: we have a lot of tests to write now, and a lot of code to review, isn't fun, and takes a while as well. And I'm not sure that if you had just done it as you went, it would have ended up taking any longer.
Personally, I do self-reviews, writing, and tons of testing even as a single person on my own projects. I have at least as much fun thinking about problems as I do writing code to solve them and verifying that what I've made works well. Adjusting my attitude to see reviews as teaching and learning opportunities helped me a lot. Mature engineering teams do this from day zero, for everything, all the time, because it keeps the bus factor high, keeps velocity and throughput high, and crucially, because it makes "day one" a little bit easier. Most of the cost of software is in maintenance. It's good to assume everything is brownfield by default.

☝️ 1
adi10:05:53

Some of the most valuable aspects of today's version of software are the earlier versions of the warez, starting from day zero. Because day zero was full of faulty, untested assumptions, and today we may have addressed them or punted on them for <reasons>. If we don't know the past, we don't know why we are where we are today. The ADRs, design docs, well-written commit messages, and code reviews left behind by our former selves remain useful long after they go obsolete.

adi10:05:48

(Pet peeve: a mechanical reason why code reviews seem onerous is that popular git forges are absolute garbage for reviews. Patch-based diffing workflows a la Gerrit, SourceHut, and even plain git format-patch are infinitely better and faster for everyone concerned.)

☝️ 1
adi10:05:44

> Mature engineering teams do this from day zero for everything all the time
It's really hard to go back to just slingin' code after one gets a taste of this.

didibus16:05:11

@U04V70XH6 I never followed him. But I would not be surprised that when you make a website like that, you're not the most pleasant person in the world haha.

seancorfield17:05:28

@U051MHSEK I don't find code reviews via GitHub/BitBucket pull requests to be onerous but the ability to switch from diff view to side-by-side view is critical functionality in my experience there.

didibus17:05:57

I feel maybe some of you assume that I'm suggesting you don't test the code, but you do; what you delay is the implementation of a regression suite. In Clojure you'd test the code as you write it in the REPL, for example. And you'd run the MVP, have UAT on it, iterate on feedback, and rework whatever you need. And once you've got something you're ready to commit to long term, you add a regression suite, because you'll now have real users that you don't want to accidentally break. But I think I'll try writing and refactoring the regression test suite even during the MVP phase and see if it really slows things down or not. Knowing others have been successful that way and don't seem to mention it costing them time or arriving at a worse solution because of it is reassuring. Thanks all!
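(For illustration only: a minimal Clojure sketch of the two workflows being contrasted above -- ad hoc REPL checks during exploration versus promoting those same checks into a clojure.test regression suite. `parse-order` is a hypothetical function invented for this example, not code from any project discussed here.)

```clojure
;; Hypothetical namespace and function, used only to illustrate the workflow.
(ns example.order
  (:require [clojure.test :refer [deftest is]]))

(defn parse-order
  "Parse a raw order map; returns nil for input we consider invalid."
  [{:keys [id qty]}]
  (when (and (int? id) (pos-int? qty))
    {:id id :qty qty}))

;; Exploratory checks, evaluated ad hoc in the REPL while the design is moving:
(comment
  (parse-order {:id 1 :qty 3})   ;=> {:id 1, :qty 3}
  (parse-order {:id 1 :qty 0})   ;=> nil
  )

;; The same checks promoted into a regression test -- either once the API
;; settles (the "delay the suite" approach) or from day one (the approach
;; argued for above):
(deftest parse-order-test
  (is (= {:id 1 :qty 3} (parse-order {:id 1 :qty 3})))
  (is (nil? (parse-order {:id 1 :qty 0}))))
```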

seancorfield17:05:33

I really do find that writing tests helps me reason about APIs at all levels in the code and it helps me think about edge cases, which in turn all helps with design and code structure.

seancorfield17:05:16

The less sure I am about how to do something, the more tests I tend to write upfront. That's how I figure out "how should this work and how should it be designed?"

didibus17:05:22

Tests with assertions, multiple test cases, happy paths, and error paths? Or just a test like a playground where you can try calling your function/unit to get a feel for it?

seancorfield17:05:56

"It Depends". I usually have at least one case I expect should not work as well as one or two happy path cases. When I'm building out a new API, I often think about what input should be disallowed right from the get-go, and write tests to verify those cases are correctly rejected. It's good "documentation" for the design thinking too, showing what you expect code to not handle.

didibus17:05:21

I never get my APIs right the first time haha, like I dramatically end up changing them, sometimes breaking one API into two or three, sometimes merging two APIs back into one, changing the number of arguments, their shape, etc.

phronmophobic17:05:30

Code reviews and tests are tools. I think their utility makes them worth trying and learning about, but they're not the kind of tool that is required everywhere, all the time. This is my personal take, but I think different people have different styles for learning and exploring. Part of the task isn't just learning what works, but what works for you and your team. The type of project will also have a big influence on how effective code reviews and tests are. Tests can be more effective if you're building your 100th web API that reads and writes to a database. They might be less effective if you're writing a program that addresses a problem you're unfamiliar with.

adi01:05:23

> I don't find code reviews via GitHub/BitBucket pull requests to be onerous but the ability to switch from diff view to side-by-side view is critical functionality in my experience there.
@U04V70XH6 I have a huge rant about this 😅 so in the interest of world peace, let's just say that I'm firmly in camp Gerrit!

πŸ˜† 1
Amit Rathore04:05:29

πŸ’― code review process and architecture review and design documents ftw

didibus04:05:41

You've never wasted code review and tests though? As you're finalizing the structure, design and APIs?

Amit Rathore13:05:14

Plenty - but always with the struts to support change

πŸ‘ 1
didibus16:05:50

Thanks for the feedback

emccue08:05:34

I'm in London for a few more days. Any London clojurians have recommendations?

dharrigan08:05:57

How many more days?

pez20:05:50

Is anyone else having GitHub remember tags and settings between issues and PRs? It's super annoying and I can't find a setting where I can switch the behaviour off.