#off-topic
2022-01-23
hiredman00:01:21

The big difference is usually QA is testing against a complete running system without knowledge of the internals of the system.

hiredman00:01:20

Often devs write unit tests or limited integration tests, and write them from the point of view of knowing the internals

hiredman00:01:29

QA is a second set of eyes with a different point of view

mjw04:01:09

Yes, I can see that role being useful in larger organizations building larger applications that can't be managed by a single team, or even by a small number of teams. But otherwise I don't see why developers shouldn't be expected to write system-wide tests in addition to unit and integration tests.

andy.fingerhut04:01:19

I have been in an organization where developers had so many customer-found and internal-QA-found issues to work on that writing tests wasn't really recognized as part of a developer's job; the expectation was to throw code over the wall to the internal QA team. The very few developers who did write their own extensive "module-level" or "unit-level" tests were sometimes asked by their managers why they spent so much time doing so. I would consider that a sign of a software dev organization that isn't rewarding developer testing activities enough, to the point where it became culturally semi-discouraged. Not a good place to be long term, in my opinion, but it happened in that place, at least.

Max05:01:19

QA is tricky, I haven’t seen it done in a really satisfactory way before. On one hand you want devs to own quality, and they’re the experts on how the software should work and the ways it could break. On the other hand, for quality to be high you need an org-wide strategy for how to test what where when. And there’s problems you don’t want to have to solve N times for N teams, like testing infra, spotting consistent quality process gaps, etc. Unfortunately most QA people on the market aren’t (skilled) software engineers, so you end up with skill/perspective/tool/language gaps between devs and QA, which creates silos, which leads to chucking quality over the fence. For example, at my current org, the devs write Clojure but QA uses Python. I’d be surprised if QA was familiar with property-based testing or mutation testing.
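Property-based testing, for anyone unfamiliar, can be sketched without any library: generate random inputs and assert invariants that must hold for *every* input, not just hand-picked cases. A minimal hand-rolled example in Python (the function under test and the properties are illustrative, not from this discussion):

```python
import random

def check_sort_properties(trials=200):
    """Hand-rolled property-based test of sorted():
    generate random inputs and assert invariants that
    must hold for any input whatsoever."""
    for _ in range(trials):
        xs = [random.randint(-100, 100) for _ in range(random.randint(0, 20))]
        ys = sorted(xs)
        # Property 1: the output is ordered.
        assert all(a <= b for a, b in zip(ys, ys[1:]))
        # Property 2: the output is a permutation of the input
        # (same elements with the same multiplicities).
        assert len(ys) == len(xs)
        assert all(xs.count(v) == ys.count(v) for v in xs)
    return True
```

Libraries like test.check (Clojure) or Hypothesis (Python) add generator combinators and shrinking of failing cases on top of this basic loop.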

mauricio.szabo19:01:45

I also never saw a QA team that was satisfactory. One problem I saw at one or two companies I worked with was the mentality that the job of a QA team is to find quality issues, so when the developers evolved to the point of not having those issues, lots of nitpicking started to happen, like "this UI can't go to production because it's off by 5px". Also, even when QA is able to write tests, there's the question of who feels the pain: devs who need to run tests many times while building a feature will suffer with slow tests that QA doesn't have to run as often, and QA doesn't trust devs' tests that much, so they tend to rewrite some of those tests...

Max05:01:11

It’s kind of like DevOps: I think what you really want is devs specialized in QA, but hiring/retaining devs in those roles is nigh impossible

adi05:01:21

No one department can assure quality. That is the job of every department. Quality is an aggregate thing. It's better to discuss in terms of testing. Should you test in many different ways? Yes, absolutely, including having dedicated teams. To what extent depends on budget and criticality of the software being written.

adi05:01:32

Dan Luu has a couple of nice posts on the subject of testing:
• Given that devs spend the effort they do on testing, what can we do to improve testing? https://danluu.com/testing/
• Testing vs. informal reasoning https://danluu.com/tests-v-reason/
And it is worth reading stuff written by old hands like Cem Kaner (https://kaner.com/) and James Bach (https://www.satisfice.com/).

mjw14:01:37

Thanks! I'm not questioning the value of tests, but rather the value of creating a whole separate group that owns responsibility for them, with the inevitable consequence that everyone else no longer owns that quality in the same way.

adi16:01:59

Well, I think it's important to distinguish quality and testing. They are not the same. Testing is just a tool in service of quality. Quality is an organizational responsibility because every function affects the quality of the end product or service. Thus any organization that says "that group is solely responsible for quality" is actually abdicating responsibility. That attitude fails in manufacturing, in civil construction, in services, pretty much everywhere.

adi16:01:05

So I basically agree with your opinion; just with the distinction of terms.

adi16:01:25

To connect that with a real-life anecdote: at a former employer, our biggest enterprise customers told us they stuck with us because our MTTR was under an hour even after major outages. Our direct competitor (a publicly traded company) would sometimes leave them down for several hours to a day. We could do that because of the systems our production engineering group had built. So the customer experience of quality was "yes, we expect things to fail now and then, but we want to be live again as soon as possible", and operational excellence made that possible.

adi16:01:36

Analogous to quality, safety is not the job of one department. Pretty much all the arguments made in Safety Differently (worth a watch) also apply to quality (what does it take to get safety right?). https://www.youtube.com/watch?v=moh4QN4IAPg

mjw00:01:02

Thanks, this is all helpful. And yes, I fully agree that "tests" and "quality" are not identical concepts.

The context for my original question is that the organization I work with (and others like it in the past) has separate QA personnel who not only test applications for bugs, but also write large numbers of automated end-to-end test scenarios. The result is that, especially historically, applications have very few (if any) tests written by developers. The mentality is that developers can write application code, toss it over a wall to QA, and let QA worry about finding defects. The pressure of meeting a deadline increases the number of corners cut, and therefore the number of bugs QA uncovers, adversely affecting everyone's impression of quality.

A number of people within the company recognize that this is a problem, and there is even leadership that will acknowledge the same. However, once you're back in day-to-day life on a team, you're back to working with developers who are used to not writing tests, testers who are used to compensating for developers who don't test, business users who are used to not sleeping at night until they know they have a comprehensive set of e2e tests, and leadership that is used to giving that to them.

I will admit that this context has given me tunnel vision when it comes to seeing where QA fits outside this specific paradigm. So again, this conversation is helpful to me.

👍 1
adi04:01:02

Yes, what you describe is endemic to many organizations. FWIW, my thoughts formed because I started in tech doing testing work and hiring for testing. We tested backends and wrote e2e integration test suites in Clojure. Developers were expected to write unit tests and integration tests. We came at things more from a systems point of view. We would also get into the Clojure code and help developers debug stuff, but we mainly focused on the e2e testing work required to increase confidence in features. A good tester is worth their weight in gold. Sadly it's a third-class, low-status job in the industry at large. I think the bias is because "manual" testing is seen as donkey work, far beneath the station of "people who can code". A bit tongue-in-cheek: well, if you're writing the code and making the bugs, and the other person is finding them, guess who's smarter?

adi04:01:20

My colleague in testing did some good work like this: https://www.youtube.com/watch?v=YOsfPrgNY_M

👍 1
mjw14:01:11

> Sadly it's a third-class low-status job in the industry at large.
What do you think would be required to change this mentality? I work with testers who are definitely bright and very good at their jobs, but I think it would be really beneficial if they were allowed to put their skills toward helping the entire team improve its own testing skills.
> Well if you're writing the code and you're making the bugs, and the other person is finding them. Guess who's smarter?
A big part of my frustration is that developers should find these bugs before they ever reach a tester's desk, and that we should make an effort to train developers to test the same way we've trained testers. The extent to which these two roles have been so cleanly separated is absurd to me. The sad thing is you say the "guess who's smarter" bit is somewhat tongue-in-cheek, but it's really not. That whole separation of roles is predicated on the idea that developers cannot be trusted to deliver quality; I see evidence of this when I point out that I'd previously always written my own e2e tests and am told "that's like accepting your own story" or "that's like letting the fox guard the henhouse".

adi17:01:07

Oh, now this is becoming a conversation meant for a bar, my friend :) Short answers:
> What do you think would be required to change this mentality?
Within a company? Better bosses. Industry-wide? I have no idea.
> That whole separation of roles is predicated on the idea that developers cannot be trusted to deliver quality
Somewhat agree, but I think it falls out of weak goal alignment and poor communication.

mjw18:01:00

If you ever happen to be in the Chicago, USA area...

🍻 1
adi18:01:31

Maybe in 2025 :)

😆 1
adi06:01:43

This is one of my favourite resources: What Is a Good Test Case? https://kaner.com/pdfs/GoodTest.pdf I like to give this to people and ask them: what type of testing should we ask of programmers, and what should we ask of testing specialists?

💯 1
p-himik07:01:33

Thanks for the links! I love the work of Dan Luu, but haven't heard of the other guys before.

vemv17:01:00

Anyone ever experienced this one? git ops over the network will stall until I reset my wifi connection. Could be just my ISP messing with stuff.

seancorfield18:01:08

Yup. We get this at work with one of the VPN endpoints we use. If I open a VPN to our old zone, git ops take forever to complete (they do eventually complete but they take minutes rather than seconds). If I open a VPN to the new zone that we're migrating everything into, git ops complete normally (in seconds). So there's something funky about network routing for us, presumably through the old zone network to BitBucket cloud, that isn't there for non-VPN access and isn't there in the new zone network.

seancorfield18:01:00

(and there are no knobs to dial on the VPN software to force certain routing to not go through the VPN, before anyone says "Oh, just tweak your VPN settings" 🙂 )

seancorfield18:01:11

So, yes, it's possible that some changes in network setup -- beyond your control -- could cause this. You could use traceroute to your remote repo servers at different points to see if you could figure it out, but there's probably not much you can do about it.

vemv02:01:19

thanks for the insights! 👀

qqq17:01:24

Besides ()'s, how does Groovy compare to Clojure?

mauricio.szabo19:01:31

My last experience with Groovy (with Grails) was simply awful. Probably one of the worst language/framework combinations I've worked with: the compiler entered an infinite loop at least once a week, specific JVM versions crashed compilation, static types appeared in the code but were only checked at runtime, incremental compilation was a joke (when it worked, it compiled more than it needed to; when it didn't, I got runtime errors like ThisObject can't be converted to ThisObject), and compilation errors would disappear when I tried to compile a second time...

qqq19:01:26

Hmm, so you would not recommend learning Groovy as a way to extend IntelliJ?

ericdallo19:01:44

@UQTHDKJ8J spent some time with the IntelliJ clojure-extras plugin, and what he said to me is that he misses clojure 😂

clojure-spin 1
mauricio.szabo19:01:29

I don't know, I don't use IntelliJ 😄

brcosta21:01:37

I never did anything relevant with groovy, just toys, so can’t really recommend it! But if you want to extend IntelliJ I can recommend Kotlin - very well supported and easy to learn, it’s not clojure but at least interop works ok 😉

👍 1
qqq23:01:48

Last I checked Kotlin did not have a REPL. I was hoping Groovy would offer a nice REPL + optional typing. Unfortunately, it looks like my plan won't work.

mauricio.szabo19:01:45
replied to a thread: "I also never saw a QA team that was satisfactory. …"

mjw01:01:02

This is more or less where we're at now. Developers know they need more tests and do seem to be making an effort to improve the quality of both the code they write and the experience they deliver to our users. At the same time, QA testers continue to churn out tests that developers have already written (or should have written), leads continue to push for a large e2e suite, and the product owners continue to demand those e2e suites. In other words, different people seem to be getting different messages/incentives.

🤝 1
andy.fingerhut19:01:05

The goal of an overall VP of software development, or whatever their title is, at a successful software company is to create an organization and roles such that everyone has the incentives to make the company successful overall. That isn't easy to do as the organization gets larger. Possible, but not easy.

andy.fingerhut19:01:58

If you give people incentives in their job like "I'm going to measure you on metrics A, B, and C, and that determines whether you get raises or promotions", then some people in that role will figure out how to maximize A, B, and/or C regardless of whether that is good for the overall success of the organization, because that is how you said you'd evaluate their work. A good manager tries to tune A, B, and C, or adopt more holistic measures, and tries to communicate them effectively, to avoid people gaming the system in ways that hurt the overall goals.

jimmy21:01:31

> A good manager tries to tune A, B, C, or adopt more holistic measures and tries to communicate them effectively, to avoid people gaming the system in ways that hurt the overall goals.
Or, in my personal view, doesn't try to measure these things with a numerical value at all. The contributions people make aren't reducible to what we measure, and trying to make them so will always lead to bad incentives.

dorab02:01:45

Metrics are very useful for understanding and informing what is going on. Typically, metrics should be paired: one measures some notion of quantity and the other some notion of quality. Metrics are close to useless for evaluations; people are just very good at gaming them. Evaluations should be done holistically, taking metrics into account but not relying on them alone. I highly recommend The Tyranny of Metrics https://press.princeton.edu/books/hardcover/9780691174952/the-tyranny-of-metrics

👀 1
mjw13:01:31

I've been seeing the effects of this in multiple ways. As an example, the organization knows it hasn't made much effort to train its developers, and quality/maintainability have suffered as a result. Leadership have clearly been making efforts to change this, but once developers show up for work the next day, all of the incentives, feedback loops, and communication structures that made these problems possible (if not rendered them inevitable) are still there.

respatialized16:01:24

https://ferd.ca/plato-s-dashboards.html Everything Fred Hebert has written recently on the relationship between organizations and the technologies they use/produce is super insightful, and his thoughts on metrics are great perspective on this problem.

respatialized16:01:34

For a more "pro-metrics" counterpoint perspective, I like this narrative story by Erik Bernhardsson. I think what's useful about it is that he emphasizes fluency in data across teams and the usefulness of certain metrics for certain "local" decisions (e.g. "according to our tests, this new menu format gets customers through the purchase flow faster, with an effect size of 15%, so we should ship it") set by teams themselves rather than global metrics imposed on everyone without a clear indication that they make sense for the teams' day to day work. Notably, he does not mention using metrics to track the performance of individual contributors. https://erikbern.com/2021/07/07/the-data-team-a-short-story.html