This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
@idiomancy If you scroll down on that page it explains how to solve that particular puzzle
(and how to solve the next puzzle where you must pick just two pieces to form the desired result)
My personal opinion is that these sorts of tests are stupid as any sort of screening for a job and I'd probably name and shame the company trying to do that 🙂
I couldn't possibly agree more. This "aptitude" test is a really remarkable source of both false negatives and false positives
I’m into those kinds of puzzles / pattern matching exercises but how are they helpful when recruiting?
Last time I ran an interview I did some research on how to do it - and came to the conclusion that I had no idea, and my internet research was not helping. Big companies with a very large number of applicants seem to have something to gain from eliminating "bad" candidates: if they make the interview process so hard that it filters out all weaker candidates, it's a net win.
But filtering out people who can't do stupid puzzles has no bearing on whether they could be great software engineers 😞
Sure - but if you apply enough stupid filters on a large enough pool something is going to happen eventually.
I refused to interview at Google because of their stupid interview process and the silly "puzzle" questions they asked (they've gotten better recently)
Many big tech firms have this same very broken approach to interviewing. They are slowly learning that it is a stupid approach.
It’s mostly the smaller companies without thousands of applicants per position that are going to suffer from emulating Google’s process.
They've done a huge amount of analysis and they now accept their previous interview filter technique was broken.
I've used a mind map to guide interview discussions for years (decades?) and I've never had to fire anyone for incompetence.
Hmm, I had it pinned in a bunch of channels... let me dig it up and re-post it...
I start in the top-right and I just use it as a guide to get candidates talking about projects and experience/techniques. I want to hear them explain what they like, what they don't like, what excites them, what bores them.
I have something like a mind map over the problem space that we are in, and then we probe a few trunks until the expertise level of the candidate tapers out (or exceeds mine). And then usually a design exercise of some kind on a whiteboard
Whiteboard stuff sucks. Sorry. It's mostly a terrible way to understand whether a candidate can actually do the work/fit in.
Well I guess we do, but in this case it’s more to have something to hang a discussion about system design on
If it's the latter, then your interview is worthless -- because it doesn't reflect how you work.
Good point - yeah absolutely how we work. But I agree it’s not the major part of the work
That's cool. If you typically use whiteboards a lot for work, then having them in the interview process is fine.
In this case we had a real service that needed building and two junior guys interviewing, and they each did a quick design discussion / exercise of the service during the interview. In the end we saw that they had great understanding of most of the work that needed to be done except the data modeling stuff and database work. So we hired them and assigned a lead to their team with some instructions based on that and it worked out great
I ask candidates to pre-configure an empty project for unit testing on their own laptop and let them solve an existing coding kata
and ask them about possible patterns, refactorings, pros, cons + I check how they think and how well they know the toolings / languages
I've had only one senior developer so far cancel the interview when he heard he had to write code
If they reply that they don't have one, I ask which IDE they prefer and I let them use one of ours
IDE with tooling they prefer. The only prerequisite is they need to be able to write unit tests. The reason why I ask the candidates to set up an environment with the tools of their likings is because tooling also counts towards productivity
I don't work on a laptop generally. And I don't do TDD -- I believe RDD is better. How does that fit?
I've had a couple of developers using tools I've never heard of. They really knew their stuff and were able to assert tests in a much better way than I've mostly encountered
(I think most companies have a fundamentally broken hiring process so my immediate reaction to most people is to challenge how they interview people)
The last time I ever asked a candidate to do anything "live" in any interview was about 30 years ago.
I ask them to write it using test-driven development because then they need to reason about the smallest testable unit they can start developing from. It's quite difficult for some (myself included at times) and it makes these exercises more interesting
Also, thinking about the design of the software upfront is difficult for most junior developers
I think that's old-fashioned and I would refuse (TDD). I don't think it's a good way to interview people.
I respect that. I've found it to be refreshing as it leads to many interesting conversations
Before that my colleague asked the theoretical stuff: SQL query questions, SOLID principles, simple design, patterns, ... but most candidates can rehearse answers to those
And that's an even worse interview process. I'd support live TDD over that any day! 🙂
Like I said, most companies have terrible interview processes that don't determine whether candidates will fit in and be good developers.
I only interview candidates who are hired on consulting jobs as I'm a consultant myself. So it's kind of picking my own direct colleagues. Whether they are hired on the job or not, the only difference for them is that they no longer sit and wait for the next project
@seancorfield You’ve made me want to look up RDD, but there are lots of options. Which one do you like?
It was interesting hearing Stu Halloway talk about "Rich Comment Forms" since that was already the way I was developing and I've focused very heavily on immediate feedback and live evaluation from the source file I'm working on over the last few years.
I was fairly TDD-sympathetic before I switched to Clojure (nearly nine years ago).
If you're hiring for languages that don't have that tight feedback loop then TDD may well make sense. I'm sort of assuming we're talking about hiring for Clojure jobs tho'.
But I'll also reiterate my point about talking with developers in interviews rather than making them code stuff live via any means -- seriously, it's been nearly 30 years since I last expected a candidate to work under those constraints.
Communication and collaboration are what development is about. You can always train people on your processes and technology. Well, you should be able to train them on processes, which is what a lot of that mind map and discussion is intended to uncover...!
People can, and do, produce awful software via TDD... it's just very easy to test. It's still awful, tho'...
"Test-first fundamentalism is like abstinence-only sex ed: An unrealistic, ineffective morality campaign for self-loathing and shaming." -- DHH 🙂
I'm no TDD advocate. I use it only when I feel like it's helping me accomplish a goal
I have a vague feeling that TDD is used (to good effect at times) as a battering ram in enterprise Java et al settings where breaking stuff up into pieces is a revolution and an eye-opener. For people who are more aligned with common FP idioms it’s much less interesting. Maybe.
We use TDD because before software ships, Legal requires we have 80% test coverage :thinking_face:
Protip: You can just write tests that have assertTrue(true) in them like we caught an offshore team doing
my experience with TDD is that it encourages step-by-step discovery of the details of a problem space, which is fine if you already understand it in the large but it tends not to work if you don't [in that it encourages a settling on a specific implementation early on]
Why does it have to be black and white? I mix REPL driven development with TDD and whatever feels great in that moment. After a reported bug, I would probably confirm this via the REPL, try to think of a solution, write a failing test, then implement it
are you still talking in Clojure-land? Because the original poster was specific about it
I actually don't know exactly where the discussion started, but I read the comments as either TDD or RDD
I believe TDD flow could be improved a lot (especially within Clojure, better than in any other language), but the concepts behind TDD have always been relevant I think, also for Clojure
Dang, apply onto pandas DataFrame using lambda expression is 16 times slower than a regular double loop :face_with_rolling_eyes:
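A quick sketch of that pitfall (hypothetical data; the point is the call pattern, not the numbers): `DataFrame.apply` with a lambda and `axis=1` invokes the Python function once per row, while the equivalent vectorized column expression runs in compiled code and is usually far faster on large frames.

```python
import pandas as pd

# Hypothetical frame, just to illustrate the two call patterns.
df = pd.DataFrame({"a": range(1000), "b": range(1000)})

# Row-wise apply: the lambda runs once per row in the Python interpreter.
slow = df.apply(lambda row: row["a"] * 2 + row["b"], axis=1)

# Vectorized equivalent: one pass in compiled code, typically much faster.
fast = df["a"] * 2 + df["b"]

assert slow.equals(fast)  # same result, very different cost
```

Whether apply ends up slower than even a hand-written loop depends on the frame size and dtypes, but the row-at-a-time interpreter overhead is the usual culprit.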
I always think of the example given by Peter Seibel of Norvig/Jeffries and their sudoku solvers. https://gigamonkeys.wordpress.com/2009/10/05/coders-unit-testing/
I can only say that I believe in the concepts of TDD, not in the current practical implementation. To give a real-life example: if I create a new feature in a project the TDD way, if I'm unlucky an insignificant change will trigger my whole test suite. This doesn't mean that the idea of TDD is wrong; it means my tool that scans for changes is wrong. It does have the practical implication that I cannot use TDD all the time
So I don't believe what is said in the article by Uncle Bob, unless there exists very advanced tooling I have no idea of yet. But I also don't agree with "TDD deniers". I hope tooling will improve and we get closer to what TDD promises to be
In my TDD workflow, for each feature I'm going to write, I start with a test. I write this test, then develop the functionality running just that test. Once complete, I push it and the CI runs all tests. Any regression will blow up and I will solve it.
My tests are usually things like
when a user confirms a buy, they receive an email
A test like this runs my whole system, but it runs in 100ms and has no side effects.
The "email-driver" should be a parameter of my system (not only of my function), and should be easy to swap/mock in my tests. If it's not simple to mock/test, I review my code to make it easy.
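That parameterization can be sketched like this (a minimal Python analogue with hypothetical names, not the poster's actual code): the email driver is passed into the system, so the test swaps in an in-memory fake and runs the whole flow with no side effects.

```python
# Hypothetical sketch: the email driver is a parameter of the system,
# so a test can inject an in-memory fake instead of a real SMTP client.

class FakeEmailDriver:
    def __init__(self):
        self.sent = []
    def send(self, to, subject):
        # Record the email instead of actually sending it.
        self.sent.append((to, subject))

def confirm_buy(user_email, order_id, email_driver):
    # ...business logic for confirming the purchase would run here...
    email_driver.send(user_email, f"Order {order_id} confirmed")
    return order_id

def test_confirm_buy_sends_email():
    fake = FakeEmailDriver()
    confirm_buy("ann@example.com", 42, email_driver=fake)
    assert fake.sent == [("ann@example.com", "Order 42 confirmed")]
```

In production the same `confirm_buy` gets a real driver; the test exercises the full path while asserting on the fake's recorded calls.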
I use very well tested components (as in Stuart Sierra's Component) with different implementations (e.g. for Redis I have an in-memory version). When the mock implementation works, I assume it works with the real implementation as well. I can make this assumption because the fake components are tested for compatibility. If I come across an exception to this assumption I try to fix it. To get this working you need to go all the way (fake time, control randomness, etc.)
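The "fake tested for compatibility" idea is essentially a shared contract test. A hypothetical Python sketch (the Component-style wiring is Clojure in the original; names here are invented): the same contract function runs against every implementation, so a passing fake is known to behave like the real store.

```python
# Hypothetical sketch of a swappable store: an in-memory fake stands in
# for Redis in tests, and a shared "contract" keeps the fake honest.

class InMemoryStore:
    def __init__(self):
        self._data = {}
    def put(self, key, value):
        self._data[key] = value
    def get(self, key):
        return self._data.get(key)

def store_contract(store):
    # The same assertions would run against the real Redis-backed store,
    # so passing against the fake implies behavioural compatibility.
    store.put("a", 1)
    assert store.get("a") == 1
    assert store.get("missing") is None

def test_in_memory_store_satisfies_contract():
    store_contract(InMemoryStore())
```

Only when both implementations pass `store_contract` is it safe to lean on the fake in fast, side-effect-free tests.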
@michael.e.loughlin OK, having read it a bit more carefully, I think Peter Norvig's point is good and nuanced
I see REPL-based development as orthogonal to TDD vs real-code-then-tests (or top-down vs bottom-up, etc). REPL-based development allows you to do any of those other things, but with faster feedback. (Note: I said “REPL-based” — not sure what “REPL-driven” means.)
Despite my apparent stark black and white position, I agree that TDD is better than write-code-then-write-tests.
And, yes, TDD makes sense if you have some clear requirement (or bug!) for which it is easy and obvious to write a failing test that will only pass when the requirement is implemented (or the bug is fixed).
But, more often, you need to do some investigation around the requirement/bug first -- in Clojure we generally do that in the REPL.
So I'm advocating a REPL-first workflow -- REPL-Driven Development, in the exact same way that TDD is development driven by tests. But don't type into the REPL: type into a source file, maybe in a (comment ,,,) form, and keep that exploration around. Some of it should become your production source code, some of it might become tests (to make sure you don't break this feature in the future).
At work we have about 90K lines of Clojure overall and nearly 20K of that is test code at various levels, from very fine-grained "unit" tests up to full, end-to-end UAT style. And, yes, some of those tests were written for specific TDD pieces of work (mainly around our API: we write down a detailed description of the API, then write tests for all the documented failure modes and a few tests for the success mode, and then we implement the code). But mostly our tests are either regression (after the fact) or evolve out of REPL exploration.