This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
- # beginners (54)
- # bitcoin (2)
- # boot (1)
- # calva (10)
- # cider (30)
- # cljs-dev (25)
- # cljsrn (17)
- # clojure (27)
- # clojure-dev (16)
- # clojure-estonia (3)
- # clojure-hk (1)
- # clojure-italy (8)
- # clojure-losangeles (1)
- # clojure-nl (17)
- # clojure-russia (1)
- # clojure-spec (15)
- # clojure-uk (45)
- # clojurebridge (1)
- # clojurescript (95)
- # clojurescript-ios (1)
- # core-async (5)
- # cursive (10)
- # datomic (8)
- # emacs (2)
- # figwheel-main (31)
- # fulcro (99)
- # hyperfiddle (3)
- # immutant (1)
- # jobs (13)
- # jobs-discuss (82)
- # keechma (6)
- # leiningen (3)
- # lumo (1)
- # nrepl (1)
- # off-topic (37)
- # onyx (1)
- # pedestal (6)
- # re-frame (7)
- # reitit (2)
- # remote-jobs (1)
- # ring-swagger (3)
- # rum (6)
- # shadow-cljs (14)
- # specter (4)
- # tools-deps (27)
- # yada (12)
We have a pair coding interview in our process, in which we code some tests on the take-home assignment (we ask candidates to spend only a few hours on it, and the task is minimal enough that you can get it done in about an hour). This is basically a simulation of working time: we pair really often (maybe 30% of my time at work?) and the result of the programming exercise is not supposed to be a correct answer; we use that time to evaluate the candidate's approach to the problem. With our last hire we spent roughly the whole interview trying to set up the ClojureScript compiler on their machine, all while having great conversations about the value of determinism in software (the exact same setup worked on my machine but not theirs; it was pretty funny)
Have you ever tried a “here’s some buggy code, what’s wrong with it and how would you fix it?” exercise? I’ve heard some people use this as a more realistic coding-skills interview and get better results than with a pure whiteboard
@lady3janepl That's the closest I've ever come to giving a "coding test" in an interview. Many years ago, at a company that did mainly C, I created a string copy function with about half a dozen bugs in it and would ask candidates to see how many bugs they could find -- and, most importantly, to explain their thought process in debugging the function.
@lady3janepl If I were asked that during an interview I'd be really stressed (because there is a set of "right answers" and a given time in which to produce them, but the interview shouldn't be about who's fastest to give the right answers)
But I have been asked "here's this 300-line class, how would you refactor it and why?", which I think is a really useful question
@lady3janepl Yeah, I was generally satisfied with anyone who could spot at least two bugs in that code -- and the experience with that company and their hiring process was what turned me away from those sorts of "coding quizzes" in the first place.
A company in Dallas flew me out for an interview (from England) and they had that really superior "we only hire the best" attitude and it was an all-day quick-fire code quiz style experience. I hated it. Mid-afternoon I walked out of the interview and left early to head home, after they put me in a room with a Windows PC and Visual C++ and a "spec"... A few days later they called up and apologized and offered me the job. I told them where to go.
Interviews are a two-way process. They should be a conversation that shows respect and gives an insight into how you operate as a company and how the candidate may (or may not) fit into that.
I've never, ever had to let someone go that I've hired for any sort of incompetence, across over two decades as a hiring manager.
I have had people say that the ostensibly non-technical interview I give has been the hardest of their life tho' 🙂
@lady3janepl that's basically what the test is at my current place. Simulated piece of buggy, legacy code and then you pair on it. I think my difficulty is that you can mostly tell from the walk from reception to the room with the laptop whether or not they're going to nail it. I think if I was hiring for my own thing, I'd in all honesty have to do a coding test of some kind, so obviously I'm a hypocrite
@seancorfield your experience as a candidate is possibly far from typical. Most everyone else is in the position of “eh if you’re too precious, I have a ton of other people who applied”
@nilrecurring one way or another, there always is a right answer though (especially if the test is “do we like you”.) For coding puzzles, you still have to have solutions to the most common problems memorised in order not to create a bad impression by taking too much time relative to other candidates, for example. Personally, I wouldn’t look for things like “do you know exactly what value an empty string is cast to”, but rather “do you recognise that automatic casts in a dynamically typed language are a potential source of bugs”.
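A minimal JavaScript sketch (hypothetical, not from the conversation) of the kind of implicit-cast bug being described: loose equality silently coerces the empty string to a number before comparing.

```javascript
// Buggy version: loose equality (==) coerces "" to the number 0,
// so the check also passes for "", "0", false, and [] -- a classic
// source of subtle bugs in dynamically typed code.
function isZero(value) {
  return value == 0;
}

// Fixed version: strict equality (===) performs no implicit cast.
function isZeroStrict(value) {
  return value === 0;
}

console.log(isZero(""));       // true  (surprising)
console.log(isZeroStrict("")); // false (expected)
```

The point in the message above is exactly this distinction: the useful interview signal is not whether the candidate has memorised the coercion table, but whether they recognise that `==` is a hazard and reach for `===` by default.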
Kinda reminds me of the driving test in the UK: one of the things they do is show you a recording from a car dashboard camera, and have you click when you see a developing or actual hazard (pedestrian looking to cross, sheep by the roadside, car joining the traffic)
@alex.lynham you mean that if you were hiring for your own thing you’d be doing a whiteboard? Did the buggy legacy code test not work then? Also I’m curious, how do you tell from the walk to the room with the laptop whether people are going to do well? (I do believe you on that btw, just curious what you look at.)
I guess maybe it's just what I'm interested in talking about, but I tend to ask what they're working on, how they got into programming, that kind of stuff, to gauge that. If they're thoughtful and passionate they've probably got what it takes to nail the pairing. Bonus points for showing empathy, because most problems are not the tech and you need to understand team dynamics, your users, stakeholders etc
As for my own thing, I'd want to step through something just to understand whether they could see the big picture mainly. Lots of folks are good at the code in front of them but don't see the wood for the trees.
I paired with several people over the last few weeks for the first time (colleagues, not interviewees) and some people took a minute to click, other people didn't click so I'd be worried that it's one of those things where you select for yourself
Re wood for the trees, maybe ask about a hypothetical misleading situation and conduct shared analysis?
Yeah I guess I mean that if I'm talking about why maintaining a contract in some legacy code is important that then gets them thinking about the wider system - but that's also depending on level. I'd expect that of a senior, not from a junior
Yeah that self selection is a risk :) Perhaps the solution is to administer a test that you yourself wouldn’t want to take XD
From what I remember from research, apparently the most reliable predictors are basically three things: IQ, conscientiousness (understood as a personality factor from the Big Five), and a CV overview
Unstructured interviews especially were shown to decrease reliability and to select for people like the interviewer / penalise according to stereotypes
The CV overview as compared to coding tests came out of research by Google into their recruitment process (and the highest accuracy they could get still landed at ~30% so not great), can’t remember where I got the other bits of (possibly dis-)information from.
There were some great sessions on this at codecraft last week but very few real solutions
Oh, http://interviewing.io maybe. I remember they had an article about what they learned from aggregated data of test interviews somewhere.
Flipping it on its head though @lady3janepl if you were hiring for your own company, how would you do it?
I'm not sure it can ever be a scientific process, human interaction (which is a lot of working in software) is not objective
@conor.p.farrell I would go as far as to say that trying to make it completely objective might bias it even more than you would do by keeping it slightly fuzzy (pattern: overfitting)
One thing I haven’t seen anyone discuss that I’d like to try (if I were in the position of hiring) is extreme cv anonymisation. Remove name, nationality, name of place of education (convert into enum of level achieved at most), names of previous companies (convert into binary signal of has/has not previous experience with our specific domain if actually important), names of languages (if senior), substitute name of tools with generics (git->version control system)
Possibly remove titles of positions, because the type of work a person actually does, doesn’t always correspond to job title (eg I’ve done a lot of lead type work when a “plain” dev, after which they named me a lead but the project I was to lead got strategically delayed. So... what am I. The answer is, you can’t tell by title.)
For code, I’d do a pair refactoring exercise, and for senior/soft skills I’d have the person perform a peer review, and explain to me why they have that particular advice (bonus points for things like explaining they had more advice but if you give too much of it, the author will be discouraged rather than learn.)
And then for personal compatibility I’d check if they can rant, and whether they can respectfully/graciously disagree (not sure how to do the last one, maybe I’d just ask them if there’s anything that really annoys them?) Working with people who just don’t give a shit makes me depressed, and I hate the trend of not rocking the boat, but also would like to select out people who do not respect me back if I have a different opinion. You know, self proclaimed alpha types.
(And while we’re at it, I’d also like a pony. For an average boring CRUD dev job however, it seems to be enough if the person doesn’t make an unholy mess with their code and understands OOP to the level of what interfaces are for.)
Interesting conversation. I have been giving some interviews lately and have given this topic a lot of thought. We have a take-home assignment that takes ~2hrs after a phone screen w/ an engineer. I feel like this may be our best tool. We then have in-person interviews: 3 hours, 3 people, one of whom (usually me) goes through a coding challenge that is usually at a computer, with a working Clojure environment (but we’ve had candidates who are more comfortable w/ Python or whatever use that). I like it. It feels like it works. It’s almost like a paired programming exercise. I am very aware of the stress an interviewee might feel in any situation where they have to “perform” in front of me, so I try to make it as much like paired/peer programming as possible. BUT, I’m not sure if it works great or not.
This resonated a great deal with me: http://pete.holiday/blog/2018/05/killing-the-coding-interview
I’m not sure it works great because by that point everyone has already done the take-home assignment and, if they’ve done well on that, they tend to do well on the coding challenge in the interview.
The in-interview coding challenge is actually pretty easy, and leads to a lot of discussion afterwards that may or may not involve editing the code: How can you make this faster or more efficient? How can you break the solution or get an error? How do you prevent that? How would you document it? How would you test it? How would you turn this into a library for others to use? Etc.
But, as I said, if you’ve already done the earlier assignment well (which you have, or you wouldn’t be in the interview), then you’ll probably do well here.
How do you deal with the language in which they interview? Do some languages make it harder or easier for you to ask the questions you’re interested in? If people select Python, would you see it as a minus (given you’re recruiting for a Clojure position), a plus (standardised test), or neutral?
@lady3janepl Well, most of the time it’s Clojure. So far, when it hasn’t been Clojure, it’s been a programming language I’m familiar with and it’s easy enough to spin up an editor and an env to test the code. So, the language thing hasn’t been a problem. Def not a negative if someone chooses another language--many people interview with us because they are fans of Clojure and would love to work in it, but are not currently working in it, and are more comfortable in something else.
Hmm, this makes me think. If I wanted a structured process that was designed to continually improve how I hired, what would that process look like
I'm thinking about the problem in terms of how I can test my ideas of what makes a good candidate
Which means measuring developer performance somehow, which is also a notoriously hard problem 🙂
(a thorough overview of various potential signals, also based on the particular author’s experience with interviewing)
My favourite line is this one: > The real question is: does solving a problem about turning a binary tree upside down predict future job performance? because 'solving a problem about turning a binary tree upside down' could be replaced with any bit of an interview process and work as a critique
I remember seeing some startup that was trying to apply AI to recruitment. Whether it worked, I’m not sure. And it wasn’t clear to me what they meant by “AI”.
yeah, I thought it might have been more along the lines of automatically (rather than manually) finding keywords in job advert and cv and then matching…
Just catching up on the discussion. Agree with @lady3janepl about the anonymization of resumes/CVs -- otherwise it's really difficult to ensure no subconscious biases are creeping in. Although once you get to a video/in-person interview, that's impossible to prevent (even a phone interview can cause biases to creep in).