The only programming interview question most companies need

Abstract

Instead of giving Leetcode problems, whiteboard interviews, or the like, ask the candidate to describe his best project/programming work.

Score his story out of 100 points, allocated as follows:

  • choice of the project/task: 10 points
  • problem description: 20 points
    • did the candidate describe inputs and outputs?
    • did the candidate explain the value the end-user will get from the feature/task?
  • the problem complexity: 50 points
    • preconditions: training/tutorials/programming courses he had already gone through before that task
    • inputs: inputs received in terms of specs
    • estimation & design: analysis, implementation, testing? was there a design phase?
    • time + effort: what did he do to surmount it, and how long did it take?
    • self-evaluation: what did he do to test and measure his work?
  • the outcome: 20 points
    • what was learned from this experience?
    • did he get a chance to apply the learning anytime past that?

A decade ago, at a company where I worked, programming interviews were handled haphazardly.

One day, my boss came up to me and asked me to conduct a technical interview. As I wasn’t prepared for it, I shrugged. He said, “No worries. Just google a few C++ & OOP-related questions (our technical stack at that time).”

I was still hesitant. To that, he said, “Relax. We have conducted around 20+ interviews just by googling. Our formula is 10–10–100. 10 points/question, 10 questions for the test to make a total of 100. Binary to hex, we call it 84.”

That somehow reminded me of 1984 by George Orwell.

When I snapped out of it, I was aghast. No uniformity in hiring at a Fortune 500 product company?

However, as I found out, I was still a newbie at interviewing. I later learned that despite intense innovation elsewhere, the industry almost completely lacked tech interview standardization. If it existed anywhere, there was no public data about it, apart from candidates posting their experiences on Reddit and its ilk.

Granted, many dev teams today probably have a consistent interview format to ensure standardized evaluation and neutrality. But no big company would reveal it, and no one cares about small firms’ interviews anyway. And such formats are unlikely to ever be documented publicly, for privacy reasons.

That raises the eternal question: if the process isn’t standardized, how do you ensure neutrality? And if it is standardized, how do you preserve that standard without documentation?

Interview question ambiguity is killing candidates and interviewers alike:

Coding problems make sense at the filtering level. During the rest of the hiring process, they are inefficient.

Candidates spend an inordinate amount of time on LeetCode and similar sites preparing for whiteboard interviews. On top of that, they struggle with vocabulary, representation, and narration skills. Without these elements, they feel inadequate.

Interviewers’ challenges are no smaller. Company HR expects standardization, which is difficult to maintain, as the paradox described in the prior section shows.

It makes things far easier for everyone if the entire surprise element is removed from technical interviews. Just make them a single-question event and evaluate candidates on their answers.

The STAR format addresses the culture-fit problem with scenario-based questions. However, the question variety is overwhelming: the question pool is too big for any senior developer role. It turns the interview into a guessing game, one that can filter out competent, well-fitting candidates simply because they walked in unprepared.

Don’t get me wrong: there are roles where the STAR format fits. When programmers double as tech evangelists, it makes perfect sense. For the rest, it keeps candidates busier guessing the questions than preparing the answers.

I sympathize with programmers who mindlessly prepare for the mysterious behavioral question bank. I was one of them. I went through an 18-month-long streak of rejections in senior developer interviews.

The outcome was my latest eBook about senior developer interviews (50% off for the first 100 Medium users), in which I have included 26 STAR and 15 non-STAR questions with relevant answers and in-depth analysis.

STAR is an exact fit for heavily customer-facing or stakeholder-facing roles. An example is the AWS architect, the role most closely associated with this methodology.

Small and medium-sized tech companies should do away with the long list of STAR-format questions:

  • Just have one fixed question, and spare the candidates the ambiguity.
  • Focus on the answer to evaluate and compare candidates.

The only interview question that fits most tech firms:

I previously penned a popular Medium article on a similar topic:

https://betterprogramming.pub/the-only-programming-interview-question-you-need-to-prepare-for-e604c4c4d5eb

That piece was written from a candidate’s perspective. If you stay till the end of this article, you will realize that all of its content still holds true.

This time, however, we are discussing it from an interviewer’s perspective.

The only interview question that you need to ask the candidate is:

Describe your best project/programming work.

Note that:

  • This is the root question. Once the candidate starts answering, it must branch out into follow-up questions.
  • You can share the question with the candidate beforehand. This will give him predictability. When there is no ambiguity around what needs to be delivered, the delivery can be evaluated much better.
  • This question is best suited to individual contributors. It is also suitable for architects, who mostly write and share specs in written or flowchart form.
  • If you would rather have candidates with great listening skills (customer focus or stakeholder focus), then this is not a suitable strategy.

The devil is in the details:

The real challenge lies with the interviewer: how do you come up with the tree of follow-up questions that sprouts from the root question and still ensures fair evaluation?

Let’s say you assign 100 points to this question. The ideal breakdown looks like this:

The choice of the project/task: 10 points

For senior roles, give full points if the project involves a distributed system. For junior roles, give full points for a sizeable desktop, web, or mobile feature. It can also be a complex bug if you are hiring a developer for a services company.

The problem description: 20 points

This is even more important than the task itself, because it provides perspective, and perspective is everything.

Assign 10 points to each question:

  • Did the candidate describe inputs and outputs?
  • Did the candidate explain the value the beneficiary/end-user will get from the feature/task? For example, if a candidate’s best project was a login feature, did he describe whom it was made for and to what extent it served its purpose? (e.g., a “remember me” checkbox that keeps the user logged in for two weeks so the password isn’t asked for again)

The problem complexity: 50 points

This is the meat. Was the candidate able to justify the complexity of the task? Note that it doesn’t have to be universally complex, like NASA’s lunar module. However, given the candidate’s stature at the time of solving the problem, it should have been difficult enough.

To find out, ask enough questions to ensure you evaluate him/her fairly, assigning 10 points to each question (a quick scoring sketch follows the list below):

  • Preconditions: What training/tutorials/programming courses he had already gone through before that task
  • Inputs: What inputs he/she had received in terms of specs
  • Estimation & design: What was the challenge: analysis, implementation, testing, or something else? Was the effort estimated? Was there a design phase? (ask for a drawing if relevant)
  • Time + effort: What did he/she do to surmount it, and how long did it take? (ask for an API or code example if relevant)
  • Self-evaluation: What did he/she do to test and measure his/her work?
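
If you like to tally scores on the spot, here is a minimal sketch, in Python, of how this 50-point block could be recorded. The question keys, the helper name, and the sample numbers are my own illustration of the breakdown above, not a prescribed tool.

```python
# A rough sketch for tallying the 50-point "problem complexity" block.
# Each of the five sub-questions above is scored 0-10 by the interviewer.
# Keys and sample scores are illustrative, not prescriptive.

COMPLEXITY_QUESTIONS = [
    "preconditions",      # prior training/tutorials/courses
    "inputs",             # specs received
    "estimation_design",  # analysis vs. implementation vs. testing, design phase
    "time_effort",        # how the problem was surmounted and how long it took
    "self_evaluation",    # how the work was tested and measured
]

def complexity_score(scores: dict) -> int:
    """Sum the five sub-scores, clamping each to the 0-10 range."""
    return sum(max(0, min(10, scores.get(q, 0))) for q in COMPLEXITY_QUESTIONS)

# Example: strong on inputs and self-evaluation, weaker on design.
print(complexity_score({
    "preconditions": 8, "inputs": 10, "estimation_design": 4,
    "time_effort": 9, "self_evaluation": 10,
}))  # -> 41 out of 50
```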

The outcome: 20 points

This is to evaluate:

  • What the candidate learned from the task and the experience
  • Did he get a chance to apply the learning anytime since? The answer shouldn’t penalize those who haven’t yet had that chance, but you should at least expect the candidate to have some idea of how to utilize the learning in the future. You could use a point table like the one below, ensuring that concrete thinkers who are also concrete doers get the full 20 points.
    • Concrete idea: 10 points
    • Poor idea: 5 points
    • Concrete implementation: 10 points
    • Poor implementation: 5 points

The total:

That constitutes 100 points.
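
For completeness, here is a minimal sketch, under the same illustrative assumptions as before, that ties all four categories together: each category score is clamped to the caps from the breakdown above, and the outcome is built from the idea/implementation point table.

```python
# A sketch of the full 100-point rubric. Each category is clamped to its cap,
# and the outcome combines the idea and implementation sub-scores (max 20).

CAPS = {
    "project_choice": 10,
    "problem_description": 20,
    "problem_complexity": 50,
    "outcome": 20,
}

OUTCOME_POINTS = {"concrete": 10, "poor": 5, "none": 0}

def outcome_score(idea: str, implementation: str) -> int:
    """Concrete thinkers who are also concrete doers get the full 20 points."""
    return OUTCOME_POINTS[idea] + OUTCOME_POINTS[implementation]

def total_score(raw: dict) -> int:
    """Clamp every category to its cap and sum the result (0-100)."""
    return sum(min(raw.get(category, 0), cap) for category, cap in CAPS.items())

# Example: a strong project and complexity story, but only a vague plan
# for applying the learning.
print(total_score({
    "project_choice": 10,
    "problem_description": 15,
    "problem_complexity": 41,
    "outcome": outcome_score("concrete", "poor"),
}))  # -> 81 out of 100
```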

One last note:

To ensure fair judgment, record the interview conversation, with the candidate’s permission. That way, you can go back, listen again, and score fairly.

Conclusion:

The current STAR-based interview format is backed by psychology, a discipline that serves hiring in many more domains than tech. There is no doubt that it is exhaustive in measuring whether the candidate possesses all the expected company values.

However, for roles that don’t require much more than programming, interviewers end up hiring false positives.

Removing the ambiguity from tech interviews, standardizing the question-answer format, and actively listening to candidates will ensure you find someone who fits your company.

Even before you meet him/her for the interview.