You Are Not Google

Nicolás Miari
3 min read · Nov 23, 2020

Why Most Companies Get the Coding Interview Wrong.

There's a lot of debate about the value of the so-called “coding interview” (i.e., solving general — although challenging — coding problems that test computer science essentials, requiring deep knowledge of and familiarity with algorithms and data structures) as a way to assess a job candidate's aptitude as a software engineer.

The main arguments against the interview format can be summarized as: The interview problems rarely have anything to do with the day-to-day tasks that the candidate will ultimately engage in if hired (e.g., developing real-world web applications). When was the last time you had to sort an array, or implement a linked list? Those low-level problems have long ago been solved, and their tried-and-true, heavily optimized implementations are hidden underneath libraries. Who needs to reinvent the wheel?
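To make that point concrete, here is a minimal sketch (in Python, chosen just for illustration; the names and values are mine) of how those “already solved” problems look in practice: a single standard-library call, rather than a from-scratch implementation.

```python
from collections import deque

# Sorting an array: the built-in sort (Timsort) is tried-and-true and
# heavily optimized; hand-rolling a sort is almost never necessary.
numbers = [5, 3, 8, 1]
print(sorted(numbers))  # [1, 3, 5, 8]

# A linked-list-like structure: collections.deque provides O(1)
# appends and pops at both ends, implemented in C under the hood.
queue = deque([1, 2, 3])
queue.appendleft(0)
print(list(queue))  # [0, 1, 2, 3]
```

In day-to-day work, knowing that these tools exist (and what their complexity guarantees are) matters far more than being able to reimplement them on a whiteboard.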

This is of course a valid discussion, and there are many counterarguments to the above, in favor of the coding interview:

  • The problems are a proxy, a normalized way to assess general problem solving ability across candidates of varying specializations, using a "common ground" domain (algorithms and data structures).
  • The ultimate goal is not to solve the problems (although of course that does contribute towards a positive evaluation), but to determine how the candidate approaches them: what is their thought process, whether they assess the problem constraints before jumping in, etc.
  • At most companies, there are also domain-specific, behavioral, and system design interviews in addition to the coding one, and the candidate is evaluated on their performance across the whole set, so the coding interview is not the sole deciding factor.

These are all sound arguments, and I tend to agree with them for the most part. This is why coding interviews at top-tier Silicon Valley companies such as Facebook, Amazon, Apple, Netflix, and Google (often referred to by the acronym FAANG) are done onsite: these companies don't just want you to write code that passes all test cases and runs within the complexity constraints of time and memory; they also want to see “the whole picture” of how you approach the problem, even if you don't get the solution 100% right.

This is where the lesser companies get it wrong and go full “cargo cult”, emulating the reputable FAANG companies only on the surface: asking the candidate to solve a couple of problems online on e.g. HackerRank (where you waste half of the allotted time figuring out how to read and decode STDIN), and evaluating them solely (and strictly) on their score. This way, they miss all the deeper insights that could be gained by observing the problem-solving process as a whole.

The end result is that these (in my opinion) misguided companies will get the candidates that they deserve: computer science nerds and competitive coding aficionados, people trained exclusively to ace HackerRank and LeetCode problems — while missing a lot of people who are a bit rusty on (say) searching binary trees, but have extensive real-world experience designing, implementing, and debugging large systems.

To them, I say: You are not Google; you are the worst of both worlds.
