The questions were just a proxy for the knowledge you needed. If you could answer the questions you must have learned enough to be able to do the work. We invented a way to answer the test questions without being able to do the work.
To continue the point: if the knowledge you need is easily obtained from an LLM, then that knowledge isn't really necessary for the job. Stop selecting for what the candidate knows and start selecting for something more relevant to the job.
An accurate test would just be handing them a real piece of work to complete. Which would take ages and people would absolutely hate it. The interview questions are significantly faster, but easy to cheat on in online interviews.
The better option is to just ask the questions in person to prevent cheating.
This isn’t a new problem either. There is a reason certifications and universities don’t allow cheating in tests either. Because being able to copy paste an answer doesn’t demonstrate that you learned anything.
I don’t understand why in-person interviewing is needed to catch AI use, take-homes aside.
Surely just asking the candidate to lean back a bit from the keyboard during the video call and then having a regular conversation is enough? I suppose they could have some intermediate tool listening to the conversation and feeding them tips, but even then it would be obvious that someone is reading from a sheet.
That type of cat-and-mouse game is ultimately pointless. It's fairly easy to build an ambient AI assistant that will listen in to the conversation and automatically display answers to interview questions without the candidate touching a keyboard. If the interviewer wants to get any reliable signal then they'll have to ask questions that an AI can't answer effectively.
In every practical sense, online interviews are part of the early screening process. The sheer number of applicants means that you need to do some filtering before inviting people to on-site interviews.
While I agree LLMs have forever changed the interviewing game, I also strongly disagree with deeming slop code as "perfect" and "optimal".
There's a lot of shitty code made by LLMs, even today. So maybe we should lean in and get people to critique generated code with the interviewer. Besides, being able to talk through, review, and discuss code is more important than the initial creation.
Interview questions are a genre of their own though. They are:
1. Very commonly repeated across the internet
2. Studied to the point of having perfect solutions written for almost any permutation of them
3. Very short and self-contained, not having to interact with greater systems and usually being solvable in a few dozen lines of code
4. Of limited difficulty (since the candidate is put on the spot and can't really think about it much, you can only make it so hard)
All of that lends them to being practically the perfect LLM use case. I would expect a modern LLM to vastly outperform me in almost any interview question. Maybe that changes for non-juniors who advance far enough to have niche specialist knowledge, but if we're talking about the generic Leetcode-style stuff, I have no doubts that an LLM would do perfectly fine compared to me.
So many words just to say the interview process is broken.
It has always been that way. Does anyone really think that someone who prepped for and solved a few LeetCode questions can design a complete distributed system?
The reality is that no correlation has been found between interview success and success at work, especially for software engineers. AI tools didn't change that, and neither did remote interviews.
I interviewed a guy a couple of months ago who had perfect responses to every tech question I threw at him. He even did really well on the whiteboarding session. The only thing was, he would wait 10-20 seconds to respond to everything. Not long enough to get called out, but just long enough to notice. He aced everything. He’s a horrible employee, a senior who doesn’t seem to know anything. I almost suggested he start using his interview LLM when regular folks were asking him questions.
> Then there’s the pacing. A human pauses to think. AI-assisted candidates pause to receive a perfect answer. You can mostly feel the rhythm shift. Their eyes drift slightly. You think we don’t see that, don’t you?
I really hope most interviewers have at least the barebones skills to discern AI-using interviewees, as the author claims to. I'm trying to get hired at the junior level, and the thought of competing with people who have no qualms about effectively cheating in real time is pretty scary. I'm human; I will inevitably not know something or make minor missteps. Someone with an AI or a quick-witted friend by their side can spit out perfect, fully-rounded, flawless, HR-optimized stories and replies with a satisfying conclusion for the behavioral questions, and basically always-correct, optimal solutions for the technical questions.
> Interviewing has always been a big can of worms in the software industry. For years, big tech has gone with the LeetCode style questions mixed with a few behavioural and system design rounds. Before that, it was brainteasers.
Before Google, AFAIK, it was ad hoc among good programmers. I only ever saw people talking about what they'd worked on, and about the company.
(And I heard that Microsoft sometimes did massive-ego interviews early on, but fortunately most smart people didn't mimic that.)
Keep in mind, though, that was before programming was a big-money career. So you had people who were really enthusiastic, and people for whom it was just a decent office job. People who wanted to make lots of money went into medicine, law, or finance.
As soon as the big-money careers were on for software, and word got out about how Google (founded by people with no prior industry experience) interviewed... we got undergrads prepping for interviews. Which was a new thing, and my impression is that the only people who would need to prep for interviews either weren't good or were some kind of scammer. But eventually those students, who had no awareness of anything else, thought that this was normal, and now so many companies just blindly do it.
If we could just make some other profession the easier path to big money, maybe only people who are genuinely enthusiastic would be interviewing. And we could interview like adults, instead of like teenagers pledging a frat.
Prepping for interviews has been a big deal forever in most other industries though. It's considered normal to read extensively about a company, understand their business, sales strategies, finances, things like that, for any sort of business role.
I think tech is and was an exception here.
What you're describing sounds to me like just caring about the place where we'll be spending half a decade or more, and which will have the biggest impact on our health, finances, and social life.
I'd advise anyone to read the available financial reports on any company they're intending to join, except if it's an internship. You'll spend hours interviewing and years dealing with these people; you might as well take an hour or two to understand whether the company is sinking or a scam in the first place.
Why should anyone do that in a world where fundamentals don't make sense? Yes, knowing how the company makes money is important (though that often is incomplete or unclear from what's publicly available), but knowing their 10-K or earnings reports? Too much.
Kinda silly, given how little most people can actually infer from financials and marketing copy.
Really, company reviews are all that matter, and even those have limited value since your life is determined by your manager.
The best you can do is suss out how your interviewers are faring.
Are they happy? Are they stressed? Everything else has so much noise it's worse than worthless.
"Is the company consistently profitable or not?" and "Are revenue and profits growing over time, stable, or declining?" are very important questions to answer, particularly if stock grants are part of the compensation package.
For developers who work on products, it's also very important to get a sense of whether the product of the team you'd be joining is a core part of the business or speculative (i.e. stable vs. likely to see layoffs), and how successful the product is in the marketplace (teams whose products are failing are also likely to be victims of layoffs).
So many ways to juice those numbers though.
And if your team is far from the money, what often matters much much more is how much political capital your skip level manager has and to what extent it can be deployed when the company needs to re-org or cut. Shoot, this can matter even if you're close to the money (if you're joining a team that's in the critical path of the profit center vs a capex moonshot project funded by said profit center).
This is one thing I really like about sales engineering. Sales orgs carry (relatively) very low-BS politically.
It matters a lot whether the organization is growing. If you get assigned to a toxic manager in a static organization then you're likely to be stuck there indefinitely. In a growing organization there will be opportunities to move up and out to other internal teams.
I remember being told this during my interview prep classes in college in 2008. Interviewing was so much more formal even then (in the NYC area): business casual attire, (p)leather-bound resume folios, expectations of knowing the company, etc. I definitely don't miss any of that nonsense.
It was good standard advice even for programmers to know at least a little about the company going in. And you should avoid typos and spellos on your resume.
But no "prep" like months of LeetCode grinding, memorizing the "design interview" recital, practicing all the tips for disingenuous "behavioral" passing, etc.
I’m glad civil engineers can’t vibe build a dam.
IIRC Google had an even higher bar in their early days: candidates had to submit a transcript showing a very high GPA and they usually hired people only from universities with elite CS programs. No way to prep for that.
They only gave it up years later when it became clear even to them it wasn't benefiting them.
> IIRC Google had an even higher bar in their early days: candidates had to submit a transcript showing a very high GPA and they usually hired people only from universities with elite CS programs.
Which sounds like a classic misconception of people with no experience outside of a fancy university echo chamber (many students and professors).
Much like Google's "how much do you remember from first-year CS 101 classes" interviews, which (among my theories) looked like an attempt to build a metric that matches... (surprise!) a student with a high GPA at a fancy university.
Which is not very objective, nor very relevant. Even before the entire field shifted its basic education to help job-seekers game this company's metric.
> many companies just blindly do it.
Yes. A while ago a company contacted me to interview, and after the first "casual" round they told me their standard process was going full leetcode in the second round, and that I was advised to prepare for it if I was interested in going further.
While that's the only company that was so upfront about it, most accept that leetcode questions are dumb (they need to be prepped for even by a working engineer) and still base the core of their technical interview on them.
Casual interviews definitely still exist, though the companies those jobs are attached to are typically not tech and pay less.
Consulting positions also don't have much leetcode BS. These have always focused much more on practical experience. They also pay less than Staff+ roles at FAANGs.
> And we could interview like adults, instead of like teenagers pledging a frat.
I think you're viewing the "good old days" of interviewing through the lens of nostalgia. Old school interviewing from decades ago or even more recently was significantly more similar to pledging to a frat than modern interviews.
> people who are genuinely enthusiastic
This seems absurdly difficult to measure well and gameable in its own way.
The flip side of "ad hoc" interviewing, as you put it, was an enormous amount of capriciousness. Being personable could count for a lot (being personable in front of programmers is definitely a different flavor of personable than in front of frat bros, but it's just a different flavor is all). Pressure interviews were fairly common, where you would intentionally put the candidate in a stressful situation. Interview rubrics could be nonexistent. For all the cognitive biases present in today's interview process, older interviews were rife with many more.
If you try to systematize the interview process and make it more rigorous you inevitably make a system that is amenable to pre-interview preparation. If you forgo that you end up with a wildly capricious interview system.
Of course, you rarely have absolutes. Even the most rigorous modern interview systems often still have capriciousness in them, and there was still some measure of rigor to old interview styles.
But let's not forget all the pain and problems of the old style of interviews.
> I think you're viewing the "good old days" of interviewing through the lens of nostalgia. Old school interviewing from decades ago or even more recently was significantly more similar to pledging to a frat than modern interviews.
Yeah, no, not at all. Interviewing in the 90s was just a cool chat between hackers. What interesting stuff have you built, let's talk about it. None of the confrontational leetcode nonsense of later years.
I still refuse to participate in that nonsense, so I'll never make people go through such interviews. I've only hired two awesome people this year, so less than a drop in the bucket, but I'll continue to do what I can to keep some sanity in the interviewing in this industry.
Being personable does count for a lot in any role that involves teamwork. Certain teams can maybe accommodate one member whose technical skills make up for bad interpersonal skills as a special exception, but one is the limit.
The article somewhat implies that, before AI, the leetcode/brainteaser/behavioral interview process had somewhat acceptable results.
The reality is that AI just blew up something that was a pile of garbage, and the result is exactly what you'd expect.
We all treat interviewing in this industry as a human resources problem, when in reality it's an engineering problem.
The people with the skills to assess technical competency are even scarcer than actual engineers (because they would be engineers with the people skills for interviewing), and those people are usually far too busy to be bothered with what is (again, only perceived as) a human resources problem.
Then the rest is just random HR personnel pretending that they know what they're talking about. AI just exposed (even more) how incompetent they are.
The results did filter out a few people who could not think.
I recently interviewed someone who had been a senior engineer on the Space Shuttle, but had managed a call center after that. Whether this person could still write code was a question we couldn't figure out, so we had to pass. (We can't prove it, but we think we ended up with someone who outsourced the work elsewhere; at least that person could code if needed, as proven by the interview.)
I’ve conducted about 60 interviews this year, and have spotted a lot of AI usage.
At first I was quite concerned; then I realized that in nearly all the cases where I’d spotted usage, a pattern stood out.
Of the folks I spotted, all spoke far too clearly and linearly when it came to problem solving. No self doubt, no suggestion of different approaches and appearance of thought, just a clear A->B solution. Then, because they often didn’t ask any requirements questions beyond what I initially asked, the solution would be inadequate.
The opinion I came to is that even in the best pre-AI-era interviews I conducted, most engineers contemplate ideas, change their minds, and ask clarifying questions. Folks mindlessly using AI don’t do this; they just treat me as the prompt input and repeat it back. Whether or not they were using AI (ultimately I can't know), they still fail to meet my bar.
Sure, some more clever folks will mix or limit their LLM usage and get past me, but oh well.
I interviewed a guy in person and he paused for 5 seconds, then wrote a perfect solution. I tried making the problem more and more complicated and he nailed it anyway, also after a brief pause. We were done in half the time.
Maybe he just memorized the solution, I don’t know.
Would you fail that guy?
I might hire him, but I would insist he clock out for his 5-second pauses. We can’t have him wasting company time like that.
you pay devs hourly?
Apparently by the second. Don't blink too often.
I’m running a high precision outfit over here ya know
It depends, I had some interviews like this that I suspected. For context, most of the interviews I conduct are technical design related where we have a discussion, less coding. So in those it is quite open ended where we will go, and there are many reasonable solutions.
In those cases where I’ve seen that level of performance, there have been (one or more of):
- Audio/video glitches.
- Candidate pausing frequently after each question, no words, then sudden clarity and fluency on the problem.
- Candidate often suggesting multiple specific ideas/points in response to each question I ask.
- I can often see their eyes reading back and forth (note: if you use AI in an interview, maybe don't use a 4K webcam).
- Way too much specificity when I didn’t ask for it. For example, the topic of profiling a Go application came up, and the candidate suggested we use go tool pprof along with a few specific arguments that weren’t relevant; later I found the exact same example commands verbatim in the documentation.
In all, the impression I come away with in those types of interviews is that they performed “too well” in an uncanny way.
I worked at AWS for a long time and did a couple hundred interviews there. The best candidates I interviewed were distinctly different in how they solved problems and how they communicated, in ways that reading from an LLM response can’t resemble.
The point is that I interviewed the guy in person and he nailed it 200%. If you had interviewed him online, you would likely have come to the conclusion that he’s a fake per the criteria you specified, wouldn’t you?
It’s not a rubric I’m checking off for interviews. And in person it’s more straightforward to assess a candidate than questioning whether they are using any aids over video… what’s your point?
He made the point clearly, stop dodging the question...
> most engineers contemplate ideas, change their mind, ask clarifying questions
I don't disagree at all. I find it slightly funny that in my experience interviewing for FAANG and YC startups, the signs you mentioned would be seen as "red flags". And that's not just my assumption: when I asked for feedback on the interview, I have multiple times received feedback along the lines of "candidate showed hesitation and indecision with their choice of solution".
I work for a FAANG and have done interview training and numerous interviews. We are explicitly trained that candidates should ask questions, second-guess themselves, etc.
Hotshot FAANG and YC startups don't want humans, they want zipheads[0].
[0] https://www.urbandictionary.com/define.php?term=Ziphead
Yeah, that is definitely something that is subject to the interviewer's opinion and maybe company culture. To me, question asking is a great thing, though the candidate eventually needs to start solving.
Jumping straight to the optimal solution may also indicate that the candidate has seen the problem before.
The funny thing is, they don’t. They often jump to a solution that lacks in many ways, because it barely addresses the few inputs I gave (since they asked no follow up, even when I suggest they ask for more requirements).
Can I ask - out of the 60 interviews, roughly how many times did you suspect AI usage?
Probably about 10 or so.
The real problem will come in 5 years, when the current university students who are having their brains melted by AI somehow luck into entry-level positions and can never get to senior level because they’re too reliant on AI and literally don’t know how to think for themselves. There will never again be as many senior engineers as there are today. There won’t be any good engineers left to hire.
Look around you. 15 years ago we didn’t have smartphones, and now kids are so addicted to them they’re giving themselves anxiety and depression. Not just kids, but kids have it the worst. You know it’s gonna be even worse with AI.
Most departments at companies run on zero to two good engineers anyway. The rest are personality and nepotism hires limping along some half-baked project or sustainment effort.
Most people in my engineering program didn’t deserve their engineering degrees. Where do you think all these people go? Most of them get engineering jobs.
I’m gonna assume you’re being facetious here. I’ve been in tech for 15 years and I’ve never met a “nepotism hire”. Most of my coworkers have been incredible people.
But in case you’re serious, there’s an old saying that says if everywhere you go smells like shit maybe it’s time to check your shoes.
For our coding interviews we encourage people to use whatever tools they want. Cursor, Claude, none, doesn’t matter.
What I’m looking for is strong thinking and problem solving. Sometimes someone uses AI to sort of parallelize their brain, and I’m impressed. Others show me their aptitude without any advanced tools at all.
What I can’t stand is the lazy AI candidates. People who I know can code, asking Claude to write a function that does something completely trivial and then saying literally nothing in the 30 seconds that it “thinks”. They’re just not trying. They’re not leveraging anything, they’re outsourcing. It’s just so sad to see how quick people are to be lazy; to me it’s like ordering food delivery from the place under your building.
AI is breaking more than interviews. I recently overheard someone who is studying to be a psychiatric nurse practitioner (they are already an RN) via an online program say “ChatGPT is my new best friend.” We are doomed.
I am teaching a coding class, and we had to switch to in-person interview/viva assessments of the code written by students to deal with AI-written code. It works, but it requires a lot of extra effort on our side. I don't know if it is sustainable...
Why wouldn't something like this work?
1. Get students to work on a more complex project than usual (relative to previous cohorts). Let them use whatever they want and let them know that AI is fine.
2. Make them come in for an in-person exam with questions about the why of the decisions they had to make during the project.
And that's it? I believe that if you can a) produce a fully working project meeting all functional requirements, and b) argue about its design with expertise, you pass. Do it with AI or not.
Are we interested in supporting people who can design something and create it or just have students who must follow the whims of professors who are unhappy that their studies looked different?
A project doesn't quite work for my course, as we teach different techniques and would like students to know each of them.
But yes, we currently allow students to use AI provided their solution works and they can explain it. We just discourage using AI to generate the full solution to each problem.
If I read your suggestion correctly, you're saying the exam is basically a board explaining their decision making around their code. That sounds great in theory but in practice it would be very hard to grade. Or at least, how could someone fail? If you let them use AI you can't really fault them for not understanding the code, can you? Unless you teach the course to 1. use AI and then 2. verify. And step 2 requires an understanding of coding and experience to recognize bad architecture. Which requires you to think through a problem without the AI telling you the answer.
Yep, you can fault them for not understanding it.
Exactly the same as in professional environments: you can use LLMs for your code but you've got to stand behind whatever you submit. You can of course use something like cursor and let it go free, not understanding a thing of the result, or you can step-by-step do changes with AI and try to understand the why.
I believe if teachers relaxed their emotions a bit and adapted their grading system (while also increasing the expected learning outcomes), we would see students who are trained to understand the pitfalls of LLMs and how to get the most out of them.
If you grade on pass/fail it’s easy to grade. Not every course uses letter grades…
If you let people use AI they are still accountable for the code written under their name. If they can’t look at the code and explain what it’s doing, that’s not demonstrating understanding.
Companies being forced to overhaul their interview processes is certainly an unexpected side effect of the rise of LLMs.
On the other hand, encouraging employees to adopt "AI" in their workflows while at the same time banning "AI" in interviews seems a bit hypocritical, at least from my perspective. One might argue that this is about dishonesty, and yes, I agree. However, AI-centric companies apparently include AI usage in employee KPIs, so I'm not sure how much they value the raw/non-augmented skill set of their individual workers.
Of course, in all other cases, not disclosing AI usage is quite a dick move.
If companies are going back to physical onsites but are using remote interviewers, then maybe it makes more sense to have interview centers. They'd be like testing centers --- device lockers, multiple cameras, nearby proctor, shitty desktops from the 2010s with even worse keyboards --- but just for interviews.
It's funny how this article seems to repeat itself halfway through, like it was written by AI
Keep reading, the author repeats themselves 3-4 times in a loop. I eventually had to give up reading the same thesis explained over and over again.
Interviews are fundamentally really difficult to get right. On one side, you could try to create the best, fairest standardized interview process based on certain metrics, but people will eventually optimize for how well they can do on the standardized interview, making it less effective. On the other side, you could create a customized ad hoc interview to try to learn as much about the candidate as possible, and have them do a work trial for a few days to ensure they're the right candidate, but this takes a ton of time and effort from both the company and the candidate.
I personally think the best interview format is the candidate doing a take-home project and giving a presentation on it. It feels like the most comprehensive yet minimal way to assess a candidate on a variety of metrics: it tests coding ability through the project, real system design rather than hypotheticals, communication skills, and depth of understanding when the interviewer asks follow-up questions. It would be difficult to cheat this with AI, since you would need a solid understanding of the whole project for the presentation.
Maybe it’s time to ask deeper questions: ask how to reduce complexity while preserving meaning, do real pair programming with shared remote code, and simulate the day-to-day environment as much as possible. Not all companies are looking for the same kind of developer. Some don’t really care about the person as long as the technical skills are there; some don’t look for the brightest in favor of a better cultural match with the team.
Genuine remote interviews aren’t easy, but a lot also depends on the interviewer’s skills.
We’ve been told for years that AI will replace developers; would Elon replace the engineers working on his rockets’ software with AI? It depends on what’s at stake. I bet those interviews are quite specific and thoroughly researched.
We can find better ways to create a real connection in interviews and still make sure the technical skills are sound without leetcode. We also need developers who have mastered the use of AI, can think and design before coding, and have deep code-review skills.
I’ve mentioned it before, but it’s not just that people “cheat” during interviews with an LLM…it’s that they have atrophied a lot of their basic skills because they’ve become dependent on it.
Honestly, the only ways around it for me are
1. Have in-person interviews on a whiteboard. Pseudocode is okay.
2. Find questions that trip up LLMs. I’m lucky because my specific domain is one LLMs are really bad at, since we deal with hierarchical and temporal data. The questions are easy for a human, but the multi-dimensional complexity trips up every LLM I’ve tried.
3. Prepare edge cases that require the candidate to reconsider their initial approach. LLMs are pretty obvious when they throw out things wholesale.
Rather than trying to trip up the LLM I find it’s much easier to ask about something esoteric that the LLM would know but a normal person wouldn’t.
That basically amounts to the same thing. LLMs are pretty good at faking responses to conversational questions.
Universities and education overall have also had their foundations detonated by AI. Some Stanford classes now give tricky 15-minute exams to reduce the chance of cheating with AI (it takes some time to type, so the point is to make the exam so short that one can't physically cheat well). I am not sure what the solution to this mess is going to be.
Several possible solutions:
1. Strict honor code that is actually enforced with zero tolerance.
2. Exams done in person with screening for electronic devices.
3. Recognize that generative AI is going to be ambient and ubiquitous, and rework course content from scratch to focus on the aspects that only humans still do well.
Only 3) could scale, but then exam takers not using AI would fail unless they are geniuses in many areas. 1) and 2) can't be done when 50-70% of your course consists of online students (Stanford mixes on-campus students with CGOE external students who take the exams off-campus), who are important for your revenue. Proctoring won't work either, as one could have two computers, one for the exam and one for the cheating (done for interviews all the time now).
Well, realistically, exam takers not using AI will fail in any sort of real-world technical, professional, or managerial occupation anyway. They might as well get used to it. Not being able to use LLMs effectively today is the equivalent of not knowing how to use Windows 20 years ago.
I still think "how many golf balls fit in a 747" is a good interview question. No one needs to give me a number, but someone could really wow me by outlining a real plan to estimate this: tell me how you would subcontract estimating the size of the golf ball and the plane. It's not about a right or wrong answer but about explaining to me how you think. I do software and hardware interviews and always did them in person so we can focus on how a candidate thinks. You can answer every question wrong in my interview and still be above the bar because of how you show me you can think.
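For what it's worth, here is a rough back-of-the-envelope sketch of how that estimate might go; every number below is my own assumption for illustration, not part of the original question.

```python
import math

# All figures are assumptions for a Fermi-style estimate, not authoritative data.
ball_diameter_cm = 4.27                                             # a regulation golf ball is roughly 42.7 mm across
ball_volume_cm3 = (4 / 3) * math.pi * (ball_diameter_cm / 2) ** 3   # ~41 cm^3 per ball

plane_volume_m3 = 900        # assumed usable interior volume of a 747, an order-of-magnitude guess
packing_fraction = 0.64      # random close packing of equal spheres wastes roughly a third of the space

usable_cm3 = plane_volume_m3 * 1e6 * packing_fraction
print(f"~{usable_cm3 / ball_volume_cm3:,.0f} golf balls")  # lands on the order of ten million
```

The interesting part, per the comment above, isn't the final number; it's whether the candidate can decompose the problem into ball size, usable volume, and packing loss, and say which assumption they would nail down first.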
Some of the best hires I’ve ever made would’ve tanked that sort of interview question. Being able to efficiently work through those puzzles is probably decent positive signal, but failure tells me next to nothing, and a question that can fail to give me signal is a question that wastes valuable time — both mine and theirs.
A format I was fond of when I was interviewing more was asking candidates to pick a topic — any topic, from their favourite data structure to their favourite game or recipe — and explain it to me. I gave the absolute best programmer I ever interviewed a “don’t hire” recommendation, because I could barely keep up with her explanation of something I was actually familiar with, even though I repeatedly asked her to approach it as if explaining it to a layperson.
I feel like the stereotype about this question is different from your approach, though: supposedly, it started with quirky, new tech-minded businesses using it rationally to see people who could solve open-ended problems, and evolved to everyone using it because it was the popular thing. If someone still uses it today, I would totally expect the interviewer to have a number up on their screen, and answers that are too far off would lead to a rejection.
Besides, it's too vague of a question. If I were asked it, I would ask so many clarifying questions that I would not ever be considered for the position. Does "fill" mean just the human/passenger spaces, or all voids in the plane? (Cargo holds, equipment bays, fuel and other tanks, etc). Do I have access to any external documentation about the plane, or can I only derive the answer using experimentation? Can my proposed method give a number that's close to the real answer (if someone were to go and physically fill the plane), or does it have to be exactly spot on with no compromises?
The problem is that many people want to grade the answer for correctness instead of for thinking. It's easy to look up the correct answer, and you can tell HR the candidate was off by some amount, so "no". It's much harder to tell HR that even though the candidate was within some margin of correct, you shouldn't hire them because they can't think (despite getting a correct answer).
If AI can solve all of your interview questions trivially, maybe you should figure out how to use AI to do the job itself.
The questions were just a proxy for the knowledge you needed. If you could answer the questions you must have learned enough to be able to do the work. We invented a way to answer the test questions without being able to do the work.
To continue the point. If the knowledge you need is easily obtained from an LLM then knowledge isn’t really necessary for the job. Stop selecting for what the candidate knows and start selecting for something more relevant to the job.
An accurate test would just be handing them a real piece of work to complete. Which would take ages and people would absolutely hate it. The interview questions are significantly faster, but easy to cheat on in online interviews.
The better option is to just ask the questions in person to prevent cheating.
This isn’t a new problem either. There is a reason certifications and universities don’t allow cheating in tests either. Because being able to copy paste an answer doesn’t demonstrate that you learned anything.
I don't understand how offline interviewing is needed to catch AI use, not counting take-homes.
Surely just asking the candidate to lean back a bit in the web interview and then having a regular talk, without them reaching for the keyboard, is enough? I guess they could have some in-between layer listening to the conversation and posting tips, but even then it would be obvious someone's reading from a sheet.
That type of cat-and-mouse game is ultimately pointless. It's fairly easy to build an ambient AI assistant that will listen in to the conversation and automatically display answers to interview questions without the candidate touching a keyboard. If the interviewer wants to get any reliable signal then they'll have to ask questions that an AI can't answer effectively.
There are interview cheating tools which listen in on the call and show a layer over the screen with answers which doesn’t show on screen shares.
So you’d only be going off how they speak which could be filtering out people who are just a bit awkward.
Interviews should be in-person.
In every practical sense, online interviews are part of the early screening process. The sheer number of applicants means that you need to do some filtering before inviting people to on-site interviews.
If you make the first interview in person, most people filter themselves out because they aren’t in the country or can’t be bothered.
Do a first phone screening to agree on the details of the job and the salary, but the actual knowledge testing should be in person.
So the process is now:
1. Embellish your resume with AI (or have it outright lie and create fictional work history) to get past the AI screening bots.
2. Have a voice-to-text AI running to cheat your way past the HR screen and first round interview.
3. Show up for in-person interview with all the other liars and unscrupulous cheats.
No matter who gets hired, chances are the company loses and honest people lose. Lame system.
Modern problems sometimes require old-fashioned solutions.
While I agree LLMs have forever changed the interviewing game, I also strongly disagree with deeming slop code "perfect" and "optimal".
There's a lot of shitty code made by LLMs, even today. So maybe we should lean in and get people to critique generated code with the interviewer. Besides, being able to talk through, review, and discuss code is more important than the initial creation.
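As a sketch of what that exercise could look like (a hypothetical snippet written for illustration, not something from the thread): hand the candidate a short, plausible-looking piece of generated code and ask them to review it out loud.

```python
# Hypothetical review exercise: ask the candidate what they would flag here and why.

def add_user(name, users=[]):          # mutable default argument: the same list is shared across calls
    users.append(name.strip().lower())
    return users

def find_user(users, name):
    # hidden coupling: add_user lowercases names, so exact comparison here quietly fails for mixed-case input
    for i in range(len(users)):        # index-based loop where a membership test (`name in users`) would be clearer
        if users[i] == name:
            return i
    # falls through and implicitly returns None; callers have to remember to check for it
```

The point isn't the specific bugs; it's whether the candidate can spot them, explain the consequences, and prioritize what they'd fix first, which is much closer to day-to-day work than producing a fresh leetcode solution.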
Interview questions are a genre of their own though. They are:
1. Very commonly repeated across the internet
2. Studied to the point of having perfect solutions written for almost any permutation of them
3. Very short and self-contained, not having to interact with greater systems and usually being solvable in a few dozen lines of code
4. Of limited difficulty (since the candidate is put on the spot and can't really think about it much, you can only make it so hard)
All of that lends them to being practically the perfect LLM use case. I would expect a modern LLM to vastly outperform me in almost any interview question. Maybe that changes for non-juniors who advance far enough to have niche specialist knowledge, but if we're talking about the generic Leetcode-style stuff, I have no doubts that an LLM would do perfectly fine compared to me.
It's an indictment of how bad coding interviews are/were
So many words just to say the interview process is broken. It has always been that way. Does anyone really think that someone who prepared for and solved a few leetcode questions can plan a complete distributed system?
The reality is that no correlation has been found between interview success and success at work, especially for software engineers. AI tools didn't change that, and neither did remote interviews.
Welp, back to nepotism, I guess.
I interviewed a guy a couple of months ago who had perfect responses to every tech question I threw at him. He even did really well in the whiteboarding session. The only thing was, he would wait 10-20 seconds to respond to everything. Not long enough to get called out, but just long enough to notice. He aced everything. He's a horrible employee, a senior who doesn't seem to know anything. I almost suggested he start using his interview LLM when regular folks were asking him questions.
I do in-person whiteboard interviews.
> Then there’s the pacing. A human pauses to think. AI-assisted candidates pause to receive a perfect answer. You can mostly feel the rhythm shift. Their eyes drift slightly. You think we don’t see that, don’t you?
I really hope most interviewers have at least the barebones skill to discern AI-using interviewees, as the author claims to. I'm trying to get hired at the junior level, and the thought of competing with people who have no qualms about effectively cheating in real time is pretty scary. I'm human; I will inevitably not know something or make minor missteps. Someone with an AI or a quick-witted friend by their side can spit out perfect, fully-rounded, flawless, HR-optimized stories with satisfying conclusions for the behavioral questions, and basically always-correct, optimal solutions for the technical questions.