Commentary

The Skill AI Can't Replace ... maybe


I have been in a video podcast rabbit hole, self-imposed and happy about it. I came across an interview between Sinead Bovell and an AI economist, Ajay Agrawal. The topic was what skills you need to stay relevant and employed in the age of AI.

The main takeaway was that judgement is the differentiator. Now … we all use judgement to varying degrees in our work. The more seasoned you are, the better your judgement. So basically, to survive you need to be senior plus. Now this may sound like the classic “need experience to get experience” catch-22 cranked up to 1000. And yes … yes it is. But I don’t want to talk about the skill erosion of current and future software engineers. That is a very real issue that leaders seem to see as a later problem, and the whole topic deserves its own article or three. So let’s just focus on this magic bullet called judgement.

So let’s define judgement

Judgement is the level up from decision making. Most anyone, unless they are drowning in anxiety, can make a decision. The skill is in making decisions in a context where:

  • The data points are incomplete or ambiguous
  • Things are complex
  • There’s no clear “right” answer
  • Trade-offs are everywhere
  • You are accountable for the result

Judgement is about awareness of past, present, and potential circumstances, and then using your experience and hard-learned lessons to navigate the problem in order to achieve a specific result. And yes, this is already a key skill for employability. The difference may be that AI makes the lack of this skill more … visible.

Now how can we cultivate it?

Okay, let’s start the listicle ( ^_^)/

1. Engage with the problem first

AI is really good at giving you “a solution”. It is trained to always do so. But you must give it all the context. You must understand the real problem you’re trying to solve, plus the constraints, hopes, and wishes of your team. This goes beyond surface level. You need to know the problem well enough to explain it to someone with no context, succinctly and thoroughly, such that they could attempt to solve it as well. Without this, you get things that seem to work but break down under stress.

Take engineers’ favorite use of AI: writing tests. Many hate it … I understand, but I’d argue the tests are an outline of what to solve for any given piece of code. They are declarative about what the feature set should do. In my experience, AI will often get a little too imperative with tests. After all, there are many examples, and thus training data, of faulty and imperative-heavy tests out there. So we end up checking implementation instead of behavior.

This may seem good enough if you don’t understand that the problem tests are supposed to solve is:

how can I ensure that next year, when I or some other rando engineer starts shuffling stuff around, we will know right away, with some level of confidence, if the intended behavior has been broken?
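To make that concrete, here is a rough sketch of the difference. This is my own illustration, not something from the interview: it assumes Python with pytest, and the discount function and its internal helper are hypothetical stand-ins.

from unittest.mock import patch

def _lookup_discount_rate(tier: str) -> float:
    # Hypothetical internal helper, the kind a refactor might rename or inline.
    return {"gold": 0.10}.get(tier, 0.0)

def apply_discount(price: float, customer: str) -> float:
    # Hypothetical feature under test: gold customers get 10% off.
    return round(price * (1 - _lookup_discount_rate(customer)), 2)

# Implementation-coupled, the flavor AI often produces: it pins down HOW the
# code does its job, so renaming or inlining the helper breaks the test even
# though the behavior is unchanged.
def test_discount_calls_internal_helper():
    with patch(f"{__name__}._lookup_discount_rate", return_value=0.10) as rate:
        apply_discount(100.00, customer="gold")
        rate.assert_called_once_with("gold")

# Behavior-focused: it states WHAT the feature promises, so next year's
# shuffling only fails the test if the promise itself is broken.
def test_gold_customers_get_ten_percent_off():
    assert apply_discount(100.00, customer="gold") == 90.00

def test_unknown_customers_pay_full_price():
    assert apply_discount(100.00, customer="stranger") == 100.00

Both versions pass today; only the second one still tells you something useful after the shuffle.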

In all honesty, tests may be a better way of getting the code you need from AI. You know, testing first should really be its own development paradigm… since it can drive the development … if only someone created a whole process about this …

2. Learn to question the outputs

AI is quite good at luring you into a false sense of security. It is very confident about everything it says, and humans love to defer to the confident one in the room. I don’t think AI has the capability (yet) to intentionally deceive and mislead. It is just doing what it was trained to do: provide an answer that has a high probability of looking right, based on some complex relationships it has built between parts of words and concepts.

But you must remain skeptical of everything AI pops out. It is trying its best, but you are the adult in the room who should know better about what the solution needs to look like. You need to be able to judge whether a solution effectively fits the real problem at hand. How can you judge that properly? Well …

3. Know what you are doing fundamentally

In our workflows, AI, specifically LLMs, is meant to just speed up what you could already do yourself. So if you can’t come up with the solution given some time and a search box, you might be in over your head and need to slow down. You are missing some key information to make a good judgement. How can you evaluate the tradeoffs of a certain approach if you don’t understand how the code works? If you don’t know what an effective solution would look like? Take time to learn the how and why so you can make an informed decision about the direction of your code.

4. Practice having hard conversations

Good judgement may mean telling people things they don’t want to hear. AI can generate code, and tests, and documentation, but it can’t tell your PM that the timeline is unrealistic. It won’t push back on technical decisions that will haunt you and the team six months from now. That judgement call is all on you. But … maybe you won’t be there anyway?

I want to make sure this doesn’t sound like you have to have all the right answers and be the hero everyone deserves at the company. Sometimes you will hit the nail on the head and prevent some level of disaster, but just as often you may be way off key and simply waste everyone’s time. Good judgement is a muscle; you gotta lift, bro.

A side note. Just because you may have good judgement about something, doesn’t mean others do or even agree with your proposed approach. You can be the person that helps the team stop and take that moment to think about things, but be careful about constantly being a blocker. As a part of a team, sometimes you have to roll with judgements, good or bad. Just take note of ways to mitigate any fallout so that the team can learn from it and get back on track.

5. Reflect, reflect … please just reflect

You won’t get anywhere if you can’t even attempt to objectively reflect on judgement calls you’ve made. Others may gleefully ensure you reflect on some level whether you want to or not, but you really should take that initiative yourself, habitually. Build up some pattern recognition so you can understand not just what went wrong and how you can do better next time, but why you made that judgement in the first place.

Maybe there was key information that you didn’t realize you needed. Maybe you didn’t understand all the stakeholders that impacted the desired result. Evaluate the “why” so that you can identify a better process that can more holistically improve your judgement.

Maybe start a career journal. Our minds love to selectively remember things, so write it down sometimes. Or maybe do audio recordings if that is more your thing. Whatever helps you build that reflection muscle.

Wrapping up

I recommend you watch the whole interview yourself and form your own opinions about this. I have cited it below. As with most things worth discussing, there’s nuance here, some of which I may not have captured fully. Feel free to reach out and continue the discussion with me!

Until next time,

Beatrice Williams

Nov 7, 2025

References

  1. Bovell, S. (2025, October). The AI Economist: The skill you need to stay employed in the age of AI [Video]. YouTube. https://www.youtube.com/watch?v=UhfpHwcrx6c