Why “Show Me the Code” Matters in the Age of Generative AI
- Tobias Kilga
- May 21
- 14 min read
Updated: May 23
On my laptop, a sticker declares: “Nice story, now show me the code.”

I got that from a recent event at Squer, and it stuck with me. Every time I see that bold mantra, I’m reminded of a veteran programmer’s no-nonsense challenge: Show me the code. The phrase (popularized by Linux creator Linus Torvalds) became a little inside joke in software circles, but it carries a serious truth. In coding, it’s not enough to spin a fancy tale about a solution; you have to back it up with working code.

Lately, though, I’ve realized that “show me the code” isn’t just about code at all – it’s a rallying cry we desperately need in every field as we navigate a world awash in easy answers from generative AI. In an age when AI will happily generate content or ideas with a few keystrokes, that old sticker feels like a prophetic warning. Whether you’re a developer, writer, strategist, designer, or academic, the message is the same: Don’t just give me a nice-sounding story or AI-generated fluff – show me the substance.

The rise of generative AI has made it incredibly easy to ask for and receive almost anything with zero effort or expertise. It feels like magic when you first ask a chatbot for a marketing plan or some Python code and get a plausible answer in seconds. But this very convenience is creating a vacuum where rigorous research, critical thinking, domain expertise, and real-world experience once stood. The sticker on my laptop isn’t just snark; it’s a challenge to all of us: in the era of GenAI, how do we reclaim depth and real understanding over surface-level outputs?
The Age of Instant Gratification: AI and the Illusion of Depth
We have entered the age of instant answers. Need a snippet of code? Ping an AI and you’ll get one. Need a business strategy outline, a lesson plan, or a listicle for your blog? An AI assistant can crank out a passable version while you sip your coffee. The temptation of GenAI tools is that they deliver results now. No scratching your head, no poring over documentation or journals – just type a prompt and watch the solution materialize. The outcomes are often impressively coherent. In fact, one education expert noted that today’s AI can produce “convincing (though uninspired) college student quality writing” on virtually any prompt within seconds. If a bot can churn out a decent essay or a working chunk of code in the time it takes to blink, it’s fair to ask: why would we bother to do the hard work ourselves?

That question – “why bother?” – captures the siren song of convenience. Why spend an hour debugging code when Copilot can fix it in a flash? Why labor over a design draft when a generative model can spit out ten variations in an instant? For many tasks, AI provides a shortcut that feels like skipping to the ending of a story without wading through the chapters. It’s immediate gratification at work. We humans have always loved our shortcuts, and GenAI is the ultimate shortcut: a genie that does the heavy lifting while we sit back.

But as every coder knows, shortcuts come with trade-offs. When you copy-paste code you don’t understand, bugs and security holes slip in. When you grab an AI-written article without vetting it, subtle errors and missing nuances fly under the radar. The truth is, the easier it becomes to get answers, the easier it is to fool ourselves that we’ve done enough. We risk becoming satisfied with a veneer of an answer – the stack of paper that looks like a thesis, the app that compiles and runs – without digging any deeper to see if it’s built on solid ground.
Why “Show Me the Code” Resonates Beyond Programming
The rise of generative AI is not just changing what we produce; it’s changing how we think (or don’t think). With an ever-ready oracle at our fingertips, many of us have started to skip the thinking part altogether. Why critically evaluate when the AI sounds so confident? Why double-check facts when the paragraph reads so smoothly? This erosion of scrutiny is starting to show. A recent study warned that AI might be actively “eroding its users’ critical thinking skills.” In that survey of professionals across various fields, those who placed the most trust in AI’s answers ended up thinking less critically about the results. It’s not exactly shocking – if you trust the tool completely, you won’t question it – but it is a trap. As the researchers pointed out, the more we rely on AI’s seeming expertise, the more we risk letting dangerous errors slip by unnoticed. In other words, blind trust in a convenient answer can blind us to the answer’s flaws.

The data is sobering. In the Microsoft/CMU study, knowledge workers admitted that for 40% of the tasks they completed with generative AI, they applied no critical thinking at all. In two of every five tasks, the humans just took the AI output and ran with it, no questions asked. It’s like handing in an assignment straight from Wikipedia without reading it – except now it’s an AI doing the writing, and it sounds authoritative enough that you might not even feel the need to read it. This kind of cognitive offloading isn’t entirely new (remember the old “Google effect,” where people stopped memorizing info they could just search on a whim?). But GenAI has supercharged the effect: why reason through a problem when an AI will give you an instant answer that looks reasoned? Over time, it’s as if our mental muscles for analysis and problem-solving are being left to atrophy. As one group of researchers put it, by automating routine tasks and removing those “routine opportunities to practice our judgment,” we risk leaving our cognitive muscles “atrophied and unprepared” for the tough problems. Use it or lose it – and lately, we haven’t been using it.

Perhaps the most alarming aspect is how this easy-answer culture is affecting learning and expertise. Teachers are finding that students will happily let ChatGPT write their essays, effectively bypassing the need to reason critically or do research. Why struggle through analyzing Shakespeare or running a lab experiment when an AI can generate a summary or a dataset in seconds? In the short term, the assignment is done; in the long term, the student has learned nothing – except that pressing the “make my life easy” button yields results. The erosion of critical thinking isn’t just a theoretical worry; it’s visible in classrooms and offices. It shows up when a junior developer can’t explain the code they copied from an AI, or when a manager presents an AI-generated strategy that falls apart under basic questioning. We’re witnessing a slow hollowing-out of expertise: lots of outputs, not enough understanding.
It’s tempting to think this is only a problem for programmers or students, but the depth deficit is spreading across every field. Generative AI doesn’t discriminate – it’s equally happy to help a marketer, a designer, or a researcher produce work without the weight of experience behind it. Let’s look at how this shift is affecting different domains:
Developers: Lost in the Shortcut
Software engineers now have AI coding assistants that suggest lines or even whole functions on the fly. It’s great for productivity – until it isn’t. Developers might accept code that “works” without truly understanding how or why it works, missing out on the hard-won insights that come from debugging and reading documentation. The community Q&A site Stack Overflow has reported a huge decline in activity (questions on the site have plunged by over 70% since ChatGPT’s debut) because people are turning to AI for quick fixes instead of discussing with peers. But those quick fixes can be a double-edged sword: AI can confidently provide a snippet that looks legit but is subtly wrong or inefficient, and a less experienced dev might never catch it. The result? Codebases full of magical solutions that no one truly owns or can maintain. The ethos of “show me the code” for proof and peer review gets lost when the code comes from a black-box AI and everyone just assumes it’s correct.
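To make that concrete, here is a minimal, hypothetical illustration in Python (not taken from any real assistant transcript) of how a generated snippet can look perfectly reasonable, pass a quick glance, and still be subtly wrong:

```python
# Hypothetical illustration: two versions of "split items into batches of `size`".
# Both look plausible; only one is correct.

def batch(items, size):
    """Split items into consecutive batches of at most `size` elements."""
    return [items[i:i + size] for i in range(0, len(items), size)]

def batch_plausible_but_wrong(items, size):
    # Looks tidy, even "optimized" – but it only iterates over complete
    # batches, so any trailing partial batch is silently dropped.
    full = (len(items) // size) * size
    return [items[i:i + size] for i in range(0, full, size)]

if __name__ == "__main__":
    data = list(range(10))
    print(batch(data, 4))                      # [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
    print(batch_plausible_but_wrong(data, 4))  # [[0, 1, 2, 3], [4, 5, 6, 7]] – 8 and 9 vanish
```

A reviewer who insists on “show me the code – and the edge cases” spots the missing trailing batch immediately; someone who just pastes the answer may not notice until real data goes missing.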
Writers: Polished but Hollow
Bloggers, copywriters, social media managers – anyone who works with words – now have an army of AI content generators at their disposal. Need 10 variations of a product description? Done. A catchy headline? Here are five. But this flood of AI-generated text has a way of all sounding the same: a kind of generic gloss that lacks the spark of genuine insight. Writers used to dig into research, interview experts, and refine drafts to ensure quality and originality. Now there’s a temptation to take the first AI draft and hit “publish.” The internet is already groaning under the weight of what one article dubbed “AI slop,” low-quality content generated en masse. This slop can look like real writing, but it often misrepresents facts, lacks nuance, or just rehashes existing material. For the content creator, over-reliance on AI means skipping the steps that give content depth – the fact-checking, the creative brainstorming, the personal experience. An AI might be able to mimic an authoritative tone, but it can’t replicate the credibility of someone who’s spent years in the field. The danger is that we flood our blogs and feeds with high-volume, low-substance content. Lots of code, if you will, but no real algorithm behind it.
Strategists: Planning by Paint-by-Numbers
In business, marketing, or policy, coming up with a winning strategy or insightful plan is as much an art as a science. It requires analyzing data, understanding context, and often thinking outside the box. Generative AI can churn out a standard strategy document or a SWOT analysis in moments. It will dutifully list your strengths, weaknesses, opportunities, and threats, just as it was trained on countless examples. But strategies crafted this way tend to be paint-by-numbers. They might look polished, but they often lean on conventional wisdom and clichés (“leverage synergies,” anyone?) because the AI cannot truly innovate or understand your unique situation. If a strategist leans on these AI outputs without applying their own critical eye, they risk presenting a plan that’s flavorless and not tailored to reality. We’ve seen business leaders tout AI-generated plans that collapse when they encounter real-world complexity or competition. The slide deck was pretty – the ideas were paper-thin. Without the hard work of market research, critical questioning, and creative iteration, a strategy is just a nice story with no code – no executable steps that hold up in practice.
Designers: Aesthetics without Meaning
The design world is embracing AI for everything from logo creation to UI mockups to video game art. These tools can indeed spark inspiration and save time – for instance, generating dozens of concept art pieces with different styles at the click of a button. But design isn’t just about pretty pictures; it’s about solving problems and communicating messages. An AI might mash up existing visual patterns to give you a slick-looking logo, yet that logo might inadvertently copy another brand’s concept or fail to resonate with the target audience in the way a human designer’s careful work would. If designers start relying on AI to do the heavy lifting, they might skip the sketching, the prototyping, the user-testing – all the “unseen” labor that actually makes a design effective. There’s also a risk of a homogenization of style: since generative models pull from the same massive pool of existing designs, they tend to produce outputs that trend toward the average. Without a designer injecting original creativity and thought, you end up with designs that look good at first glance but have no soul or distinct identity. In short, “show me the code” in design terms might be “show me the reasoning.” What was the thought process? If the answer is “I just took what the AI gave me,” that’s not exactly a confidence booster.
Students and Scholars: Learning without Understanding
Perhaps nowhere is the impact of GenAI more hotly debated than in education and academia. Students have discovered that ChatGPT will not only write essays but also solve math problems, generate code for assignments, and even answer exam questions. Ethical issues aside, this habit is robbing students of the very point of education: to learn how to think, research, and problem-solve. A student who turns in an AI-written essay on Shakespeare might get a decent grade, but they sidestep the entire analytical process of actually reading the play and forming an argument. Over time, they may earn credentials without mastering the material – a veneer of knowledge with no foundation. Educators worry (with good reason) that a generation relying on AI to do their homework will emerge with paper-thin expertise.

In higher academia, we’re seeing early-career researchers ask AI to summarize papers or even write literature reviews. The result? Sometimes the summaries sound plausible but contain made-up references or misinterpreted results – an academic faux pas that could be caught only by someone who actually read the sources. Relying on AI in research can lead to embarrassing mistakes and a breakdown of rigor. The academic mindset demands skepticism, verification, and depth – “show me the evidence” as a parallel to “show me the code.” If we don’t cultivate that mindset, we risk a scholarly community that trusts fabricated sources and superficial analysis simply because it was served up neatly by an AI.
In all these fields, the pattern is the same: generative AI makes it deceptively easy to produce something that looks fine at a cursory glance. But the appearance of substance is not substance. Real expertise, whether in coding or writing or design, comes from wrestling with the material, asking hard questions, and learning from mistakes. When AI takes over the wrestling part, we might end up with polished outputs devoid of the insight and rigor that come only from human critical thinking. It’s like fast food for the mind – it fills you up quickly, but it’s not exactly nourishing.
Reclaiming Substance: How to Stay Sharp in the GenAI Era
So how do we counter this trend? How do we reclaim critical thinking in a time of generative AI? The answer isn’t to shun the AI tools – they’re here to stay, and when used wisely, they can be incredibly powerful. The answer lies in how we use them and in cultivating a mindset that constantly asks, “Where’s the real code beneath this output?”
Don’t Skip the Hard Work
We must consciously choose depth over ease in our work and learning, using AI as a tool to enhance our skills – not replace them.

First and foremost, we need to bring a healthy dose of skepticism back to the table. That means whenever an AI gives us an answer, we channel our inner demanding engineer and say, “Okay, nice result – now show me the code.” In practice, this might look like double-checking the sources and facts in that AI-written article, or running and testing that AI-generated code with edge cases to see if it truly holds up. It means asking for evidence and explanation. Don’t accept “because the AI said so.”
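What does that look like on the code side? Here’s a minimal sketch (the function and its name are hypothetical stand-ins for whatever the assistant handed you) of poking an AI-written helper with the edge cases it never mentioned:

```python
# Minimal sketch: before trusting an AI-written helper, probe it with edge
# cases. `normalize_email` is a hypothetical stand-in for generated code.

def normalize_email(raw: str) -> str:
    return raw.strip().lower()

def test_normalize_email_edge_cases():
    # The happy path the chatbot demonstrated...
    assert normalize_email("  Alice@Example.COM ") == "alice@example.com"
    # ...and the awkward inputs a confident-sounding answer rarely covers.
    assert normalize_email("") == ""                                          # empty input
    assert normalize_email("BOB@EXAMPLE.COM\n") == "bob@example.com"          # trailing newline
    assert normalize_email("carol+news@gmail.com") == "carol+news@gmail.com"  # plus addressing

if __name__ == "__main__":
    test_normalize_email_edge_cases()
    print("Edge cases pass – now the nice story has code behind it.")
```

The specific function doesn’t matter; the habit of demanding evidence that the output holds up beyond the demo does.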
If the chatbot drafts a marketing plan, challenge its assumptions: where’s the data to back this up? If it suggests a design, probe why those choices were made. Essentially, we must become the critical editor or the senior engineer looking over the AI’s shoulder. The AI is a junior assistant at best – prolific and fast, but prone to mistakes and lacking real understanding. We have to supply the understanding and the quality control.
Reclaiming critical thinking also means embracing the process, not just the outcome. Sure, it’s faster to let the AI do the first draft, and that’s fine – but don’t let the first draft be the final draft. Dive into it. If you’re a developer, use the AI’s code suggestion as a starting point, then improve it, refactor it, comment it, make it yours. If you’re a student, use the AI to gather ideas or explain a concept, but then do your own reading and build your own argument – don’t just turn in the AI’s words.
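For the developer half of that advice, “make it yours” can be as simple as taking the terse thing the assistant produced and rewriting it until it carries your names, your guard rails, and your comments. A hedged before/after sketch (both functions are hypothetical examples, not anyone’s real suggestion):

```python
# 1. Roughly how an assistant might hand it over – terse, unvalidated, intent unclear:
def disc(p, c):
    return p - p * c / 100

# 2. After you take ownership – descriptive names, type hints, guard rails,
#    and a docstring that records the rule you actually verified:
def apply_discount(price: float, percent: float) -> float:
    """Return `price` reduced by `percent`, where percent must be 0-100."""
    if not 0 <= percent <= 100:
        raise ValueError(f"discount must be between 0 and 100, got {percent}")
    return round(price * (1 - percent / 100), 2)

if __name__ == "__main__":
    print(apply_discount(80.0, 25))   # 60.0
    print(apply_discount(19.99, 10))  # 17.99
```

The second version isn’t smarter; it’s owned – you can explain every line of it in a review.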
The key is to stay engaged.
We might save time with AI, but we should reinvest some of that time in deeper analysis and learning, not just move on to the next task on autopilot. There’s also a strong case to be made for deliberately practicing our craft without AI assistance on occasion. Writers still do writing exercises in longhand to keep their creativity sharp; developers solve coding problems from scratch to keep their skills honed. Think of it like going to the gym to keep your mental muscles in shape. If you always rely on the AI “elevator,” you’ll forget how to climb the stairs. So challenge yourself: write a blog post without any AI help just to see if you can articulate your thoughts clearly on your own. Or solve that bug through old-fashioned debugging before asking the AI. It might be slower and even frustrating, but it keeps you intimately familiar with the nuts and bolts of your work. Remember, “we lose what we don’t use” – if we stop exercising our creativity and reasoning, we shouldn’t be surprised when those abilities start to fade.
Critically, we need to foster a culture – in workplaces, schools, and communities – that values substance over superficial output. Managers, for example, can encourage their teams to explain the rationale behind AI-assisted work. Teachers can design assignments that require reflection on how answers were obtained, not just the answers themselves. The idea is to make sure we’re rewarding the process and the insight, not just a slick final product. In academia, some educators are already moving toward oral exams or iterative projects where students must discuss their thinking, precisely to ensure they can’t just submit AI-generated work undigested. In industry, imagine code reviews or content reviews where the question “Can you walk me through how you arrived at this?” becomes standard. If someone presents a strategy document, ask which parts were human insights versus AI suggestions, and how they validated the AI’s input. This isn’t about shaming the use of AI – it’s about ensuring human oversight and insight are in the loop at all times.

Embracing a “show me the code” mentality in the broader sense also means rekindling our love of evidence and experimentation. If an AI gives you a conclusion, treat it like a hypothesis, not gospel. Go test that hypothesis in the real world. Does the code actually solve the user’s problem under real conditions? Does the marketing copy actually resonate with real customers? Does the academic argument hold up when confronted with primary sources?
By treating AI outputs as starting points that must be proven, we avoid the trap of complacency. We stay in the habit of questioning and verifying, which is at the heart of critical thinking. As the Big Think essay on this topic noted, it ultimately comes down to an individual choice: Do we take the convenient route of letting AI think for us, or do we actively preserve and exercise our own critical thinking? Every time we choose the latter, we sharpen our minds and reclaim a bit of our agency that automation might otherwise dull.
Call for a Critical Thinking Renaissance
Finally, let’s talk about pride of craft. There is a certain satisfaction – even joy – in owning your work, in knowing you didn’t just accept a canned answer but actually dug in and made it better. That’s something no AI can take away from us unless we let it. The phrase “Show me the code” is, at its heart, about taking pride in substance. It’s saying: don’t just tell me, show me you’ve done the work. When we rise to that challenge, we produce work that we can truly stand behind. We also become better professionals, better creators, better thinkers. Generative AI, for all its wonders, shouldn’t make us feel obsolete or lazy; it should motivate us to level up. Use the AI to get the boring boilerplate out of the way, then spend your saved time doing the creative, deep stuff that only you can do. Push the AI, refine its output, inject your unique perspective. In a world where everyone might start relying on AI for a first draft, your value will be in what you do beyond that first draft – the insight you add, the errors you catch, the novel angle you explore.
We stand at a crossroads of convenience and critical thought. The easy road – let AI do it all – is alluring, but it leads to a dead end of shallow outcomes and atrophied minds. The higher road is to integrate these tools without surrendering our curiosity and skepticism. It’s time for a renaissance of critical thinking, a conscious return to asking the tough questions, doing the legwork, and not taking every generated answer at face value. So the next time an AI hands you a nice story, remember my trusty sticker and challenge yourself (and the AI): “Nice story… now show me the code.” In that simple demand for substance, we remind ourselves to seek truth, depth, and understanding – the real essence of any craft, in any age.
How has generative AI impacted your critical thinking or craft? Share your thoughts in the comments or on LinkedIn with #ShowMeTheCode.
Further Reading and Resources
AI’s impact on learning and creativity: Study on how generative AI can boost individual creativity while potentially reducing the diversity and originality of ideas. Researchers found that while AI assistance improved individuals’ creative performance, it also led to more similar outputs, raising concerns about collective innovation. Link
The importance of critical thinking in the digital age: Article examining whether AI is eroding our critical thinking, and what we can do about it. Early evidence suggests heavy AI use correlates with poorer critical thinking skills, but experts argue this outcome isn’t inevitable if we learn to use AI as a tool for thought, not a replacement. Link
Academic rigor vs. instant content: Discussion on how AI tools like ChatGPT pose challenges to learning and research integrity. Many students now use AI to write essays and do homework, bypassing the need to think critically. This trend calls for renewed emphasis on verification, scholarly rigor, and teaching students how to think, not just how to get answers. Link