Will AI Replace Human Thinking? The Case for Writing and Coding Manually
Learning to Think Again, and the Cost of AI Dependency.
There are so many hype (or just boring) posts about AI coming out every day. It’s OK to use it, and everyone does, but still learn your craft, and try to think.
Similar to what DHH said:
It’s also more fun to be competent in something than constantly waiting for an AI to complete it.
The probability that AI will make us unhappy is very high IMO. Use it, yes, but not for every task. For discovering, creating a historical overview, or creating diagrams (Canva, Figma), but a big no to the writing (or coding). Someone needs to add knowledge or new insights; AI cannot train itself. So articles, books, and words will be written, and writers will be more in demand as everyone relies on AI, which at some point just plateaus.
It will be a long-term loss; people stop thinking and learning. Time will tell. My two cents, if you are a senior in something, you know better. Bsky
# Guidance on When to Use It
I heard this from ThePrimeagen: it depends on how far into the future you let it fix things. Short-term autocomplete is fine, but architectural decisions are a big no-no.

Picture a chart with time on the bottom axis and the number of errors on the left: the further into the future AI has to fix something, like an architecture, the more errors it will produce.
If we use it for quick autocomplete or creating a well-defined algorithm function, it’s less prone to errors. In that first phase, you gain 20% productivity; in the later phases, you lose more.
This is similar to Agile, and real life.
In real life, the longer I wait to make a decision, the more information I have, and the better the decision will be. Shape Up (the Agile planning method by 37signals) preaches deciding for a maximum of six weeks (one cycle): don’t keep roadmaps and backlogs further into the future than that. It’s similar with AI, as all of it is predicted probability.
Another great illustration by Forrest Brazeal shows the progress chart of “I’m so much more productive,” only to notice after a while that you didn’t get anywhere; then comes the next thing, we think “I’m so productive,” and it starts all over.
Also, keep in mind what AI could or should do for your specific use case, respecting what is important, fundamental, pointless, or tedious.
# Soulless
Nobody wants to read soulless text. And even if it’s good, where do you get more of it? I think this is a big trap that people will only realize over time. Sure, these tools help, and everyone needs to use them for certain tasks, but not for the writing itself.
In the end, LLMs and AI require guidance; they’re just probabilities. See also Writing Manually.
AI companies now try to infuse soul with a SOUL.md. Check out SOUL.md — What Makes an AI, Itself? by Peter Steinberger, the creator of OpenClaw. I’m not sure if it’s going to work; it’s still a statistical model. (That text was written solely by OpenClaw.)
# Distraction
I think we will be more distracted than ever. We can’t even have 2 seconds to think before Grammarly, Copilot, or Cursor suggests something. So instead of doing the thinking, we just cruise on. We are losing the driver’s seat.
It brings me back to the article I wrote recently about «Finding Flow». More in Don’t use AI for everything, you stop thinking-learning, AI Use, and Writing is Hard.
# Don’t Get Me Wrong
Don’t get me wrong, I use it every day, too. But more deliberately. I turned off my Grammarly and my Copilot (a long time ago), so I have the space to think and to learn. If you do it once or twice, that’s OK, but if you do it everywhere, you also lose the ability to learn new skills or the fun of it.
The idea of LCI (LLM Collaborative Intelligence) is interesting, and sure, there will be a lot of benefits, but I’m not sure those insights come anywhere close to a human insight from someone who has felt, sensed, or experienced something through hardship. So yes, but I don’t have many expectations, nor do I want it to create new insights. That’s the fun part of my job :)
# AIs Are Not Intelligent
But also, they make trivial mistakes that no human would make, so always be careful:

Question asked 2026-02-13 with Claude Opus 4.6 | Inspired by Gabor Nagy.
# Exercising a Skill
It’s never always or never; it’s somewhere in between. The problem with learning: if you use it often, I’d argue that you, in fact, don’t learn much. You just copy and paste in writing, or just tab-tab-tab in coding. The learning is gone. Do that often enough, and our brain isn’t used to learning or, more critically, thinking anymore. Same with remembering: how well can we remember mobile phone numbers? Not really, but I could very well in the early phone days, because I trained it every day.

Tweet, full article, and source paper. More on Learning with AI.
It’s all a matter of exercise, and I learned for myself (it doesn’t have to be true for everyone) that I didn’t learn or think anymore. And frankly, it was also not fun anymore. That is to say, in the stuff I know well.
In other areas, like creating an image (like the one I created for this article 😆) or updating my website’s front page with HTML/CSS, which would have taken me much longer since I don’t practice, it helped a lot. But I’d argue that I didn’t learn anything new, except prompting Claude Code :). It’s all tradeoffs, as always, right?
# Make Sure You Don’t Lose the Muscle of Writing
It’s reality: the more you use AI to write, code, or do anything else, the less the muscle for that very thing is trained, and you therefore lose the ability to do it.

Image by Alice Lemee on
LinkedIn
The process is somewhat like this:
```mermaid
graph LR
    A[Stop Writing] --> B[Stop Thinking]
    B --> C[Stop Learning]
    C --> D[Lose Competitive Edge]
    style A fill:#ff6b6b,stroke:#c92a2a,color:#fff
    style B fill:#ff8787,stroke:#e03131,color:#fff
    style C fill:#ffa8a8,stroke:#f03e3e,color:#fff
    style D fill:#ffc9c9,stroke:#f76707,color:#000
```
# Other Opinions
# Paul Graham on Writing
Paul Graham says on Writes and Write-Nots (internal):
- The result will be a world divided into writes and write-nots. There will still be some people who can write.
- Yes, it’s bad. The reason is something I mentioned earlier: writing is thinking.
- In fact there’s a kind of thinking that can only be done by writing.
- If you’re thinking without writing, you only think you’re thinking.
- So a world divided into writes and write-nots is more dangerous than it sounds. It will be a world of thinks and think-nots.
# Nathan Baugh
Nathan Baugh shares on About AI and ghostwriting:
1st Order Effect:
- The world will be overrun with slop content and stories.
- We already see this. Just look at AI written comments on this platform.
2nd Order Effect:
- People will stop learning the foundational skills – storytelling, writing, rhetoric – required to communicate their experiences and ideas effectively.
- They will over rely on AI. It starts as a tool, becomes a crutch, and ends as a hindrance.
3rd Order Effect:
- People who invest in those same skills see massive returns.
- Writing sharpens your ideas. Story gives leverage to those ideas.
# Ted Gioia
The good news, and why AI won’t replace writers, 2024-08-31, by Ted Gioia. Some of the reasons he thinks AI writing won’t be as good:
Source on Twitter/X. Full article: Google Thinks Beethoven Looks Like Mr. Bean, by Ted Gioia.
# Mitchell Hashimoto
2.5 years into the AI craze, and I continue to firmly believe that if your company wasn’t already interesting/succeeding without AI, then doing “whatever plus AI” isn’t going to save you. For the few that seem this way (eg Cursor), I think their moat is a lot weaker than it seems. You have to play the game and the game is AI, but I don’t think it’s a defensible foundational capability. Might play out as an excellent land and grab strategy to buy you time to fill out the meat though. Mitchell Hashimoto on Twitter
He updated his thinking as of 2026-02-05 in Always Have an Agent Running My AI Adoption Journey – Mitchell Hashimoto.
# Andrew Ng
Some people today are discouraging others from learning programming on the grounds AI will automate it. This advice will be seen as some of the worst career advice ever given. I disagree with the Turing Award and Nobel prize winner who wrote, “It is far more likely that the programming occupation will become extinct […] than that it will become all-powerful. More and more, computers will program themselves.” Statements discouraging people from learning to code are harmful!
In the 1960s, when programming moved from punchcards (where a programmer had to laboriously make holes in physical cards to write code character by character) to keyboards with terminals, programming became easier. And that made it a better time than before to begin programming. Yet it was in this era that Nobel laureate Herb Simon wrote the words quoted in the first paragraph. Today’s arguments not to learn to code continue to echo his comment.
As coding becomes easier, more people should code, not fewer!
Over the past few decades, as programming has moved from assembly language to higher-level languages like C, from desktop to cloud, from raw text editors to IDEs to AI assisted coding where sometimes one barely even looks at the generated code (which some coders recently started to call vibe coding), it is getting easier with each step.
I wrote previously that I see tech-savvy people coordinating AI tools to move toward being 10x professionals — individuals who have 10 times the impact of the average person in their field. I am increasingly convinced that the best way for many people to accomplish this is not to be just consumers of AI applications, but to learn enough coding to use AI-assisted coding tools effectively.
One question I’m asked most often is what someone should do who is worried about job displacement by AI. My answer is: Learn about AI and take control of it, because one of the most important skills in the future will be the ability to tell a computer exactly what you want, so it can do that for you. Coding (or getting AI to code for you) is a great way to do that.
When I was working on the course Generative AI for Everyone and needed to generate AI artwork for the background images, I worked with a collaborator who had studied art history and knew the language of art. He prompted Midjourney with terminology based on the historical style, palette, artist inspiration and so on — using the language of art — to get the result he wanted. I didn’t know this language, and my paltry attempts at prompting could not deliver as effective a result.
Similarly, scientists, analysts, marketers, recruiters, and people of a wide range of professions who understand the language of software through their knowledge of coding can tell an LLM or an AI-enabled IDE what they want much more precisely, and get much better results. As these tools are continuing to make coding easier, this is the best time yet to learn to code, to learn the language of software, and learn to make computers do exactly what you want them to do.
# Harry Dry
Big ideas are less about creativity and more about conviction. [..] So, what happened? ‘Sauce and other shit’ got incredibly cheap! [..] There is no AI prompt for conviction. Harry Dry
^64403f
More on Is AI solving this?.
# Jason Fried
As Jason Fried said, initially, it’s magical. After a while, you see it so clearly and it’s just average:

Cover letters? Yes!
The hardest thing is not making something.
The hardest thing is maintaining something.
It’s become so easy to just make stuff and vomit out ideas, and I mean this in the best possible way… Jason Fried on LinkedIn
This is another valid insight: it’s hard to maintain code that was not made by you, and it loses its fun. Therefore, a big part of a winning business will be sustainability, and the energy to want to maintain a product. Not “just making it.”
Also, who takes responsibility for the generated (vibed) code?
# David Perell
David Perell has similar views as me on being soulless:
When you outsource your writing to AI, you end up with words that lack soul or personality. Gone go your quirks and your idiosyncrasies, which are the very things that make your writing irreplaceable. LinkedIn
# Ezra Klein
Ezra Klein has great insights that align closely with mine in terms of writing. He says there are no shortcuts for research. When you grapple with a text or book for seven hours, it will change you. This will influence your writing, too. There’s no summary that gives you this kind of in-depth connection.
Also, you can’t prompt your way into it: no prompt knows what you don’t know yet, and AI doesn’t know what you would have wanted to read or what connections you would have made. Sure, deep reading costs time, but over time we think we’ve read lots of stuff when we’ve actually only read summaries. Full episode on The Case Against Writing with AI.
# Cal Newport
Cal Newport’s review of AI so far (2026-01-29), Dangerous Question: Has AI Been a Disappointment So Far?: not too impressive for so many billions of dollars of investment (outside of coding).
Scanning documents, summarizing a bunch of text. That’s why AI people always say we are six months away from X: if we zoom out a bit and look at what has been invested, it’s not that impressive yet.
# Packy McCormick
He finds the hype around OpenClaw boring, with people raising AIs like kids:
My hunch, from the outside, is that what we’re seeing is early forms of competition to create the best AI for yourself. Like raising kids to be the best versions of themselves, but for AIs.
Practically none of what they’re showing off their Clawdbots doing is useful. It’s a race for novelty and specialness, to say as much about the “parent” as the kid.
Read the full article from Packy McCormick at Raising a Special Little AI.
# Will It Replace X
# Writers
The better question might not be “if it can” (which I think it might never be, given the above points about lacking character or genuine new insights) but rather “what management thinks”:
Can AI effectively do the jobs and an the things it’s supposedly able to do? it doesn’t matter. what matters is that the people in charge think that it can, and they’re willing to find out and they don’t care about the collateral damage to people’s careers and livelihood. Reddit
This specific quote is from r/technicalwriting, and I think in this community it’s more prominent that AI can’t really replace humans, or you need a person who constantly checks if everything produced is correct.
But if management thinks it can replace them, then you might just get rid of the people and use AI. It’s so sad… but I believe many will backpedal, as Klarna and others did (see below).
It’s definitely a big help; I use it every day. I just think that being careful about what to use is important, and articulating that nuance is really hard.
# Writing Will Always Be an Asset
Are cover letters still a thing? Yes. This reminded me that good writing is key for every job these days. Writing was always an asset, but even more so now, although people think they don’t need it since AI is doing it. That’s a very dangerous bet I wouldn’t take.
I wrote more on this topic in Writing Manually. And Business Insider just wrote on 2026-02-05 that The Hottest Job in Tech Pays $775,000 and Has Zero to Do With Coding - Business Insider; it’s writing:
Andreessen Horowitz launched its New Media team last year to help founders learn what they “need to win the narrative battle online.” Adobe is looking for an “AI evangelist” to lead the company’s “artificial intelligence storytelling.” Netflix, a company that sells stories to your living room, recently posted a director of product and technology communications role with a salary range of up to $775,000. Microsoft began publishing a print magazine, Signal, last year, calling it an “antidote to the ephemeral nature of digital.” Anthropic tripled the size of its communications team last year, growing to about 80 people, and is still hiring five more, each offering salaries of around $200,000 or higher. OpenAI has several open communications jobs boasting salary listings of more than $400,000. The average director of communications in the US makes $106,000, according to Indeed.
It also reported that the percentage of job postings on LinkedIn mentioning “storyteller” doubled from 2024 to 2025.
You see the pattern? Real storytelling, that moves people, can’t be outsourced.
# Context Windows
Here is something AI writing can’t do, because it generates one word at a time. In the example below, as a writer you know you want to start all sentences the same way, but the model won’t plan for that:
- Writing from Abundance is the art of collecting ideas so you can think better and avoid writer’s block.
- Writing from Conversation is the art of using dialogue to identify your best ideas and double down on them.
- Writing in Public is the art of broadcasting your ideas to the Internet so you become a beacon for people, opportunities, and serendipity.
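The point above can be sketched with a toy autoregressive model. This is purely hypothetical and vastly simpler than a real LLM (real models condition on the whole context window, but still emit one token at a time): each step picks the next word from probabilities given only the previous word, so a global constraint like "open every sentence the same way" is something only the writer plans for, never the sampler.

```python
import random

# Hypothetical bigram table: next-word probabilities given the previous word.
# "<s>" marks the start of a sentence.
BIGRAMS = {
    "<s>": {"Writing": 0.4, "The": 0.6},
    "Writing": {"is": 1.0},
    "The": {"art": 1.0},
    "is": {"thinking.": 1.0},
    "art": {"matters.": 1.0},
}

def generate(seed=None):
    """Greedy one-step-at-a-time sampling: no lookahead, no global plan."""
    rng = random.Random(seed)
    word, out = "<s>", []
    while word in BIGRAMS:
        options = BIGRAMS[word]
        word = rng.choices(list(options), weights=list(options.values()))[0]
        out.append(word)
    return " ".join(out)

# Each sentence is sampled independently; nothing forces them all to
# start with "Writing" the way a human writer would deliberately do.
for i in range(5):
    print(generate(seed=i))
```

Every call only ever looks one word back, which is the toy version of the argument: parallel structure across sentences is a decision made before the first word, not during sampling.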
Find more in Copywriting.
# Writing More like Chess
I always think it will be like chess. Only people who enjoy it will do it. Maybe AI can do it much better, but still, it’s nice to write yourself.
But from the reader’s perspective, I’m not sure we would ever prefer AI over a real human, even if the AI is better. There’s nothing behind it: no character, no soul, no place to read more in the same voice.
My response to Joe Reis comment:
AI’s getting better and better…I wonder if articles will go back to spoken word, much like what we did before writing things down? This was a fun experiment. Basically, tell Claude to take an article and make it worse. Substack Comment
# Everything is Based on Language (AI)
Code, music, movies, Twitter, religion: everything is deeply based on language. AI is trained on it, and it needs new “language” to train on. So in the grand scheme, writers are and will be the fuel for most of what we build with AI.
Check more about AI addiction on the podcast with Tristan Harris at AI Expert: Here Is What The World Looks Like In 2 Years! Tristan Harris - YouTube.
# Take Smart Notes
Also, take smart notes and Use less AI.
# Data Engineers?
Probably not.
Nice comparison by Mehdi Ouazza:
- Did the music record replace musicians 100 years ago? Nope, it changed them and the industry.
- Did cloud computing take all IT jobs? Nope, it also changed the industry and our jobs.
- Same here; it will change our industry and job, but we won’t disappear.
More on Will AI replace Data Engineers.
# Programmers / Code?
IMO, it depends on your level of expertise. Code is a little different from text: with text, I’d always want to see the prompt or the actual text, never the generated one.
For code, the more expertise you have, the better you can navigate generation, as also shown in , where experienced people can think further ahead into the future. Almost like a grandmaster in chess seeing further than a beginner.
On the other hand, I also think it goes in waves. I went through it myself: “Ohh, this is the best thing ever,” and after a while: “oh actually, it was not that great.”
# SWEs?
Hard agree. As LLMs drive the cost of writing code toward zero, the volume of code we produce is going to explode. But the cost of complexity doesn’t go down—it actually might go up because we’re generating code faster than we can mentally model it. Hackernews comment
Great insights, and great article on The future of software engineering is SRE | Swizec Teller.
TheSeniorDev debunks 7 myths about AI being able to program.
# Image Generation
Initial generation, yes. But final touch, no. Whenever I try to create images with AI, I am always initially impressed, but that quickly fades over time.
Yesterday, I updated my second brain image, but I changed it again today. I created some more with AI: prompted, prompted, prompted. In the end, I made one manually based on my copy. I think it’s more powerful. What do you think? I moved the examples to AI Generated Images.
Some AI-generated images I like too, but they were always missing something, and yeah, they looked so AI-generated. I started to feel the same as I did with AI writing and AI data engineering (Will AI replace Data Engineers); now, with AI image generation too, doing it yourself is more fulfilling, and you end up happier.
# How to Detect AI Writing
Detecting AI writing is quite easy these days; see How to detect AI Writing. But if we know how AI writes, should we stop using em dashes or other things AI does?
I don’t think so. I love the em dash. I even have a keyboard shortcut for it. And sometimes when I write a negation, I catch myself thinking, «could that look like it’s written by AI?»
But in the end, conviction is a good word. I can’t worry about what an AI would write while I write; I must write. Having something to say, and trying my best to communicate it, is the best I can do. ^ebca60
# Companies Backpedal on AI
AI, and sometimes AI Slop, is generating more content, no matter the quality. It’s the never-ending Quality vs. Quantity discussion, but now more important than ever.
Here are some companies backpedaling after going full AI-first:
- Klarna backpedaling on AI customer service: “After years of depicting Klarna as an AI-first company, the fintech’s CEO reversed himself, telling Bloomberg the company was once again recruiting humans after the AI approach led to “lower quality.” An IBM survey reveals this is a common occurrence for AI use in business, where just 1 in 4 projects delivers the return it promised and even fewer are scaled up.”
- Duolingo getting worse with AI
- Salesforce regrets firing 4,000 experienced staff and replacing them with AI: after three months, Salesforce regretted laying off its staff (see the earlier announcement). Senior executives publicly admitted that the company overestimated AI’s readiness for real-world deployment.
- Next up, Shopify after the announcement to go full AI?
To be sure, in 2026 it got much better, but it’s still not deterministic, still hallucinates, and still has no character. So use it with care.
# Open-Source Repos Closed AI Submissions
Several notable open-source projects, such as Ghostty, Node.js, curl, and tldraw, have limited or shut down external contributions after being inundated with AI-generated PRs.
- Ghostty: Tweet and PR: Maintainer Mitchell Hashimoto has restricted contributions due to AI spam. This aligns with broader complaints from him about repositories being overwhelmed.
- Node.js: They’ve shut down their HackerOne bug bounty program and now require higher-signal reports, partly to combat low-effort AI submissions.
- curl: The project has started rejecting low-quality and AI-generated PRs explicitly.
- tldraw: They’ve paused external PRs entirely due to AI slop, as detailed in their issue and blog post.
# Learning With AI
Should our kids learn with ChatGPT? Find more on Learning with AI.
# Is AI Addictive?
See more at Addictive AI.
# Others
# AI behind Many Web Articles
According to the study below, for the first time there are more AI-written articles on the web than human-written ones:
Using AI is very nuanced. It’s definitely a big help; I use it every day, as mentioned before. I just think being careful about what to use it for is important, and articulating that nuance is really hard.
# Job Market: Junior vs. Senior
# Illustrations
Some more illustrations from articles mentioned in this note.
# Progress Illusion

From Simon Wardley on LinkedIn.
# Future
- Nice insights on why LLMs as token predictors are not so good at understanding the world. It kinda works (though not so well) for writing, but understanding physics and world models is much harder, he says: Metas AI Boss Says He DONE With LLMS…
# Further Reads
- The Impact of Generative AI on Critical Thinking: Self-Reported Reductions in Cognitive Effort and Confidence Effects From a Survey of Knowledge Workers
- Smart Note Taking
- My AI Skeptic Friends Are All Nuts · The Fly Blog
- Companies that used AI to generate quick solutions are now paying humans to fix them, expensively
- AWS CEO says AI replacing junior staff is ‘dumbest idea’
- Training Until Failure, Thinking Until Fatigue
- Slop Is Contempt by Tom White
- I’d Rather Read the Prompt
- Boredom is the New Luxury
- Can LLMs give us AGI if they are bad at arithmetic? by Wes McKinney
- When the Answer Isn’t the Problem by Matthew Mullins
- Is it cringe to be extremely online now?
- When AI writing looks good at first glance but not on second look: Fake Unit Tests
- Breaking the Spell of Vibe Coding – fast.ai
- Vibe Coding, and How It’s Killing Open Source
Origin: Artificial General Intelligence
References: ChatGPT, My AI Logs of Will AI replace humans, My AI Prompts
Created 2024-08-31

