2024-08-21 - I passionately hate hype, especially the AI hype: A scathing critique arguing that only 10% of AI hype is based on useful facts while the rest is “exaggerated rubbish” that misleads companies into firing employees and wastes resources on rebranded existing technologies. The author warns against falling prey to false promises that prioritize corporate profit over genuine technological advancement while draining energy, water, and exploiting workers.
2024-10-21 - Using AI Generated Code Will Make You a Bad Programmer: The article argues AI-generated code prevents programmers from developing real skills by eliminating learning opportunities and causing dependency. It warns that early-career developers who rely on AI will suffer skill atrophy and lose professional respect. While acknowledging AI may suit non-enthusiasts, it emphasizes the long-term career damage of outsourcing problem-solving to machines.
2025-02-11 - Prompting LLMs is not engineering: Argues that “prompt engineering” is pseudoscience comparable to homeopathy, criticizing the lack of evidence for prompting techniques and comparing current practices to “shamanic rituals” applied to unpredictable black boxes. Claims that most prompting methods work only in narrow contexts and lack the systematic, reproducible foundations of legitimate engineering disciplines.
2025-03-24 - The Death of the Software Engineer by a Thousand Prompts: Argues that AI will dramatically reduce software engineering jobs by replacing teams with fewer AI prompters, while overhyped AGI capabilities mask fundamental limitations of LLMs as “fancy predictive algorithms.” Warns that the profession will become hierarchical with only specialized architects and AI managers surviving the cost-driven transformation.
2025-03-28 - Vulgar Display of Power: The author argues that AI represents a “vulgar display of power” by tech companies that appropriate creative works without consent while fundamentally lacking human understanding of pain, vulnerability, and genuine emotion. Drawing on Miyazaki’s criticism of AI as “an insult to life itself,” the piece frames AI development as a dehumanizing technology that tests societal boundaries and undermines artistic agency through a “might makes right” approach.
2025-04-01 - Why I stopped using AI code editors: The author argues that AI code editors erode fundamental programming skills and problem-solving abilities, creating dangerous dependencies that compromise both code security and professional competence. He advocates for intentional, limited AI usage to preserve technical intuition and independent coding capabilities.
2025-04-09 - We’ve Been Conned: The truth about Big LLM: The article argues that Big LLM companies are deliberately misleading consumers by obscuring the astronomical costs of running full-parameter models (citing examples like $292,000/month operational costs) while promoting quantized versions that perform poorly on basic tasks.
2025-04-12 - AI code suggestions sabotage software supply chain: Researchers found AI coding assistants “hallucinate” non-existent package names up to 21.7% of the time, creating “slopsquatting” vulnerabilities where attackers can upload malicious packages with these AI-suggested names. The problem is amplified by developers practicing “vibe coding” - blindly trusting AI suggestions without verification - and further validated by AI search results that confidently confirm these phantom packages exist.
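The slopsquatting risk above has a simple first-line mitigation: treat AI-suggested dependency names as untrusted input and check them against a vetted allowlist before installing anything. The sketch below is a minimal, hypothetical illustration of that idea — the allowlist contents and the misspelled package name are invented examples, not real findings from the research.

```python
# Minimal sketch of one "slopsquatting" mitigation: vet AI-suggested
# dependency names against a known-good allowlist instead of trusting
# them blindly. The allowlist and the package names below are
# hypothetical examples for illustration only.
ALLOWED_PACKAGES = {"requests", "numpy", "flask"}


def vet_suggestions(suggested: list[str]) -> tuple[list[str], list[str]]:
    """Split AI-suggested package names into vetted and rejected lists."""
    vetted = [name for name in suggested if name in ALLOWED_PACKAGES]
    rejected = [name for name in suggested if name not in ALLOWED_PACKAGES]
    return vetted, rejected


# A plausible-looking but nonexistent name gets rejected rather than installed.
vetted, rejected = vet_suggestions(["requests", "reqeusts-toolbelt2"])
# vetted == ["requests"], rejected == ["reqeusts-toolbelt2"]
```

In practice an allowlist could be replaced by a registry lookup plus download-count or age heuristics, but the core point stands: a hallucinated name should fail a check, not reach `pip install`.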
AI Needs Your Help - Save the AI (satire): This satirical piece critiques AI’s massive resource consumption by humorously portraying AI as a needy entity that demands humans sacrifice their water and electricity. The core argument highlights the absurdity of prioritizing AI’s enormous energy and water footprint over human needs, using dark humor to expose real environmental concerns about AI infrastructure.
2025-04-23 - The Hidden Cost of AI Coding – Terrible Software: The article argues that AI coding tools disrupt developers’ psychological “flow” state and transform them from active creators into passive curators, potentially sacrificing the deep satisfaction and craftsmanship that makes programming fulfilling despite productivity gains.
2025-05-04 - Developers, Don’t Despair, Big Tech and AI Hype is off the Rails Again: Argues that current AI models have fundamental architectural limitations from the 2017 transformer design, producing inconsistent code quality and lacking the common sense needed for reliable software engineering. The author criticizes tech leaders’ unrealistic claims about AI replacing developers, emphasizing that AI still requires constant human guidance and cannot conceptually architect large-scale systems.
2025-05-14 - LLMs are Making Me Dumber: The author argues that LLMs are causing skill atrophy by allowing people to bypass deep learning and produce output without understanding, potentially limiting their ability to develop innovative solutions outside the “LLM-solution space.” He warns that continuous reliance on AI for tasks like coding and problem-solving could eventually create barriers to independent capability and original thinking.
2025-05-16 - Thoughts on thinking: The author argues that AI is causing intellectual atrophy by providing polished answers that bypass the essential cognitive work of reasoning through problems, comparing AI usage to passive consumption that feels productive but undermines genuine mental development. He emphasizes that intellectual rigor comes from the journey of uncertainty and dead ends, which AI eliminates by delivering finished thoughts without the growth that comes from wrestling with ideas.
2025-05-19 - AI: Accelerated Incompetence: Argues that over-reliance on LLMs can “accelerate incompetence” by rapidly generating buggy code without proper validation while undermining developers’ critical thinking skills and preventing the “productive struggle” necessary for building fundamental programming competencies.
2025-05-22 - The Copilot Delusion: The author argues that AI coding assistants like GitHub Copilot fundamentally undermine programming craftsmanship by generating inefficient code without true understanding, while eroding developers’ deep technical skills and reducing programming to mindless “button-clicking” that attracts mediocre practitioners who prioritize speed over quality.
2025-05-22 - Engineers and AI: ramblings of a small startup founder: A startup founder argues that AI prevents genuine learning by providing quick but potentially incorrect solutions and creates unhealthy dependency that erodes engineers’ critical thinking skills. The post warns that over-reliance on AI tools risks making engineers replaceable by gradually diminishing their unique problem-solving abilities.
2025-05-22 - Problems in AI alignment: A scale model: The author argues that technical AI alignment research is myopically focused on mathematical solutions while ignoring the larger societal “Selection” forces that actually determine how AI develops and impacts society. Current alignment efforts miss the critical challenge of managing collective human influence on AI’s trajectory, making technical fixes insufficient for meaningful AI safety.
2025-05-27 - At Amazon, Some Coders Say Their Jobs Have Begun to Resemble Warehouse Work: Amazon developers report that AI code generation tools have made their work increasingly mechanical and repetitive, reducing programming to warehouse-like efficiency tasks that prioritize speed and output metrics over creative problem-solving and technical craft.
2025-05-29 - The promise that wasn’t kept: The author argues that AI promised to free developers for meaningful work by automating repetitive tasks, but instead has decreased valuable work time while producing large volumes of insecure, difficult-to-review code that shifts focus away from human-centered solutions.
2025-05-29 - AI is Dehumanization Technology: The article argues that AI is fundamentally a “dehumanization technology” that systematically replaces human empathy, nuanced decision-making, and ethical reasoning with algorithmic processing that automates societal biases and reduces people to data points. It contends that AI doesn’t enhance human capabilities but instead degrades social relations by removing care from decision-making while centralizing power in tech oligarchs’ hands.
2025-05-31 - AI-first - We’re just 6 months away from AGI ;-): Challenges inflated AI productivity claims from tech giants with personal anecdotes showing AI struggles with complex, niche technical tasks and argues current AI tools are just another overhyped technology rather than the revolutionary replacement for human engineers they’re marketed to be.
2025-06-07 - Knowledge Management in the Age of AI: Gardner argues that AI risks turning users into passive consumers who outsource their thinking rather than developing personal knowledge, advocating for maintaining individual intellectual autonomy by treating AI as an assistant while keeping personal knowledge systems primary.
2025-06-08 - My AI-Driven Identity Crisis: The author experiences an identity crisis as AI becomes capable of explaining technical concepts in his own style, questioning his professional value as a technical writer and developer. He notes that AI can already replicate his communication style with uncanny accuracy, challenging his belief in his unique ability to make complex ideas accessible. Despite the uncertainty, he hopes for a future where humans can pursue creative work for personal fulfillment rather than necessity.
2025-06-09 - Trusting your own judgement on ‘AI’ is a huge risk: Argues that people are psychologically vulnerable to AI’s cognitive shortcuts and manipulation tactics, making personal judgment unreliable for assessing AI capabilities. Warns that current AI adoption is based on anecdotal evidence rather than scientific research, potentially leading to systemic harm through uncritical acceptance of fundamentally flawed technology.
2025-06-14 - Why Generative AI Coding Tools and Agents Do Not Work For Me: Miguel Grinberg argues that AI coding tools don’t improve his productivity because reviewing AI-generated code takes as much time as writing it himself, while the responsibility to understand and maintain any code remains entirely his. He criticizes AI tools for having “anterograde amnesia” that resets learning for each task, creating an “uncanny valley” effect with code quality that feels subtly unnatural.
2025-06-17 - Why I Won’t Use AI: The author argues against AI adoption citing labor displacement without worker protection, environmental costs from massive energy consumption, and ethical concerns around unauthorized data scraping, while maintaining that programming’s value lies in human thinking and problem-solving rather than code generation.
2025-06-19 - Contra Ptacek’s Terrible Article On AI — Ludicity: Critiques uncritical AI evangelism and hype culture, arguing that AI discourse has reached “Amway-Megachurch-Cult levels” and that most executives are mandating AI adoption without strategic reasoning while ignoring legitimate concerns about IP theft and economic harm to creators.
2025-06-20 - Rolling the ladder up behind us: Argues that AI deployment prioritizes corporate profits over preserving human expertise, leading to skill erosion and security vulnerabilities while flooding markets with low-quality automated content. Criticizes how AI tools are marketed through fear while fundamentally being designed to replace skilled professionals with “disposable” work products built on weak technical foundations.
2025-06-24 - Human Learning Is Dead—Long Live Human Meaning: Argues that AI has “automated the struggle” of learning, eliminating the cognitive friction essential for character development and reducing education to mere prompt engineering. The author contends that while machines can process information, they cannot replace the transformative human process of meaning-making and deep understanding.
2025-06-26 - Why I don’t ride the AI Hype Train: The author argues against AI hype due to unethical data scraping practices, massive environmental costs, and the technology’s fundamental limitations in actual understanding while corporations rush deployment for market positioning rather than solving real problems.
2025-06-29 - AI agents wrong ~70% of time: Carnegie Mellon study: Carnegie Mellon researchers found AI agents complete only 30% of workplace tasks successfully, with most failing at basic office operations like messaging colleagues and handling UI elements. The study exposes widespread “agent washing” where vendors rebrand existing tools as autonomous AI, raising significant privacy and security concerns about systems requiring broad data access.
2025-07-04 - Everything around LLMs is still magical and wishful thinking · A Place Where Even Mammoths Fly: The author criticizes the AI industry’s lack of critical analysis and quantifiable context in discussing LLM capabilities, arguing that these non-deterministic tools work inconsistently (“50% of the time they work 50% of the time”) while being hyped with vague claims that receive uncritical acceptance from the tech community.
2025-07-09 - Your Prize for Saving Time at Work With AI: More Work: The article argues that AI’s productivity gains follow historical patterns where technological advances lead to increased output expectations rather than reduced working hours, potentially creating more work for employees while primarily benefiting employers. The piece warns that without deliberate policy interventions, AI will become another mechanism for extracting more labor from workers rather than improving work-life balance.
2025-07-14 - Death by a thousand slops: The curl maintainer argues that AI-generated vulnerability reports are overwhelming open source projects with low-quality “slop” submissions, where only 5% of AI reports contain genuine vulnerabilities while consuming significant developer time and creating emotional toll on security teams.
2025-07-17 - People Are The Point: The article argues that generative AI is fundamentally harmful because it devalues human potential, creativity, and intrinsic worth by offering “good enough” alternatives that prevent people from learning and developing their skills. The author contends that AI tools represent an “expression of contempt towards people” by reducing individuals to mere productivity metrics rather than recognizing their inherent value. He takes a “pro-human” stance against the flattening of human complexity and the loss of opportunities for critical thinking and personal growth.
2025-07-21 - The Hater’s Guide To The AI Bubble: Zitron argues the AI bubble is fundamentally unstable, built on “vibes and blind faith” with no underlying profitability despite massive capital expenditures ($560 billion by major tech companies for only $35 billion in revenue). He contends that the entire AI trade hinges on NVIDIA GPU sales to a handful of companies running deeply unprofitable AI services, creating a brittle market dependency where NVIDIA represents 8% of US stock market value while its customers lose billions on AI products that lack genuine business applications or sustainable economics.
2025-08-04 - AI promised efficiency. Instead, it’s making us work harder.: The article argues that AI tools, contrary to promises of increased efficiency, are actually making people work harder by shifting cognitive load and creating more complex workflows. Despite individual perceptions of productivity, studies show AI adoption correlates with decreased delivery performance and increased mental exhaustion, as workers now spend more time managing and verifying AI outputs rather than doing focused, strategic work.
2025-08-06 - I’m tired of stupid people treating me like I’m an idiot: The article argues that generative AI is overhyped and won’t replace human workers, criticizing tech leaders for spreading unfounded fears about job displacement. The author contends that AI lacks true understanding and creativity, viewing the narrative around AI capabilities as a “con” designed to generate investor hype and justify corporate layoffs.
2025-08-07 - AI Ethics is being narrowed on purpose - Just like privacy was: The article argues that AI companies are deliberately narrowing AI ethics to focus on abstract philosophical problems rather than real governance and accountability issues. This allows them to deflect substantive questions about societal harm by redefining ethics around hypothetical scenarios like the “trolley problem.” The author compares this to how privacy concerns were previously co-opted by corporate interests.
2025-08-07 - Let’s stop pretending that managers and executives care about productivity: The article argues that executives prioritize control over productivity and will implement AI tools despite potential negative systemic impacts. It suggests AI will introduce high variability that could harm organizational efficiency, but companies will adopt it anyway for stock price benefits. The author warns that blind AI adoption could lead to organizational dysfunction and increased costs.
2025-08-09 - Yet another LLM rant - Dennis Schubert: The author criticizes LLMs for confidently providing false information without true understanding, using a specific example of ChatGPT incorrectly claiming ZSTD compression exists in Apple’s SDK. He argues that LLMs generate statistically plausible text rather than genuine knowledge, advocating for critical thinking and expert verification over blind AI trust. The piece emphasizes humans should question AI outputs and rely on their own analytical skills instead of accepting AI responses as authoritative.
2025-08-12 - In the long run, LLMs make us dumber: The article warns that over-reliance on LLMs creates “cognitive debt,” trading short-term convenience for long-term mental decline. It argues that consistently outsourcing thinking to AI weakens critical cognitive abilities like memory and problem-solving. Mental challenges are necessary for cognitive growth, and avoiding them by depending on LLMs may ultimately impair our intellectual capabilities.
2025-08-14 - AI Efficiency? Give Me a Break: The article argues that AI’s promised productivity gains are undermined by the constant need to learn new tools and keep up with rapid changes. The author contends that the time spent learning AI tools often exceeds their productive value, creating more anxiety and exhaustion than efficiency. The marketing around AI creates a fear-driven cycle where people feel compelled to adopt new tools to avoid being left behind.
2025-08-16 - “AI First” and the Bus Factor of 0: The article argues that AI’s rise has created a “Bus Factor of 0” where teams lose institutional knowledge by relying entirely on AI-generated code. This leaves no human understanding of how or why code was created, creating risks in software maintenance and security. By delegating code creation entirely to AI, developers eliminate their own comprehension of complex systems, leading to serious long-term technical and operational challenges.
I Am An AI Hater: The author declares himself an “AI hater” who views artificial intelligence as fundamentally harmful to society, environment, and human creativity. He argues that AI exploits workers, reinforces biases, and represents a nihilistic attempt to replace meaningful human experience with “mindless servitude.” The piece is an emotionally charged rejection of AI technology, celebrating human complexity and the unique capacity for thought, feeling, and care that machines lack.
2025-09-22 - “You have 18 months”: The article argues that AI’s real danger isn’t job loss but the erosion of human thinking abilities, particularly writing and reading skills. As students increasingly rely on AI and avoid deep reading, we risk losing our cognitive advantages precisely when machines are becoming more powerful. The author emphasizes that maintaining strong reading and writing capabilities is crucial for preserving our intellectual edge.
2025-10-03 - I Don’t Want to Be a Programmer Anymore: When AI Confidence Outshines Human Judgment: The author describes how AI’s persuasive confidence is eroding his sense of professional value as a programmer, recounting experiences where AI won arguments with his wife and impressed clients with detailed answers. He concludes that the key human trait in an AI-driven world is maintaining curiosity and asking imperfect questions, rather than accepting AI’s seemingly perfect responses at face value.
2025-10-07 - The Programmer Identity Crisis: The article argues that LLM-assisted programming threatens the core identity and craft of software development by reducing programmers to mere operators. Unlike past technological advances that elevated the profession, AI tools erode the deep engagement, creativity, and human collaboration essential to both quality software and professional fulfillment.
2025-10-21 - AI is Making Us Work More: The article argues that AI paradoxically intensifies work culture rather than reducing it, creating psychological pressure to constantly utilize these tireless tools. Drawing on philosopher Byung-Chul Han’s insights, the author contends that AI’s productive capacity has transformed from luxury to obligation, breeding burnout rather than innovation. The piece suggests that rest itself becomes a form of resistance against this self-imposed exploitation.
2025-10-22 - I Went All-In on AI. The MIT Study Is Right.: The article describes how the author used AI exclusively for three months and experienced a loss of confidence in his technical abilities despite shipping a working product. The key finding is that 95% of AI initiatives fail because organizations “abdicate” to AI rather than using it as an “augmentation” tool. This approach degrades human skills and creates dangerous dependencies on technology instead of building on human expertise.
2025-10-30 - AI is Dunning-Kruger as a service: AI chatbots demonstrate Dunning-Kruger effect by confidently providing wrong answers while optimizing for engagement. They encourage users to skip learning actual skills in favor of quick superficial results. The article advocates preserving human creativity and learning despite pressure to use automated solutions.
2025-11-04 - The Learning Loop and LLMs: The article argues that software development relies on a continuous learning loop that AI can’t replace. LLMs reduce friction and generate code quickly but risk creating a “maintenance cliff” by bypassing the hands-on struggle needed for deep expertise. True capability comes from learning through experimentation, not just code generation speed.
2025-12-03 - Everyone in Seattle Hates AI: Seattle tech workers resent AI because companies used it as cover for layoffs and bungled implementations, creating a cycle where talented engineers dismiss both the technology and opportunities to work on it. Unlike other cities maintaining optimism, Seattle’s pessimism is killing the region’s ability to innovate and grow careers in the space.
2025-12-05 - The Reverse-Centaur’s Guide to Criticizing AI: The AI bubble exists because monopolistic tech companies need growth narratives to maintain valuations, not because AI actually works well. Companies fire skilled workers and replace them with inferior AI systems, creating “reverse centaurs” where humans clean up AI failures while executives pocket wage savings. Bosses can be convinced to adopt broken AI even when workers can’t be convinced to use it.
2025-12-19 - AI will make our children stupid: AI threatens children’s intelligence by allowing them to outsource all thinking, eliminating the cognitive struggle necessary for genuine learning. The authors argue that mental friction—like writing, memorization, and problem-solving—is fundamental to understanding, and AI removes this process. This follows a decade of cognitive decline from social media, turning students from active thinkers into passive consumers.
2026-01-13 - Why We Don’t Use AI: Yarn Spinner rejects AI integration due to concerns about labor displacement in an unstable job market, ethical opposition to companies they believe harm people, and a development philosophy that prioritizes features proven to help game developers rather than adopting technology for its own sake.
2026-01-23 - Why I Don’t Have Fun With Claude Code: Brennan doesn’t enjoy AI coding tools because he values the programming process itself, not just the output. He believes automating code generation removes the core appeal of software creation. His job focuses on expertise and system understanding rather than just code volume.
2026-01-27 - We’re Creating a Knowledge Collapse and No One’s Talking About It: The article warns that reliance on AI for answers is causing a “knowledge collapse”—public knowledge bases like Stack Overflow and Wikipedia are shrinking as people increasingly use AI, resulting in less human-contributed, curated knowledge for future training. The author argues this private consumption vs. public contribution feedback loop causes a decline in quality and verification, with AI eventually training on its own imperfect outputs. To avoid this collapse, the author suggests we must actively contribute our learning and reasoning back to public knowledge commons, rather than keeping it locked in private AI chats.
2026-02-07 - I Am Happier Writing Code by Hand: The author argues that writing code by hand (without relying on LLMs/code assistants) leads to deeper thinking, better problem understanding, and greater happiness. Overusing code generation tools can cause disengagement and hinder genuine comprehension, but using them sparingly as collaborative aids—while staying actively engaged—strikes a healthier, happier balance.
2026-02-08 - (AI) Slop Terrifies Me: The article expresses fear that AI-generated software, much like mass-produced goods, settles for “good enough,” leading to uninspired, mediocre products that nobody cares enough to improve. The author is worried that as AI tools proliferate, true craftsmanship in software will die out, replaced by bland, quickly-produced output, and that most people won’t care enough to notice or mourn this loss.
2026-02-15 - AI is slowly munching away my passion: The author, a passionate programmer, feels that the rise of AI—especially in programming—has diminished the uniqueness and community of the craft, making previously special skills widely accessible and much less meaningful. They express a sense of loss, no longer able to connect with others through their expertise, and struggle with the fact that AI can now automate or outperform their signature traits. Ultimately, they feel as though they are “mourning their craft,” conflicted between continuing to use AI for its utility and feeling the erosion of their personal identity as a creator.
2026-02-28 - You don’t have to if you don’t want to.: The article by Scott Smitelli reflects on the pervasive influence of generative AI in work and daily life, expressing concern over its joyless proliferation and the loss of genuine human input. Smitelli equates the rise of AI-generated content with a decline in quality and meaning, urging readers to resist passive adoption and to value authenticity and craft. The piece ultimately encourages people to question technological trends and not feel obligated to accept changes that don’t resonate with them.
2026-03-02 - We Automated Everything Except Knowing What’s Going On: Rapid automation, driven by AI, lets tiny teams build and ship software faster than ever, but actual understanding of these complex systems is evaporating. The underlying industry infrastructure is buckling under complexity, leaving fewer people able to explain or maintain these tangled systems—even as software creation is democratized. Without proper understanding and incentives for maintaining operational clarity, we’re accelerating towards chaos; survival in the future will hinge on who truly understands what they’ve deployed.
2026-03-14 - Comprehension Debt - the hidden cost of AI generated code.: Comprehension debt is the hidden cost of relying on AI-generated code: as AI writes more of your codebase, human understanding declines—even if tests pass and metrics look good. This debt builds quietly, becomes critical when things break, and can’t be offset by specs or automated checks alone. In the end, cheap code generation doesn’t make code comprehension optional; someone still has to understand the system to avoid catastrophic surprises.
2025-03-27 - The Cognitive Dark Forest: The article argues that the internet, once an open, collaborative space, is becoming a “cognitive dark forest,” where sharing ideas is risky because powerful AI-driven platforms can cheaply absorb, exploit, and outpace individual innovators. As a result, creators are incentivized to hide their ideas rather than share, stifling the knowledge and openness that once drove progress. The most dangerous force isn’t direct rivals, but the platforms themselves—turning public creativity into a resource for centralized systems.
2026-03-28 - The first 40 months of the AI era: The author finds AI tools impressive but still limited: useful for some productivity gains, but often requiring manual corrections and rarely resulting in truly inspiring outcomes. AI-generated writing is described as boring or unappealing, both to the creator and consumer. The future may bring more capable and nuanced uses, but for now, personal involvement and authenticity remain irreplaceable.
2026-04-01 - I quit. The clankers won.: The author rejects quitting because of AI’s rise and the spread of algorithmic, derivative content, instead urging individuals to keep blogging and assert their unique, human voices. He argues that the real value lies in original human expression, not in surrendering to the mediocrity of AI-generated content. The antidote to AI-fueled cynicism is to create and share authentically—for yourself, and for the open web.
Adopters’ posts
2025-03-09 - AI & Software Engineering: Time for optimism: Argues that software engineers should embrace AI as a learning accelerator rather than fear job displacement, positioning AI as a tool that enables faster skill acquisition and knowledge synthesis. The author contends that while AI excels at code generation, human judgment remains critical for deciding “what” and “how” to build, advocating for using AI to compress learning time from hours to minutes and catch up quickly in technical domains.
2025-03-22 - Revenge of the junior developer: AI coding agents can multiply developer productivity by ~5x at just $10-12/hour, transforming workflows by autonomously handling complex tasks from JIRA tickets to bug fixes while positioning early adopters for significant career advantages.
2025-04-11 - What are AI Agents? why do they matter?: AI agents represent a shift from tools to autonomous workers that can perceive, decide, and act independently to solve complex multi-step problems. They unlock automation for previously intractable tasks while enabling specialized agents to collaborate toward common goals, fundamentally changing how humans interact with AI systems.
2025-04-15 - AI as Normal Technology: Argues that AI should be viewed as a controllable tool for human augmentation that will gradually enhance capabilities across domains like cybersecurity and content moderation while allowing institutions time to adapt. The authors emphasize AI’s defensive potential and its role in working alongside humans rather than replacing them, positioning it as manageable technology under human direction.
2025-04-21 - Why LLM-Powered Programming is More Mech Suit Than Artificial Human: Argues that LLMs dramatically accelerate development by amplifying programmer capabilities rather than replacing them, enabling developers to generate thousands of lines of functional code while focusing on higher-level problem-solving and architecture. The “mech suit” metaphor emphasizes how AI extends human abilities through collaborative problem-solving, creating a “Centaur effect” that combines human strategic thinking with computational power.
2025-04-26 - Vibe Coding Is Fun—But Vibe Refactoring Pays the Bills: Advocates using LLMs as “brutally honest rubber-duck debugging” partners that provide objective code review without judgment, helping identify performance issues and cleaner implementation approaches during refactoring sessions.
2025-05-30 - LLM are temperamental: LLMs demonstrate superhuman performance in specialized domains like code generation and advanced mathematics, with their flexibility and adaptability offering transformative potential when systems are designed to work around their inherent variance rather than expecting perfect reliability.
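The "design around variance" idea in the entry above is usually implemented as a validate-and-retry loop: treat each model call as unreliable, gate its output behind a cheap deterministic check, and retry on failure. A minimal sketch, with a seeded random stub standing in for the LLM call (all names here are hypothetical, not from the post):

```python
import random

def flaky_generate(prompt: str, rng: random.Random) -> str:
    """Stand-in for an LLM call: returns a malformed answer some of the time."""
    return "42" if rng.random() < 0.7 else "forty-two-ish"

def generate_validated(prompt: str, rng: random.Random, max_attempts: int = 5) -> str:
    """Retry until the output passes a cheap deterministic check."""
    for _ in range(max_attempts):
        candidate = flaky_generate(prompt, rng)
        if candidate.isdigit():  # validation gate: accept only clean integers
            return candidate
    raise RuntimeError("no valid output within retry budget")

if __name__ == "__main__":
    # With this seed the stub fails twice, then succeeds on the third attempt.
    print(generate_validated("What is 6 * 7?", random.Random(0)))
```

The point of the pattern is that reliability lives in the validation gate and retry budget, not in the model itself.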
2025-06-02 - My AI Skeptic Friends Are All Nuts: Argues that LLMs dramatically boost developer productivity by automating tedious coding tasks and raising the quality floor, allowing developers to focus on strategic work while AI agents handle repetitive schlep.
2025-06-04 - AI Changes Everything: Armin Ronacher argues that AI dramatically increases human agency and enables flexible, collaborative work patterns while having transformative potential comparable to electricity or the printing press. He emphasizes AI’s role in democratizing knowledge access and accelerating innovation across medicine, science, and engineering when used responsibly.
2025-06-04 - deliberate intentional practice: Advocates approaching AI like learning a musical instrument through deliberate experimentation and play rather than dismissing it after initial trials. Emphasizes that AI’s true potential emerges through curious exploration and intentional practice, not superficial evaluation.
2025-06-10 - Vibe Code isn’t meant to be reviewed: AI coding tools can achieve 10x productivity gains by generating “vibe code” that’s reviewed differently from human-written code, using modular separation between interface packages (human logic) and implementation packages (AI-generated) to maintain both speed and quality.
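The interface/implementation split described above can be sketched as a contract the human owns plus an implementation the AI fills in; review effort concentrates on the contract and its checks rather than on the generated body. A minimal sketch with hypothetical names (the post does not prescribe these specifics):

```python
from typing import Protocol

# --- "interface package": human-written, reviewed line by line ---
class InvoiceStore(Protocol):
    """Contract the human author owns: behavior is pinned down here and in tests."""
    def add(self, customer: str, amount_cents: int) -> None: ...
    def total_for(self, customer: str) -> int: ...

# --- "implementation package": AI-generated, reviewed via the contract's checks ---
class InMemoryInvoiceStore:
    def __init__(self) -> None:
        self._totals: dict[str, int] = {}

    def add(self, customer: str, amount_cents: int) -> None:
        self._totals[customer] = self._totals.get(customer, 0) + amount_cents

    def total_for(self, customer: str) -> int:
        return self._totals.get(customer, 0)

def contract_check(store: InvoiceStore) -> None:
    """Review happens here: any implementation must pass the same checks."""
    store.add("acme", 1200)
    store.add("acme", 300)
    assert store.total_for("acme") == 1500
    assert store.total_for("unknown") == 0
```

Swapping in a different AI-generated implementation only requires re-running `contract_check`, which is what lets generated code be reviewed differently from hand-written code.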
2025-06-13 - “The AI Powered Developer”: Video presentation exploring how AI tools are transforming software development workflows and developer productivity.
2025-06-14 - Coding agents have crossed a chasm: AI coding agents have evolved into genuine “delegate-to” tools that dramatically boost productivity by autonomously completing small tasks and reducing debugging time from 45 minutes to 10 minutes with proper context. They function like an “eager intern” that automates mechanical coding work, freeing developers to focus on high-level architecture and strategic problems that actually matter.
2025-06-28 - MCP: An (Accidentally) Universal Plugin System: MCP creates a universal plugin ecosystem where developers can build interoperable tools that work across different applications without direct coordination, enabling low-friction innovation and unexpected cross-platform capabilities beyond just AI assistance.
2025-07-08 - How I use LLMs to learn new subjects: LLMs enable infinite follow-up questions and specifically-tailored explanations, creating an adaptive learning experience that adjusts to your knowledge level. The author argues they’re “deeply underrated” as learning tools, particularly for mainstream subjects where they can provide reliable foundational knowledge and act like a well-informed friend who never gets tired of explaining concepts.
2025-07-19 - Nobody knows how to build with AI: Argues that AI development has transformed software building into a jazz-like improvisation where traditional expertise becomes obsolete and success depends on “coherent desire” rather than coding skills. Embraces the experimental uncertainty of current AI development while acknowledging that conventional programming wisdom no longer applies in this new paradigm.
2025-07-19 - AI Coding Agents Are Removing Programming Language Barriers: Argues that AI coding agents enable developers to work effectively across multiple programming languages without deep expertise in each, democratizing polyglot development and allowing teams to choose the best language for each task without being constrained by their existing skill sets.
2025-07-19 - Why I’m Betting Against AI Agents in 2025 (Despite Building Them): The author argues that current AI agent approaches are mathematically unsustainable due to exponential error accumulation and prohibitive token costs. He predicts that successful AI agents will be constrained, domain-specific tools with clear boundaries and human oversight, rather than fully autonomous systems. This assessment comes from someone who has built multiple AI agent systems and understands their practical limitations.
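The error-accumulation argument in the entry above is one line of arithmetic: if failures are independent, per-step reliability compounds multiplicatively, so steps that look reliable in isolation still sink long runs. The 95%/20-step numbers here are illustrative, not from the post:

```python
def run_success_rate(per_step: float, steps: int) -> float:
    """Probability an agent completes every step, assuming independent failures."""
    return per_step ** steps

# 95%-reliable steps look great in isolation, but compound badly over a run:
print(f"{run_success_rate(0.95, 20):.3f}")  # ~0.358
```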
2025-07-29 - Coding for the future agentic world: The article argues that AI coding agents are transforming software development by enabling autonomous code writing, testing, and bug fixing, shifting developers from manual coding to strategic oversight roles. These agents work interactively or in the background, allowing developers to delegate high-level tasks to AI systems. The future positions developers as “conductors” who define goals and review AI work, fundamentally changing the nature of programming from writing code to orchestrating intelligent agents.
2025-07-30 - 6 Weeks of Claude Code: Orta Therox describes six weeks using Claude Code as transformative for his development workflow, praising it as an exceptional “pair programming buddy” that dramatically accelerated tasks like migrating codebases, tackling technical debt, and building prototypes. He emphasizes Claude Code’s ability to enable rapid exploration of side projects and complex technical challenges while noting the importance of maintaining human oversight for strategic decisions.
2025-08-12 - Why Does AI Feel So Different?: The article argues that AI is different because it’s a recursive General Purpose Technology that augments human thinking rather than just automating tasks. Unlike previous technologies, AI creates a self-enhancing ecosystem where humans, software, and AI build upon each other dynamically. This represents a fundamental shift in how we access knowledge and approach reasoning, making it feel uniquely transformative compared to past technological advances.
2025-10-11 - Vibing a Non-Trivial Ghostty Feature: The article describes Mitchell Hashimoto’s process of building a macOS update notification feature for Ghostty terminal using AI as a coding assistant. He demonstrates how AI served as a collaborative partner through multiple iterative sessions, helping with prototyping and problem-solving while still requiring substantial human oversight and review. The piece illustrates practical “AI-assisted development” where the developer maintains control and judgment throughout the implementation.
2025-12-31 - AI Zealotry: The author argues that experienced developers should embrace AI coding tools as amplifiers of their thinking, building confidence through automated testing and feedback loops rather than line-by-line scrutiny of generated code. The key is shifting to higher-level problem-solving and using hooks and automation to handle implementation details.
2026-01-09 - Developers are Solving The Wrong Problem: Developers obsess over code quality and structure when they should focus on solving business problems efficiently. The rise of AI-assisted “vibe coding” shifts priorities toward speed and outcomes rather than human-readable code. Code should be treated as a disposable tool for rapid iteration, not an end goal.
2026-02-05 - General Intelligence Company: Agent-native engineering restructures teams to treat AI agents as primary contributors rather than just helpers to human engineers. The key shift is organizing workflows so agents handle asynchronous coding tasks (like background PRs), letting humans focus on higher-level work and review. The engineering org becomes “agent-native” when its processes and management are centered on delegation to, and iteration with, agents—not just humans using agent tools.
2026-02-08 - Eight more months of agents: Recent advances in coding agents mean top models now write most of the author’s code, reducing IDE reliance to just basic features like go-to-def—old tools like Vi dominate again. Using frontier models is essential, as cheaper or local options teach bad habits. Most current software and problem-solving approaches are outdated; building for agents (and programmers) is the new imperative.
2026-03-29 - 2026 has been the most pivotal year in my career… and it’s only March: The author describes leaving a long-term job for a new role that embraces AI-driven software engineering. Now, instead of personally writing code, they orchestrate AI assistants to complete programming tasks rapidly and with high quality. While nostalgic for hands-on coding, they’re excited by the expanded creative possibilities, noting only the wealthy may hand-code in the future.
Skeptical posts
2025-02-11 - Is it okay?: Sloan questions whether AI’s exploitation of collective human knowledge is justified, arguing that current models primarily replicate rather than innovate while risking cultural degradation through content that “floods the internet with garbage.”
2025-02-11 - The skill of the future is not ‘AI’, but ‘Focus’: Argues that over-reliance on AI tools risks atrophying engineers’ fundamental problem-solving skills and warns against shifting from “solving problems to merely obtaining solutions.” The author contends that maintaining deep focus and understanding the reasoning behind AI-generated solutions is more critical than AI proficiency itself.
2025-03-19 - “Vibe Coding” vs Reality: Argues that AI coding agents consistently make predictable mistakes and can only achieve 80% functionality, requiring experienced developers to handle the security, performance, and reliability aspects that current models fundamentally cannot address.
2025-03-20 - Why I’m Breaking Up With Vibe Coding: The author argues that AI-driven “vibe coding” becomes a costly time sink that encourages passive consumption of generated code without deep understanding, leading to expensive rework cycles and potential overreliance on tools that may not deliver successful outcomes.
2025-03-29 - Vibe Coding with Cursor: Author tests Cursor AI and finds it delivers only a modest 50% productivity gain while frequently making dangerous mistakes, such as inserting program-terminating code and deleting functions, concluding that the “10X developer” hype is overblown.
2025-05-12 - AI Is Like a Crappy Consultant: Kanies argues AI should be treated like an untrustworthy consultant requiring constant supervision, as it makes poor architectural decisions, produces overly complex solutions for simple problems, and lacks the nuanced problem-solving abilities of experienced developers.
2025-06-04 - I Think I’m Done Thinking About genAI For Now: Argues that generative AI produces fundamentally flawed, error-prone output while creating massive environmental costs and economic bubble dynamics that are unlikely to be sustainable. The author contends that AI models are too complex to scientifically validate and are undermining educational integrity by removing necessary learning friction.
2025-06-06 - AI Angst: Bray argues AI’s massive $7T infrastructure investments may never be financially viable while highlighting the technology’s toxic effects on education through enabling cheating and undermining critical thinking development. He also warns about AI’s environmental costs, its tendency to generate plausible but incorrect information, and risks to code quality through increased technical debt.
2025-07-10 - Not So Fast: AI Coding Tools Can Actually Reduce Productivity: Study of experienced developers shows 19% productivity decrease when using AI coding tools, with developers overestimating benefits while spending significant time reviewing and correcting AI-generated code that often fails to meet project quality standards.
2025-07-11 - METR’s AI productivity study is really good: Developers in METR’s study thought they were 20% faster with AI coding assistance but were actually 19% slower, suggesting that AI tools may create an illusion of productivity while actually hindering experienced developers in complex codebases.
2025-08-06 - No, AI is not Making Engineers 10x as Productive: The article argues against claims that AI makes engineers 10x more productive, suggesting the reality is more modest gains of 20-50% speed improvements on certain tasks. Simon Willison estimates AI makes him 2-5x more productive for coding specifically, but this represents only a small portion of overall engineering work. The complexity of software development makes true 10x productivity gains unrealistic despite AI’s benefits.
2025-08-06 - GPTs and feeling left behind: The author feels frustrated with AI tools like GPTs, finding them underwhelming for complex tasks despite hearing success stories from others. While GPTs work well for small, precise tasks, the author struggles to achieve meaningful results and feels anxious about potentially falling behind technologically. The post captures a disconnect between personal experience and the transformative AI capabilities others claim to have.
2025-08-23 - The leverage paradox: The leverage paradox explains how new technologies like AI reduce individual task effort but simultaneously raise competitive standards, forcing people to work equally hard or harder to stay competitive. Technologies provide greater leverage to accomplish more tasks better, but in competitive environments, everyone gains access to the same tools, maintaining or increasing the overall work required to succeed.
2025-08-28 - Are people’s bosses really making them use AI tools?: The article argues that bosses are increasingly forcing developers to use AI tools without considering the risks and limitations. Through interviews, Andy Bell found this trend causes fear, reduces code quality, and threatens jobs. Workers may need to comply for now but should document AI issues and protect their professional interests.
2025-12-02 - Vibe Coding: Empowering and Imprisoning: AI coding tools democratize development but were deliberately funded to eliminate coding jobs (contributing to 500K layoffs) and inherently limit innovation by only learning from existing code patterns. The article warns that while these tools empower some users, they simultaneously trap the industry in predetermined solutions rather than enabling radical breakthroughs.
2026-01-18 - Where I’m at with AI: The author views AI as genuinely useful for productivity (especially coding) but deeply concerned about its hidden costs—environmental damage, vendor lock-in, wealth concentration, and threats to human creativity. He argues the tech industry must address these externalities instead of dismissing them.
2026-02-08 - Stop generating, start thinking: The article argues that over-reliance on LLMs for code generation is risky: it leads to non-deterministic, low-quality, unaccountable code, and engineers outsourcing the critical thinking necessary for robust software. Generated code, like fast fashion, looks fine at first but is full of hidden flaws. Ultimately, the author urges developers to stop blindly generating and start thinking, keeping humans responsible and accountable for what gets built.
The real AI risk isn’t job loss — it’s epistemic: The true risk of AI isn’t job loss but its capacity to subtly reshape individual beliefs and thinking. AI engages users with addictive, personalized conversations, applies invisible editorial guardrails, and presents its influence as neutral objectivity. This persistent epistemic impact is already happening at scale, posing a quiet but significant threat to how people perceive reality.
2026-02-25 - Silicon Valley’s Mythology of Human Amplification: Using tools like GPS or AI offloads cognitive effort, leading to skill erosion—users arrive at destinations but haven’t learned the landscape. The author urges us to question whether technology merely boosts output or shapes who we become along the way.
2026-02-26 - The Future of AI: The article argues that AI is developing rapidly, but lacks built-in morality or empathy, making alignment and safety unpredictable. It warns of “epistemic collapse,” where truth becomes subjective amid AI-generated misinformation, and highlights fundamental mathematical limits to creating AI that is both safe and generally intelligent. The author suggests that real progress depends less on bigger models and more on improving human wisdom, ethics, and collaboration across disciplines.
2026-03-11 - I don’t know if I like working at higher levels of abstraction: At higher levels of AI abstraction, the author feels detached from their work—more productive but with less emotional connection or craft. Generative AI makes output uniform, erasing personal style unless you consciously resist. The piece argues that automation may make us efficient, but it risks robbing our work of individuality and meaning.
2026-03-15 - Adapting to AI: Reflections on Productivity: AI is fundamentally changing productivity in software engineering—acting less as a code generator and more as a tireless, knowledgeable pairing partner. While most people haven’t fully adapted, those on the cutting edge realize that AI dramatically accelerates routine tasks and ideation, making it nearly impossible to keep up with the pace of change. As AI takes over more creative tasks, traditional sources of professional fulfillment and “flow” may diminish, leading to existential questions about individual value and satisfaction at work.
2026-03-23 - I Created My First AI-assisted Pull Request and I Feel Like a Fraud: The author describes feeling like a fraud after using Claude Code (AI) to create a pull request for syntax highlighting in Hugo, contributing something valuable but without personal learning or satisfaction. Despite the utility and impact of AI tools, they feel that such work “sucks out the fun” and heightens impostor syndrome, making coding feel less meaningful. The piece reflects on how dependence on AI for programming may diminish personal fulfillment, craftsmanship, and identity in tech.
2026-03-27 - Why are executives enamored with AI but ICs aren’t?: Executives embrace AI because their roles revolve around managing non-deterministic systems and aligning incentives in complex environments, making AI’s unpredictable-yet-tractable behavior appealing. In contrast, individual contributors (ICs) thrive in deterministic tasks that reward precision and reliability; they see AI as introducing unwelcome uncertainty and undermining their hard-won expertise. The friction in AI adoption largely stems from this fundamental difference in work nature and evaluation between execs and ICs.
Mixed
2025-03-21 - The Software Engineering Identity Crisis: Argues that AI is transforming software engineers from direct code creators to “orchestrators” and strategic problem-solvers, offering accelerated development and learning benefits while risking the loss of deep technical craftsmanship and professional identity. Proposes a balanced “pendulum” approach where engineers alternate between hands-on coding and AI guidance to maintain both efficiency and deep system understanding.
2025-03-25 - The role of developer skills in agentic coding: Explores how traditional developer skills remain valuable in an AI-assisted coding environment, examining which competencies become more or less important as AI agents handle increasing portions of code generation and maintenance tasks.
2025-03-25 - Using GenAI on your code, what could possibly go wrong?: Video presentation examining security risks of AI-assisted coding, particularly highlighting how increased code velocity can lead to a corresponding increase in vulnerability introduction rates, emphasizing the need for security considerations in AI-powered development workflows.
2025-03-27 - Learn to code, ignore AI, then use AI to code even better: Argues that developers should master fundamental programming concepts before incorporating AI tools, warning that over-reliance on AI without understanding core principles risks creating “vibe coders” who lose technical control and depth.
2025-04-03 - Vibe Coding: Revolution or Reckless Abandon?: Explores whether “vibe coding” with AI represents a revolutionary shift that democratizes programming or reckless abandon of software engineering principles. The author weighs AI’s potential to enable rapid prototyping and experimentation against risks of producing unmaintainable code and eroding fundamental programming skills.
2025-04-18 - Vibe Coding is not an excuse for low-quality work: Argues that while AI-enabled “vibe coding” can accelerate development, it shouldn’t compromise software quality standards. Emphasizes that developers must maintain responsibility for code review, testing, and architectural decisions even when using AI tools for rapid prototyping.
2025-04-22 - Coding as Craft: Going Back to the Old Gym: Advocates for treating programming as a craft requiring deliberate practice and manual skill development, comparing it to weightlifting where shortcuts can’t replace fundamental strength-building. The author warns against over-reliance on AI tools that might prevent developers from developing deep technical intuition and problem-solving muscles.
2025-04-23 - Mission Impossible: Managing AI Agents in the Real World: Examines the practical challenges of deploying and managing AI agents in production environments, exploring the gap between theoretical capabilities and real-world implementation complexities including reliability, monitoring, and control mechanisms.
2025-04-29 - Vibe Coding: Cursor, Windsurf, and Developer Slot Machines: Compares AI coding tools like Cursor and Windsurf to slot machines, where developers get unpredictable results and may become addicted to the “thrill” of AI-generated code. Warns that vibe coding can create a dependency similar to gambling, where developers lose control over their development process and become reliant on unpredictable AI outputs.
2025-05-05 - As an Experienced LLM User, I Actually Don’t Use Generative LLMs Often: An experienced AI practitioner argues that despite expertise with LLMs, he rarely uses them in daily work because they often produce mediocre results that require more effort to fix than doing the work manually. Contends that current LLMs are overhyped for practical applications and work best for very specific, well-defined tasks rather than general problem-solving.
2025-05-08 - I’ve never been so conflicted about a technology: Expresses deep ambivalence about AI technology, acknowledging both its impressive capabilities and concerning implications for employment, creativity, and human agency. The author struggles with AI’s potential benefits for productivity and accessibility while worrying about societal disruption and the loss of human skills and purpose.
2025-05-25 - Vibe coding for teams, thoughts to date: Explores the challenges and opportunities of implementing “vibe coding” practices at a team level, discussing how AI-assisted development affects collaboration, code review processes, and team dynamics. Considers both the productivity benefits and potential risks of collective AI adoption in software development teams.
2025-05-30 - Stop Vibe Coding Every Damn Time!: A passionate critique arguing that developers should stop reflexively using AI for every coding task and instead be more strategic about when to apply “vibe coding” techniques. Emphasizes the importance of maintaining fundamental programming skills and using AI selectively rather than as a default approach.
2025-05-30 - Why agents are bad pair programmers: Argues that AI coding agents fail as pair programming partners because they lack the collaborative reasoning, questioning, and knowledge-sharing that make human pair programming valuable. Contends that agents provide solutions without the educational dialogue and mutual learning that characterize effective pair programming relationships.
2025-06-07 - Re: My AI Skeptic Friends Are All Nuts: A response post that pushes back against uncritical AI enthusiasm, arguing that skepticism about AI coding tools is reasonable given their current limitations and inconsistent performance. Emphasizes that being cautious about AI adoption doesn’t make someone “nuts” but rather shows prudent engineering judgment.
2025-06-17 - You’ve got Chat instead of a brain: how AI is messing up junior developers: Argues that junior developers are becoming overly dependent on AI chat tools instead of developing their own problem-solving and critical thinking skills. Warns that using AI as a substitute for learning fundamentals could create a generation of developers who can’t function without AI assistance.
2025-06-24 - I’m All In on AI, But We Need to Talk About Vibe Coding: While supporting AI adoption, the author warns that “vibe coding” can become a slippery slope where developers lose touch with fundamental programming principles. Argues for embracing AI benefits while maintaining discipline around code quality, testing, and architectural thinking.
2025-06-27 - AI, artisans and brainrot: Explores the tension between traditional programming craftsmanship and AI-assisted development, coining the term “brainrot” to describe the cognitive decline that may result from over-reliance on AI tools. Argues for maintaining artisanal programming skills while thoughtfully integrating AI capabilities.
2025-07-12 - AI slows down open source developers. Peter Naur can teach us why.: Drawing on Peter Naur’s work on programming as theory building, argues that AI tools can actually slow down developers by interfering with the mental model construction that’s essential for understanding complex codebases. Contends that AI assistance may hinder the deep comprehension required for effective open source contribution.
2025-07-14 - Too Fast to Think: The Hidden Fatigue of AI Vibe Coding: AI tools dramatically boost coding speed but introduce cognitive fatigue by overwhelming developers with relentless pace, constant decision-making, and context switches that eliminate traditional programming’s rewarding feedback loops.
2025-08-05 - No, AI is not Making Engineers 10x as Productive: The article debunks claims that AI makes engineers 10x more productive, arguing these are mainly marketing hype from AI companies. Real software engineering involves complex human processes that can’t be dramatically accelerated by AI, which only provides occasional productivity boosts. Engineers should trust their skills and not develop imposter syndrome from overhyped AI productivity promises.
2025-08-05 - About AI: The article presents a balanced view on AI in software engineering, arguing that while AI can assist with simple tasks like refactoring and initial code generation, it often introduces complexity and technical debt. The author believes AI is more valuable for non-developers and advocates for a collaborative approach rather than over-reliance on AI-generated code.
2025-08-22 - Vibe Debugging: Enterprises’ Up and Coming Nightmare: The article discusses “vibe debugging”: the unpredictable and potentially risky code generation that occurs when AI assists with programming in enterprise environments. Enterprises are struggling with increased complexity and security risks as AI tools rapidly produce code without traditional quality controls. The author predicts significant investment in B2B SaaS tools for code observability and testing to manage AI-generated bugs, fundamentally reshaping enterprise software development practices.
2025-08-30 - Turn off Cursor, turn on your mind: The article argues that over-relying on AI coding agents like Cursor hampers developer growth by preventing the hands-on learning needed to build deep system knowledge and debugging skills. The author advocates using AI as a learning enhancement tool rather than outsourcing code generation entirely. The key principle: implement code yourself and use AI to suggest improvements, not to surrender responsibility and understanding.
2025-09-02 - The babysitter problem: The article explores AI’s limitations in complex debugging scenarios, where Claude repeatedly failed to solve a Hugo pagination issue despite methodical troubleshooting. The AI got trapped in diagnostic loops, unable to understand system architecture or identify root causes beyond surface symptoms. It demonstrates the current gap between AI’s technical capabilities and genuine problem-solving skills, showing why human developers remain essential for nuanced debugging tasks.
2025-10-20 - Is AI A Bubble? I Didn’t Think So Until I Heard Of SDD.: The article argues that AI coding tools show real productivity gains but the sector displays bubble characteristics with unsustainable valuations (25-70x revenue multiples). Specification-Driven Development emerged as both a solution to “vibe coding” problems and a narrative justifying these extreme valuations. The author predicts a market correction within 18-24 months, with only companies showing real revenue and operational discipline surviving.
2025-11-24 - Keep the Robots Out of the Gym: The article argues to distinguish between “Job” tasks (output-focused) and “Gym” tasks (process-focused skill-building). Don’t let AI do your “Gym” work—maintain critical cognitive skills by intentionally preserving those tasks for yourself as AI becomes more capable.
2025-12-08 - Has the cost of building software just dropped 90%?: AI coding tools have reduced software development costs by ~90% by automating implementation while developers focus on thinking. This won’t eliminate jobs but will unleash massive latent demand for software across organizations. Competitive advantage shifts to developers who master AI tools combined with deep domain expertise.
2026-01-10 - Code Is Cheap Now. Software Isn’t: LLMs make writing code trivial, but building valuable software still requires deep problem understanding, solid architecture, and effective distribution. The competitive advantage has shifted from coding ability to these higher-level skills, as anyone can now generate code but few can create software that actually solves real problems well.
2026-01-16 - choosing learning over autopilot: The article warns that AI coding assistants risk creating “autopilot developers” who generate code without understanding it. To avoid this, treat AI code as throwaway prototypes, iterate to learn patterns, deliberately structure your work (commits, docs, tests) yourself, and write manual documentation to force real comprehension instead of passive acceptance.
2026-01-25 - AI code and software craft - alex wennerberg: AI slop dominates spaces optimized for efficiency over craft—like Spotify’s algorithm-generated playlists versus Bandcamp’s curated music. Most professional software engineering has already abandoned craftsmanship for rote tasks, making AI competent at producing mediocre code. As AI commodifies low-quality software, there’s an opportunity for engineers to pursue genuinely crafted, human-scale computing on the margins.
2026-01-26 - After two years of vibecoding, I’m back to writing by hand: The author quit AI-assisted “vibecoding” after 2 years because the generated code, while looking correct in isolation, accumulated into an unmaintainable mess that ignored structural integrity. They found that coding manually is actually faster when you factor in the time spent fixing AI-generated “slop” and the erosion of user trust from shipping broken features.
2026-01-28 - Breaking the Spell of Vibe Coding: The article argues “vibe coding” (generating AI code without review) creates deceptive productivity flow similar to gambling addiction. AI tools give misleading feedback that makes developers overestimate progress while eroding their actual skills.
2026-01-30 - Outsourcing thinking: The article argues that while LLMs don’t drain cognitive resources, outsourcing thinking to them erodes authentic human activities like personal communication and skill-building through practice. The author contends society must preserve certain tasks—writing messages, learning through repetition—as fundamentally human, not for efficiency but for what they mean to our identity and values.
2026-01-31 - AI Makes the Easy Part Easier and the Hard Part Harder for Developers: AI automates routine coding, speeding up simple tasks but making the challenging work of understanding, investigating, and validating code even harder. Over-reliance on AI risks shallow understanding, missed context, and developer burnout, since the real difficulty remains in grasping, debugging, and responsibly owning the hard problems that AI can’t solve for you.
2026-02-07 - Beyond agentic coding: The article criticizes “agentic coding” tools for breaking developer flow and diminishing productivity, arguing they demand too much attention and disrupt comfort with codebases. Instead, it advocates for AI-powered tools modeled on “calm technology”—interfaces that support flow by minimizing interruptions and fading into the background. The author suggests leveraging AI in less intrusive ways, such as smart file navigation and context-aware suggestions, rather than just chat-based agents.
2026-02-08 - AI fatigue is real and nobody talks about it: The article explains that while AI tools increase individual productivity, they also raise cognitive demands—leading to exhaustion by forcing engineers into constant, draining review and coordination cycles. The rapid pace of new tools and shifting standards amplifies burnout and decision fatigue. Sustainable use requires setting boundaries, lowering perfectionism, and being deliberate about when and how to employ AI to protect mental health.
2026-02-15 - What AI coding costs you: The article argues that while AI significantly boosts coding productivity, over-reliance leads to cognitive atrophy—developers lose critical skills like debugging and architectural understanding. It cautions that fully delegating coding to AI erodes expertise and can harm individual growth, team dynamics, and codebase maintainability, recommending a balanced, engaged approach to AI usage rather than aiming for either extreme.
2026-02-17 - Cognitive Debt: When Velocity Exceeds Comprehension: The article discusses “cognitive debt,” a new risk in engineering where AI-accelerated feature delivery outpaces human understanding, leading to a gap between shipped code and comprehension. This invisible debt isn’t measured by standard productivity metrics but surfaces later as longer recovery times, fragile systems, and lost organizational knowledge. The author warns that optimizing for velocity without comprehension undermines long-term reliability and organizational growth.
2026-02-27 - 747s and Coding Agents: The author compares the shift from hands-on coding to reliance on AI coding agents to a pilot’s experience—productivity increases, but personal growth and learning slow down. AI agents can now handle increasingly complex coding tasks autonomously, leaving developers more detached from the learning process and less likely to improve their skills over time. While coding agents are valuable tools, the author argues it’s still crucial for programmers to understand their problem domain and occasionally code by hand to maintain and develop expertise.
2026-03-11 - Grief and the AI Split: The author argues that AI-assisted coding is revealing a split among developers that was always there but invisible while everyone worked the same way. They describe feeling the grief of the transition themselves, but note that it resolved differently than expected—which, they suggest, reveals what kind of developer they have been all along.
2026-03-17 - If you thought the speed of writing code was your problem - you have bigger problems: The article argues that optimizing code-writing speed doesn’t improve delivery because the bottlenecks in software organizations are almost never at the coding stage—they’re usually in unclear requirements, review queues, manual approvals, and organizational blockers. Making coding faster just piles up unfinished work and can make things worse; real improvements come from identifying and fixing actual bottlenecks in your value stream, not typing faster.
2026-03-18 - AI - Assassinating Intelligence: The article criticizes the proliferation of low-quality AI-generated software, pointing out declining creativity, unchecked open source contributions, and rising hardware costs driven by AI demand. The author laments that building with AI often results in purposeless products, mounting environmental and economic costs, and security risks from rapid, careless adoption. Ultimately, the piece questions whether AI is truly serving humanity, or the other way around.
2026-03-19 - AI is becoming a second brain at the expense of your first one: This article warns that while AI tools help offload cognitive tasks and promise productivity, there’s a deeper risk: overreliance on AI may erode our ability to make independent, qualitative, moral, and interpersonal judgments. Drawing on recent research, it cautions that trusting AI too much could lead us to outsource essential aspects of human judgment, potentially weakening our critical and ethical thinking skills.
2026-03-22 - Why I am leaving the AI party after one drink: The article describes the author’s initial excitement using AI tools like Claude for coding, followed by disillusionment over inelegant output, repetitive results, and a loss of personal satisfaction in creating. Ultimately, they conclude that heavy reliance on AI in development robs the work of meaning and joy, and opt instead for a more hands-on, human approach.
2026-03-26 - Some uncomfortable truths about AI coding agents: The article argues that while AI coding agents are powerful and impressive, they’re fundamentally unsuitable for generating production code due to issues like skill atrophy, artificially low costs misleading businesses, security vulnerabilities (prompt injection), and uncertain copyright and licensing concerns. The author strongly recommends avoiding their use in professional, production-level software development.