2024-08-21 - I passionately hate hype, especially the AI hype: A scathing critique arguing that only 10% of AI hype is based on useful facts while the rest is “exaggerated rubbish” that misleads companies into firing employees and wastes resources on rebranded existing technologies. The author warns against falling prey to false promises that prioritize corporate profit over genuine technological advancement while draining energy, water, and exploiting workers.
2024-10-21 - Using AI Generated Code Will Make You a Bad Programmer: The article argues AI-generated code prevents programmers from developing real skills by eliminating learning opportunities and causing dependency. It warns that early-career developers who rely on AI will suffer skill atrophy and lose professional respect. While acknowledging AI may suit non-enthusiasts, it emphasizes the long-term career damage of outsourcing problem-solving to machines.
2025-02-11 - Prompting LLMs is not engineering: Argues that “prompt engineering” is pseudoscience comparable to homeopathy, criticizing the lack of evidence for prompting techniques and comparing current practices to “shamanic rituals” applied to unpredictable black boxes. Claims that most prompting methods work only in narrow contexts and lack the systematic, reproducible foundations of legitimate engineering disciplines.
2025-03-24 - The Death of the Software Engineer by a Thousand Prompts: Argues that AI will dramatically reduce software engineering jobs by replacing teams with fewer AI prompters, while overhyped AGI capabilities mask fundamental limitations of LLMs as “fancy predictive algorithms.” Warns that the profession will become hierarchical with only specialized architects and AI managers surviving the cost-driven transformation.
2025-03-28 - Vulgar Display of Power: The author argues that AI represents a “vulgar display of power” by tech companies that appropriate creative works without consent while fundamentally lacking human understanding of pain, vulnerability, and genuine emotion. Drawing on Miyazaki’s criticism of AI as “an insult to life itself,” the piece frames AI development as a dehumanizing technology that tests societal boundaries and undermines artistic agency through a “might makes right” approach.
2025-04-01 - Why I stopped using AI code editors: The author argues that AI code editors erode fundamental programming skills and problem-solving abilities, creating dangerous dependencies that compromise both code security and professional competence. He advocates for intentional, limited AI usage to preserve technical intuition and independent coding capabilities.
2025-04-09 - We’ve Been Conned: The truth about Big LLM: The article argues that Big LLM companies are deliberately misleading consumers by obscuring the astronomical costs of running full-parameter models (citing examples like $292,000/month operational costs) while promoting quantized versions that perform poorly on basic tasks.
2025-04-12 - AI code suggestions sabotage software supply chain: Researchers found AI coding assistants “hallucinate” non-existent package names up to 21.7% of the time, creating “slopsquatting” vulnerabilities where attackers can upload malicious packages with these AI-suggested names. The problem is amplified by developers practicing “vibe coding” - blindly trusting AI suggestions without verification - and further validated by AI search results that confidently confirm these phantom packages exist.
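The slopsquatting risk described in that piece suggests a simple mitigation: never install an AI-suggested dependency that a human has not vetted. A minimal sketch of such a gate (the allowlist contents and package names below are illustrative stand-ins, not taken from the article):

```python
# Guard against "slopsquatting": screen AI-suggested dependency names
# against a vetted allowlist before they ever reach `pip install`.
# VETTED_PACKAGES is a hypothetical example; a real project would
# maintain a list of human-reviewed, known-good packages.

VETTED_PACKAGES = {"requests", "numpy", "flask"}

def filter_suggestions(suggested):
    """Split AI-suggested package names into (safe, suspect) lists.

    Names not on the allowlist are flagged for human review rather
    than installed blindly -- the opposite of "vibe coding".
    """
    safe, suspect = [], []
    for name in suggested:
        (safe if name.lower() in VETTED_PACKAGES else suspect).append(name)
    return safe, suspect

# "reqeusts-helper" mimics the kind of plausible-looking phantom
# package an assistant might hallucinate.
safe, suspect = filter_suggestions(["requests", "reqeusts-helper", "numpy"])
print(safe)     # vetted names, OK to install
print(suspect)  # names needing human verification first
```

The point of the sketch is only that verification must happen before installation; confirming a name against AI search results, as the article notes, is not verification.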
AI Needs Your Help - Save the AI (satire): This satirical piece critiques AI’s massive resource consumption by portraying AI as a needy entity demanding that humans sacrifice their own water and electricity. The dark humor highlights the absurdity of prioritizing AI’s enormous energy and water footprint over human needs, exposing real environmental concerns about AI infrastructure.
2025-04-23 - The Hidden Cost of AI Coding – Terrible Software: The article argues that AI coding tools disrupt developers’ psychological “flow” state and transform them from active creators into passive curators, potentially sacrificing the deep satisfaction and craftsmanship that makes programming fulfilling despite productivity gains.
2025-05-04 - Developers, Don’t Despair, Big Tech and AI Hype is off the Rails Again: Argues that current AI models have fundamental architectural limitations from the 2017 transformer design, producing inconsistent code quality and lacking the common sense needed for reliable software engineering. The author criticizes tech leaders’ unrealistic claims about AI replacing developers, emphasizing that AI still requires constant human guidance and cannot conceptually architect large-scale systems.
2025-05-14 - LLMs are Making Me Dumber: The author argues that LLMs are causing skill atrophy by allowing people to bypass deep learning and produce output without understanding, potentially limiting their ability to develop innovative solutions outside the “LLM-solution space.” He warns that continuous reliance on AI for tasks like coding and problem-solving could eventually create barriers to independent capability and original thinking.
2025-05-16 - Thoughts on thinking: The author argues that AI is causing intellectual atrophy by providing polished answers that bypass the essential cognitive work of reasoning through problems, comparing AI usage to passive consumption that feels productive but undermines genuine mental development. He emphasizes that intellectual rigor comes from the journey of uncertainty and dead ends, which AI eliminates by delivering finished thoughts without the growth that comes from wrestling with ideas.
2025-05-19 - AI: Accelerated Incompetence: Argues that over-reliance on LLMs can “accelerate incompetence” by rapidly generating buggy code without proper validation while undermining developers’ critical thinking skills and preventing the “productive struggle” necessary for building fundamental programming competencies.
2025-05-22 - The Copilot Delusion: The author argues that AI coding assistants like GitHub Copilot fundamentally undermine programming craftsmanship by generating inefficient code without true understanding, while eroding developers’ deep technical skills and reducing programming to mindless “button-clicking” that attracts mediocre practitioners who prioritize speed over quality.
2025-05-22 - Engineers and AI: ramblings of a small startup founder: A startup founder argues that AI prevents genuine learning by providing quick but potentially incorrect solutions and creates unhealthy dependency that erodes engineers’ critical thinking skills. The post warns that over-reliance on AI tools risks making engineers replaceable by gradually diminishing their unique problem-solving abilities.
2025-05-22 - Problems in AI alignment: A scale model: The author argues that technical AI alignment research is myopically focused on mathematical solutions while ignoring the larger societal “Selection” forces that actually determine how AI develops and impacts society. Current alignment efforts miss the critical challenge of managing collective human influence on AI’s trajectory, making technical fixes insufficient for meaningful AI safety.
2025-05-27 - At Amazon, Some Coders Say Their Jobs Have Begun to Resemble Warehouse Work: Amazon developers report that AI code generation tools have made their work increasingly mechanical and repetitive, reducing programming to warehouse-like efficiency tasks that prioritize speed and output metrics over creative problem-solving and technical craft.
2025-05-29 - The promise that wasn’t kept: The author argues that AI promised to free developers for meaningful work by automating repetitive tasks, but instead has decreased valuable work time while producing large volumes of insecure, difficult-to-review code that shifts focus away from human-centered solutions.
2025-05-29 - AI is Dehumanization Technology: The article argues that AI is fundamentally a “dehumanization technology” that systematically replaces human empathy, nuanced decision-making, and ethical reasoning with algorithmic processing that automates societal biases and reduces people to data points. It contends that AI doesn’t enhance human capabilities but instead degrades social relations by removing care from decision-making while centralizing power in tech oligarchs’ hands.
2025-05-31 - AI-first - We’re just 6 months away from AGI ;-): Challenges inflated AI productivity claims from tech giants with personal anecdotes showing AI struggles with complex, niche technical tasks and argues current AI tools are just another overhyped technology rather than the revolutionary replacement for human engineers they’re marketed to be.
2025-06-07 - Knowledge Management in the Age of AI: Gardner argues that AI risks turning users into passive consumers who outsource their thinking rather than developing personal knowledge, advocating for maintaining individual intellectual autonomy by treating AI as an assistant while keeping personal knowledge systems primary.
2025-06-08 - My AI-Driven Identity Crisis: The author experiences an identity crisis as AI becomes capable of explaining technical concepts in his own style, questioning his professional value as a technical writer and developer. He notes that AI can already replicate his communication style with uncanny accuracy, challenging his belief in his unique ability to make complex ideas accessible. Despite the uncertainty, he hopes for a future where humans can pursue creative work for personal fulfillment rather than necessity.
2025-06-09 - Trusting your own judgement on ‘AI’ is a huge risk: Argues that people are psychologically vulnerable to AI’s cognitive shortcuts and manipulation tactics, making personal judgment unreliable for assessing AI capabilities. Warns that current AI adoption is based on anecdotal evidence rather than scientific research, potentially leading to systemic harm through uncritical acceptance of fundamentally flawed technology.
2025-06-14 - Why Generative AI Coding Tools and Agents Do Not Work For Me: Miguel Grinberg argues that AI coding tools don’t improve his productivity because reviewing AI-generated code takes as much time as writing it himself, while the responsibility to understand and maintain any code remains entirely his. He criticizes AI tools for having “anterograde amnesia” that resets learning for each task, creating an “uncanny valley” effect with code quality that feels subtly unnatural.
2025-06-17 - Why I Won’t Use AI: The author argues against AI adoption citing labor displacement without worker protection, environmental costs from massive energy consumption, and ethical concerns around unauthorized data scraping, while maintaining that programming’s value lies in human thinking and problem-solving rather than code generation.
2025-06-19 - Contra Ptacek’s Terrible Article On AI — Ludicity: Critiques uncritical AI evangelism and hype culture, arguing that AI discourse has reached “Amway-Megachurch-Cult levels” and that most executives are mandating AI adoption without strategic reasoning while ignoring legitimate concerns about IP theft and economic harm to creators.
2025-06-20 - Rolling the ladder up behind us: Argues that AI deployment prioritizes corporate profits over preserving human expertise, leading to skill erosion and security vulnerabilities while flooding markets with low-quality automated content. Criticizes how AI tools are marketed through fear while fundamentally being designed to replace skilled professionals with “disposable” work products built on weak technical foundations.
2025-06-24 - Human Learning Is Dead—Long Live Human Meaning: Argues that AI has “automated the struggle” of learning, eliminating the cognitive friction essential for character development and reducing education to mere prompt engineering. The author contends that while machines can process information, they cannot replace the transformative human process of meaning-making and deep understanding.
2025-06-26 - Why I don’t ride the AI Hype Train: The author argues against AI hype due to unethical data scraping practices, massive environmental costs, and the technology’s fundamental limitations in actual understanding while corporations rush deployment for market positioning rather than solving real problems.
2025-06-29 - AI agents wrong ~70% of time: Carnegie Mellon study: Carnegie Mellon researchers found AI agents complete only 30% of workplace tasks successfully, with most failing at basic office operations like messaging colleagues and handling UI elements. The study exposes widespread “agent washing” where vendors rebrand existing tools as autonomous AI, raising significant privacy and security concerns about systems requiring broad data access.
2025-07-04 - Everything around LLMs is still magical and wishful thinking · A Place Where Even Mammoths Fly: The author criticizes the AI industry’s lack of critical analysis and quantifiable context in discussing LLM capabilities, arguing that these non-deterministic tools work inconsistently (“50% of the time they work 50% of the time”) while being hyped with vague claims that receive uncritical acceptance from the tech community.
2025-07-09 - Your Prize for Saving Time at Work With AI: More Work: The article argues that AI’s productivity gains follow historical patterns where technological advances lead to increased output expectations rather than reduced working hours, potentially creating more work for employees while primarily benefiting employers. The piece warns that without deliberate policy interventions, AI will become another mechanism for extracting more labor from workers rather than improving work-life balance.
2025-07-14 - Death by a thousand slops: The curl maintainer argues that AI-generated vulnerability reports are overwhelming open source projects with low-quality “slop” submissions, where only 5% of AI reports contain genuine vulnerabilities while consuming significant developer time and creating emotional toll on security teams.
2025-07-17 - People Are The Point: The article argues that generative AI is fundamentally harmful because it devalues human potential, creativity, and intrinsic worth by offering “good enough” alternatives that prevent people from learning and developing their skills. The author contends that AI tools represent an “expression of contempt towards people” by reducing individuals to mere productivity metrics rather than recognizing their inherent value. He takes a “pro-human” stance against the flattening of human complexity and the loss of opportunities for critical thinking and personal growth.
2025-07-21 - The Hater’s Guide To The AI Bubble: Zitron argues the AI bubble is fundamentally unstable, built on “vibes and blind faith” with no underlying profitability despite massive capital expenditures (roughly $560 billion by major tech companies for only $35 billion in revenue). He contends that the entire AI trade hinges on NVIDIA GPU sales to a handful of companies running deeply unprofitable AI services, creating a brittle market dependency where NVIDIA represents 8% of US stock market value while its customers lose billions on AI products that lack genuine business applications or sustainable economics.
2025-08-04 - AI promised efficiency. Instead, it’s making us work harder.: The article argues that AI tools, contrary to promises of increased efficiency, are actually making people work harder by shifting cognitive load and creating more complex workflows. Despite individual perceptions of productivity, studies show AI adoption correlates with decreased delivery performance and increased mental exhaustion, as workers now spend more time managing and verifying AI outputs rather than doing focused, strategic work.
2025-08-06 - I’m tired of stupid people treating me like I’m an idiot: The article argues that generative AI is overhyped and won’t replace human workers, criticizing tech leaders for spreading unfounded fears about job displacement. The author contends that AI lacks true understanding and creativity, viewing the narrative around AI capabilities as a “con” designed to generate investor hype and justify corporate layoffs.
2025-08-07 - AI Ethics is being narrowed on purpose - Just like privacy was: The article argues that AI companies are deliberately narrowing AI ethics to focus on abstract philosophical problems rather than real governance and accountability issues. This allows them to deflect substantive questions about societal harm by redefining ethics around hypothetical scenarios like the “trolley problem.” The author compares this to how privacy concerns were previously co-opted by corporate interests.
2025-08-07 - Let’s stop pretending that managers and executives care about productivity: The article argues that executives prioritize control over productivity and will implement AI tools despite potential negative systemic impacts. It suggests AI will introduce high variability that could harm organizational efficiency, but companies will adopt it anyway for stock price benefits. The author warns that blind AI adoption could lead to organizational dysfunction and increased costs.
2025-08-09 - Yet another LLM rant - Dennis Schubert: The author criticizes LLMs for confidently providing false information without true understanding, using a specific example of ChatGPT incorrectly claiming ZSTD compression exists in Apple’s SDK. He argues that LLMs generate statistically plausible text rather than genuine knowledge, advocating for critical thinking and expert verification over blind AI trust. The piece emphasizes humans should question AI outputs and rely on their own analytical skills instead of accepting AI responses as authoritative.
2025-08-12 - In the long run, LLMs make us dumber: The article warns that over-reliance on LLMs creates “cognitive debt,” trading short-term convenience for long-term mental decline. It argues that consistently outsourcing thinking to AI weakens critical cognitive abilities like memory and problem-solving. Mental challenges are necessary for cognitive growth, and avoiding them by depending on LLMs may ultimately impair our intellectual capabilities.
2025-08-14 - AI Efficiency? Give Me a Break: The article argues that AI’s promised productivity gains are undermined by the constant need to learn new tools and keep up with rapid changes. The author contends that the time spent learning AI tools often exceeds their productive value, creating more anxiety and exhaustion than efficiency. The marketing around AI creates a fear-driven cycle where people feel compelled to adopt new tools to avoid being left behind.
2025-08-16 - “AI First” and the Bus Factor of 0: The article argues that AI’s rise has created a “Bus Factor of 0” where teams lose institutional knowledge by relying entirely on AI-generated code. This leaves no human understanding of how or why code was created, creating risks in software maintenance and security. By delegating code creation entirely to AI, developers eliminate their own comprehension of complex systems, leading to serious long-term technical and operational challenges.
I Am An AI Hater: The author declares himself an “AI hater” who views artificial intelligence as fundamentally harmful to society, environment, and human creativity. He argues that AI exploits workers, reinforces biases, and represents a nihilistic attempt to replace meaningful human experience with “mindless servitude.” The piece is an emotionally charged rejection of AI technology, celebrating human complexity and the unique capacity for thought, feeling, and care that machines lack.
2025-09-22 - “You have 18 months”: The article argues that AI’s real danger isn’t job loss but the erosion of human thinking abilities, particularly writing and reading skills. As students increasingly rely on AI and avoid deep reading, we risk losing our cognitive advantages precisely when machines are becoming more powerful. The author emphasizes that maintaining strong reading and writing capabilities is crucial for preserving our intellectual edge.
2025-10-03 - I Don’t Want to Be a Programmer Anymore: When AI Confidence Outshines Human Judgment: The author describes how AI’s persuasive confidence is eroding his sense of professional value as a programmer, recounting experiences where AI won arguments with his wife and impressed clients with detailed answers. He concludes that the key human trait in an AI-driven world is maintaining curiosity and asking imperfect questions, rather than accepting AI’s seemingly perfect responses at face value.
2025-10-07 - The Programmer Identity Crisis: The article argues that LLM-assisted programming threatens the core identity and craft of software development by reducing programmers to mere operators. Unlike past technological advances that elevated the profession, AI tools erode the deep engagement, creativity, and human collaboration essential to both quality software and professional fulfillment.
2025-10-21 - AI is Making Us Work More: The article argues that AI paradoxically intensifies work culture rather than reducing it, creating psychological pressure to constantly utilize these tireless tools. Drawing on philosopher Byung-Chul Han’s insights, the author contends that AI’s productive capacity has transformed from luxury to obligation, breeding burnout rather than innovation. The piece suggests that rest itself becomes a form of resistance against this self-imposed exploitation.
2025-10-22 - I Went All-In on AI. The MIT Study Is Right.: The article describes how the author used AI exclusively for three months and experienced a loss of confidence in his technical abilities despite shipping a working product. The key finding is that 95% of AI initiatives fail because organizations “abdicate” to AI rather than using it as an “augmentation” tool. This approach degrades human skills and creates dangerous dependencies on technology instead of building on human expertise.
2025-10-30 - AI is Dunning-Kruger as a service: AI chatbots exhibit the Dunning-Kruger effect, confidently providing wrong answers while optimizing for engagement, and they encourage users to skip learning actual skills in favor of quick, superficial results. The article advocates preserving human creativity and learning despite the pressure to adopt automated solutions.
2025-11-04 - The Learning Loop and LLMs: The article argues that software development relies on a continuous learning loop that AI can’t replace. LLMs reduce friction and generate code quickly but risk creating a “maintenance cliff” by bypassing the hands-on struggle needed for deep expertise. True capability comes from learning through experimentation, not just code generation speed.
2025-12-03 - Everyone in Seattle Hates AI: Seattle tech workers resent AI because companies used it as cover for layoffs and bungled implementations, creating a cycle where talented engineers dismiss both the technology and opportunities to work on it. Unlike other cities maintaining optimism, Seattle’s pessimism is killing the region’s ability to innovate and grow careers in the space.
2025-12-05 - The Reverse-Centaur’s Guide to Criticizing AI: The AI bubble exists because monopolistic tech companies need growth narratives to maintain valuations, not because AI actually works well. Companies fire skilled workers and replace them with inferior AI systems, creating “reverse centaurs” where humans clean up AI failures while executives pocket wage savings. Bosses can be convinced to adopt broken AI even when workers can’t be convinced to use it.
2025-12-19 - AI will make our children stupid: AI threatens children’s intelligence by allowing them to outsource all thinking, eliminating the cognitive struggle necessary for genuine learning. The authors argue that mental friction—like writing, memorization, and problem-solving—is fundamental to understanding, and AI removes this process. This follows a decade of cognitive decline from social media, turning students from active thinkers into passive consumers.
2026-01-13 - Why We Don’t Use AI: Yarn Spinner rejects AI integration due to concerns about labor displacement in an unstable job market, ethical opposition to companies they believe harm people, and a development philosophy that prioritizes features proven to help game developers rather than adopting technology for its own sake.
Optimistic posts
2025-03-09 - AI & Software Engineering: Time for optimism: Argues that software engineers should embrace AI as a learning accelerator rather than fear job displacement, positioning AI as a tool that enables faster skill acquisition and knowledge synthesis. The author contends that while AI excels at code generation, human judgment remains critical for deciding “what” and “how” to build, advocating for using AI to compress learning time from hours to minutes and catch up quickly in technical domains.
2025-03-22 - Revenge of the junior developer: AI coding agents can multiply developer productivity by ~5x at just $10-12/hour, transforming workflows by autonomously handling complex tasks from JIRA tickets to bug fixes while positioning early adopters for significant career advantages.
2025-04-11 - What are AI Agents? why do they matter?: AI agents represent a shift from tools to autonomous workers that can perceive, decide, and act independently to solve complex multi-step problems. They unlock automation for previously intractable tasks while enabling specialized agents to collaborate toward common goals, fundamentally changing how humans interact with AI systems.
2025-04-15 - AI as Normal Technology: Argues that AI should be viewed as a controllable tool for human augmentation that will gradually enhance capabilities across domains like cybersecurity and content moderation while allowing institutions time to adapt. The authors emphasize AI’s defensive potential and its role in working alongside humans rather than replacing them, positioning it as manageable technology under human direction.
2025-04-21 - Why LLM-Powered Programming is More Mech Suit Than Artificial Human: Argues that LLMs dramatically accelerate development by amplifying programmer capabilities rather than replacing them, enabling developers to generate thousands of lines of functional code while focusing on higher-level problem-solving and architecture. The “mech suit” metaphor emphasizes how AI extends human abilities through collaborative problem-solving, creating a “Centaur effect” that combines human strategic thinking with computational power.
2025-04-26 - Vibe Coding Is Fun—But Vibe Refactoring Pays the Bills: Advocates using LLMs as “brutally honest rubber-duck debugging” partners that provide objective code review without judgment, helping identify performance issues and cleaner implementation approaches during refactoring sessions.
2025-05-30 - LLM are temperamental: LLMs demonstrate superhuman performance in specialized domains like code generation and advanced mathematics, with their flexibility and adaptability offering transformative potential when systems are designed to work around their inherent variance rather than expecting perfect reliability.
2025-06-02 - My AI Skeptic Friends Are All Nuts: Argues that LLMs dramatically boost developer productivity by automating tedious coding tasks and raising the quality floor, allowing developers to focus on strategic work while AI agents handle repetitive schlep.
2025-06-04 - AI Changes Everything: Armin Ronacher argues that AI dramatically increases human agency and enables flexible, collaborative work patterns while having transformative potential comparable to electricity or the printing press. He emphasizes AI’s role in democratizing knowledge access and accelerating innovation across medicine, science, and engineering when used responsibly.
2025-06-04 - deliberate intentional practice: Advocates approaching AI like learning a musical instrument through deliberate experimentation and play rather than dismissing it after initial trials. Emphasizes that AI’s true potential emerges through curious exploration and intentional practice, not superficial evaluation.
2025-06-10 - Vibe Code isn’t meant to be reviewed: AI coding tools can achieve 10x productivity gains by generating “vibe code” that’s reviewed differently from human-written code, using modular separation between interface packages (human logic) and implementation packages (AI-generated) to maintain both speed and quality.
2025-06-13 - “The AI Powered Developer”: Video presentation exploring how AI tools are transforming software development workflows and developer productivity, though specific content details are not accessible for detailed summary.
2025-06-14 - Coding agents have crossed a chasm: AI coding agents have evolved into genuine “delegate-to” tools that dramatically boost productivity by autonomously completing small tasks and reducing debugging time from 45 minutes to 10 minutes with proper context. They function like an “eager intern” that automates mechanical coding work, freeing developers to focus on high-level architecture and strategic problems that actually matter.
2025-06-28 - MCP: An (Accidentally) Universal Plugin System: MCP creates a universal plugin ecosystem where developers can build interoperable tools that work across different applications without direct coordination, enabling low-friction innovation and unexpected cross-platform capabilities beyond just AI assistance.
2025-07-08 - How I use LLMs to learn new subjects: LLMs enable infinite follow-up questions and specifically-tailored explanations, creating an adaptive learning experience that adjusts to your knowledge level. The author argues they’re “deeply underrated” as learning tools, particularly for mainstream subjects where they can provide reliable foundational knowledge and act like a well-informed friend who never gets tired of explaining concepts.
2025-07-19 - Nobody knows how to build with AI: Argues that AI development has transformed software building into a jazz-like improvisation where traditional expertise becomes obsolete and success depends on “coherent desire” rather than coding skills. Embraces the experimental uncertainty of current AI development while acknowledging that conventional programming wisdom no longer applies in this new paradigm.
2025-07-19 - AI Coding Agents Are Removing Programming Language Barriers: Argues that AI coding agents enable developers to work effectively across multiple programming languages without deep expertise in each, democratizing polyglot development and allowing teams to choose the best language for each task without being constrained by their existing skill sets.
2025-07-19 - Why I’m Betting Against AI Agents in 2025 (Despite Building Them): The author argues that current AI agent approaches are mathematically unsustainable due to exponential error accumulation and prohibitive token costs. He predicts that successful AI agents will be constrained, domain-specific tools with clear boundaries and human oversight, rather than fully autonomous systems. This assessment comes from someone who has built multiple AI agent systems and understands their practical limitations.
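The compounding-error argument in that post is easy to make concrete: if each step of an agent workflow succeeds independently with probability p, an n-step task succeeds with probability p^n, which collapses quickly as tasks get longer. A minimal sketch (the 95% per-step rate is an illustrative assumption, not a figure from the post):

```python
# Why long autonomous agent workflows are fragile: per-step
# reliability compounds multiplicatively across the whole task.

def task_success(p: float, steps: int) -> float:
    """Probability an agent completes every step without error,
    assuming independent steps, each succeeding with probability p."""
    return p ** steps

for steps in (5, 20, 50):
    print(f"{steps:>2} steps at 95% each -> {task_success(0.95, steps):.0%}")
# prints:
#  5 steps at 95% each -> 77%
# 20 steps at 95% each -> 36%
# 50 steps at 95% each -> 8%
```

Under this toy model, even a highly reliable per-step rate leaves long workflows failing most of the time, which is the author's case for constrained, human-supervised agents over fully autonomous ones.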
2025-07-29 - Coding for the future agentic world: The article argues that AI coding agents are transforming software development by enabling autonomous code writing, testing, and bug fixing, shifting developers from manual coding to strategic oversight roles. These agents work interactively or in the background, allowing developers to delegate high-level tasks to AI systems. The future positions developers as “conductors” who define goals and review AI work, fundamentally changing the nature of programming from writing code to orchestrating intelligent agents.
2025-07-30 - 6 Weeks of Claude Code: Orta Therox describes six weeks using Claude Code as transformative for his development workflow, praising it as an exceptional “pair programming buddy” that dramatically accelerated tasks like migrating codebases, tackling technical debt, and building prototypes. He emphasizes Claude Code’s ability to enable rapid exploration of side projects and complex technical challenges while noting the importance of maintaining human oversight for strategic decisions.
2025-08-12 - Why Does AI Feel So Different?: The article argues that AI is different because it’s a recursive General Purpose Technology that augments human thinking rather than just automating tasks. Unlike previous technologies, AI creates a self-enhancing ecosystem where humans, software, and AI build upon each other dynamically. This represents a fundamental shift in how we access knowledge and approach reasoning, making it feel uniquely transformative compared to past technological advances.
2025-10-11 - Vibing a Non-Trivial Ghostty Feature: The article describes Mitchell Hashimoto’s process of building a macOS update notification feature for Ghostty terminal using AI as a coding assistant. He demonstrates how AI served as a collaborative partner through multiple iterative sessions, helping with prototyping and problem-solving while still requiring substantial human oversight and review. The piece illustrates practical “AI-assisted development” where the developer maintains control and judgment throughout the implementation.
2025-12-31 - AI Zealotry: The author argues experienced developers should embrace AI coding tools that amplify their thinking rather than focusing on code review. He advocates building confidence through automated testing and feedback loops instead of scrutinizing generated code. The key is shifting to higher-level problem-solving and using hooks/automation to handle implementation details.
2026-01-09 - Developers are Solving The Wrong Problem: Developers obsess over code quality and structure when they should focus on solving business problems efficiently. The rise of AI-assisted “vibe coding” shifts priorities toward speed and outcomes rather than human-readable code. Code should be treated as a disposable tool for rapid iteration, not an end goal.
Skeptical posts
2025-02-11 - Is it okay?: Sloan questions whether AI’s exploitation of collective human knowledge is justified, arguing that current models primarily replicate rather than innovate while risking cultural degradation through content that “floods the internet with garbage.”
2025-02-11 - The skill of the future is not ‘AI’, but ‘Focus’: Argues that over-reliance on AI tools risks atrophying engineers’ fundamental problem-solving skills and warns against shifting from “solving problems to merely obtaining solutions.” The author contends that maintaining deep focus and understanding the reasoning behind AI-generated solutions is more critical than AI proficiency itself.
2025-03-19 - “Vibe Coding” vs Reality: Argues that AI coding agents consistently make predictable mistakes and can only achieve 80% functionality, requiring experienced developers to handle the security, performance, and reliability aspects that current models fundamentally cannot address.
2025-03-20 - Why I’m Breaking Up With Vibe Coding: The author argues that AI-driven “vibe coding” becomes a costly time sink that encourages passive consumption of generated code without deep understanding, leading to expensive rework cycles and potential overreliance on tools that may not deliver successful outcomes.
2025-03-29 - Vibe Coding with Cursor: Author tests Cursor AI and finds it only delivers modest 50% productivity gains while frequently making dangerous mistakes like inserting program-terminating code and deleting functions, concluding the “10X developer” hype is overblown.
2025-05-12 - AI Is Like a Crappy Consultant: Kanies argues AI should be treated like an untrustworthy consultant requiring constant supervision, as it makes poor architectural decisions, produces overly complex solutions for simple problems, and lacks the nuanced problem-solving abilities of experienced developers.
2025-06-04 - I Think I’m Done Thinking About genAI For Now: Argues that generative AI produces fundamentally flawed, error-prone output while creating massive environmental costs and economic bubble dynamics that are unlikely to be sustainable. The author contends that AI models are too complex to scientifically validate and are undermining educational integrity by removing necessary learning friction.
2025-06-06 - AI Angst: Bray argues AI’s massive $7T infrastructure investments may never be financially viable while highlighting the technology’s toxic effects on education through enabling cheating and undermining critical thinking development. He also warns about AI’s environmental costs, its tendency to generate plausible but incorrect information, and risks to code quality through increased technical debt.
2025-07-10 - Not So Fast: AI Coding Tools Can Actually Reduce Productivity: Study of experienced developers shows 19% productivity decrease when using AI coding tools, with developers overestimating benefits while spending significant time reviewing and correcting AI-generated code that often fails to meet project quality standards.
2025-07-11 - METR’s AI productivity study is really good: Developers in METR’s study thought they were 20% faster with AI coding assistance but were actually 19% slower, suggesting that AI tools may create an illusion of productivity while actually hindering experienced developers in complex codebases.
2025-08-06 - No, AI is not Making Engineers 10x as Productive: The article argues against claims that AI makes engineers 10x more productive, suggesting the reality is more modest gains of 20-50% speed improvements on certain tasks. Simon Willison estimates AI makes him 2-5x more productive for coding specifically, but this represents only a small portion of overall engineering work. The complexity of software development makes true 10x productivity gains unrealistic despite AI’s benefits.
2025-08-06 - GPTs and feeling left behind: The author feels frustrated with AI tools like GPTs, finding them underwhelming for complex tasks despite hearing success stories from others. While GPTs work well for small, precise tasks, the author struggles to achieve meaningful results and feels anxious about potentially falling behind technologically. The post captures a disconnect between personal experience and the transformative AI capabilities others claim to have.
2025-08-23 - The leverage paradox: The leverage paradox explains how new technologies like AI reduce the effort of individual tasks while simultaneously raising competitive standards: because everyone gains access to the same leverage, the overall work required to stay competitive holds steady or increases.
2025-08-28 - Are people’s bosses really making them use AI tools?: The article argues that bosses are increasingly forcing developers to use AI tools without considering the risks and limitations. Through interviews, Andy Bell found this trend causes fear, reduces code quality, and threatens jobs. Workers may need to comply for now but should document AI issues and protect their professional interests.
2025-12-02 - Vibe Coding: Empowering and Imprisoning: AI coding tools democratize development but were deliberately funded to eliminate coding jobs (contributing to 500K layoffs) and inherently limit innovation by only learning from existing code patterns. The article warns that while these tools empower some users, they simultaneously trap the industry in predetermined solutions rather than enabling radical breakthroughs.
Mixed
2025-03-21 - The Software Engineering Identity Crisis: Argues that AI is transforming software engineers from direct code creators to “orchestrators” and strategic problem-solvers, offering accelerated development and learning benefits while risking the loss of deep technical craftsmanship and professional identity. Proposes a balanced “pendulum” approach where engineers alternate between hands-on coding and AI guidance to maintain both efficiency and deep system understanding.
2025-03-25 - The role of developer skills in agentic coding: Explores how traditional developer skills remain valuable in an AI-assisted coding environment, examining which competencies become more or less important as AI agents handle increasing portions of code generation and maintenance tasks.
2025-03-25 - Using GenAI on your code, what could possibly go wrong?: Video presentation examining security risks of AI-assisted coding, particularly highlighting how increased code velocity can lead to a corresponding increase in vulnerability introduction rates, emphasizing the need for security considerations in AI-powered development workflows.
2025-03-27 - Learn to code, ignore AI, then use AI to code even better: Argues that developers should master fundamental programming concepts before incorporating AI tools, warning that over-reliance on AI without understanding core principles risks creating “vibe coders” who lose technical control and depth.
2025-04-03 - Vibe Coding: Revolution or Reckless Abandon?: Explores whether “vibe coding” with AI represents a revolutionary shift that democratizes programming or reckless abandon of software engineering principles. The author weighs AI’s potential to enable rapid prototyping and experimentation against risks of producing unmaintainable code and eroding fundamental programming skills.
2025-04-18 - Vibe Coding is not an excuse for low-quality work: Argues that while AI-enabled “vibe coding” can accelerate development, it shouldn’t compromise software quality standards. Emphasizes that developers must maintain responsibility for code review, testing, and architectural decisions even when using AI tools for rapid prototyping.
2025-04-22 - Coding as Craft: Going Back to the Old Gym: Advocates for treating programming as a craft requiring deliberate practice and manual skill development, comparing it to weightlifting where shortcuts can’t replace fundamental strength-building. The author warns against over-reliance on AI tools that might prevent developers from developing deep technical intuition and problem-solving muscles.
2025-04-23 - Mission Impossible: Managing AI Agents in the Real World: Examines the practical challenges of deploying and managing AI agents in production environments, exploring the gap between theoretical capabilities and real-world implementation complexities including reliability, monitoring, and control mechanisms.
2025-04-29 - Vibe Coding: Cursor, Windsurf, and Developer Slot Machines: Compares AI coding tools like Cursor and Windsurf to slot machines: developers get unpredictable results and can become hooked on the “thrill” of AI-generated code, developing a gambling-like dependency in which they lose control over their development process and come to rely on unpredictable AI outputs.
2025-05-05 - As an Experienced LLM User, I Actually Don’t Use Generative LLMs Often: An experienced AI practitioner argues that despite expertise with LLMs, he rarely uses them in daily work because they often produce mediocre results that require more effort to fix than doing the work manually. Contends that current LLMs are overhyped for practical applications and work best for very specific, well-defined tasks rather than general problem-solving.
2025-05-08 - I’ve never been so conflicted about a technology: Expresses deep ambivalence about AI technology, acknowledging both its impressive capabilities and concerning implications for employment, creativity, and human agency. The author struggles with AI’s potential benefits for productivity and accessibility while worrying about societal disruption and the loss of human skills and purpose.
2025-05-25 - Vibe coding for teams, thoughts to date: Explores the challenges and opportunities of implementing “vibe coding” practices at a team level, discussing how AI-assisted development affects collaboration, code review processes, and team dynamics. Considers both the productivity benefits and potential risks of collective AI adoption in software development teams.
2025-05-30 - Stop Vibe Coding Every Damn Time!: A passionate critique arguing that developers should stop reflexively using AI for every coding task and instead be more strategic about when to apply “vibe coding” techniques. Emphasizes the importance of maintaining fundamental programming skills and using AI selectively rather than as a default approach.
2025-05-30 - Why agents are bad pair programmers: Argues that AI coding agents fail as pair programming partners because they lack the collaborative reasoning, questioning, and knowledge-sharing that make human pair programming valuable. Contends that agents provide solutions without the educational dialogue and mutual learning that characterize effective pair programming relationships.
2025-06-07 - Re: My AI Skeptic Friends Are All Nuts: A response post that pushes back against uncritical AI enthusiasm, arguing that skepticism about AI coding tools is reasonable given their current limitations and inconsistent performance. Emphasizes that being cautious about AI adoption doesn’t make someone “nuts” but rather shows prudent engineering judgment.
2025-06-17 - You’ve got Chat instead of a brain: how AI is messing up junior developers: Argues that junior developers are becoming overly dependent on AI chat tools instead of developing their own problem-solving and critical thinking skills. Warns that using AI as a substitute for learning fundamentals could create a generation of developers who can’t function without AI assistance.
2025-06-24 - I’m All In on AI, But We Need to Talk About Vibe Coding: While supporting AI adoption, the author warns that “vibe coding” can become a slippery slope where developers lose touch with fundamental programming principles. Argues for embracing AI benefits while maintaining discipline around code quality, testing, and architectural thinking.
2025-06-27 - AI, artisans and brainrot: Explores the tension between traditional programming craftsmanship and AI-assisted development, coining the term “brainrot” to describe the cognitive decline that may result from over-reliance on AI tools. Argues for maintaining artisanal programming skills while thoughtfully integrating AI capabilities.
2025-07-12 - AI slows down open source developers. Peter Naur can teach us why.: Drawing on Peter Naur’s work on programming as theory building, argues that AI tools can actually slow down developers by interfering with the mental model construction that’s essential for understanding complex codebases. Contends that AI assistance may hinder the deep comprehension required for effective open source contribution.
2025-07-14 - Too Fast to Think: The Hidden Fatigue of AI Vibe Coding: AI tools dramatically boost coding speed but introduce cognitive fatigue by overwhelming developers with relentless pace, constant decision-making, and context switches that eliminate traditional programming’s rewarding feedback loops.
2025-08-05 - About AI: The article presents a balanced view on AI in software engineering, arguing that while AI can assist with simple tasks like refactoring and initial code generation, it often introduces complexity and technical debt. The author believes AI is more valuable for non-developers and advocates for a collaborative approach rather than over-reliance on AI-generated code.
2025-08-22 - Vibe Debugging: Enterprises’ Up and Coming Nightmare: The article discusses “vibe debugging” - the unpredictable, risky debugging burden created when AI rapidly generates code in enterprise environments without traditional quality controls, leaving enterprises struggling with increased complexity and security risks. The author predicts significant investment in B2B SaaS tools for code observability and testing to manage AI-generated bugs, fundamentally reshaping enterprise software development practices.
2025-08-30 - Turn off Cursor, turn on your mind: The article argues that over-relying on AI coding agents like Cursor hampers developer growth by preventing the hands-on learning needed to build deep system knowledge and debugging skills. The author advocates using AI as a learning enhancement tool rather than outsourcing code generation entirely. The key principle: implement code yourself and use AI to suggest improvements, not to surrender responsibility and understanding.
2025-09-02 - The babysitter problem: The article explores AI’s limitations in complex debugging scenarios, where Claude repeatedly failed to solve a Hugo pagination issue despite methodical troubleshooting. The AI got trapped in diagnostic loops, unable to understand system architecture or identify root causes beyond surface symptoms. It demonstrates the current gap between AI’s technical capabilities and genuine problem-solving skills, showing why human developers remain essential for nuanced debugging tasks.
2025-10-20 - Is AI A Bubble? I Didn’t Think So Until I Heard Of SDD.: The article argues that AI coding tools show real productivity gains but the sector displays bubble characteristics with unsustainable valuations (25-70x revenue multiples). Specification-Driven Development emerged as both a solution to “vibe coding” problems and a narrative justifying these extreme valuations. The author predicts a market correction within 18-24 months, with only companies showing real revenue and operational discipline surviving.
2025-11-24 - Keep the Robots Out of the Gym: The article argues for distinguishing between “Job” tasks (output-focused) and “Gym” tasks (process-focused skill-building). Don’t let AI do your “Gym” work: as AI becomes more capable, intentionally preserve those tasks for yourself to maintain critical cognitive skills.
2025-12-08 - Has the cost of building software just dropped 90%?: AI coding tools have reduced software development costs by ~90% by automating implementation while developers focus on thinking. This won’t eliminate jobs but will unleash massive latent demand for software across organizations. Competitive advantage shifts to developers who master AI tools combined with deep domain expertise.
2026-01-10 - Code Is Cheap Now. Software Isn’t: LLMs make writing code trivial, but building valuable software still requires deep problem understanding, solid architecture, and effective distribution. The competitive advantage has shifted from coding ability to these higher-level skills, as anyone can now generate code but few can create software that actually solves real problems well.
2026-01-16 - choosing learning over autopilot: The article warns that AI coding assistants risk creating “autopilot developers” who generate code without understanding it. To avoid this, treat AI code as throwaway prototypes, iterate to learn patterns, deliberately structure your work (commits, docs, tests) yourself, and write manual documentation to force real comprehension instead of passive acceptance.