It's not about
the code.
It never was.
The first large-scale qualitative study of vibe coding: why developers embrace it, where it breaks down, and how AI trust regulates the line between delegation and true co-creation.
A paradigm shift
with no grounded theory.
Vibe coding went from a tweet to a movement in weeks. Meanwhile, research was analyzing code artifacts and proposing frameworks without any systematic account of how real developers actually experience it.
Andrej Karpathy coined "vibe coding" in February 2025, describing it as programming where you "forget that the code even exists." The phrase immediately polarized: some called it "close to magic," others dismissed it as "the dumb-ass version of using AI." Both camps were reacting without data.
The r/vibecoding subreddit grew to 156,000 members in just months — developers were already sharing pain points and hard-won practices. Researchers had yet to systematically study what those practices were, why they worked, or what they signaled about the future of software development.
Definition: What actually is vibe coding?
Practice: Why and when do developers do it?
Perceptions: How is it seen — and by whom?
Pain points: Where does it break down?
Best practices: What's the emerging wisdom?
190,000 words of
lived experience.
Three data sources. One flexible qualitative methodology. A grounded theory built from the community's own words, not researchers' assumptions about what vibe coding should be.
Vibe coding is characterized by high-frequency, natural language back-and-forth with an AI agent. Unlike one-shot tools like Copilot, vibe coders iterate conversationally — refining intent, correcting misunderstandings, and rarely reading the code directly. The interaction itself is the work.
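The interaction pattern described above — high-frequency refinement rather than one-shot generation — can be sketched as a loop. This is a minimal illustration, not the paper's method: `ask_model` is a hypothetical stand-in for any chat-capable LLM API, stubbed here so the control flow is the point rather than the model.

```python
def ask_model(history: list[str]) -> str:
    """Hypothetical model call: returns a draft based on the conversation so far."""
    return f"draft v{len(history)}"

def vibe_code(intent: str, accept, max_turns: int = 10) -> str:
    """High-frequency back-and-forth: refine intent until the result is accepted."""
    history = [intent]
    draft = ask_model(history)
    for _ in range(max_turns):
        if accept(draft):           # the human judges behavior, rarely the code itself
            return draft
        history.append(f"refine: {draft} was not quite right")
        draft = ask_model(history)  # each turn carries the full conversation
    return draft

result = vibe_code("build a todo app", accept=lambda d: d == "draft v3")
```

The contrast with one-shot tools like Copilot is the `history` list: every turn compounds on the last, so the conversation itself, not the emitted code, is the unit of work.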
The AI doesn't just complete code — it makes higher-order decisions about architecture, features, and design. Vibe coders describe this as genuine creative partnership: "the LLM is leading the process, as opposed to you leading the process." This ranges from full co-creation to pure delegation, mediated by trust.
The defining feature of vibe coding isn't productivity — it's the psychological state it enables. Developers consistently describe entering flow: deep focus, time distortion, intrinsic reward. Natural language removes syntax friction, letting developers stay in creative mode. "It brings back the joy of programming."
Trust is the hinge of the entire theory. More trust enables deeper co-creation and higher flow. But uncalibrated trust introduces risk at every level — software reliability, developer skill, and society. Experienced practitioners are already self-regulating: high trust for prototypes, much lower for security-critical or production systems.
Flow is real.
So are the risks.
13 pain points. 20+ best practices. A trust calibration problem that most developers are already navigating — informally. Here's what the data actually showed.
"It's not about chaos. It's about flow writing code in a rhythm where your mind is free to create, unburdened by boilerplate."LA6 — LinkedIn post, defining the vibe
"Vibe coding is when you ask ChatGPT to build software for you — spinning up code, APIs, or entire servers through conversation alone."L32 — LinkedIn, practitioner definition
"I got too deep in the vibe, took my eye off the ball, and the whole thing spun out of control. I had 30 files in my change log with hours of work uncommitted. It was a fuckup cascade."R35 — Reddit, on version control collapse
"Vibe coding is just approving pull requests you don't understand."R41 — Reddit, on the trust paradox
The spectrum: Delegation · Generation · Co-Creation · Partnership
Trust isn't just a personality trait — it's contextual and project-specific. Experienced vibe coders actively regulate it: high trust for weekend projects and prototypes, low trust for anything involving passwords, user data, or production deployment. The research found this self-regulation emerging naturally in the community, without formal guidance.
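The self-regulation the study observed amounts to a decision rule: project context in, review stance out. The encoding below is purely illustrative — the categories and labels are this sketch's assumptions, not prescriptive guidance from the paper.

```python
def trust_level(prototype: bool, handles_sensitive_data: bool, production: bool) -> str:
    """Map project context to how much unreviewed AI code is acceptable
    (hypothetical policy, mirroring the community's informal calibration)."""
    if handles_sensitive_data or production:
        # passwords, user data, deployment: practitioners drop trust sharply
        return "low trust: review every change; vibe coding inappropriate"
    if prototype:
        # weekend projects and prototypes: accept output on behavior alone
        return "high trust: judge by behavior, skim the code"
    return "medium trust: spot-check architecture and dependencies"
```

The point is that trust is a function of the project, not of the developer's personality — the same practitioner returns different stances for different inputs.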
Software reliability: technical debt, buggy or insecure products, unverified dependencies, code too large to review or debug, difficulty transitioning from prototype to production.
Developer: legal liability for data protection failures, skill atrophy in junior developers, potential addiction to AI-assisted flow states, over-reliance that creates knowledge gaps.
Society: climate cost of wasteful AI iterations, new phishing and scam attack surfaces, erosion of OSS trustworthiness, threats to software supply chain integrity.
Most practitioners are already self-regulating. They distinguish between contexts where vibe coding is appropriate (weekend projects, prototypes, custom tooling for personal use) and contexts where it's not (safety-critical systems, anything with user authentication or sensitive data). The community developed this wisdom organically — without industry guidance, ethics frameworks, or formal training. The research surfaced it; the practitioners built it.
What to do
with this.
Five audiences. Different implications. The research is clear that vibe coding is neither a silver bullet nor a fad — it's a distinct practice with its own conditions for success.
The community's two most universal best practices: plan before you vibe (define goals and architecture first) and use external version control religiously — or ask the AI to log its own changes. Both address the highest-severity pain points. Also: break tasks into chunks that fit in the model's context window, and "fire" conversations when quality degrades.
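The chunking practice above can be made concrete: split a task list so each conversation fits a token budget, and start a fresh conversation ("fire" the old one) when the budget is exhausted. This is a minimal sketch under stated assumptions — the word-count proxy for tokens and the budget value are illustrative; real tooling would use the model's own tokenizer.

```python
def chunk_tasks(tasks: list[str], budget: int) -> list[list[str]]:
    """Greedily group tasks so each conversation stays under a token budget."""
    chunks, current, used = [], [], 0
    for task in tasks:
        cost = len(task.split())            # crude proxy for a token count
        if current and used + cost > budget:
            chunks.append(current)          # "fire" the conversation, start fresh
            current, used = [], 0
        current.append(task)
        used += cost
    if current:
        chunks.append(current)
    return chunks

batches = chunk_tasks(["add login form", "wire up API", "style the page"], budget=6)
```

Each inner list is one conversation's worth of work; committing (or asking the AI to log its changes) at every chunk boundary also addresses the version-control pain point.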
Vibe coding creates collaboration asymmetries that no one has solved yet. When one team member is "just talking to Claude through a person," it breaks code review norms and shared ownership. Teams need explicit policies: when it's appropriate, how to handle review load, and how to maintain a shared mental model of codebases.
Vibe coding is genuinely effective for rapid prototyping, internal tooling, and exploratory work. Productivity gains are real. But the research confirms the community's own warning: "Don't try to deploy it. That requires engineering, not vibes." Treat it as a prototyping accelerator, not a production pipeline.
If natural language programming is the primary interface for a generation of new developers, CS curricula face a dual mandate: teach effective AI collaboration as a first-class skill, and preserve deep algorithmic understanding as a safeguard against AI-generated failures. One without the other produces incomplete developers.
"Almost all of the PyTest functions I write, I start with having Claude take a crack at it, because I'm just not fluent enough with that framework yet."I5 — Machine learning engineer, on learning via co-creation
"I think there's some new kind of IDE that's gonna come out that's not gonna look like VS Code at all. And 1,000 people need to start these — one that really has the genie baked into the bones of it is gonna be amazing."I7 — Interviewee with 40+ years of programming experience
This paper was prioritized by Jenn Wu, a product and AI mentor who spots research worth paying attention to.