Vibe coders don't care about quality and wouldn't understand why any of these things are a problem in the first place.
Common vibe coding artifacts (toy example after the list):
• Code duplication (from copy-pasted snippets)
• Dead code from quick iterations
• Over-engineered solutions for simple problems
• Inconsistent patterns across modules
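A toy example (made-up names) of the first two artifacts, duplication and dead code, the sort of thing this kind of tool is meant to flag:

def load_user(path):
    with open(path) as f:
        return f.read().strip()

def load_config(path):  # near-duplicate of load_user, pasted and renamed
    with open(path) as f:
        return f.read().strip()

def _old_loader(path):  # dead code: nothing calls this after a refactor
    return open(path).read()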
pyscn performs structural analysis:
• APTED tree edit distance + LSH
• Control-Flow Graph (CFG) analysis
• Coupling Between Objects (CBO)
• Cyclomatic Complexity (rough sketch below)
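As a rough illustration of the last metric: cyclomatic complexity is essentially one plus the number of decision points in a function. pyscn computes its metrics in Go over a tree-sitter parse; the Python sketch below, using the stdlib ast module, only shows the idea and is not pyscn's implementation:

import ast

# Rough approximation: complexity = 1 + number of decision points.
DECISIONS = (ast.If, ast.IfExp, ast.For, ast.While,
             ast.ExceptHandler, ast.Assert)

def cyclomatic_complexity(source: str) -> int:
    tree = ast.parse(source)
    return 1 + sum(isinstance(n, DECISIONS) for n in ast.walk(tree))

src = "def f(x):\n    if x > 0:\n        return 1\n    return 0"
print(cyclomatic_complexity(src))  # 2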
Try it without installation:
uvx pyscn analyze . # Using uv (fastest)
pipx run pyscn analyze . # Using pipx
(Or install: pip install pyscn)
Built with Go + tree-sitter. Happy to dive into the implementation details!
I find for every 5 minutes of Claude writing code, I need to spend about 55 minutes cleaning up the various messes. Removing dead code that Claude left there because it was confused and "trying things". Finding opportunities for code reuse, refactoring, reusing functions. Removing a LOT of scaffolding and unnecessary cruft (e.g. this class with no member variables and no state could have just been a local function). And trivial stylistic things that add up, like variable naming, lint errors, formatting.
It takes 5 minutes to make some ugly thing that works, but an hour to have an actual finished product that's sanded and polished. Would it have taken an hour just to write the code myself without assistance? Maybe? Probably? Jury is still out for me.
He literally bucketed an entire group of people by a weak label and made strong claims about competence and conscientiousness.
There was a time when hand-soldered boards were not only seen as superior to machine-soldered ones, but machine-soldered boards were outright looked down on. People went gaga over a good hand-soldered board and the craft behind it.
People who are using AI to assist their coding today, the "vibe coders", I think would also appreciate tooling that helps maintain code quality across their project.
I think a comparison that fits better is PCB/circuit design software. Back in the day, engineering firms had rooms full of people drafting and doing calculations by hand. Today a single engineer can do more in an hour than 50 engineers back then could in a day.
The critical difference is, you still have to know what you are doing. The tool helps, but you still have to have foundational understanding to take advantage of it.
If someone wants to use AI to learn and improve, that's fine. If they want to use it to improve their workflow or speed them up that's fine too. But those aren't "vibe coders".
People who just want the AI to shit something out they can use with absolutely no concern for how or why it works aren't going to be a group who care to use a tool like this. It goes against the whole idea.
It's more useful as a research assistant, for searching documentation, and for writing code a few lines at a time.
Or yesterday for work I had to generate a bunch of json schemas from Python classes. Friggin great for that. Highly structured input, highly structured output, repetitious and boring.
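For the curious, a minimal sketch of that kind of task, assuming pydantic (the comment doesn't say which library was actually used, and the class here is made up for the example):

from pydantic import BaseModel

class Telemetry(BaseModel):
    timestamp: float
    speed_kph: float

# pydantic v2 can emit the JSON schema for a model class as a dict
print(Telemetry.model_json_schema())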
But "vibe coding" is this vague term that is used on the entire spectrum, from people that do "build me a billion dollar SAAS now" kind of vibe coders, to the "build this basic boilerplate component" type of vibe coders. The former never really get too far.
The later have staying power because they're actually able to make progress, and actually build something tangible.
So now I'm assuming you're not against AI generated code, right?
If that's the case then it's clear that this kind of tool can be useful.
I think AI is useful for research and digging through documentation. It's also useful for generating small chunks of code at a time, documentation, or repetitive tasks with highly structured inputs and outputs. Anything beyond that, in my opinion, is a waste of time. Especially these crazy-ass agent workflows where you write ten pages of spec and hope the thing doesn't go off the rails.
Doesn't matter how nice a house you build if you build it on top of sand.
But in about 45 minutes I got 700 lines of relatively compact web code to use plotly, jszip, and papaparse to suck in video files, CSV telemetry, and logfiles, help you sync them up, and then show overlays of telemetry on the video. It can also save a package zip file of the whole situation for later use/review. Regex search of logs. Things linked so if you click on a log line, it goes to that part of the video. WASD navigation of the timeline. Templating all the frameworks into the beginning of the zip file so it works offline. Etc.
I am not an expert web developer. It would have taken me many hours to do this myself. It looks crisp and professional and has a big, complex feature set.
(Oh, yeah, included in the 45 minutes but not the line count: it gave me a ring buffer for telemetry and a CSV dumper for it and events, too.)
The last couple of revisions, it was struggling under the weight of its context window a bit and I ended up making the suggested changes by hand rather than taking a big lump of code from it. So this feels like an approximate upper limit for the complexity of what I can get from ChatGPT5-thinking without using something like Claude Code. Still, a whole lot of projects are this size or smaller.
And even as the tools get better, they'll never get to the point where you don't need experts to utilize them, as long as LLMs are the foundation.
"... fully give in to the vibes, embrace exponentials, and forgete that the code even exists."
If you're "vibe coding" you don't know and you don't care what the code is doing.