I just tried vibe coding with Claude
from entwine@programming.dev to programming@programming.dev on 11 Apr 16:29
https://programming.dev/post/48654729

…and I still don’t get it. I paid for a month of Pro to try it out, and it is consistently and confidently producing subtly broken junk. I had tried this before, but gave up because it didn’t work well. I thought that maybe this time it would be far enough along to be useful.

The task was relatively simple, and it involved doing some 3d math. The solutions it generated were almost write every time, but critically broken in subtle ways, and any attempt to fix the problems would either introduce new bugs, or regress with old bugs.

I spent nearly the whole day yesterday going back and forth with it, and felt like I was in a mental fog. It wasn’t until I had a full night’s sleep and reviewed the chat log this morning that I realized how much I was going in circles. I tried prompting a bit more today, but stopped when it kept doing the same crap.

The worst part of this is that, throughout all of this, Claude was confidently responding. When I said there was a bug, it would “fix” the bug and provide a confident explanation of what was wrong… except it was clearly bullshit, because it didn’t work.

I still want to keep an open mind. Is anyone having success with these tools? Is there a special way to prompt it? Would I get better results during certain hours of the day?

For reference, I used Opus 4.6 Extended.

#programming

ghodawalaaman@programming.dev on 11 Apr 16:45 next collapse

I only use AI for generating ok looking UI.

Anthropic says Methos will find bugs in FreeBSD, banking systems, etc. What bullshit.

OwOarchist@pawb.social on 11 Apr 18:02 next collapse

Oh, it will ‘find bugs’ alright. And then flood FreeBSD’s bug report system with bullshit bug reports that turn out to be nothing, but require expert human review to discern that.

homoludens@feddit.org on 11 Apr 18:35 collapse

It’s easier to find bugs than to produce correct, readable, maintainable code, though.

cecilkorik@lemmy.ca on 11 Apr 16:47 next collapse

No, I think you do get it. That’s exactly right. Everything you described is absolutely valid.

Maybe the only piece you’re missing is that “almost right, but critically broken in subtle ways” turns out to actually be more than good enough for many people and many purposes. You’re describing the “success” state.

/s but also not /s because this is the unfortunate reality we live in now. We’re all going to eat slop and sooner or later we’re going to be forced to like it.

GiorgioPerlasca@lemmy.ml on 11 Apr 17:28 collapse

Or maybe we will be forced to switch off LLMs and start solving the bugs introduced by their usage using our minds.

cecilkorik@lemmy.ca on 11 Apr 19:00 collapse

As a professional software developer, I truly hope that is the case. (I also plan to charge at least 10x my current rate after the AI bubble pops, when I’m looking for my next job, since I expect a massive shortage of people skilled enough to actually deal with the nightmare spaghetti AI codebases.)

Fun times ahead.

Alexstarfire@lemmy.world on 11 Apr 16:52 next collapse

I haven’t used tools to make stuff from scratch but we do use them, or similar, where I work. What kind of stuff are you prompting it for? I find it works best when you give it a very small/simple task to do. And it’s pretty good when it comes to making tests for existing code.

But if the main problem is getting math equations and such wrong I’m not sure there is much we can do to help. You’d have to provide it the equations at a minimum and probably explain to it how they should be used.

But there are definitely times where it can be very frustrating. I had a similar issue yesterday as you did. It made a code change and it wasn’t working how it was supposed to. I kept telling it the problem and it kept trying to fix it but failing. I gave up after far too long and looked at all the code changes it made since it was working correctly before. It just put a change slightly too far down in a process and all I had to do was move it up, wholesale, by like 10 lines and it fixed my problem. Like, how could it not figure out something that simple?

So, it’s not the best at actually fixing things but does work more often than not. But if you can tell it exactly what code is causing the problem and where you want it to be instead, it’ll fix it.

OwOarchist@pawb.social on 11 Apr 18:01 collapse

I find it works best when you give it a very small/simple task to do.

If it’s a small/simple task, why do I need help at all?

Alexstarfire@lemmy.world on 11 Apr 18:23 collapse

Because it might be something that needs to be done in lots of places. Or it may just be something you don’t want to do so you fire it off then go look at or work on something else.

Now, that might be useless for your work flow, but not every tool is useful in every circumstance.

And you can still use it for larger tasks, but often I need to come behind it and clean up its work. Just like you would an intern or junior dev.

jubilationtcornpone@sh.itjust.works on 11 Apr 16:52 next collapse

I rarely use LLMs for generating code. Usually, by the time I’ve provided all the necessary context, I might as well have just written the code myself. I do use LLMs for doing research. As long as it’s understood that the response is only as accurate as the source material, they often do a decent job of distilling things down to what I’m actually looking for.

Gsus4@mander.xyz on 11 Apr 16:59 next collapse

Their usual (crap) defense is:

a) you’re not paying enough, so of course it is crap

b) you’re not prompting right, you need to use detailed, precise language…

c) that is just anecdotal evidence, you need to do an actual study, yadda yadda.

d) it will improve…

(any others anyone has noticed?)

bruce_babbler@lemmy.zip on 11 Apr 17:04 next collapse

You’re probably done with this. But if you give Claude a test case or two (or have it try to make them), you can have Claude run the test case, and then it will iterate.

Also, aggressively use plan mode, and if Claude screws up more than three times, do /clear, explain to it that it’s screwing up, and then give it new instructions.

Feyd@programming.dev on 11 Apr 17:08 next collapse

producing subtly broken junk

The difference between you and people that say it’s amazing is that you are capable of discerning this reality.

OwOarchist@pawb.social on 11 Apr 18:00 collapse

What I don’t get, though, is how the vibe code bros can’t discern this reality.

How can they sit there and not see that their vibe-coded app just doesn’t do what they wanted it to do? Eventually, you’ve got to try actually running the app, right? And how do you keep drinking the AI kool-aid when you find out that the app doesn’t work?

Feyd@programming.dev on 11 Apr 18:10 next collapse

They’re the same people who copied code from Stack Overflow, the ones whose every PR you had to tell them how to actually fix. The difference is that the C-suite types are backing them this time.

Oisteink@lemmy.world on 11 Apr 20:05 collapse

I do apps that work, and I do patches that are production quality. Half the CS world does… I do full-stack AI debugging of ESP32 projects.

It’s a powerful tool; you just need to learn its strong and weak points, just like any other tool you use.

lakemalcom@sh.itjust.works on 11 Apr 17:08 next collapse

I have yet to be able to vibe code anything relatively involved. The closest I’ve come is an ffmpeg wrapper script to edit out scenes from a video with a fade-in/fade-out title card. But even then, I ended up at some point having to debug it and add my own arg support because it kept screwing things up. The first draft did do something, though.

I find at this point that it’s still only useful if I have a very clear goal in mind with a lot of context on the area I need to make changes to. That lets me get a more specific prompt, and then I’ll still need to review the output. I have only ever gotten a successful one shot like this with tests.

daesorin@programming.dev on 11 Apr 17:16 next collapse

I did the same today, with both Gemini and Claude, and all I can say is that coding is hell.

zerofk@lemmy.zip on 11 Apr 17:17 next collapse

“Almost but not quite” is exactly my experience with Claude.

The only time I’ve had real success is telling it to do a simple API change that touches a dozen files. It took a while and I’m not sure it was faster than doing it manually, but at least it was less boring.

Possibly important context: I only started really using it a few weeks ago.

dgdft@lemmy.world on 11 Apr 17:51 next collapse

Vibe coding, in the sense of telling the model to make codebase changes, then directly using the output produced, is 100% marketing bullshit that does not scale beyond toy examples.

Here’s the rub: Claude is extremely useful as an advanced autocomplete, if and only if you’re guiding it architecturally through every task it runs, and you vet + revise the output yourself between iterations. You cannot effectively pilot entirely from chat in a mature codebase, and you must compile robust documentation and instructions for Claude to know how to work with your codebase.

You also must aggressively manage information in the context window yourself and keep it clean. You mentioned going in circles trying to get the robot to correct itself: huge mistake. Rewind to before the error, and give it better instructions to steer it away from the pitfall it fell into. Same vein, you also need to reset ASAP after pushing into the >100k token mark, because the models start melting into putty soon after (yes, even the “extended” 1M-window ones).

I’m someone who has massively benefited from using modern LLMs in my work, but I’m also a massive hater at the same time: They’re just a tool, not magic, and have to be used with great care and attention to get reasonable results. You absolutely cannot delegate your thinking to them, because it will bite you, hard and fast.

For your use case (3D math), what I recommend is decomposing your end goal into a series of pure functions that you’ll string together. Once you have that list, that’s where Claude comes in. Have it stub those functions for you, then have it implement them one at a time, reviewing the output of every one before proceeding.
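A minimal sketch of that pure-function decomposition, in Python. The function names and the specific operations (dot, cross, normalize, a face normal as the composed goal) are my own illustration, not from the thread — the point is that each small function can be reviewed in isolation before the next one is implemented:

```python
import math
from typing import Tuple

Vec3 = Tuple[float, float, float]

def dot(a: Vec3, b: Vec3) -> float:
    """Dot product of two 3D vectors."""
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def cross(a: Vec3, b: Vec3) -> Vec3:
    """Right-handed cross product."""
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def normalize(v: Vec3) -> Vec3:
    """Unit vector in the direction of v; raises on zero length."""
    n = math.sqrt(dot(v, v))
    if n == 0.0:
        raise ValueError("cannot normalize zero vector")
    return (v[0]/n, v[1]/n, v[2]/n)

# Once each piece is vetted, string them together into the end goal:
def face_normal(p0: Vec3, p1: Vec3, p2: Vec3) -> Vec3:
    """Unit normal of the triangle (p0, p1, p2)."""
    e1 = (p1[0]-p0[0], p1[1]-p0[1], p1[2]-p0[2])
    e2 = (p2[0]-p0[0], p2[1]-p0[1], p2[2]-p0[2])
    return normalize(cross(e1, e2))
```

Because every function is pure, Claude can implement (and you can review) one at a time, and a bug is confined to a single small unit instead of smeared across the whole task.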

yardy_sardley@lemmy.ca on 11 Apr 18:02 next collapse

I used Opus 4.6 Extended

Stop being cheap, OP. You clearly just need to shell out multiple billions of dollars for access to Mythos /s

pixxelkick@lemmy.world on 11 Apr 18:13 next collapse
  1. Did you have MCP tooling set up so it can get LSP feedback? This helps a lot with code quality, as it’ll see warnings/hints/suggestions from the LSP

  2. Unit tests. Unit tests. Unit tests. Unit tests.

I cannot stress enough how much less stupid LLMs get when they have proper, solid unit tests to run themselves and compare expected vs actual outcomes.

Instead of reasoning out “it should do this” they can just run the damn test and find out.

They’ll iterate on it until it actually works, and then you can look at it and confirm whether it’s good or not.

I use Sonnet 4.5 / 4.6 extensively and, yes, it’s prone to getting the answer almost right but wrong in the end.

But the unit tests catch this, and it corrects.

Example: I am working on my own game engine with MonoGame, and it’s about 95% vibe coded.

This transform math is almost 100% vibe coded: github.com/SteffenBlake/…/TransformRegistry.cs

The reason it’s solid is this: github.com/…/TransformRegistryIntegrationTests.cs

Also vibe coded and then sanity checked by me by hand to confirm the math checks out for the tests.

And yes, it caught multiple bugs, but the agent could automatically respond to that, fix the bug, rerun the tests, and iterate until everything was solid.

Test Driven Development is huge for making agents self police their own code.
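The loop described above can be sketched in a few lines. This is Python rather than the C# of the linked repo, and `rot_z` is a hypothetical function under test, not code from that project — the point is that the agent gets a mechanical pass/fail to rerun after every edit instead of reasoning about what the code "should" do:

```python
import math

def rot_z(v, theta):
    """Rotate vector v about the Z axis by theta radians."""
    x, y, z = v
    c, s = math.cos(theta), math.sin(theta)
    return (c*x - s*y, s*x + c*y, z)

def test_rotation_round_trip():
    """Rotating by theta and then by -theta must return the original vector."""
    v = (1.0, 2.0, 3.0)
    theta = 0.7
    back = rot_z(rot_z(v, theta), -theta)
    for got, want in zip(back, v):
        assert abs(got - want) < 1e-9, f"{got} != {want}"

# An agent can rerun this after every change and iterate on failures.
test_rotation_round_trip()
```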

colournoun@beehaw.org on 11 Apr 18:32 next collapse

regress with old bugs

Have it write a test suite that enforces the correct behavior, and tell it that the test suite must pass after any change. Make sure it’s not cheating (return true) inside the test suite.
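One way to make such a suite harder to game is to assert algebraic properties on many randomized inputs rather than a single hardcoded input/output pair that a lazy implementation could special-case. A sketch, where `scale` is a hypothetical function under test (not from the thread):

```python
import random

def scale(v, k):
    """Multiply each component of vector v by scalar k."""
    return tuple(k * c for c in v)

def test_scale_is_linear():
    """Check scale(v, a + b) == scale(v, a) + scale(v, b) on random inputs."""
    rng = random.Random(42)  # seeded, so failures are reproducible
    for _ in range(100):
        v = tuple(rng.uniform(-10, 10) for _ in range(3))
        a, b = rng.uniform(-5, 5), rng.uniform(-5, 5)
        lhs = scale(v, a + b)
        rhs = tuple(x + y for x, y in zip(scale(v, a), scale(v, b)))
        for l, r in zip(lhs, rhs):
            assert abs(l - r) < 1e-9

test_scale_is_linear()
```

A `return true`-style cheat cannot satisfy a property checked across 100 random inputs, and the assertions themselves stay short enough to review by hand.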

AlphaOmega@lemmy.world on 11 Apr 18:54 next collapse

This sounds on par with all the AI I have been dealing with. I find it works best if you give it a lot of rules, then treat it like a 12-year-old and expect wild mistakes for anything more complicated than a simple calculator. I work primarily with Gemini, having it build simple HTML/CSS, and it’s infuriating how many times I have told it to use &amp; instead of &.
Now every time it does anything, it’s always telling me how it included the correct ampersand. It can’t tell me why it screwed up the five times prior; it just makes up some BS and apologizes profusely.
The more rules you give it, even if it ignores them sometimes, the better.

Oisteink@lemmy.world on 11 Apr 20:14 collapse

In my view it’s about the quality, not the quantity, of the AGENTS/CLAUDE.md.

My experience is that starting with what I don’t want and then what I do want works best: «never rely on training for API documentation, use context7», «don’t use ls/find/grep for symbols, use serena».

Not the best examples, but still.

homes@piefed.world on 11 Apr 19:03 next collapse

I tried using Claude to convert some bash scripts to docker compose files, and it made several mistakes with case-sensitivity and failed to properly encapsulate certain path declarations that had spaces in them. If it can make such incredibly simple mistakes converting a script to a markup language, I wouldn’t dare trust it to actually compose anything in an actual programming language like Python or Rust or C# or Swift or whatever you’re using.

OpenStars@piefed.social on 11 Apr 19:21 next collapse

The solutions it generated were almost write every time, but critically broken in subtle ways, and any attempt to fix the problems would either introduce new bugs, or regress with old bugs.

This is part of your problem right there. The correct word there, instead of “write”, is “right”. You emotionally typed out a message, got your dopamine hit, then felt satisfied, and now the rest of us have to figure out what you meant to say.

Which is fine, but now imagine that not only you can do this, but AI can do it as well…

If you want something done correctly, then you must do it yourself.

infiniteface@programming.dev on 11 Apr 19:33 next collapse

Opus 4.6 is a dream for me, though I’m in the web dev area, which is quite mature and has a lot of training data. The lifesaver for avoiding regressions is to comprehensively test your code. This works as a kind of quality checkpoint during development.

Secondly, give it the right tooling and context: that means, at the very least, a good ACP server (editor) and appropriate MCP servers. Search for what’s appropriate in your domain. For 3D math, at the very least I’d think it would need a visual snapshotting tool. There are probably tons of relevant ones.

Thirdly, consistently expand on your CLAUDE.md, add and develop new skills as you go (let it write its own on your instructions). Force it to read them.

It probably depends on a lot of factors, but disciplined usage of these approaches will go a long way. Opus’ context window is huge, which makes the approach more consistent.

Prove_your_argument@piefed.social on 11 Apr 19:44 next collapse

Have you been coding professionally long?

I find that the only time I can use these chatbots for a task is when I already know what I’m doing, so that I can read the output and fix the issues. This is more like having junior devs on your team and being a code reviewer than being a full-time coder. They get a lot of things wrong, but there’s so much usable output that you can save a ton of time over doing everything yourself from scratch.

Just like with junior devs, you can send them back to fix what you know is wrong and give them feedback to improve various things you would prefer done another way. There’s no emotions though, so you can just be blunt and concise with feedback.

GiorgioPerlasca@lemmy.ml on 11 Apr 19:56 collapse

Nice comparison, but the bugs created by junior software developers are usually much easier to find than the bugs created by LLMs.

bluGill@fedia.io on 11 Apr 19:52 collapse

Claude is very good when driven by someone who knows how to do the job and demands perfection. However, if you give it a prompt and take the first result, it is normally junk; make it iterate and things get better.