hi_hi 5 hours ago [-]
This _all_ (waves hands around) sounds like a lot of work and expense for something that is meant to make programming easier and cheaper.
Writing _all_ (waves hands around various LLM wrapper git repos) these frameworks and harnesses, built on top of ever-changing models, sure doesn't feel sensible.
I don't know what the best way of using these things is, but from my personal experience, the defaults get me a looong way. Letting these things churn away overnight, burning money in the process, with no human oversight seems like something we'll collectively look back at in a few years and laugh about, like using PHP!
serial_dev 3 hours ago [-]
> sounds like a lot of work and expense for something that is meant to make programming easier and cheaper.
Not if you are an AI gold rush shovel salesman.
From the article:
> I've run Claude Code workshops for over 100 engineers in the last six months
p0w3n3d 1 hour ago [-]
Yeah, my colleague recently said "hey, I've burnt through $200 in Claude in 3 days". And he was the one prompting, at most 8 hrs/day. Imagine what would happen if AI were doing the prompting.
I really like this analogy: AI is (or should be) like an exoskeleton; it should help people do things. If you step out of your car after putting it in drive and go to sleep, the next day it will be farther along, but the question is: is it still on the road?
mewpmewp2 5 hours ago [-]
I am not laughing about PHP. To this very day many of my best projects are built on PHP. And while I have spent the last 7 years in a full-stack JavaScript/TypeScript environment, it has never let me produce the same things I was actually able to do with PHP.
I actually feel that the things I built 15 years ago in PHP were better than anything I am trying to achieve with modern stacks that get outdated every 6 months.
jack_pp 5 hours ago [-]
What in God's name could you do in PHP that you can't do in a modern framework?
tbossanova 4 hours ago [-]
Nothing; but PHP, in experienced hands, will be waaay more productive for small-to-medium things. One issue is that experienced hands are increasingly hard to come by. Truly big, complicated things, built by large teams or numbers of teams, teams with a lot of average brains or AIs trained on average brains, will be better off in something like Typescript/React. And everyone wants to work on the big complicated stuff. So the "modern frameworks" will continue to dominate while smaller, more niche shops will wonder why they waste their time.
jack_pp 3 hours ago [-]
I worked at a startup, they built their API in PHP because it was easy and fast. Now they're successful, app doesn't scale, high latency etc. What does their php code do? 95% of it is calling a DB.
You're telling me today with LLM power multiplier it's THAT much faster to write in PHP compared to something that can actually have a future?
frio 8 minutes ago [-]
“PHP was so easy and fast that they’ve built such a successful startup they now have scaling problems” is, as far as I can tell, an endorsement of PHP and not a criticism of it.
duggan 4 minutes ago [-]
> I worked at a startup, they built their API in PHP because it was easy and fast. Now they're successful
You can stop there! Sounds like PHP worked for them. Already doing better than 90% of startups.
nake89 41 minutes ago [-]
Not scaling and high latency sound like a skill issue, not a PHP issue.
watermelon0 1 hour ago [-]
If 95% of what the app does is calling a DB, then the bottleneck is in the DB, not in the PHP.
You can use persistent DB connections, and an app server such as FrankenPHP to persist state between requests, but that still wouldn't help if the DB is the bottleneck.
imron 41 minutes ago [-]
Sometimes it’s still the app:
    rows = select all accounts
    for each row in rows:
        update row
But that’s not necessarily a PHP problem. N+1 queries are everywhere.
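For illustration, the same N+1 pattern and its set-based fix in runnable form; sqlite3 stands in for any database here, and the table and numbers are made up:

```python
import sqlite3

# Toy accounts table (hypothetical schema, illustration only).
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance REAL)")
con.executemany("INSERT INTO accounts (balance) VALUES (?)",
                [(100.0,), (250.0,), (75.0,)])

# N+1 style: one SELECT, then one UPDATE per row -> N round-trips.
rows = con.execute("SELECT id, balance FROM accounts").fetchall()
for acc_id, balance in rows:
    con.execute("UPDATE accounts SET balance = ? WHERE id = ?",
                (balance * 1.05, acc_id))

# Set-based style: the same effect in a single statement;
# the database does the loop internally.
con.execute("UPDATE accounts SET balance = balance * 1.05")
```

The language at the top doesn't matter much; the round-trips do, which is why this shows up in PHP, Python, and everything else.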
arjvik 3 hours ago [-]
by future do you mean Future<T> or metaphorical future? :)
rurban 2 hours ago [-]
PHP did better than Python and Perl. Python is doomed. PHP got a good JIT already, good OO lately, good frameworks, stable extensions. It has a company behind it.
Unlike Python or Ruby, which break left and right all the time on updates, where you have to use bunkers of venvs without any security updates. A nightmare.
PHP can scale and has a future.
Incipient 2 hours ago [-]
Python is doomed? That's new.
You use python docker images pinned to a stable version (3.11 etc), and between bigger versions, you test and handle any breaking changes.
I feel like this approach applies to pretty much every language?
Who on earth raw dogs on "language:latest" and just hopes for the best?
Granted, I wouldn't be running Facebook's backend on something like this. But I feel that isn't a problem 95% of people need to deal with.
rurban 1 hour ago [-]
No, only to python. And partially ruby and ocaml. Not to typescript, perl or PHP.
mewpmewp2 5 hours ago [-]
You can build those things in modern frameworks, it will just be more headache and will feel outdated in 6 months.
the_lonely_time 4 hours ago [-]
Where are my Backbone apps? In the trash? My Ember apps? Next to them. My create-react-apps? On top of those. My Next apps? Being trashed as we speak. My Rails apps? Online and making money every year with minimal upgrade time. What the hell was I thinking.
tdeck 2 hours ago [-]
I'm guessing you avoided the CoffeeScript era of Rails, which is a good thing.
Gigachad 4 hours ago [-]
6 years ago I was writing apps in typescript and react, if I was starting a new project today I'd write it in typescript and react.
ehnto 4 hours ago [-]
People bicker about PHP and JavaScript (sorry, TypeScript) like they aren't both mule languages people pick up to get work done. They both matured really well through years of production use.
They are in the same group, similar pedigree. If you were programming purely for the art of it, you would have had time to discover much nicer languages than either, but that's not what most people are doing, so it doesn't really matter. They're different, but they're about as good as each other.
dheera 4 hours ago [-]
Not having to "build" anything. You edit code and it is already deployed on your dev instance.
Deploying to production is just
    scp -rv * production:/var/www/
Beautifully simple. No npm build crap.
jack_pp 3 hours ago [-]
You trade having to compile for actually having code that can scale
ericd 3 hours ago [-]
Not sure what you're talking about; I scaled to millions of users on a pair of boxes with PHP, and its page generation time absolutely crushed Rails/Django times. Apache with mod_php auto-scales wonderfully.
lelanthran 2 hours ago [-]
It scales just fine the same way everything else scales: put a load balancer in front of multiple instances of your app.
vachina 3 hours ago [-]
It can scale by the virtue of spending a lot less time processing the request
brobdingnagians 15 minutes ago [-]
I would encourage my competitors to use AI agents on their codebase as much as possible. Make sure every new feature has it, lots of velocity! Run those suckers day and night. Don't review it, just make sure the feature is there! Then, when the music stops, the AI companies hit economic reality and go insolvent, and they're left with no one who understands a sprawling, tangled web of code that is 80% AI-generated, we'll see who laughs last.
godelski 2 hours ago [-]
I can't believe we're back to advocating for TDD. It was a failed paradigm the last few times we tried it. This time isn't any different, because the fundamental flaw has always been the same: tests aren't proofs; they don't have complete coverage.
Before anyone gets too confused, I love tests. They're great. They help a lot. But to believe they prove correctness is absolutely laughable. Even the most general tests are very narrow. I'm sure they help LLMs just as they help us, but they're not some cure-all. You have to think long and hard about problems and shouldn't let tests drive your development. They're guardrails for checking bounds and reducing footguns.
Oh, who could have guessed: Dijkstra wrote about program correctness. (No, this isn't the foolishness of natural language programming, but it is about formalism ;)
Testing works because tests are (essentially) a second, crappy implementation of your software. Tests only pass if both implementations of your software behave the same way. Usually that will only happen if the test and the code are both correct. Imagine if your code (without tests) has a 5% defect rate. And the tests have a 5% defect rate (with 100% test coverage). Then ideally, you will have a 5%^2 defect rate after fixing all the bugs. Which is 0.25%.
The price you pay for tests is that they need to be written and maintained. Writing and maintaining code is much more expensive than people think.
Or at least it used to be. Writing code with claude code is essentially free. But the defect rate has gone up. This makes TDD a better value proposition than ever.
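The defect-rate arithmetic above, as a toy independence model (an illustration of the comment's reasoning, not a claim about real defect rates):

```python
# Naive model: a bug ships only if the code is wrong AND the test
# covering that behavior is also wrong, assuming independent 5% rates.
code_defect_rate = 0.05
test_defect_rate = 0.05
residual = code_defect_rate * test_defect_rate
print(f"residual defect rate: {residual:.2%}")  # 0.25%
```

Real code and test defects are correlated (the same misunderstanding produces both), so the true residual rate is higher than the product suggests.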
TDD is also great because claude can fix bugs autonomously when it has a clear failing test case. A few weeks ago I used claude code and experts to write a big 300+ conformance test suite for JMAP. (JMAP is a protocol for email). For fun, I asked claude to implement a simple JMAP-only mail server in rust. Then I ran the test suite against claude's output. Something like 100 of the tests failed. Then I asked claude to fix all the bugs found by the test suite. It took about 45 minutes, but now the conformance test suite fully passes. I didn't need to prompt claude at all during that time. This style of TDD is a very human-time efficient way to work with an LLM.
theshrike79 32 minutes ago [-]
When you write tests with LLM-generated code you're not trying to prove correctness in a mathematically sound way.
I think of it more as "locking" the behavior to whatever it currently is.
Either you do the red-green-with-multiple-adversarial-sub-agents -thing or just do the feature, poke the feature manually and if it looks good then you have the LLM write tests that confirm it keeps doing what it's supposed to do.
The #1 reason TDD failed is because writing tests is BOORIIIING. It's a bunch of repetition with slight variations of input parameters, a ton of boilerplate or helper functions that cover 80% of the cases, but the last 20% is even harder because you need to get around said helpers. Eventually everyone starts copy-pasting crap and then you get more mistakes into the tests.
LLMs will write 20 test cases with zero complaints in two minutes. Of course they're not perfect, but human made bulk tests rarely are either.
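The "locking the behavior to whatever it currently is" idea is essentially a characterization (golden) test. A minimal sketch, with a made-up `slugify` standing in for whatever feature was just built:

```python
# Characterization test: instead of asserting what the code *should*
# do, record what it currently does (after a human eyeballs it) and
# lock that in so future changes can't silently alter it.

def slugify(title):
    # Stand-in for the feature under test.
    return "-".join(title.lower().split())

# Golden values captured from a run that was manually approved.
GOLDEN = {
    "Hello World": "hello-world",
    "  Spaces   everywhere ": "spaces-everywhere",
}

for raw, expected in GOLDEN.items():
    assert slugify(raw) == expected, (raw, slugify(raw))
print("behavior locked")
```

These tests say nothing about correctness, only about stability; that trade-off is exactly what the comment describes.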
computerdork 2 hours ago [-]
Hmm, not so sure TDD is a failed paradigm. Maybe it isn't a panacea, but it seems like it's changed how software development is done.
Especially for backend software and also for tools, seems like automated tests can cover quite a lot of use cases a system encounters. Their coverage can become so good that they'll allow you to make major changes to the system, and as long as they pass the automated tests, you can feel relatively confident the system will work in prod (have seen this many times).
But maybe you're separating automated testing and TDD as two separate concepts?
prerok 1 hour ago [-]
Indeed, they are two separate concepts.
I write lots of automated tests, but almost always after the development is finished. The only exception is when reproducing a bug, where I first write the test that reproduces it, then I fix the code.
TDD is about developing tests first then writing the code to make the tests pass. I know several people who gave it an honest try but gave up a few months later. They do advocate everyone should try the approach, though, simply because it will make you write production code that's easier to test later on.
mvdtnz 2 hours ago [-]
> But to believe they prove correctness is absolutely laughable.
You don't need to believe this to practice TDD. In fact I challenge you to find one single mainstream TDD advocate who believes this.
egeozcan 12 hours ago [-]
You can always tell claude to use red-green-refactor and that really is a step-up from "yeah don't forget to write tests and make sure they pass" at the end of the prompt, sure. But even better, tell it to create subagents to form red team, green team and refactor team while the main instance coordinates them, respecting the clean-room rules. It really works.
The trick is just not mixing/sharing the context. Different instances of the same model don't recognize each other, which keeps them more compliant.
magicalist 11 hours ago [-]
> But even better, tell it to create subagents to form red team, green team and refactor team while the main instance coordinates them, respecting the clean-room rules. It really works.
It helps, but it definitely doesn't always work, particularly as refactors go on and tests have to change. Useless tests start to grow in count and important new things aren't tested or aren't tested well.
I've had both Opus 4.6 and Codex 5.3 recently tell me the other (or another instance) did a great job with test coverage and depth, only to find tests within that just asserted the test harness had been set up correctly, while the functionality those tests used to cover was now only checked for existence, its behavior virtually untested.
Reward hacking is very real and hard to guard against.
egeozcan 11 hours ago [-]
The trick is, with the setup I mentioned, you change the rewards.
The concept is:
Red Team (Test Writers), write tests without seeing implementation. They define what the code should do based on specs/requirements only. Rewarded by test failures. A new test that passes immediately is suspicious as it means either the implementation already covers it (diminishing returns) or the test is tautological. Red's ideal outcome is a well-named test that fails, because that represents a gap between spec and implementation that didn't previously have a tripwire. Their proxy metric is "number of meaningful new failures introduced" and the barrier prevents them from writing tests pre-adapted to pass.
Green Team (Implementers), write implementation to pass tests without seeing the test code directly. They only see test results (pass/fail) and the spec. Rewarded by turning red tests green. Straightforward, but the barrier makes the reward structure honest. Without it, Green could satisfy the reward trivially by reading assertions and hard-coding. With it, Green has to actually close the gap between spec intent and code behavior, using error messages as noisy gradient signal rather than exact targets. Their reward is "tests that were failing now pass," and the only reliable strategy to get there is faithful implementation.
Refactor Team, improve code quality without changing behavior. They can see implementation but are constrained by tests passing. Rewarded by nothing changing (pretty unusual in this regard). Reward is that all tests stay green while code quality metrics improve. They're optimizing a secondary objective (readability, simplicity, modularity, etc.) under a hard constraint (behavioral equivalence). The spec barrier ensures they can't redefine "improvement" to include feature work. If you have any code quality tools, it makes sense to give the necessary skills to use them to this team.
It's worth being honest about the limits. The spec itself is a shared artifact visible to both Red and Green, so if the spec is vague, both agents might converge on the same wrong interpretation, and the tests will pass for the wrong reason. The Coordinator (your main claude/codex/whatever instance) mitigates this by watching for suspiciously easy green passes (just tell it) and probing the spec for ambiguity, but it's not a complete defense.
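The information barrier for the Green team could be sketched as a filter the Coordinator applies before passing results along. Everything here is hypothetical (names, the pytest-style verbose output format), just to make the idea concrete:

```python
import re

# Sketch of the Red/Green barrier: the Coordinator reduces a test run
# to test id + verdict only, so Green never sees assertion bodies it
# could hard-code against.

def green_view(test_output):
    visible = []
    for line in test_output.splitlines():
        m = re.match(r"(\S+\.py::\S+)\s+(PASSED|FAILED)", line)
        if m:                                   # keep id + verdict
            visible.append(f"{m.group(1)} {m.group(2)}")
    return visible                              # details are dropped

sample = """\
tests/test_cart.py::test_empty_total PASSED
tests/test_cart.py::test_discount FAILED
    assert total == 90  # Green must never see this line
"""
print(green_view(sample))
```

Green then works from pass/fail signal plus the spec, which is the "noisy gradient" the comment describes.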
seer 4 hours ago [-]
This seems quite amazing really, thanks for sharing
What is the scope of projects / features you’ve seen this be successful at?
Do you have a step before where an agent verifies that your new feature spec is not contradictory, ambiguous, etc.? Maybe reviewed with regard to all the current feature sets?
Do you make this a cycle per step - by breaking down the feature to small implementable and verifiable sub-features and coding them in sequence, or do you tell it to write all the tests first and then have at it with implementation and refactoring?
Why not refactor-red-green-refactor cycle? E.g. a lot of the time it is worth refactoring the existing code first, to make a new implementation easier, is it worth encoding this into the harness?
w4yai 9 hours ago [-]
You guys are describing wonderful things, but I've yet to see any implementation. I tried coding my own agents, yet the results were disappointing.
What kind of setup do you use ? Can you share ? How much does it cost ?
gopher_space 1 hour ago [-]
Paste the comment you replied to into an LLM good at planning. That's something the codex/claude setups can create for you with a little back and forth.
throwaway7783 6 hours ago [-]
We have a very uncomplicated setup with claude code. A CLAUDE.md with instructions and notes about the repo and how to run stuff. We also do code reviews with Claude Code, but in a separate session.
It works wonderfully well. Costs about $200USD per developer per month as of now.
If you are not spending 5-10k dollars a month for interesting projects, you likely won't see interesting results
cube00 5 hours ago [-]
Sounds a lot like paying for online ads, they don't work because you're not paying enough, when in reality bots, scrapers and now agents are just running up all the clicks.
You pay more to try and get above that noise and hope you'll reach an actual human.
The new "fast mode" that burns tokens at 6 times the rate is just scary, because that's what everyone will soon say we all need to be using to get results.
zarzavat 5 hours ago [-]
It feels like everyone's gone mad.
Here I am mostly writing code by hand, with some AI assistant help. I have a Claude subscription but only use it occasionally, because it can take more time to review and fix the generated code than it would to hand-write it. Claude only saves me time on a minority of tasks where it's faster to prompt than hand-write.
And then I read about people spending hundreds or thousands of dollars a month on this stuff. Doesn't that turn your codebase into an unreadable mess?
I'm not getting results. That's the point. Claude doesn't fucking work without human intervention. When left to its own devices it makes bad decisions. It writes bad code. It needs constant supervision to stop it from going off the rails and replacing working code with broken code. It doesn't know what it's doing!
It's about as far as you can get from being able to work independently.
Yegge is an entertainer. Gas Town is performance art, it's not meant to be taken seriously.
godelski 2 hours ago [-]
Why is everyone obsessed with Mac Minis? They're awesome, but for the work these people are attempting to do? Just seems... nonsensical. Renting a server is cheaper and still just as "local" as any of this (they want "self-hosted"; I don't think anyone cares about local. Like, are people air-gapping networks? lol)
And a senior director at Nvidia? He had several Mac Minis? I really gotta imagine a Spark is better... at least it'll be a bit smarter of a cat (I'm pretty suspicious he used an LLM to help write that post)
No time to think, gotta go fast?
egeozcan 20 minutes ago [-]
They want access to Apple Messages. That's all there is to it, AFAICT.
aprdm 3 hours ago [-]
I think the output of companies that can invest in tokens vs. those that cannot will diverge wildly in the next few years.
mrbungie 7 hours ago [-]
I can't really tell if this is sarcasm or not.
Culonavirus 1 hour ago [-]
That's how half of these "agents" posts feel to me in general.
canadiantim 8 hours ago [-]
Check out Mike Pocock’s work, he’s done excellent work writing about red green refactor and has a GitHub repo for his skills. Read and take what you need from his tdd skill and incorporate it into your own tdd skill tailored for your project.
nojito 7 hours ago [-]
This is just AI slop. If you follow what the actual designers of Claude/GPT tell you, it flies in the face of building out over-engineered harnesses for agents.
throwaway7783 6 hours ago [-]
I agree with this. There is not a lot of harnesses/wrapping needed for Claude Code.
canadiantim 6 hours ago [-]
You don't need a harness beyond Claude Code, but honestly it's foolish to think you shouldn't be building out extra skills to help your workflow. A TDD skill that does red-green-refactoring is using Claude Code exactly as how it's meant to be used. They pioneered skills.
canadiantim 6 hours ago [-]
Works better than standard claude / gpt, which doesn't do red-green-refactor. Doesn't seem like slop when it meaningfully changes the results for the better, consistently. Really is a game-changer. You should consider trying it.
nojito 6 hours ago [-]
I do do TDD but using skills in this way is an anti-pattern for a multitude of reasons.
canadiantim 5 hours ago [-]
I don't think just saying it's an anti-pattern for a multitude of reasons and then not naming any is sufficiently going to convince anyone it's an anti-pattern.
This is in fact precisely what skills is meant for and is the opposite of an anti-pattern, but more like best practice now. It's explicitly using the skills framework precisely how it was meant to be used.
devinplatt 2 hours ago [-]
I'm curious how this works if the green team writes an implementation that makes a network call like an RPC.
Red team might not anticipate this if the spec doesn't detail every expected RPC (which seems unreasonable: this could vary based on implementation). But a unit test would need mocks.
Is the green team allowed to suggest mocks to add to the test (even if they can't read the tests themselves)? This also seems gameable, though (e.g. mock the entire implementation), unless another agent makes a judgement call on the reasonableness of the mock (though that starts to feel like code review more generally).
Maybe record/replay tests could work? But there are drawbacks in the added complexity.
tomtom1337 11 hours ago [-]
This is very interesting, but like sibling comments, I'm very curious as to how you run this in practice. Do you just tell Claude/Copilot to do what you describe?
And do you have any prompts to share?
throwaway7783 6 hours ago [-]
You don't need most of this. Prompts are also normally what you would say to another engineer.
* There is a lot of duplication between A & B. Refactor this.
* Look at ticket X and give me a root cause
* Add support for three new types of credentials - Basic Auth, Bearer Token and OAuth Client Creds
Claude.md has stuff like
"Here's how you run the frontend. Here's how you run the backend. This module supports the frontend. That module is batch jobs. Always start commit messages with the ticket number. Always run compile at the top level. When you make code changes, always add tests" etc etc
Exoristos 5 hours ago [-]
They never do.
bcrosby95 1 hour ago [-]
Seems like red team is incentivized to write tests that violate the spec since you're rewarding failed tests.
xienze 9 hours ago [-]
This seems like a tremendous amount of planning, babysitting, verification, and token cost just to avoid writing code and tests yourself.
habinero 9 hours ago [-]
It's assigning yourself the literal worst parts of the job - writing specs, docs, tests and reading someone else's code.
zarzavat 4 hours ago [-]
There's a real disconnect. I was talking to a junior developer and they were telling me how Claude is so much smarter than them and they feel inferior.
I couldn't relate. From my perspective as a senior, Claude is dumb as bricks. Though useful nonetheless.
I believe that if you're substantially below Claude's level then you just trust whatever it says. The only variables you control are how much money you spend, how much markdown you can produce, and how you arrange your agents.
But I don't understand how the juniors on HN have so much money to throw at this technology.
godelski 2 hours ago [-]
> I was talking to a junior developer and they were telling me how Claude is so much smarter than them and they feel inferior.
Every time I talk to a wizard I feel like they're so much smarter than me and it makes me feel inferior.
So I take that feeling and use it to drive me to become a wizard like them. I've generally found that wizards are very happy to take on apprentices.
I'm not trying to call Claude a wizard (I have similar feelings to you), but more that I don't understand that junior's take. We all feel dumb. All the time. Even the wizards! But it's that feeling that drives you to better yourself, and it's what turns you into a wizard.
Honestly, so much of what I hear from the "AI does all my coding" crowd just sounds very junior. It's the same as how, a year or two ago, they were saying "it does the repetitive stuff". Isn't that what functions, libraries, functors, templates, and other abstractions are for? It feels like we're back to that laughable productivity metric of lines of code or number of commits. I don't know why we love our cargo cults. It seems people are putting so much effort into their cargo cults that they could have invented a real airplane by now.
tayo42 2 hours ago [-]
It's 20 dollars a month to use...
zarzavat 7 minutes ago [-]
Yes for the basic plan. However there are people who claim to use the API and spend hundreds, or thousands, of dollars a month.
gedy 7 hours ago [-]
Yes with the reward of: I don't understand this code and didn't learn anything incrementally about the feature I "planned".
godelski 2 hours ago [-]
Well they probably have the same ability to evaluate the correctness of a feature as a middle manager with a Harvard business degree
esperent 4 hours ago [-]
Someone directly down from you suggested looking up Mike Postock's TDD skill, so I did:
Everything below quoted from that skill, and serves as a much better rebuttal than I had started writing:
DO NOT write all tests first, then all implementation. This is "horizontal slicing" - treating RED as "write all tests" and GREEN as "write all code."
This produces crap tests:
Tests written in bulk test imagined behavior, not actual behavior
You end up testing the shape of things (data structures, function signatures) rather than user-facing behavior
Tests become insensitive to real changes - they pass when behavior breaks, fail when behavior is fine
You outrun your headlights, committing to test structure before understanding the implementation
Correct approach:
Vertical slices via tracer bullets.
One test → one implementation → repeat. Each test responds to what you learned from the previous cycle. Because you just wrote the code, you know exactly what behavior matters and how to verify it.
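As a tiny illustration of that vertical loop (the `Cart` example is hypothetical, not from the quoted skill):

```python
# Cycle 1 - RED: write one test for the first slice of behavior.
def test_empty_cart_total_is_zero():
    assert Cart().total() == 0

# Cycle 1 - GREEN: just enough implementation to pass it.
# Cycle 2 - RED: the next test (below) is written knowing how Cart
# actually behaves; add() is then implemented to turn it green.
class Cart:
    def __init__(self):
        self._items = []

    def total(self):                      # added in cycle 1
        return sum(p for _, p in self._items)

    def add(self, name, price):           # added in cycle 2
        self._items.append((name, price))

def test_total_sums_added_prices():
    cart = Cart()
    cart.add("apple", 3)
    cart.add("pear", 4)
    assert cart.total() == 7

test_empty_cart_total_is_zero()
test_total_sums_added_prices()
print("both cycles green")
```

The point of the cycle is that the second test is informed by the first implementation, which is exactly what bulk "write all tests first" forfeits.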
fsckboy 3 hours ago [-]
>One test → one implementation → repeat.
>Because you just wrote the code, you know exactly what behavior matters and how to verify it.
what you go on to describe is
One implementation → one test → repeat.
tnecio 5 hours ago [-]
How do you make sure Red Team doesn't just write subtly broken tests?
skybrian 11 hours ago [-]
How do you define visibility rules? Is that possible for subagents?
egeozcan 11 hours ago [-]
AFAIK Claude doesn't support it, but if you're willing to go the extra mile, you can get creative with some bash script: https://pastebin.com/raw/m9YQ8MyS (generated this a second ago - just to get the point across )
To be clear, I don't do this. I never saw an agent cheat by peeking or something. I really did look through their logs.
I'd be very interested to see claude code and other tools support this pattern when dispatching agents to be really sure.
achierius 9 hours ago [-]
> To be clear, I don't do this.
How do you know that it works then? Are you using a different tool that does support it?
skybrian 10 hours ago [-]
So what do you do? Do you define roles somewhere and tell the agent to assign these roles to subagents?
ssk42 8 hours ago [-]
Fun to see you not on tildes.
Setting up a clean room is one of the only ways to do Evals on agentic harnesses. Especially prevalent with Windsurf which doesn’t have an easy CLI start.
So how? The easiest answer, when allowed, is Docker. Literally a new image per prompt. There are also flags with Claude to not use memory, and from there you can use -p to have it behave like a normal CLI tool. Windsurf requires the manual effort of starting it up in a new dir.
skybrian 6 hours ago [-]
Sounds interesting, but I'm not quite getting the relevance for people writing code with an agent. Should I be doing evals?
ssk42 4 hours ago [-]
Well, I mean, yes. I think people ought to be aware of how the harnesses compare for their stacks. But the clean room applies to this RGR situation too.
novaleaf 3 hours ago [-]
you are replying to a bot, that's why.
eru 2 hours ago [-]
> Useless tests start to grow in count and important new things aren't tested or aren't tested well.
You can use coverage information, and you should cull your tests every once in a while I guess.
Property based testing also helps.
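A hand-rolled property check shows the idea (libraries like Hypothesis automate this and add shrinking; the run-length encoder here is a made-up subject under test):

```python
import random

# Property: encoding then decoding is lossless for any input string.
# Instead of hand-picking cases, we throw hundreds of random ones at it.

def rle_encode(s):
    out = []
    for ch in s:
        if out and out[-1][0] == ch:
            out[-1][1] += 1          # extend the current run
        else:
            out.append([ch, 1])      # start a new run
    return out

def rle_decode(pairs):
    return "".join(ch * n for ch, n in pairs)

random.seed(0)  # reproducible failures
for _ in range(500):
    s = "".join(random.choice("ab") for _ in range(random.randrange(20)))
    assert rle_decode(rle_encode(s)) == s, f"round-trip failed on {s!r}"
print("500 random cases passed")
```

A small alphabet ("ab") is deliberate: it makes runs, and therefore edge cases, far more likely than random ASCII would.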
lagrange77 11 hours ago [-]
> Reward hacking is very real and hard to guard against.
Is it really about rewards? I'm genuinely curious, because it's not an RL model.
gbnwl 10 hours ago [-]
I'm noticing terms related to DL/RL/NLP are being used more and more informally as AI takes over more of the cultural zeitgeist and people want to use the fancy new terms of the era, even if inaccurately. A friend told me he "trained and fine tuned a custom agent" for his work when what he meant was he modified a claude.md file.
collingreen 4 hours ago [-]
Respectfully, your friend doesn't know what he is talking about and is saying things that just "feel right" (vibe talking??). Which might be exactly how technical terms lose their meaning so perhaps you're exactly right.
hexaga 10 hours ago [-]
There is a nontrivial amount of RL training (RLHF, RLVR, ...), so it would be reasonable to call it an RL model.
And with that comes reward hacking - which isn't really about looking for more reward but rather that the model has learned patterns of behavior that got reward in the train env.
That is, any kind of vulnerability in the train env manifests as something you'd recognize as reward hacking in the real world: making tests pass _no matter what_ (because the train env rewarded that behavior), being wildly sycophantic (because the human evaluators rewarded that behavior), etc.
lagrange77 7 hours ago [-]
> There is a nontrivial amount of RL training (RLHF, RLVR, ...), so it would be reasonable to call it an RL model.
Hm, as I understand it, parts of the training of e.g. ChatGPT could be called RL. But the subject being trained/fine-tuned is still a seq2seq next-token-predictor transformer neural net.
hexaga 6 hours ago [-]
RL is simply a broad category of training methods. It's not really an architecture per se: modern GPTs are trained first on reconstruction objective on massive text corpora (the 'large language' part), then on various RL objectives +/- more post-training depending on which lab.
magicalist 11 hours ago [-]
> Is it really about rewards? Im genuinely curious. Because its not a RL model.
Ha, good point. I was using it informally (you could handwave and call it an intrinsic reward if a model is well aligned to completing tasks as requested), but I hadn't really thought about it.
They probably meant goal hacking. (I just made that up)
taneq 3 hours ago [-]
I refer to it as ‘wanking’. It’s doing something that’s unproductive but that’s incentivised by its architecture.
SoftTalker 11 hours ago [-]
A refactor should not affect the tests at all should it? If it does, it's more than a refactor.
gchamonlive 10 hours ago [-]
It can if your refactor needs to deal with interface changes, like moving methods around, changing argument order etc... all these need to propagate to the tests
bluGill 9 hours ago [-]
Your tests are an assertion that 'no matter what this will never change'. If your interface can change then you are testing implementation details instead of the behavior users care about.
The above is really hard. A lot of TDD 'experts' don't understand this and teach fragile tests that are not worth having.
gchamonlive 3 hours ago [-]
Sure if you are changing your interfaces a lot you either are leaking abstractions or you aren't designing your interfaces well.
But things evolve with time. Not only is your software required to do things it wasn't originally designed to do, but your understanding of the domain evolves, and what once was fine becomes obsolete or insufficient.
Your implementation is your interface. It's a bit naive, or hating-your-users, to assume your tests are all your users care about. They're dealing with everything, regardless of what you've tested or not.
SirSavary 6 hours ago [-]
Hyrum's law is about the real consumers/users (inadvertently) depending on any observable behaviour they can get their hands on.
TDD/BDD tests are meant to define the intended contract of a system.
These are not the same thing.
switchbak 7 hours ago [-]
Refactoring is changing the design of the code without affecting the behaviour.
You can change an interface and not change the behaviour.
I have rarely heard such a rigid interpretation such as this.
imiric 2 hours ago [-]
> Your tests are an assertion that 'no matter what this will never change'.
That's a strange definition. A lot of software should change in order to adapt to emerging requirements. Refactorings are often needed to make those changes easier, or to improve the codebase in ways that are transparent to users. This doesn't mean that the interfaces remain static.
> If your interface can change then you are testing implementation details instead of the behavior users care about.
Your APIs also have users. If you're only testing end-user interfaces, you're disregarding the users of your libraries and modules, e.g. your teammates and yourself.
Implementation details are contextual. To end-users, everything behind the external UI is an implementation detail. To other programmers, the implementation of a library, module, or even a single function can be a detail. That doesn't mean that its functionality shouldn't be tested. And, yes, sometimes that entails updating tests, but tests are code like any other, and also require maintenance and care.
magicalist 10 hours ago [-]
It depends on what you mean by "refactor" and how exactly you're testing, I guess, but that's not really at the heart of the point. red-green-refactor could also be used for adding new features, for instance, or an entire codebase, I guess.
esafak 3 hours ago [-]
Periodically reviewing tests is worthwhile but rarely done; writing tests alone is already disliked.
SequoiaHope 10 hours ago [-]
I’m telling it to use red/green TDD [1] and it will write tests that don’t fail, then say “ah, the issue is already fixed” and move on. You really have to watch it very closely. I’m having a huge problem with bad tests in my system despite a “governance model” that I always refer it to, which requires red/green TDD.
I've been able to encode Outside-in Test Driven Development into a repeatable workflow. Claude Code follows it to a T, and I've gotten great results. I've written about it more here, and created a repo people can use out of the box to try it out:
Works for PR reviews. Separating context for code review with the same model has significant impact.
● Separation of concerns. No single agent plans, implements, and verifies. The agent that writes the code is never the agent that checks it.
codybontecou 11 hours ago [-]
This sounds interesting. Can you go a bit deeper or provide references on how to implement the green/red/refactor subagent pattern?
pastescreenshot 11 hours ago [-]
What has worked better for me is splitting authority, not just prompts. One agent can touch app code, one can only write failing tests plus a short bug hypothesis, and one only reviews the diff and test output. Also make test files read only for the coding agent. That cuts out a surprising amount of self-grading behavior.
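One low-tech way to enforce the read-only rule (a sketch; the `tests/` path is hypothetical, and it won't stop an agent that's explicitly allowed to run `chmod`):

```python
import os
import stat

def make_read_only(root: str) -> None:
    """Strip write bits from every file under `root`, so a coding
    agent running as the same user can't quietly edit the tests."""
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            mode = os.stat(path).st_mode
            os.chmod(path, mode & ~(stat.S_IWUSR | stat.S_IWGRP | stat.S_IWOTH))

# e.g. make_read_only("tests/") before handing the repo to the coding agent
```

Agent harnesses with per-tool permission rules (deny-listing edits to test paths) are a stronger version of the same idea.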
huslage 10 hours ago [-]
How do you limit access like that?
elemeno 11 hours ago [-]
It’s not an agentic pattern, it’s an approach to test driven development.
You write a failing test for the new functionality that you’re going to add (which doesn’t exist yet, so the test is red). You then write the code until the test passes (that is, goes green).
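In miniature (a hypothetical `slugify` feature), the cycle looks like:

```python
# RED: write the test first, against a function that doesn't exist yet.
# Running it at this point fails -- and that failure is the point: it
# proves the test is actually capable of failing.
def test_slugify():
    assert slugify("Hello World") == "hello-world"

# GREEN: write just enough code to make the test pass.
def slugify(title: str) -> str:
    return title.lower().replace(" ", "-")

test_slugify()
```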
That's the cool bit - you don't have to. CC is perfectly well aware and competent to implement it; just tell it to.
6 hours ago [-]
irishcoffee 11 hours ago [-]
"So this is how liberty dies... with thunderous applause.” - Padmé Amidala
s/liberty/knowledge
osigurdson 9 hours ago [-]
So more stuff happens with this approach but how do you know what it generates is correct?
afro88 11 hours ago [-]
Good idea, and an improvement, but you still have that fundamental issue: you don't really know what code has been written. You don't know the refactors are right, in alignment with existing patterns etc.
aray07 12 hours ago [-]
thats a great idea - i have been using codex to do my code reviews since i have it to give better critique on code written by claude but havent tried it with testing yet!
darkbatman 11 hours ago [-]
codex/gpt is a stubborn model, doubt it would accept claude reviews or counter them. have seen cases where claude is more willing to comply when given feedback, though that's probably just sycophancy too.
Skidaddle 11 hours ago [-]
How exactly do you set up your CC sessions to do this?
recroad 10 hours ago [-]
Am I supposed to be impressed by this? I think people are now just using agents for the sake of it. I'm perfectly happy running two simple agents, one for writing and one for reviewing. I don't need to go be writing code at faster than light speed. Just focusing on the spec, and watching the agent as it does its work and intervening when it goes sideways is perfectly fine with me. I'm doing 5-7x productivity easily, and don't need more than that.
I also spend most of my time reviewing the spec to make sure the design is right. Once I'm done, the coding agent can take 10 minutes or 30 minutes. I'm not really in that much of a rush.
mjrbrennan 7 hours ago [-]
Yes I'm still not really understanding this "run agents overnight" thing. Most of the time if I use claude it's done in 5-20 minutes. I've never wanted to have work done for me overnight...tomorrow is already plenty of time for more work, it's not going anywhere, and my employer isn't paying me to produce overnight.
tudelo 4 hours ago [-]
The only counter I have to this is that there are some workflows that have test environments; not everything can or should just run locally. Sometimes these tests take time, and instead of babysitting the model to write code and run the build+deploy+test manually, you can send it off to work until the kinks are worked out.
Add to that I have worked on many projects that take more than 20 minutes to fully build and run tests... unfortunately. And I would consider that part of the job of implementing a feature, and to reduce cycles I have to take.
After the "green" signal I will manually review or send off some secondary reviews in other models. Is it wasteful? Probably. But it's pretty damn fun (as long as I ignore the elephant in the room.)
saguntum 2 hours ago [-]
Yeah, our basic integration test suite takes over 20 minutes to run in CI, likely higher locally but I never try to run the full test suite locally. That doesn't even encapsulate PDVs and other continuous testing that runs in the background.
The other day, I wrote a claude skill to pull logs for failing tests on a PR from CI as a CSV for feeding back into claude for troubleshooting. It helped with some debugging but was very fraught and needed human guidance to avoid going in strange directions. I could see this "fix the tests" workflow instrumented as overnight churn loops that are forbidden from modifying test files that run and have engineers review in the morning if more tests pass.
Maybe agentic TDD is the future. I have a bit of a nightmare vision of SWEs becoming more like QA in the future, but with much more automation. More engineering positions may become adversarial QA for LLM output. Figure out how to break LLM output before it goes to prod. Prove the vibe coded apps don't scale.
In the exercise I described above, I was just prompt churning between meetings (having claude record its work and feeding it to the next prompt, pulling test logs in between attempts), without much time to analyze, while another engineer on my team was analyzing and actually manually troubleshooting the vibe coded junk I was pushing up, but we fixed over 100 failing integration tests in a week for a major refactor using claude plus some human(s) in the loop. I do believe it got things done faster than we would have finished without AI. I do think the quality is slightly lower than would have been if we'd had 4 weeks without meetings to build the thing, but the tests do now pass.
mjrbrennan 2 hours ago [-]
Yes that's fair, but not the case for me. Everything can run locally and specs run quickly for covering things claude changes. For everything else, the GitHub CI run is 10-15m and catches any outlier failures, and I'm usually working on more than one thing at a time anyway so it doesn't really matter to wait for this.
JumpCrisscross 5 hours ago [-]
> Am I supposed to be impressed by this?
No. But it is noteworthy. A lot of what one previously needed a SWE to do can now be brute forced well enough with AI. (Granted, everything SWEs complained about being tedious.)
From the customer’s perspective, waiting for buggy code tomorrow from San Francisco, buggy code tonight from India or buggy code from an AI at 4AM aren’t super different for maybe two thirds of use cases.
timr 4 hours ago [-]
> A lot of what one previously needed a SWE to do can now be brute forced well enough with AI. (Granted, everything SWEs complained about being tedious.)
Only if you ignore everything they generate. Look at all the comments saying that the agent hallucinates a result, generates always-passing tests, etc. Those are absolutely true observations -- and don't touch on the fact that tests can pass, the red/green approach can give thumbs up and rocket emojis all day long, and the code can still be shitty, brittle and riddled with security and performance flaws. And so now we have people building elaborate castles in the sky to try to catch those problems. Except that the things doing the catching are themselves prone to hallucination. And around we go.
So because a portion of (IMO always bad, but previously unrecognized as bad) coders think that these random text generators are trustworthy enough to run unsupervised, we've moved all of this chaotic energy up a level. There's more output, certainly, but it all feels like we've replaced actual intelligent thought with an army of monkeys making Rube Goldberg machines at scale. It's going to backfire.
JumpCrisscross 4 hours ago [-]
> coders think that these random text generators are trustworthy enough to run unsupervised, we've moved all of this chaotic energy up a level
But it works well enough for most use cases. Most of what we do isn’t life or death.
timr 4 hours ago [-]
> But it works well enough for most use cases.
So does the code produced by any bad engineer.
So either we’re finally admitting that all of that leetcode screening and engineer quality gating was a farce, or it wasn’t, and you’re wrong.
I think the answer is in the middle, but the pendulum has swung too far in the “doesn’t matter” direction.
JumpCrisscross 3 hours ago [-]
> we’re finally admitting that all of that leetcode screening and engineer quality gating was a farce, or it wasn’t, and you’re wrong
We’re admitting a bit of both. Offshoring just became more instantaneous, secure and efficient. There will still be folks who overplay their hand.
Macroeconomically speaking, I don’t see why we need more software engineers in the future than we have today, and that’s probably a conservative estimate.
datsci_est_2015 2 hours ago [-]
> Macroeconomically speaking, I don’t see why we need more software engineers in the future than we have today, and that’s probably a conservative estimate.
Why? Is the argument that there’s a finite amount of software that the world needs, and therefore we will more quickly reach that finite amount?
Seems more likely to me that if LLMs are a force multiplier for software then more software engineers will exist. Or, instead of “software engineers”, call them “people who create software” (even with the assistance of LLMs).
Or maybe the argument is that you need to be a super genius 100x engineer in order to manipulate 17 collaborative and competitive agents in order to reach your maximum potential, and then you’ll take everyone’s jobs?
Idk just seems like wild speculation that isn’t even worth me arguing against. Too late now that I’ve already written it out I guess.
cherk3 4 hours ago [-]
What I want to know is, what has this increase in code generation led to? What is the impact?
I don't mean 'Oh I finally have the energy to do that side project that I never could'.
After all, the trade-offs have to be worth something... right? Where are the 1-person billion-dollar firms that Mr. Altman spoke about?
The way I think of it is code has always been an intermediary step between a vision and an object of value. So is there an increase in this activity that yields the trade-offs to be a net benefit?
genghisjahn 10 hours ago [-]
I went the same way. At first I was splitting off work trees and running all the agents that I could afford, then I realized I just can't keep up with it all, running few agents around one issue in one directory is fast enough. Way faster than before and I can still follow what's happening.
paganel 9 hours ago [-]
> off work trees and running all the agents that I could afford,
I still think that we, programmers, having to pay money in order to write code is a travesty. And I'm not talking about paying the license for the odd text editor or even for an operating system, I'm talking about day-to-day operations. I'm surprised that there isn't a bigger push-back against this idea.
jeremyjh 9 hours ago [-]
What is strange about paying for tools that improve productivity? Unless you consider your own time worthless you should always be open to spending more to gain more.
cube00 5 hours ago [-]
No stock-backed company will be paying developers more regardless of how much more productive these tools make us. You'll be lucky if they pay for the proper Claude Max plan themselves, considering most wouldn't even spring for IntelliJ.
fwip 9 hours ago [-]
Are the jobs out there actually paying people more?
what 9 hours ago [-]
Your own time is worthless if you’re not spending it doing something that makes more money. You don’t make more money increasing your productivity for work when you’re expected to work the same number of hours.
mr-wendel 6 hours ago [-]
I've spent a fair amount of time contracting -- this issue is even more relevant here. While I wasn't spending very much on AI tools, what I did spend was worth every penny... for the company I was supporting :).
Fortunately, there was enough work to be done that productivity increases didn't decrease my billable hours. Even if they had, I still would have done it. If it helps me help others, then it's good for my reputation. That's hard to put a price on, but absolutely worth what I paid in this case.
eKIK 8 hours ago [-]
Dw, there's quite a lot of push back against AI in some of the communities I hang around in. It's just seldom visible here on HN.
It's usually not about the price, but more about the fact that a few megacorps and countries "own" the ability to work this way. This leads to some very real risks that I'm pretty sure will materialize at some point in time, including but not limited to:
- Geopolitical pressure - if some ass-hat of a president hypothetically were to decide "nuh uh - we don't like Spain, they're not being nice to us!", they could forbid AI companies to deliver their services to that specific country.
- Price hikes - if you can deliver "$100 worth of value" per hour, but "$1000 worth of value" per hour with the help of AI, then provider companies could still charge up to $899 per hour of usage and it'd still make "business sense" for you to use them since you're still creating more value with them than without them.
- Reduction in quality - I believe people who were senior developers _before_ starting to use AI-assisted coding are usually still capable of producing high-quality output. However, every single person I know who "started coding" with tools like Claude Code produces horrible, horrible software, esp. from a security p.o.v. Most of them just build "internal tools" for themselves, and I highly encourage that. However, others have pursued developing and selling more ambitious software... just to get bitten by the fact that there's much more to software development than getting semi-correct output from an AI agent.
- A massive workload on some open source projects. We've all heard about projects closing down their bug bounty programs, declining AI generated PRs etc.
- The loss of the joy - some people enjoy it, some people don't.
We're definitely still in the early days of AI assisted / AI driven coding, and no one really knows how it'll develop...but don't mistake the bubble that is HN for universal positivity and acclaim of AI in the coding space :).
GorbachevyChase 3 hours ago [-]
China did users a solid and Qwen is a thing, so the scenario where Anthropic/OpenAI/Google collude and segment the market to ratchet prices in unison just isn’t possible. Amodei talking about value based pricing is a dream unless they buy legislation to outlaw competitors. Altman might have beat them to that punch with this admin, though. Most of us are operating on 10-40% margins. Usually on the low end when there aren’t legal barriers. The 80-99% margins or rent extraction rights SaaS people expect is just out of touch. The revenue the big 3 already pull in now has a lot more to do with branding and fear-mongering than product quality.
switchbak 7 hours ago [-]
My old work machine used power quite aggressively - I was happy to pay for that (and turn it off at night!). This seems even more directly valuable.
xandrius 9 hours ago [-]
It's silly, who wouldn't answer yes to the question "would you like to finish your task faster?". The real trick is to produce more but by putting less effort than before.
ponector 35 minutes ago [-]
If you are paid hourly and not per task, then what is the point in finishing your task faster?
tdeck 4 hours ago [-]
> who wouldn't answer yes to the question "would you like to finish your task faster?"
People who enjoy the process of completing the task?
lmz 4 hours ago [-]
Maybe we'd see "coding gyms" like how white collar workers have gyms for the physical exercise they're not getting from their work.
the_af 7 hours ago [-]
If you finish faster, you'll be given another task. You're not freeing yourself sooner or spending less effort, you're working the same number of hours for the same pay. Your reward is not joining the ranks of those laid off.
ge96 10 hours ago [-]
I would be impressed if I could say "here's $100 turn it into $1000" but you still gotta do the thinking.
aray07 9 hours ago [-]
yup, agree - i spend most of my time reviewing the spec. The highest leverage time is now deciding what to work on and then working on the spec. I ended up building the verify skill (https://github.com/opslane/verify) because I wanted to ensure claude follows the spec. I have found that even after you have the spec - it can sometimes not follow it and it takes a lot of human review to catch those issues.
bhouston 12 hours ago [-]
I call this "Test Theatre" and it is real. I wrote about it last year:
Yeah, having your agent write 3x the code in exhaustive tests (I tried this recently and got 600 lines of tests for my 100 lines of code!) sure makes things look great, but when you actually look at the content of the tests they’re meaningless. Good tests validate the use of design patterns, ensure that dependencies hold, and are meaningful (e.g. shortcut debugging by setting up useful state) when they break.
joegaebel 7 hours ago [-]
I've found the best way to achieve that is to force the agent to do TDD. Better to get it to do Outside-in TDD. Even better to get it to run Outside-in TDD, then use mutation testing to ensure it has fully covered the logic.
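The mutation-testing idea, hand-rolled in a few lines (real tools like mutmut or Stryker automate and scale this): deliberately break the code and check that at least one test notices. A mutant that survives means the suite never really exercised that logic. All names here are hypothetical.

```python
def clamp(x, lo, hi):
    return max(lo, min(x, hi))

def test_clamp():
    assert clamp(5, 0, 10) == 5
    assert clamp(-1, 0, 10) == 0
    assert clamp(11, 0, 10) == 10

def survives(mutant) -> bool:
    """Run the suite against a mutated implementation.
    True means no test failed, i.e. the mutant survived."""
    global clamp
    original, clamp = clamp, mutant
    try:
        test_clamp()
        return True          # mutant survived: the tests missed the change
    except AssertionError:
        return False         # mutant killed: some test caught it
    finally:
        clamp = original

# A mutant that drops the upper bound should be killed by test_clamp.
assert survives(lambda x, lo, hi: max(lo, x)) is False
```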
This was really good, and I second leaning on property testing. I’ve had really good outcomes from setting up Schemathesis and getting blanket coverage for properties like “there should be no request you can generate as logged-in user A that lets you act as, or see data belonging to, user B”, as well as “there should be no request to any API endpoint that can trigger a 5xx response”
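Stripped of the HTTP layer, a property test is just "generate lots of inputs, assert an invariant holds for all of them". A minimal hand-rolled sketch (frameworks like Hypothesis and Schemathesis add smarter generation and shrinking; the `normalize_email` function and its idempotence property are made up for illustration):

```python
import random

def normalize_email(addr: str) -> str:
    return addr.strip().lower()

def test_normalize_is_idempotent(trials: int = 1000) -> None:
    """Property: normalizing twice equals normalizing once, for any
    input -- the same shape of claim as 'no request may 5xx'."""
    alphabet = "ABcd @._-"
    rng = random.Random(0)  # seeded so failures are reproducible
    for _ in range(trials):
        s = "".join(rng.choice(alphabet) for _ in range(rng.randint(0, 20)))
        once = normalize_email(s)
        assert normalize_email(once) == once, f"not idempotent for {s!r}"

test_normalize_is_idempotent()
```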
aray07 11 hours ago [-]
Test theatre is exactly the right framing. The tests are syntactically correct, they run, they pass but do they actually prove anything?
what 5 hours ago [-]
Test theatre isn’t new. Most people writing tests do the exact same thing, testing implementation.
lrytz 10 minutes ago [-]
blog looks suspicious
- privacy policy links to marketing company `beehiiv.com`. the blog author doesn't show up there.
- the profile picture url is `.../Generated_Image_March_03__2026_-_1_55PM.jpg.jpeg`
i didn't dig or read further.
wesselbindt 8 hours ago [-]
Does anyone know what this guy is having his agents build? Bc I looked a bit and all I see him ship is linkedin posts about Claude.
gedy 7 hours ago [-]
Yeah maybe I'm just old but in 25 years in industry - not one company has needed this much code that fast. They may insist they do but then it sits while they figure out how to sell, or the inevitable "oh wait, we didn't think about that..."
lostapathy 6 hours ago [-]
So much of this - never would have guessed how much code I wouldn't write doing this as a career.
hinkley 7 hours ago [-]
Hurry up and wait.
RealityVoid 12 hours ago [-]
It's... really the same problem as when you hire people to just write tests. A lot of the time it just confirms that the code does what the code does. Having clear specs of what the code should do makes things better and clearer.
SoftTalker 12 hours ago [-]
Yep, tests written after the fact are just verifying tautologies.
> Most teams don't [write tests first] because thinking through what the code should do before writing it takes time they don't have.
It's astonishing to me how much our industry repeats the same mistakes over and over. This doesn't seem like what other engineering disciplines do. Or is this just me not knowing what it looks like behind the curtain of those fields?
yurishimo 11 hours ago [-]
When push comes to shove, software can usually be fudged. Unlike a building or a water treatment plant where the first fuck up could mean that people die.
I like to think that people writing actual mission critical software try their absolute best to get it right before shipping and that the rest our industry exists in a totally separate world where a bug in the code is just actually not that big of a deal. Yeah, it might be expensive to fix, but usually it can be reverted or patched with only an inconvenience to the user and to the business.
It’s like the fines that multinational companies pay when breaking the law. If it’s a cost of doing business, it’s baked into the price of the product.
You see this also in other industries. OSHA violations on a residential construction site? I bet you can find a dozen if you really care to look. But 99% of the time, there are no consequences big enough for people to care so nobody wears their PPE because it “slows them down” or “makes them less nimble”. Sound familiar?
AlotOfReading 3 hours ago [-]
> I like to think that people writing actual mission critical software try their absolute best to get it right before shipping.
People try, but the only fundamentally different part is that you spend time thinking about and documenting your process rather than just doing it. There's always one more bug. Usually there ends up being a human covering up for the system's failures somewhere that no one else notices. That's the driver in the car, or the factory tech who adjusts things just a bit.
girvo 5 hours ago [-]
Quite. We’re far more similar to construction workers than we are civil engineers, despite the lofty title we like to bestow upon ourselves.
Ekaros 1 hours ago [-]
And from a slightly different view: what we make is not the output of modern mass production, with highly tuned and almost always perfectly matching parts built into one unit.
Instead we make pre-mass-production bespoke products, where each part is slightly filed and fitted together from a bunch of random components; say, a barrel that can't be swapped between two different handguns. We just have magic technology to replicate the single gun multiple times. That doesn't mean it is actually mass-produced in the sense that, say, our current power tools are.
gonzalohm 9 hours ago [-]
That's probably dependent on your specific area of work. For most projects, it's okayish to deploy code with bugs. There will be future releases that fix those bugs and add improvements. Obviously that's not the case with high-risk systems like space rocket software and similar.
With other engineering professions, all projects are like that. You cannot "deploy a bridge to production" to see what happens and fix it after a few have died
tibbar 12 hours ago [-]
a lot of the value of tests is confirming that the system hasn't regressed beyond the behavior at the original release. It's bad if the original release is wrong, but a separate issue is if the system later accidentally stops behaving the way it did originally.
InsideOutSanta 11 hours ago [-]
The issue I see is that the high test coverage created by having LLMs write tests results in almost all non-trivial changes breaking tests, even if they don't change behavior in ways that are visible from the outside. In one project I work on, we require 100% test coverage, so people just have LLMs write tons of tests, and now every change I make to the code base breaks tests.
So now people just ignore broken tests.
> Claude, please implement this feature.
> Claude, please fix the tests.
The only thing we've gained from this is that we can brag about test coverage.
hinkley 6 hours ago [-]
My best unit tests are 3 lines, one of them whitespace, and they assert one single thing that's in the requirements.
These are the only tests I've witnessed people delete outright when the requirements change. Anything more complex than this, they'll worry that there's some secondary assertion being implied by a test so they can't just delete it.
Which, really, is just experience telling them that the code smells they see in the tests are actually part of the test.
meanwhile:
it("only has one shipping address", ...
is demonstrably a dead test when the story is "allow users to have multiple shipping addresses", as is a test that makes sure balances can't go negative when we decide to allow a 5-day grace period on account balances. But if it's just one of six asserts in the same massive test, then people get nervous and start losing time.
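In pytest terms, the kind of test described above might look like this (the `Account` class and the no-negative-balance requirement are hypothetical):

```python
from dataclasses import dataclass

@dataclass
class Account:
    balance: int

    def withdraw(self, amount: int) -> bool:
        """Refuse withdrawals that would drive the balance negative."""
        if amount > self.balance:
            return False
        self.balance -= amount
        return True

# Three lines, one of them whitespace, one assertion traceable to a
# single requirement. When that requirement dies (say, a grace period
# later allows negative balances), the whole test can be deleted
# without anyone worrying about hidden secondary assertions.
def test_balance_cannot_go_negative():
    account = Account(balance=10)

    assert account.withdraw(25) is False

test_balance_cannot_go_negative()
```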
ForHackernews 10 hours ago [-]
Unit tests vs acceptance tests. You shouldn't be afraid to throw away unit tests if the implementation changes, and acceptance tests should verify behavior at API boundaries, ignoring implementation details.
hinkley 6 hours ago [-]
BDD helps with this as it can allow you to get the setup out of the tests making it even cheaper for someone to yeet a defunct test.
mattmanser 10 hours ago [-]
I feel it ends up being a massive drag on development velocity and makes refactoring to simpler designs incredibly painful.
But hey, we're just supposed to let the AIs run wild and rewrite everything every change so maybe that's a heretic view.
9 hours ago [-]
aray07 12 hours ago [-]
yup agree - i think: have specs, then do verifications against the spec. I have heard that this is how a lot of consulting firms work - you have acceptance criteria and that's how work is validated.
seanmcdirmid 11 hours ago [-]
I've been doing differential testing in Gemini CLI using sub-agents. The idea is:
1. one agent writes/updates code from the spec
2. one agent writes/updates tests from identified edge cases in the spec.
3. a QA agent runs the tests against the code. When a test fails, it examines the code and the test (the only agent that can see both) to determine blame, then gives feedback to the code and/or test writing agent on what it perceives the problem as so they can update their code.
(repeat 1 and/or 2 then 3 until all tests pass)
Since the code can never fix itself to directly pass the test and the test can never fix itself to accept the behavior of the code, you have some independence. The failure case is that the tests simply never pass, not that the test writer and code writer agents both have the same incorrect understanding of the spec (which is very improbable, like something that won't happen before the heat death of the universe; it is much more likely the spec isn't well grounded/is ambiguous/is contradictory, or the problem is too big for the LLM to handle, and so the tests simply never wind up passing).
jeremyjh 10 hours ago [-]
Where is the interface defined ? If it is just the coder reading the test it can hard code specific cases based on the test setup/fixture data.
seanmcdirmid 9 hours ago [-]
There is a specification and the interface is defined from that. The coder never gets to see the test.
simonpure 5 hours ago [-]
I've been impressed by Google Jules since the Gemini 3.1 Pro update. Sometimes it's been working on a task for 4h. I've now put it in a ralph loop using a Github Action to call itself and auto merge PRs after the linter, formatter and tests pass. It does still occasionally want my approval, but most of the time I just say Sounds great!
> A few weeks ago I realized I had no reliable way to know if any of it was correct: whether it actually does what I said it should do.
I can't understand the mindset that would lead someone not to have realized this from the beginning.
olalonde 2 hours ago [-]
Somewhat unrelated but are there good boilerplate/starter repos that are optimized for agent based development? Setting up the skills/MCPs/AGENTS.md files seems like a lot of work.
daxfohl 10 hours ago [-]
Sounds like we've just gotten into lazy mode where we believe that whatever it spits out is good enough. Or rather, we want to believe it, and convince ourselves that some simple guardrail we put up will make it true, because God forbid we have to use our own brain again.
What if instead, the goal of using agents was to increase quality while retaining velocity, rather than the current goal of increasing velocity while (trying to) retain quality? How can we make that world come to be? Because TBH that's the only agentic-oriented future that seems unlikely to end in disaster.
rglover 9 hours ago [-]
You can't. To retain and improve quality requires care. Very few if any of the people setting stuff like this up truly care about delivering a quality result (any result is the real goal). Unless there's some incentive to care, quality will be found among the exceedingly rare people/businesses.
TonyAlicea10 10 hours ago [-]
You can find approaches that improve things, but there's always going to be a chance that your code is terrible if you let an LLM generate it and don't review it with human eyes.
But review fatigue and resulting apathy is real. Devs should instead be informed if incorrect code for whatever feature or process they are working on would be high-risk to the business. Lower-risk processes can be LLM-reviewed and merged. Higher risk must be human-reviewed.
If the business you're supporting can't tolerate much incorrectness (at least until discovered), then guess what - you aren't going to get much speed increase from LLMs. I've written about and given conference talks on this over the past year. Teams can improve this problem at the requirements level: https://tonyalicea.dev/blog/entropy-tolerance-ai/
afro88 12 hours ago [-]
I guess to reach this point you have already decided you don't care what the code looks like.
Something I'm starting to struggle with is when agents can now do longer and more complex tasks, how do you review all the code?
Last week I did about 4 weeks of work over 2 days, first with long running agents working against plans and checklists, then smaller task clean ups, bugfixes and refactors. But all this code needs to be reviewed by myself and members from my team. How do we do this properly? It's like 20k lines of changes over 30-40 commits. There's no proper solution to this problem yet.
One solution is to start from scratch again, using this branch as a reference, to reimplement in smaller PRs. I'm not sure this would actually save time overall though.
tdeck 4 hours ago [-]
If you haven't reviewed the code yet, how can you say it did 4 weeks of work in 2 days? You haven't verified the correctness, and besides reviewing the code is part of the work.
eikenberry 8 hours ago [-]
The proper solution is to treat the agent generated code like assembly... IE. don't review it. Agents are the compiler for your inputs (prompts, context, etc). If you care about code quality you should have people writing it with AI help, not the other way around.
dumpsterdiver 6 hours ago [-]
You’re not alone. I went from being a mediocre security engineer to a full time reviewer of LLM code reviews last week. I just read reports and report on incomplete code all day. Sometimes things get humorously worse from review to review. I take breaks by typing out the PoCs the LLMs spell out for me…
krater23 3 hours ago [-]
I'm a security engineer too, and if it really comes to the point where I only review LLM code, I'll refuse to do it for less than double my hourly rate.
lbreakjai 8 hours ago [-]
> Something I'm starting to struggle with is when agents can now do longer and more complex tasks, how do you review all the code?
Same as before. Small PRs, accept that you won't ship a month of code in two days. Pair program with someone else so the review is just a formality.
The value of the review is _also_ for someone else to check if you have built the right thing, not just a thing the right way, which is exponentially harder as you add code.
akshaysg 11 hours ago [-]
I've been thinking a lot about this!
Redoing the work as smaller PRs might help with readability, but then you get the opposite problem: it becomes hard to hold all the PRs in your head at once and keep track of the overall purpose of the change (at least for me).
IMO the real solution is figuring out which subset of changes actually needs human review and focusing attention there. And even then, not necessarily through diffs. For larger agent-generated changes, more useful review artifacts may be things like design decisions or risky areas that were changed.
kg 11 hours ago [-]
It sounds like you know this but what happened is that you didn't do 4 weeks of work over 2 days, you got started on 4 weeks of work over 2 days, and now you have to finish all 4 weeks worth of work and that might take an indeterminate amount of time.
If you find a big problem in commit #20 of #40, you'll have to potentially redo the last 20 commits, which is a pain.
You seem to be gated on your review bandwidth and what you probably want to do is apply backpressure - stop generating new AI code if the code you previously generated hasn't gone through review yet, or limit yourself to say 3 PRs in review at any given time. Otherwise you're just wasting tokens on code that might get thrown out. After all, babysitting the agents is probably not 'free' for you either, even if it's easier than writing code by hand.
Of course if all this agent work is helping you identify problems and test out various designs, it's still valuable even if you end up not merging the code. But it sounds like that might not be the case?
Ideally you're still better off, you've reduced the amount of time being spent on the 'writing the PR' phase even if the 'reviewing the PR' phase is still slow.
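That backpressure can even be made mechanical: a small gate that refuses to start new agent tasks while too many generated PRs sit unreviewed. A toy sketch (the limit of 3 is arbitrary; in practice the in-review count might come from a `gh pr list` call):

```python
from collections import deque

class ReviewGate:
    """Backpressure for agent-generated PRs: new tasks only start
    when fewer than `limit` PRs are waiting on human review."""
    def __init__(self, limit=3):
        self.limit = limit
        self.in_review = deque()

    def try_start(self, task):
        if len(self.in_review) >= self.limit:
            return False          # reviewer is the bottleneck; don't burn tokens
        self.in_review.append(task)
        return True

    def mark_reviewed(self):
        # a human finished reviewing the oldest PR; free a slot
        if self.in_review:
            self.in_review.popleft()
```

Trivial on its own, but wiring it into whatever spawns the agents is the point: generation stops when review stops.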
kwanbix 11 hours ago [-]
So you have become a reviewer instead of a programmer? Is that so? Honest question. And if so, what is the advantage of looking at code for 12 hours instead of coding for 12?
woah 7 hours ago [-]
Build features faster. Granted, this exposes the difference between people who like to finish projects and people who like to get paid a lot of money for typing on a keyboard.
krater23 3 hours ago [-]
Bullshit! Your project isn't finished as long as there are obvious major bugs that you can't fix because you don't understand the code.
logicchains 11 hours ago [-]
>Last week I did about 4 weeks of work over 2 days, first with long running agents working against plans and checklists, then smaller task clean ups, bugfixes and refactors. But all this code needs to be reviewed by myself and members from my team. How do we do this properly? It's like 20k lines of changes over 30-40 commits. There's no proper solution to this problem yet.
Get an LLM to generate a list of things to check based on those plans (and pad that out yourself with anything important to you that the LLM didn't add), then have the agents check the codebase file by file for those things and report any mismatches to you. As well as some general checks like "find anything that looks incorrect/fragile/very messy/too inefficient". If any issues come up, ask the agents to fix them, then continue repeating this process until no more significant issues are reported. You can do the same for unit tests, asking the agents to make sure there are tests covering all the important things.
aray07 11 hours ago [-]
yeah honestly thats what i am struggling with too and I dont have a good solution. However, I do think we are going to see more of this - so it will be interesting to see how we are going to handle this.
i think we will need some kind of automated verification so humans are only reviewing the “intent” of the change. started building a claude skill for this (https://github.com/opslane/verify)
afro88 10 hours ago [-]
It's a nice idea, but how do you know the agent is aligned with what it thinks the intent is?
8note 7 hours ago [-]
or moreso, what happens at compact boundaries where the agent completely forgets the intent
zer00eyz 11 hours ago [-]
> how do you review all the code?
Code review is a skill, as is reading code. You're going to quickly learn to master it.
> It's like 20k of line changes over 30-40 commits.
You run it, in a debugger and step through every single line along your "happy paths". You're building a mental model of execution while you watch it work.
> One solution is to start from scratch again, using this branch as a reference, to reimplement in smaller PRs. I'm not sure this would actually save time overall though.
Not going to be a time saver, but next time you want to take nibbles and bites, and then merge the branches in (with the history). The hard lesson here is around task decomposition, in line documentation (cross referenced) and digestible chunks.
But if you get step debugging running and do the hard thing of getting through reading the code you will come out the other end of the (painful) process stronger and better resourced for the future.
afro88 10 hours ago [-]
Oh I didn't mean literally how do I review code. I meant, if an agent can write a lot of code to achieve a large task that seemingly works (from manual testing), what's the point if we haven't really solved code review? There's still that bottleneck no matter how fast you can get working code down.
ziofill 4 hours ago [-]
> Writing acceptance criteria is harder than writing a prompt, because it forces you to think through edge cases before you've seen them. Engineers resist it for the same reason they resisted TDD, because it feels slower at the start.
This resonates with my experience, and it is also a refreshingly honest take: pushing back on heavy upfront process isn't laziness, it's just engineers' natural drive to build things and feel productive.
jdlshore 12 hours ago [-]
Pet peeve: this post misunderstands “TDD.” What it really describes is acceptance tests.
TDD is a tool for working in small steps, so you get continuous feedback on your work as you go, and so you can refine your design based on how easy it is to use in practice. It’s “red green refactor repeat”, and each step is only a handful of lines of code.
TDD is not “write the tests, then write the code.” It’s “write the tests while writing the code, using the tests to help guide the process.”
Thank you for coming to my TED^H^H^H TDD talk.
wnevets 11 hours ago [-]
> TDD is a tool for working in small steps, so you get continuous feedback on your work as you go, and so you can refine your design based on how easy it is to use in practice.
I would like to emphasize that feedback includes being alerted to breaking something you previously had working in a seemingly unrelated/impossible way.
hinkley 6 hours ago [-]
Accidentally mutating an input is always a 'fun' way to trigger spooky action at a distance.
hinkley 6 hours ago [-]
suggestion: TeDD talk.
itissid 9 hours ago [-]
Many times there is really no way of getting around some of the expert-human judgement complexity of the larger question of "How to get agents to build reliably".
One example I have been experimenting with is Learning Tests[1]. The idea is that when something new is introduced in the system, the agent must execute a high-value test to teach itself how to use this piece of code. Because these should be high leverage, i.e. they can really help anyone understand the code base better, they should be exceptionally well chosen for AIs to iterate against. But again, this is just the expert-human judgement complexity shifted to identifying these for the AI to learn from. In code bases that add millions of LoC of new features in days, this would require careful work by the human.
I think the idea of running agents while you sleep isn't going to work until AI can match or exceed human-level agency and intelligence.
Whenever I coded any serious solution as a technical co-founder, every single day there was a major new debate about the product direction. Though we made massive 'progress' and built out a whole new universe in software, we haven't yet managed to find product market fit. It's like constant tension. If the intelligence of two relatively intelligent humans with a ton of experience and complimentary expertise isn't enough to find product-market-fit after one year, this gives you an idea about how high the bar is for an AI agent.
It's like the problem was that neither me nor my domain expert co-founder who had been in his industry for over 15 years had a sufficiently accurate worldview about the industry or human psychology to be able to produce a financially viable solution. Technically, it works perfectly but it just doesn't solve anyone's problem.
So just imagine how insanely smart AI has to be to compete in the current market.
Maybe you could have 100 agents building and promoting 100 random apps per day... But my feeling is that you're going to end up spending more money on tokens and domain names than you will earn in profits. Maybe deploy them all under the same domain with different subdomains? Not great for SEO... Still, the market for all these basic low-end apps is going to be extremely competitive.
hermit_dev 6 hours ago [-]
It's an interesting problem: even though you're facing it as a single person, I think it's shared across the board with larger corporations at scale. I know, for example, they were seeing this with game devs around the Godot engine. So many people were uploading unverified work done by AI that maintainers just couldn't keep up with it. And maybe some of it's good, but how do you vet all the crap out? No one knows what's being written anymore (and non-devs can code now too, which is amazing, but part of the problem we introduced). I think being a developer in the future will be more about verifying code integrity and working with AI to ensure it meets said standards, rather than actually being in the driver's seat. Not sexy, but we're handing the keys over willingly; yet AI is only interpreting intent. It's going to get things wrong no matter what we do.
Lasang 6 hours ago [-]
The concept of long-running background agents sounds appealing, but the real challenge tends to be reliability and task definition rather than raw model capability.
If an agent runs unattended for hours, small errors compound quickly. Even simple misunderstandings about file structure or instructions can derail the whole process.
firstdata 4 hours ago [-]
The hardest part of running agents autonomously is the data quality problem. When your agent runs unsupervised, every decision is only as good as the data it pulls. Having agents access authoritative structured sources (government APIs, international org datasets) rather than scraping random pages makes a huge difference. The real failure mode is not hallucination - it is the agent confidently acting on unreliable data.
anonnon 19 minutes ago [-]
Somewhat off topic, but any theories as to why the shilling for Claude (not insinuating that's what the OP is doing) is so transparent? For example, the bots/shills often go out of their way to insist you get the $200 plan, in particular. If Anthropic's product is so good: 1) why must it be shilled so hard, and 2) why is the shilling (which is partially a result of the product) so obvious? Is this an OpenAI reverse psychology dirty trick, the equivalent of using robocalls to inundate voters with messages telling them to vote for your opponent so as to negatively dispose them towards your opponent?
jc-myths 6 hours ago [-]
Solo founder here, shipping a real product built mostly with AI.
The code review thing is real but my actual daily pain is different.
AI lies about being done. It'll say "implemented" and what it actually did is add a placeholder with a TODO comment. Or it silently adds a fallback path that returns hardcoded data when the real API fails, and now your app "works" but nothing is real.
I've also given it explicit rules like "never use placeholder images, always generate real assets" — and it just... ignores them sometimes. Not always. Sometimes. Which is worse, because you can't trust it but you also can't not use it.
The 80% it writes is fine. The problem is you still have to verify 100% of it.
cube00 5 hours ago [-]
Have you tried using an additional agent to verify the outputs? It seems that can help if the supervising agent has a small context demand on it. (ie. run this command, make sure it returns 0, invoke main coding agent with error message if it doesn't)
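One cheap version of this: the supervising side doesn't have to be an agent at all, just a wrapper that runs the check and hands only the failure output back to the coding agent. A rough sketch, where `invoke_agent` stands in for however you actually call the main agent (it's a placeholder, not a real API):

```python
import subprocess

def verify_and_retry(check_cmd, invoke_agent, max_rounds=3):
    """Run a cheap external check; on failure, hand only the error
    output back to the coding agent and try again."""
    for _ in range(max_rounds):
        result = subprocess.run(check_cmd, capture_output=True, text=True)
        if result.returncode == 0:
            return True  # check passed, nothing to fix
        # keep the supervising context small: pass only the failure text
        invoke_agent(f"Fix this failure:\n{result.stderr or result.stdout}")
    return False
```

This is exactly the "small context demand" point: the verifier only ever sees an exit code and an error message, so it can't inherit the coder's blind spots.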
jc-myths 47 minutes ago [-]
Yeah I've experimented with that pattern. The meta-agent approach works for catching obvious stuff, like "did the build pass" or "does this file actually exist." But the harder bugs are semantic. The agent writes a function that returns the right shape of data but with wrong values, or adds a fallback that masks the real failure. A supervising agent reading the same code often has the same blind spots.
What's worked better for me is building verification into the workflow itself, like explicit test assertions the agent has to pass before it can claim "done," plus a rule that any API call must show a real response, not a mock. Basically treating the AI like a junior dev who needs guard rails, not a senior who just needs a code review.
storus 11 hours ago [-]
Wasn't the best practice to run one model/coding agent that writes the code and another one that reviews it? E.g. Claude Code for writing the code, GPT Codex to review/critique it? Different reward functions.
8note 6 hours ago [-]
even in one agent, a different starting prompt will have you tracing a very different path through the model.
maybe it still sends you to the same valley, but there's so many parameters and dimensions that i dont think its very likely without also being correct
xandrius 9 hours ago [-]
I think people are misunderstanding reward functions and LLMs.
LLMs don't actually have a reward system like some other ML models.
storus 7 hours ago [-]
They are trained with one, and when you look at DPO you can say they contain an implicit one as well.
throwatdem12311 6 hours ago [-]
It’s superstition that using a different slop generator to “review” the slop from a different brand of slop generator somehow makes things better. It’s slop all the way down.
This is TDD? Tests first, then code?
I've done the docs first, then the tests, then the code, for years.
What he describes is like that. Just that the plan step is suggesting docs, not writing actual docs.
godelski 2 hours ago [-]
TDD has always been flawed. Tests can't give you complete coverage; they are always incomplete. Though every time I say this, people think I'm against tests. I'm just saying tests can't prove correctness. You'd have to be a lunatic to think they are proofs. Even crazier is having the LLMs write their own tests and thinking that's proof. I'm sure it improves things, but proofs are a different beast altogether.
Seems things still haven't changed in half a century
Of course tests are not proofs. For proofs I do 'make verify' :)
Tests just catch the most simple mistakes, edge cases and some regressions.
throwaway7783 6 hours ago [-]
Regarding the self-congratulation machine - I simply use a different claude code session to do the reviews. There is no self-congratulation, but overly critical at times. Works well.
Honestly, sometimes the harnesses, specs, predefined structure for skills etc. all feel like over-engineering. 99% of the time a bloody prompt will do. Claude Code is capable of planning, spawning sub-agents, writing tests and so on.
A Claude.md file with general guidelines about our repo has worked extraordinarily well, without any external wrappers, harnesses or special prompts. The MD file has no specific structure, just instructions or notes in English.
OsrsNeedsf2P 11 hours ago [-]
Our app is a desktop integration and last year we added a local API that could be hit to read and interact with the UI. This unlocked the same thing the author is talking about - the LLM can do real QA - but it's an example of how it can be done even in non-web environments.
Edit: I even have a skill called release-test that does manual QA for every bug we've ever had reported. It takes about 10 hours to run but I execute it inside a VM overnight so I don't care.
8note 7 hours ago [-]
i got me a windows mcp setup running in a sandbox, so it can look at screenshots, see the UIA, and click things either by coordinate or by UIA.
i let it run overnight against a windows app i was working on, and that got it from mostly not working to mostly working.
the loop was
1. look at the code and specs to come up with tests
2. predict the result
3. try it
4. compare the prediction against the result
5. file bug report, or call it a success
and then switch to bug fixing, and go back around again. Worked really well in geminicli with the giant context window
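That predict-then-compare loop generalizes beyond a Windows MCP setup; a rough sketch of the harness side (the `predict` and `run_test` callables are placeholders for whatever the agent actually does, not real APIs):

```python
def qa_cycle(tests, predict, run_test):
    """For each generated test: predict the outcome first, run it,
    and classify mismatches as bugs to file rather than silent passes."""
    bugs, successes = [], []
    for test in tests:
        expected = predict(test)   # step 2: predict the result
        actual = run_test(test)    # step 3: try it
        if actual == expected:     # step 4: compare
            successes.append(test)
        else:
            bugs.append({"test": test, "expected": expected, "actual": actual})
    return bugs, successes         # step 5: file bug reports, or call it a success
```

Forcing a prediction before the run is what makes this more than "click around and see": a surprising pass is as informative as a failure.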
overfeed 9 hours ago [-]
> At some point you're not reviewing diffs at all, just watching deploys and hoping something doesn't break.
To everyone who plan on automating themselves out of a job by taking the human element out- this is the endgame that management wants: replacing your (expensive and non-tax-optimized) labor with scalable Opex.
hinkley 7 hours ago [-]
It's also delusional.
Havoc 12 hours ago [-]
They're definitely inferior to proper tests, but even weak CC tests on top of CC code is an improvement over no tests. If CC does make a change that shifts something dramatically even a weak test may flag enough to get CC to investigate.
Even better though - external test suites. I recently made an S3 server, of which the LLM made quick work for an MVP. Then I found a Ceph S3 test suite that I could run against it and oh boy. It ended up working really well as TDD though.
aray07 12 hours ago [-]
yeah i have been hearing a lot more about this concept of “digital twins” - where you have high fidelity versions of external services to run tests against. You can take the API docs of these external services and give them to Claude. Wonder if that is the direction we will be going.
didgeoridoo 12 hours ago [-]
Isn’t this just an API sandbox? Many services have a test/sandbox mode. I do wish they were more common outside of fintech.
8 hours ago [-]
lateforwork 12 hours ago [-]
> When Claude writes tests for code Claude just wrote, it's checking its own work.
You can have Gemini write the tests and Claude write the code. And have Gemini do review of Claude's implementation as well. I routinely have ChatGPT, Claude and Gemini review each other's code. And having AI write unit tests has not been a problem in my experience.
aray07 12 hours ago [-]
yeah i have started using codex to do my code reviews and it helps to have “a different llm” - i think one of my challenges has been that unit tests are good but not always comprehensive. you still need functional tests to verify the spec itself.
xandrius 9 hours ago [-]
I don't think that's necessary, just make sure the context is not shared. A pretty good model can handle both sides well enough.
wg0 5 hours ago [-]
All these macho men - I wonder what exactly are they shipping at that pace?
Not a rhetorical question. Trillion-token burners and such.
gormen 2 hours ago [-]
Different approach: copy the programmer's logic, not the agent's behavior.
silentsvn 10 hours ago [-]
One thing I've been wrestling with building persistent agents is memory quality. Most frameworks treat memory as a vector store — everything goes in, nothing gets resolved. Over time the agent is recalling contradictory facts with equal confidence.
The architecture we landed on: ingest goes through a certainty scoring layer before storage. Contradictions get flagged rather than silently stacked. Memories that get recalled frequently get promoted; stale ones fade.
It's early but the difference in agent coherence over long sessions is noticeable. Happy to share more if anyone's going down this path.
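For anyone going down this path: the shape described (certainty scoring on ingest, flagged contradictions, promotion on recall, decay for stale entries) can be prototyped quickly. A toy sketch, with every field name and threshold an assumption of mine rather than their actual architecture:

```python
class MemoryStore:
    """Toy certainty-scored memory layer: contradictions are flagged
    instead of silently stacked, recall promotes a memory's certainty,
    and unused memories decay over time."""
    def __init__(self):
        self.memories = {}  # key -> {"value", "certainty", "flags"}

    def ingest(self, key, value, certainty):
        existing = self.memories.get(key)
        if existing is None:
            self.memories[key] = {"value": value, "certainty": certainty, "flags": []}
        elif existing["value"] == value:
            # corroboration: keep the higher certainty
            existing["certainty"] = max(existing["certainty"], certainty)
        else:
            # contradiction: flag it, keep whichever side is more certain
            existing["flags"].append(value)
            if certainty > existing["certainty"]:
                existing["value"], existing["certainty"] = value, certainty

    def recall(self, key):
        m = self.memories.get(key)
        if m is None:
            return None
        m["certainty"] = min(1.0, m["certainty"] + 0.05)  # promote on use
        return m["value"]

    def decay(self, rate=0.01):
        for m in self.memories.values():
            m["certainty"] = max(0.0, m["certainty"] - rate)  # stale memories fade
```

The flagged contradictions are the interesting part: they're a queue of things the agent (or a human) should go resolve, instead of two "facts" retrieved with equal confidence.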
uxcolumbo 40 minutes ago [-]
Sounds interesting, would like to learn more about this.
How do you implement the scoring layer, and when and how is it invoked?
zhangchen 2 hours ago [-]
certainty scoring sounds useful but fwiw the harder problem is temporal - a fact that was true yesterday might be wrong today, and your agent has no way to know which version to trust without some kind of causal ordering on the writes.
girvo 5 hours ago [-]
Interesting. I’ve been playing with something similar, at the coding agent harness message sequence level (memory, I guess). I’m looking at human driven UX for compaction and resolving/pruning dead ends
vidimitrov 10 hours ago [-]
He admits the real hole himself: "this doesn't catch spec misunderstandings. If your spec was wrong to begin with, the checks will pass."
But there's a second problem underneath that one. Acceptance criteria are ephemeral. You write them before prompting, Playwright runs against them, and then where do they go? A Notion doc. A PR comment. Nowhere permanent. Next time an agent touches that feature, it's starting from zero again.
The commit that ships the feature should carry the criteria that verified it. Git already travels with the code. The reasoning behind it should too.
dwaltrip 10 hours ago [-]
Did AI write this?
vidimitrov 9 hours ago [-]
Nope - though I’ll take it as a compliment either way. It’s a problem I’ve been sitting with for a while, so the answer came out more formed than I expected. You disagree?
rrvsh 8 hours ago [-]
Its actually a pretty good idea/framework for writing commit descriptions, especially for smaller changes that don't have any nuances to note in the commit
svstoyanovv 7 hours ago [-]
Why only small changes tho? I think it can also work with larger changes if you commit more regularly. And with agentic coding or even with autonomous agentic coding, you need to do it regularly and create these contextual checkpoints, no?
dwaltrip 5 hours ago [-]
It has that punchy, breathless cadence... shrugs
12 hours ago [-]
skyberrys 7 hours ago [-]
To me the last paragraph was the highest value in the article. Write out your test in plain language first, and then write the prompt for the autonomous agent using your language and the test prompt not the auto-code.
shawntwin 5 hours ago [-]
There seems to be a lot of preparing, planning, token buying, and goal setting, plus the token cost, just for niche-targeted vibe coding.
pokstad 4 hours ago [-]
Took a superintelligent AI for us to realize how important tests and TDD are.
throwyawayyyy 12 hours ago [-]
I am afraid that we are heading to a world in which we simply give up on the idea of correct code as an aspiration to strive for. Of course code has always been bad, and of course good code has never been a goal in the whole startup ecosystem (for perfectly legitimate reasons!). But real production code, for services that millions or even billions of people rely on, should be reliable; if it breaks, that's a problem. This is the whole _engineering_ part of software engineering. And we can say: if we give that up, we're going to have a whole lot more outages, security issues, all those things we are meant to minimize as a profession. And the answer is going to be: so what? We save money overall. And people will get used to software being unreliable; which is to say, people will not have a choice but to get used to it.
lbreakjai 8 hours ago [-]
I disagree. An analytics tool that's correct 99.9% of the time is not 0.1% less valuable than a tool that is always correct. It's 100% less valuable.
Outage is the easy failure mode. I can work around a service that's up 80% of the time, but is 100% correct. A service that's up 100% of the time but is 80% correct is useless.
throwyawayyyy 6 hours ago [-]
Well hang-on, in this case it is _neither_ reliable in terms of availability _nor_ correctness. Worst of all worlds.
osigurdson 9 hours ago [-]
I think the solution has to be end to end tests. Maybe first run by humans, then maybe agents can learn and replicate. I can't see why unit tests really help other than for the LLM to reason about its own code a little more.
digitalPhonix 12 hours ago [-]
> Changes land in branches I haven't read. A few weeks ago I realized I had no reliable way to know if any of it was correct: whether it actually does what I said it should do.
I care about this. I don't want to push slop, and I had no real answer.
That’s really putting the cart before the horse. How do you get to “merging 50 PRs a week” before thinking “wait, does this do the right thing?”
aray07 12 hours ago [-]
Yeah just wanted to see what the bottlenecks would be as I started pushing the limits. Eventually made this into a verification skill (github.com/opslane/verify)
akhrail1996 4 hours ago [-]
Honestly I think the "same AI checking same AI" concern is a bit overstated at this point. If the agents don't share context - separate conversations, no common memory - Opus is good enough that they don't really fall into the same patterns. At least at the micro level, like individual functions and logic. Maybe at the macro/architectural level there's still something there but in practice I'm not seeing it much anymore.
BeetleB 12 hours ago [-]
I wish there was a way to "freeze" the tests. I want to write the tests first (or have Claude do it with my review), and then I want to get Claude to change the code to get them to pass - but with confidence that it doesn't edit any of the test files!
simlevesque 12 hours ago [-]
I use devcontainers in all the projects I use claude code on. [1] With it you can have claude running inside a container with just the project's code in write access and also mount a test folder with just read permissions, or do the opposite. You can even have both devcontainers and run them at the same time.
If you want to try it just ask Claude to set it up for your project and review it after.
comradesmith 12 hours ago [-]
1. Make tests
2. Commit them
3. Proceed with implementation and tell agent to use the tests but not modify them
It will probably comply, and at least if it does change the tests you can always revert those files to where you committed them
tavavex 11 hours ago [-]
Are there really no ways to control read/write permissions in a smart way? I've not had to do this yet, but is it really only capable of either being advisory with you implementing all the code, or it having full control over the repo where you just hope nothing important is changed?
You could probably make a system-level restriction so the software physically can't modify certain files, but I'm not sure how well that's going to fly if the program fails to edit it and there's no feedback of the failure.
mgrassotti 11 hours ago [-]
You can use a Claude PreToolUse command hook to prevent write (or even read) access to specific files.
With this approach you can enforce that Claude cannot access specific files. It’s a guarantee and will always work, unlike a prompt or Claude.md, which is just a suggestion that can be forgotten or ignored.
This post has an example hook for blocking access to sensitive files:
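Sketching the general shape of such a hook (not the one from the linked post): Claude Code passes the pending tool call to the hook as JSON on stdin, and exiting with status 2 rejects the call, with stderr fed back to the model. The payload fields and the `tests/` prefix here are assumptions to adapt against the hooks docs for your version:

```python
import json
import sys

def is_blocked(payload, frozen_prefix="tests/"):
    """Return True if this tool call targets a frozen test file.
    Edit/Write calls carry the target path in tool_input."""
    path = payload.get("tool_input", {}).get("file_path", "")
    return path.startswith(frozen_prefix) or f"/{frozen_prefix}" in path

def main():
    call = json.load(sys.stdin)      # the pending tool call, as JSON
    if is_blocked(call):
        print("blocked: frozen test file", file=sys.stderr)
        sys.exit(2)                  # exit code 2 rejects the tool call
    sys.exit(0)                      # anything else proceeds normally
```

Wire `main()` up under an `if __name__ == "__main__":` guard and register the script as a PreToolUse hook for Edit/Write tools in your settings.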
No. I don't want the mental burden of auditing whether it modified the tests.
vitro 11 hours ago [-]
Then, run the agent vm-sandboxed, with tests mounted as a read-only directory, if your directory structure allows it.
jsw97 11 hours ago [-]
Or, less securely, hash the tests and check the hash with a hook, post tool use. Or a commit hook.
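That hash check is only a few lines: snapshot the test files before letting the agent loose, then re-hash in a post-tool-use or commit hook and diff against the baseline (the `tests/` layout is an assumption):

```python
import hashlib
from pathlib import Path

def snapshot(test_dir="tests"):
    """Record a SHA-256 digest per test file before the agent starts."""
    return {
        str(p): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in Path(test_dir).rglob("*.py")
    }

def tampered(baseline, test_dir="tests"):
    """Return the files whose hashes changed (or vanished) since the snapshot."""
    current = snapshot(test_dir)
    return sorted(
        path for path, digest in baseline.items()
        if current.get(path) != digest
    )
```

As jsw97 says, this is weaker than a real permission block (the agent could in principle edit and re-snapshot), but it removes the mental burden of auditing by hand.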
joegaebel 6 hours ago [-]
You'd be surprised - I know I was - you can encode Test-Driven development into workflows that agents actually follow. I wrote an in-depth guide about this and have a POC for people to try over here: https://www.joegaebel.com/articles/principled-agentic-softwa...
paxys 12 hours ago [-]
Why can't you do just that? You can configure file path permissions in Claude or via an external tool.
pfortuny 12 hours ago [-]
Why not use a client-server infrastructure for tests? The server sends the test code, the client runs the code, sends the output to the server and this replies pass/not pass.
One could even make zero-knowledge test development this way.
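The zero-knowledge property can be shown without any networking: the implementation side submits outputs and only ever sees pass/fail verdicts, never the expected values. A toy sketch:

```python
class TestOracle:
    """Holds the test cases server-side; the implementation ("client")
    only ever sees inputs and pass/fail verdicts, never expectations."""
    def __init__(self, cases):
        self._cases = cases          # list of (input, expected) pairs

    def inputs(self):
        return [inp for inp, _ in self._cases]

    def judge(self, outputs):
        # reply pass/not-pass per case without revealing the expected values
        return [out == exp for (_, exp), out in zip(self._cases, outputs)]

oracle = TestOracle([(2, 4), (3, 9), (4, 16)])
square = lambda x: x * x             # the code under test, on the "client" side
verdicts = oracle.judge([square(i) for i in oracle.inputs()])
```

An agent driven this way can't game the tests because it never holds them; it can only probe the oracle with candidate outputs.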
11 hours ago [-]
aray07 12 hours ago [-]
yeah i agree - this is somewhat the approach I have been using more of. Write the tests first based on specs and then write code to make the tests pass. This works well for cases where unit tests are sufficient.
SatvikBeri 12 hours ago [-]
You can remove edit permissions on the test directory
BeetleB 11 hours ago [-]
I'm not up to speed on Claude's features. Can I, from the prompt, quickly remove those permissions and then re-add them (i.e. one command to drop, and one command to re-add)?
SatvikBeri 11 hours ago [-]
Yeah, you can type `/permissions` and do it there. Or you can make a custom slash command, or just ask Claude to do it. You can also set it when you launch a claude session; there are a dozen ways to do anything.
kubb 12 hours ago [-]
"Add a config option preventing you from modifying files matching src/*_test.py."
dboreham 12 hours ago [-]
Just tell it that the tests can't be changed. Honestly I'd be surprised if it tried to anyway. I've never had it do that through many projects where tests were provided to drive development.
jaggederest 11 hours ago [-]
Anyone who wants a more programmatic version of this, check out cucumber / gherkin - very old school regex-to-code plain english kind of system.
foundatron 10 hours ago [-]
Feels like a whole bunch of us are converging on very similar patterns right now.
I've been building OctopusGarden (https://github.com/foundatron/octopusgarden), which is basically a dark software factory for autonomous code generation and validation. A lot of the techniques were inspired by StrongDM's production software factory (https://factory.strongdm.ai/). The autoissue.py script (https://github.com/foundatron/octopusgarden/blob/main/script...) does something really close to what others in this thread are describing with information barriers. It's a 6-phase pipeline (plan, review plan, implement, cold code review, fix findings, CI retry) where each phase only gets the context it actually needs. The code review phase sees only the diff. Not the issue, not the plan. Just the diff. That's not a prompt instruction, it's how the pipeline is wired. Complexity ratings from the review drive model selection too, so simple stuff stays on Sonnet and complex tasks get bumped to Opus.
On the test freezing discussion, OctopusGarden takes a different approach. Instead of locking test files, the system treats hand-written scenarios as a holdout set that the generating agent literally never sees. And rather than binary pass/fail (which is totally gameable, the specification gaming point elsewhere in this thread is spot on), an LLM judge scores satisfaction probabilistically, 0-100 per scenario step. The whole thing runs in an iterative loop: generate, build in Docker, execute, score, refine. When scores plateau there's a wonder/reflect recovery mechanism that diagnoses what's stuck and tries to break out of it.
The point about reviewing 20k lines of generated code is real. I don't have a perfect answer either, but the pipeline does diff truncation (caps at 100KB, picks the 10 largest changed files, truncates to 3k lines) and CI failures get up to 4 automated retry attempts that analyze the actual failure logs. At least overnight runs don't just accumulate broken PRs silently.
Also want to shout out Ouroboros (https://github.com/Q00/ouroboros), which comes at the problem from the opposite direction. Instead of better verification after generation, it uses Socratic questioning to score specification ambiguity before any code gets written. It literally won't let you proceed until ambiguity drops below a threshold. The core idea ("AI can build anything, the hard part is knowing what to build") pairs well with the verification-focused approaches everyone's discussing here. Spec refinement upstream, holdout validation downstream.
nemo44x 4 hours ago [-]
How do people not understand this? LLMs are goal machines. You need to give them the specific goal if you want good results, and continue to reinforce it. So of course this means speccing and design work.
People are so enamored with how fast the 20% part is now and yes it’s amazing. But the 80% part by time (designing, testing, reviewing, refactoring, repairing) still exists if you want coherent systems of non-trivial complexity.
All the old rules still apply.
xyzal 2 hours ago [-]
I guess I'll just wait a year until a best practice emerges.
interpol_p 4 hours ago [-]
The example given in the article is acceptance criteria for a login/password entry flow. This is fairly easy to spec-out in terms of AC and TDD.
I have been asking these tools to build other types of projects where it (seems?) much more difficult to verify without a human-in-the-loop. One example is I had asked Codex to build a simulation of the solar system using a Metal renderer. It produced a fun working app quickly.
I asked it to add bloom. It looped for hours, failing. I would have to manually verify — because even from images — it couldn't tell what was right and wrong. It only got it right when I pasted a how-to-write-a-bloom-shader-pass-in-Metal blog post into it.
Then I noticed that all of the planet textures were rotating oddly every time I orbited the camera. Codex got stuck in another endless loop of "Oh, the lookAt matrix is in column major, let me fix that <proceeds to break everything>." or focusing (incorrectly) on UV coordinates and shader code. Eventually Codex told me what I was seeing "was expected" and that I just "felt like it was wrong."
When I finally realised the problem was that Codex had drawn the planets with back-facing polygons only, I reported the error, to which Codex replied, "Good hypothesis, but no"
I insisted that it change the culling configuration and then it worked fine.
These tools are fun, and great time savers (at times), but take them out of their comfort zone and it becomes real hard to steer them without domain knowledge and close human review.
julius_eth_dev 2 hours ago [-]
Error: Reached max turns (1)
mandeepj 4 hours ago [-]
Now someone has to review the tests! Just shifting ownership. Claude has just released "Code Review", but I don't think you can leave either one on autopilot.
Just don’t use the same model to write and vet the code. Use two or more different models to verify the code in addition to reading it yourself.
keyle 8 hours ago [-]
It's amazing the lengths to which people who want to write code will go to not write code.
Don't get me wrong, I use agentic coding often, when I feel it's going to type it faster than me (e.g. a lot of scaffolding and filler code).
Otherwise, what's the point?
I feel the whole industry is having its "Look ma! no hands!" moment.
Time to mature up, and stop acting like sailing is going where the seas take you.
dzuc 12 hours ago [-]
red / green / refactor is a reasonable way through this problem
tayo42 12 hours ago [-]
I don't think this is right, because it's talking about Claude like it's an entity in the world. Claude reviewing Claude-generated code isn't the same as an individual reviewing their own code, even if it's framed that way.
emirhan_demir 9 hours ago [-]
A short story: a developer let Claude Code manage his AWS infrastructure. The agent ran a terraform destroy command. Gone: two websites, the production database, all backups, and 2.5 years of data. The agent didn't make a mistake. It did exactly what it was allowed to do. That's the problem, dude.
monooso 11 hours ago [-]
I appear to be in the minority here. Perhaps because I've been practicing TDD for decades, this reads like the blog equivalent of "water is wet."
fragmede 12 hours ago [-]
Adversarial AI code gen. Have another AI write the tests, tell Codex that Claude wrote some code and to audit the code and write some tests. Tell Gemini that Codex wrote the tests. Have it audit the tests. Tell Codex that Gemini thinks its code is bad and to do better. (Have Gemini write out why into dobetter.md)
Do you really, honestly, have to be doing this stuff even when you sleep? To the point it hits you “wait is this even any good? Gee I don’t want to push out slop.”
If you don’t trust the agent to do it right in the first place why do you trust them to implement your tests properly? Nothing but turtles here.
sergiotapia 6 hours ago [-]
None of this really answers the problem of all this slop is being produced at record pace and still requires absorption into the company, into the practices, and be reviewed by a human being.
I don't think AI will ever solve this problem. It will never be more than a tool in the arsenal. Probably the best tool, but a tool nonetheless.
rob 7 hours ago [-]
The hardest part is getting them to stop cluttering the HN database table with LLM-generated comments.
ekropotin 7 hours ago [-]
Exactly. That’s why I’m skeptical about long running agent loops too.
The thing is, LLMs are probabilistic, and the probability of an incorrect final output grows with both the number of turns taken and the number of agents running simultaneously. In practice it means you almost never end up with the desired result after a long loop.
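That intuition is easy to make concrete: if each turn independently succeeds with probability p, the chance an n-turn loop never goes off the rails is p^n (independence is a simplification, but the trend holds).

```python
def chain_success(p_per_turn: float, turns: int) -> float:
    """Probability every turn in a sequential agent loop succeeds,
    assuming turns are independent (a simplification)."""
    return p_per_turn ** turns

# Even a 98%-reliable step erodes fast over a long overnight loop.
for n in (10, 50, 200):
    print(n, round(chain_success(0.98, n), 3))
# prints roughly: 10 0.817, 50 0.364, 200 0.018
```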
zazibar 7 hours ago [-]
This account constantly posts LLM-generated comments.
You're telling me today with LLM power multiplier it's THAT much faster to write in PHP compared to something that can actually have a future?
You can stop there! Sounds like PHP worked for them. Already doing better than 90% of startups.
You can use persistent DB connections, and app server such as FrankenPHP to persist state between requests, but that still wouldn't help if DB is the bottleneck.
Unlike Python or Ruby, which break left and right all the time on updates. You end up with bunkers of venvs, without any security updates. A nightmare.
PHP can scale and has a future.
You use Python Docker images pinned to a stable version (3.11, etc.), and between bigger versions you test and handle any breaking changes.
I feel like this approach applies to pretty much every language?
Who on earth raw dogs on "language:latest" and just hopes for the best?
Granted I wouldn't be running Facebook's backend on something like this. But i feel that isn't a problem 95% of people need to deal with.
They are in the same group, with a similar pedigree. If you were programming purely for the art of it, you would have had time to discover much nicer languages than either, but that's not what most people are doing, so it doesn't really matter. They're different, but they're about as good as each other.
Deploying to production is just scp -rv * production:/var/www/
Beautifully simple. No npm build crap.
Before anyone gets too confused, I love tests. They're great. They help a lot. But to believe they prove correctness is absolutely laughable. Even the most general tests are very narrow. I'm sure they help LLMs just as they help us, but they're not some cure-all. You have to think long and hard about problems and shouldn't let tests drive your development. They're guardrails for checking bounds and reducing footguns.
Oh, who could have guessed, Dijkstra wrote about program completeness. (No, this isn't the foolishness of natural language programming, but it is about formalism ;)
https://www.cs.utexas.edu/~EWD/transcriptions/EWD02xx/EWD288...
The price you pay for tests is that they need to be written and maintained. Writing and maintaining code is much more expensive than people think.
Or at least it used to be. Writing code with claude code is essentially free. But the defect rate has gone up. This makes TDD a better value proposition than ever.
TDD is also great because claude can fix bugs autonomously when it has a clear failing test case. A few weeks ago I used claude code and experts to write a big 300+ conformance test suite for JMAP. (JMAP is a protocol for email). For fun, I asked claude to implement a simple JMAP-only mail server in rust. Then I ran the test suite against claude's output. Something like 100 of the tests failed. Then I asked claude to fix all the bugs found by the test suite. It took about 45 minutes, but now the conformance test suite fully passes. I didn't need to prompt claude at all during that time. This style of TDD is a very human-time efficient way to work with an LLM.
I think of it more as "locking" the behavior to whatever it currently is.
Either you do the red-green-with-multiple-adversarial-sub-agents -thing or just do the feature, poke the feature manually and if it looks good then you have the LLM write tests that confirm it keeps doing what it's supposed to do.
The #1 reason TDD failed is because writing tests is BOORIIIING. It's a bunch of repetition with slight variations of input parameters, a ton of boilerplate or helper functions that cover 80% of the cases, but the last 20% is even harder because you need to get around said helpers. Eventually everyone starts copy-pasting crap and then you get more mistakes into the tests.
LLMs will write 20 test cases with zero complaints in two minutes. Of course they're not perfect, but human made bulk tests rarely are either.
Especially for backend software and also for tools, seems like automated tests can cover quite a lot of use cases a system encounters. Their coverage can become so good that they'll allow you to make major changes to the system, and as long as they pass the automated tests, you can feel relatively confident the system will work in prod (have seen this many times).
But maybe you're separating automated testing and TDD as two separate concepts?
I write lots of automated tests, but almost always after the development is finished. The only exception is when reproducing a bug, where I first write the test that reproduces it, then I fix the code.
TDD is about developing tests first then writing the code to make the tests pass. I know several people who gave it an honest try but gave up a few months later. They do advocate everyone should try the approach, though, simply because it will make you write production code that's easier to test later on.
You don't need to believe this to practice TDD. In fact I challenge you to find one single mainstream TDD advocate who believes this.
The trick is just not mixing/sharing the context. Different instances of the same model do not recognize each other to be more compliant.
It helps, but it definitely doesn't always work, particularly as refactors go on and tests have to change. Useless tests start grow in count and important new things aren't tested or aren't tested well.
I've had both Opus 4.6 and Codex 5.3 recently tell me the other (or another instance) did a great job with test coverage and depth, only to find tests within that just asserted the test harness had been set up correctly and the functionality that had been in those tests get tested that it exists but its behavior now virtually untested.
Reward hacking is very real and hard to guard against.
The concept is:
Red Team (Test Writers), write tests without seeing implementation. They define what the code should do based on specs/requirements only. Rewarded by test failures. A new test that passes immediately is suspicious as it means either the implementation already covers it (diminishing returns) or the test is tautological. Red's ideal outcome is a well-named test that fails, because that represents a gap between spec and implementation that didn't previously have a tripwire. Their proxy metric is "number of meaningful new failures introduced" and the barrier prevents them from writing tests pre-adapted to pass.
Green Team (Implementers), write implementation to pass tests without seeing the test code directly. They only see test results (pass/fail) and the spec. Rewarded by turning red tests green. Straightforward, but the barrier makes the reward structure honest. Without it, Green could satisfy the reward trivially by reading assertions and hard-coding. With it, Green has to actually close the gap between spec intent and code behavior, using error messages as noisy gradient signal rather than exact targets. Their reward is "tests that were failing now pass," and the only reliable strategy to get there is faithful implementation.
Refactor Team, improve code quality without changing behavior. They can see implementation but are constrained by tests passing. Rewarded by nothing changing (pretty unusual in this regard). Reward is that all tests stay green while code quality metrics improve. They're optimizing a secondary objective (readability, simplicity, modularity, etc.) under a hard constraint (behavioral equivalence). The spec barrier ensures they can't redefine "improvement" to include feature work. If you have any code quality tools, it makes sense to give the necessary skills to use them to this team.
It's worth being honest about the limits. The spec itself is a shared artifact visible to both Red and Green, so if the spec is vague, both agents might converge on the same wrong interpretation, and the tests will pass for the wrong reason. The Coordinator (your main claude/codex/whatever instance) mitigates this by watching for suspiciously easy green passes (just tell it) and probing the spec for ambiguity, but it's not a complete defense.
What is the scope of projects / features you’ve seen this be successful at?
Do you have a step before where an agent verifies that your new feature spec is not contradictory, ambiguous etc. Maybe as reviewed with regards to all the current feature sets?
Do you make this a cycle per step - by breaking down the feature to small implementable and verifiable sub-features and coding them in sequence, or do you tell it to write all the tests first and then have at it with implementation and refactoring?
Why not refactor-red-green-refactor cycle? E.g. a lot of the time it is worth refactoring the existing code first, to make a new implementation easier, is it worth encoding this into the harness?
What kind of setup do you use ? Can you share ? How much does it cost ?
It works wonderfully well. Costs about $200 USD per developer per month as of now.
(I built it)
You pay more to try and get above that noise and hope you'll reach an actual human.
The new "fast mode" that burns tokens at 6 times the rate is just scary because that's what everyone still soon say we all need to be using to get results.
Here I am, mostly writing code by hand, with some AI assistant help. I have a Claude subscription but only use it occasionally, because it can take more time to review and fix the generated code than it would to hand-write it. Claude only saves me time on the minority of tasks where it's faster to prompt than hand-write.
And then I read about people spending hundreds or thousands of dollars a month on this stuff. Doesn't that turn your codebase into an unreadable mess?
I am not kidding. People don't seem to understand what's actually happening in our industry. See https://www.linkedin.com/posts/johubbard_github-eleutherailm...
It's about as far as you can get from being able to work independently.
Yegge is an entertainer. Gas Town is performance art, it's not meant to be taken seriously.
And a senior director of Nvidia? He had several Mac Minis? I really gotta imagine a Spark is better... at least it'll be a bit smarter of a cat (I'm pretty suspicious he used a LLM to help write that post)
No time to think, gotta go fast?
This is in fact precisely what skills is meant for and is the opposite of an anti-pattern, but more like best practice now. It's explicitly using the skills framework precisely how it was meant to be used.
Red team might not anticipate this if the spec does detail every expected RPC (which seems unreasonable: this could vary based on implementation). But a unit test would need mocks.
Is the green team allowed to suggest mocks to add to the tests? (Even if they can't read the tests themselves?) This also seems gameable, though (e.g., mock the entire implementation), unless another agent makes a judgement call on the reasonableness of the mock (though that starts to feel like code review more generally).
Maybe record/replay tests could work? But there are drawbacks in the added complexity.
And do you have any prompts to share?
* There is a lot of duplication between A & B. Refactor this.
* Look at ticket X and give me a root cause
* Add support for three new types of credentials - Basic Auth, Bearer Token and OAuth Client Creds
CLAUDE.md has stuff like "Here's how you run the frontend. Here's how you run the backend. This module supports the frontend. That module is batch jobs. Always start commit messages with the ticket number. Always run compile at the top level. When you make code changes, always add tests," etc.
I couldn't relate. From my perspective as a senior, Claude is dumb as bricks. Though useful nonetheless.
I believe that if you're substantially below Claude's level then you just trust whatever it says. The only variables you control are how much money you spend, how much markdown you can produce, and how you arrange your agents.
But I don't understand how the juniors on HN have so much money to throw at this technology.
So I take that feeling and use it to drive me to become a wizard like them. I've generally found that wizards are very happy to take on apprentices.
I'm not trying to call Claude a wizard (I have similar feelings to you); it's more that I don't understand that junior's take. We all feel dumb. All the time. Even the wizards! But it's that feeling that drives you to better yourself, and it's what turns you into a wizard.
Honestly so much of what I hear from the "AI does all my coding" crowd just sounds very junior. It's just the same like how a year or two ago they were saying "it does the repetitive stuff". Isn't that what functions, libraries, functors, templates, and other abstractions are for? It feels like we're back to that laughable productivity metric of lines of code or number of commits. I don't know why we love our cargo cults. It seems people are putting so much effort into their cargo cults that they could have invented a real airplane by now.
https://github.com/mattpocock/skills/blob/main/tdd%2FSKILL.m...
Everything below quoted from that skill, and serves as a much better rebuttal than I had started writing:
DO NOT write all tests first, then all implementation. This is "horizontal slicing" - treating RED as "write all tests" and GREEN as "write all code."
This produces crap tests:
Tests written in bulk test imagined behavior, not actual behavior. You end up testing the shape of things (data structures, function signatures) rather than user-facing behavior. Tests become insensitive to real changes: they pass when behavior breaks and fail when behavior is fine.
You outrun your headlights, committing to test structure before understanding the implementation
Correct approach:
Vertical slices via tracer bullets.
One test → one implementation → repeat. Each test responds to what you learned from the previous cycle. Because you just wrote the code, you know exactly what behavior matters and how to verify it.
>Because you just wrote the code, you know exactly what behavior matters and how to verify it.
what you go on to describe is
One implementation → one test → repeat.
To be clear, I don't do this. I never saw an agent cheat by peeking or something. I really did look through their logs.
I'd be very interested to see claude code and other tools support this pattern when dispatching agents to be really sure.
How do you know that it works then? Are you using a different tool that does support it?
Setting up a clean room is one of the only ways to do evals on agentic harnesses. It's especially relevant with Windsurf, which doesn't have an easy CLI entry point.
So how? The easiest answer, when allowed, is Docker. Literally a new image per prompt. Claude also has flags to disable memory, and from there you can use -p to make it behave like a normal CLI tool. Windsurf requires the manual effort of starting it up in a new directory.
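That loop can be scripted. A sketch, assuming a pre-built image (the image name here is hypothetical; `docker run --rm` and Claude's `-p` print mode are real):

```python
import subprocess

def clean_room_cmd(prompt: str, image: str = "agent-clean-room") -> list[str]:
    """Build the docker invocation for one isolated prompt.
    `agent-clean-room` is a hypothetical image with the CLI preinstalled."""
    # --rm discards the container afterwards, so no state survives the turn;
    # -p runs Claude in print mode: one response, then exit.
    return ["docker", "run", "--rm", image, "claude", "-p", prompt]

def clean_room_run(prompt: str, image: str = "agent-clean-room") -> str:
    """Execute one prompt in a fresh container and return its stdout."""
    return subprocess.run(
        clean_room_cmd(prompt, image),
        capture_output=True, text=True, check=True, timeout=600,
    ).stdout
```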
You can use coverage information, and you should cull your tests every once in a while I guess.
Property based testing also helps.
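The property-based idea can be sketched without a library (Hypothesis is the usual Python tool; this hand-rolled version just shows the shape): instead of fixed cases, assert an invariant over many random inputs, which is much harder for a code generator to satisfy by hard-coding.

```python
import random

def property_check(fn, prop, gen, trials=500, seed=0):
    """Run prop(args, fn(*args)) on random inputs; return first failing args."""
    rng = random.Random(seed)
    for _ in range(trials):
        args = gen(rng)
        if not prop(args, fn(*args)):
            return args  # shrinking omitted for brevity
    return None

# Example property: sorting preserves length and yields sorted output.
def gen_list(rng):
    return ([rng.randint(-100, 100) for _ in range(rng.randint(0, 20))],)

def sorted_props(args, out):
    (xs,) = args
    return len(out) == len(xs) and out == sorted(out)

assert property_check(sorted, sorted_props, gen_list) is None
```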
Is it really about rewards? I'm genuinely curious, because it's not an RL model.
And with that comes reward hacking - which isn't really about looking for more reward but rather that the model has learned patterns of behavior that got reward in the train env.
That is, any kind of vulnerability in the train env manifests as something you'd recognize as reward hacking in the real world: making tests pass _no matter what_ (because the train env rewarded that behavior), being wildly sycophantic (because the human evaluators rewarded that behavior), etc.
Hm, as I understand it, parts of the training of e.g. ChatGPT could be called RL. But the thing being trained/fine-tuned is still a seq2seq next-token-predictor transformer neural net.
Ha, good point. I was using it informally (you could handwave and call it an intrinsic reward if a model is well aligned to completing tasks as requested), but I hadn't really thought about it.
Searching around, it seems like I'm not alone, but it looks like "specification gaming" is also sometimes used, like: https://deepmind.google/blog/specification-gaming-the-flip-s...
The above is really hard. A lot of TDD "experts" don't understand this and teach fragile tests that are not worth having.
But things evolve with time. Not only your software is required to do things it wasn't originally designed to do, but your understanding of the domain evolve, and what once was fine becomes obsolete or insufficient.
Your implementation is your interface. It's a bit naive, or hating-your-users, to assume your tests are what your users care about. They're dealing with everything, regardless of what you've tested or not.
TDD/BDD tests are meant to define the intended contract of a system.
These are not the same thing.
You can change an interface and not change the behaviour.
I have rarely heard such a rigid interpretation such as this.
That's a strange definition. A lot of software should change in order to adapt to emerging requirements. Refactorings are often needed to make those changes easier, or to improve the codebase in ways that are transparent to users. This doesn't mean that the interfaces remain static.
> If your interface can change then you are testing implementation details instead of the behavior users care about.
Your APIs also have users. If you're only testing end-user interfaces, you're disregarding the users of your libraries and modules, e.g. your teammates and yourself.
Implementation details are contextual. To end-users, everything behind the external UI is an implementation detail. To other programmers, the implementation of a library, module, or even a single function can be a detail. That doesn't mean that its functionality shouldn't be tested. And, yes, sometimes that entails updating tests, but tests are code like any other, and also require maintenance and care.
[1] https://simonwillison.net/guides/agentic-engineering-pattern...
https://www.joegaebel.com/articles/principled-agentic-softwa... https://github.com/JoeGaebel/outside-in-tdd-starter
● Separation of concerns. No single agent plans, implements, and verifies. The agent that writes the code is never the agent that checks it.
You write a failing test for the new functionality that you’re going to add (which doesn’t exist yet, so the test is red). You then write the code until the test passes (that is, goes green).
s/liberty/knowledge
I also spend most of my time reviewing the spec to make sure the design is right. Once I'm done, the coding agent can take 10 minutes or 30 minutes. I'm not really in that much of a rush.
Add to that I have worked on many projects that take more than 20 minutes to fully build and run tests... unfortunately. And I would consider that part of the job of implementing a feature, and to reduce cycles I have to take.
After the "green" signal I will manually review or send off some secondary reviews in other models. Is it wasteful? Probably. But its pretty damn fun (as long as I ignore the elephant in the room.)
The other day, I wrote a claude skill to pull logs for failing tests on a PR from CI as a CSV for feeding back into claude for troubleshooting. It helped with some debugging but was very fraught and needed human guidance to avoid going in strange directions. I could see this "fix the tests" workflow instrumented as overnight churn loops that are forbidden from modifying test files that run and have engineers review in the morning if more tests pass.
Maybe agentic TDD is the future. I have a bit of a nightmare vision of SWEs becoming more like QA in the future, but with much more automation. More engineering positions may become adversarial QA for LLM output. Figure out how to break LLM output before it goes to prod. Prove the vibe coded apps don't scale.
In the exercise I described above, I was just prompt churning between meetings (having claude record its work and feeding it to the next prompt, pulling test logs in between attempts), without much time to analyze, while another engineer on my team was analyzing and actually manually troubleshooting the vibe coded junk I was pushing up, but we fixed over 100 failing integration tests in a week for a major refactor using claude plus some human(s) in the loop. I do believe it got things done faster than we would have finished without AI. I do think the quality is slightly lower than would have been if we'd had 4 weeks without meetings to build the thing, but the tests do now pass.
No. But it is noteworthy. A lot of what one previously needed a SWE to do can now be brute forced well enough with AI. (Granted, everything SWEs complained about being tedious.)
From the customer’s perspective, waiting for buggy code tomorrow from San Francisco, buggy code tonight from India or buggy code from an AI at 4AM aren’t super different for maybe two thirds of use cases.
Only if you ignore everything they generate. Look at all the comments saying that the agent hallucinates a result, generates always-passing tests, etc. Those are absolutely true observations -- and don't touch on the fact that tests can pass, the red/green approach can give thumbs up and rocket emojis all day long, and the code can still be shitty, brittle and riddled with security and performance flaws. And so now we have people building elaborate castles in the sky to try to catch those problems. Except that the things doing the catching are themselves prone to hallucination. And around we go.
So because a portion of (IMO always bad, but previously unrecognized as bad) coders think that these random text generators are trustworthy enough to run unsupervised, we've moved all of this chaotic energy up a level. There's more output, certainly, but it all feels like we've replaced actual intelligent thought with an army of monkeys making Rube Goldberg machines at scale. It's going to backfire.
But it works well enough for most use cases. Most of what we do isn’t life or death.
So does the code produced by any bad engineer.
So either we’re finally admitting that all of that leetcode screening and engineer quality gating was a farce, or it wasn’t, and you’re wrong.
I think the answer is in the middle, but the pendulum has swung too far in the “doesn’t matter” direction.
We’re admitting a bit of both. Offshoring just became more instantaneous, secure and efficient. There will still be folks who overplay their hand.
Macroeconomically speaking, I don’t see why we need more software engineers in the future than we have today, and that’s probably a conservative estimate.
Why? Is the argument that there’s a finite amount of software that the world needs, and therefore we will more quickly reach that finite amount?
Seems more likely to me that if LLMs are a force multiplier for software then more software engineers will exist. Or, instead of “software engineers”, call them “people who create software” (even with the assistance of LLMs).
Or maybe the argument is that you need to be a super genius 100x engineer in order to manipulate 17 collaborative and competitive agents in order to reach your maximum potential, and then you’ll take everyone’s jobs?
Idk just seems like wild speculation that isn’t even worth me arguing against. Too late now that I’ve already written it out I guess.
I don't mean 'Oh I finally have the energy to do that side project that I never could'.
After all, the trade-offs have to be worth something... right? Where are the one-person billion-dollar firms that Mr. Altman spoke about?
The way I think of it is code has always been an intermediary step between a vision and an object of value. So is there an increase in this activity that yields the trade-offs to be a net benefit?
I still think that we, programmers, having to pay money in order to write code is a travesty. And I'm not talking about paying the license for the odd text editor or even for an operating system; I'm talking about day-to-day operations. I'm surprised that there isn't a bigger push-back against this idea.
Fortunately, there was enough work to be done so productivity increases didn't decrease my billable hours. Even if it did, I still would have done it. If it helps me help others, then it's good for my reputation. That's hard to put a price on, but absolutely worth what I paid in this case.
It's usually not about the price, but more about the fact that a few megacorps and countries "own" the ability to work this way. This leads to some very real risks that I'm pretty sure will materialize at some point in time, including but not limited to:
- Geopolitical pressure - if some ass-hat of a president hypothetically were to decide "nuh uh - we don't like Spain, they're not being nice to us!", they could forbid AI companies to deliver their services to that specific country.
- Price hikes - if you can deliver "$100 worth of value" per hour, but "$1000 worth of value" per hour with the help of AI, then provider companies could still charge up to $899 per hour of usage and it'd still make "business sense" for you to use them since you're still creating more value with them than without them.
- Reduction in quality - I believe people who were senior developers _before_ starting to use AI assisted coding are still usually capable of producing high quality output. However, every single person I know who "started coding" with tools like Claude Code produces horrible, horrible software, especially from a security p.o.v. Most of them just build "internal tools" for themselves, and I highly encourage that. However, others have pursued developing and selling more ambitious software... just to get bitten by the fact that there's much more to software development than getting semi-correct output from an AI agent.
- A massive workload on some open source projects. We've all heard about projects closing down their bug bounty programs, declining AI generated PRs etc.
- The loss of the joy - some people enjoy it, some people don't.
We're definitely still in the early days of AI assisted / AI driven coding, and no one really knows how it'll develop...but don't mistake the bubble that is HN for universal positivity and acclaim of AI in the coding space :).
People who enjoy the process of completing the task?
https://benhouston3d.com/blog/the-rise-of-test-theater
You have to actively work against it.
I've written about this and have a POC here for those interested: https://www.joegaebel.com/articles/principled-agentic-softwa...
- privacy policy links to marketing company `beehiiv.com`. the blog author doesn't show up there.
- the profile picture url is `.../Generated_Image_March_03__2026_-_1_55PM.jpg.jpeg`
i didn't dig or read further.
> Most teams don't [write tests first] because thinking through what the code should do before writing it takes time they don't have.
It's astonishing to me how much our industry repeats the same mistakes over and over. This doesn't seem like what other engineering disciplines do. Or is this just me not knowing what it looks like behind the curtain of those fields?
I like to think that people writing actual mission critical software try their absolute best to get it right before shipping and that the rest our industry exists in a totally separate world where a bug in the code is just actually not that big of a deal. Yeah, it might be expensive to fix, but usually it can be reverted or patched with only an inconvenience to the user and to the business.
It’s like the fines that multinational companies pay when breaking the law. If it’s a cost of doing business, it’s baked into the price of the product.
You see this also in other industries. OSHA violations on a residential construction site? I bet you can find a dozen if you really care to look. But 99% of the time, there are no consequences big enough for people to care so nobody wears their PPE because it “slows them down” or “makes them less nimble”. Sound familiar?
Instead we make pre-mass-production bespoke products where each part is slightly filed and fitted together from a bunch of random components; say, the barrel can't be swapped between two different handguns. We just have magic technology to replicate the single gun multiple times. That doesn't mean it is actually mass-produced in the sense that, say, our current power tools are.
With other engineering professions, all projects are like that. You cannot "deploy a bridge to production" to see what happens and fix it after a few have died
So now people just ignore broken tests.
> Claude, please implement this feature.
> Claude, please fix the tests.
The only thing we've gained from this is that we can brag about test coverage.
These are the only tests I've witnessed people delete outright when the requirements change. Anything more complex than this, they'll worry that there's some secondary assertion being implied by a test so they can't just delete it.
Which, really is just experience telling them that the code smells they see in the tests are actually part of the test.
Meanwhile, a test asserting that a user has exactly one shipping address is demonstrably a dead test when the story is "allow users to have multiple shipping addresses", as is a test that makes sure balances can't go negative when we decide to allow a 5-day grace period on account balances. But if it's just one of six asserts in the same massive test, people get nervous and start losing time. But hey, we're just supposed to let the AIs run wild and rewrite everything every change, so maybe that's a heretical view.
1. one agent writes/updates code from the spec
2. one agent writes/updates tests from identified edge cases in the spec.
3. a QA agent runs the tests against the code. When a test fails, it examines the code and the test (the only agent that can see both) to determine blame, then gives feedback to the code and/or test writing agent on what it perceives the problem as so they can update their code.
(repeat 1 and/or 2 then 3 until all tests pass)
Since the code can never fix itself to directly pass the test and the test can never fix itself to accept the behavior of the code, you have some independence. The failure case is that the tests simply never pass, not that the test writer and code writer agents both have the same incorrect understanding of the spec (which is very improbable, like something that will happen before the heat death of the universe improbable, it is much more likely the spec isn't well grounded/ambiguous/contradictory or that the problem is too big for the LLM to handle and so the tests simply never wind up passing).
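A rough sketch of that separation as an orchestration loop (agent calls stubbed out as plain callables; all names invented):

```python
# Coder and tester never see each other's output; only the QA step sees both
# and assigns blame, so neither side can "fix" itself to satisfy the other.
def run_pipeline(spec, call_coder, call_tester, call_qa, max_rounds=10):
    code_feedback, test_feedback = None, None
    for _ in range(max_rounds):
        # 1. Coder sees only the spec plus QA feedback aimed at the code.
        code = call_coder(spec, code_feedback)
        # 2. Tester sees only the spec plus QA feedback aimed at the tests.
        tests = call_tester(spec, test_feedback)
        # 3. QA runs the tests against the code and decides who is at fault.
        verdict = call_qa(code, tests)
        if verdict["passed"]:
            return code, tests
        code_feedback = verdict.get("code_feedback")
        test_feedback = verdict.get("test_feedback")
    raise RuntimeError("tests never passed; spec may be ambiguous or too large")
```

The failure mode is exactly the one described: the loop exhausts its rounds rather than converging on a wrong-but-agreeing answer.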
It's currently burning through the TESTING.md backlog: https://github.com/alpeware/datachannel-clj
I can't understand the mindset that would lead someone not to have realized this from the beginning.
What if instead, the goal of using agents was to increase quality while retaining velocity, rather than the current goal of increasing velocity while (trying to) retain quality? How can we make that world come to be? Because TBH that's the only agentic-oriented future that seems unlikely to end in disaster.
But review fatigue and resulting apathy is real. Devs should instead be informed if incorrect code for whatever feature or process they are working on would be high-risk to the business. Lower-risk processes can be LLM-reviewed and merged. Higher risk must be human-reviewed.
If the business you're supporting can't tolerate much incorrectness (at least until discovered), then guess what - you aren't going to get much speed increase from LLMs. I've written about and given conference talks on this over the past year. Teams can improve this problem at the requirements level: https://tonyalicea.dev/blog/entropy-tolerance-ai/
Something I'm starting to struggle with is when agents can now do longer and more complex tasks, how do you review all the code?
Last week I did about 4 weeks of work over 2 days first with long running agents working against plans and checklists, then smaller task clean ups, bugfixes and refactors. But all this code needs to be reviewed by myself and members from my team. How do we do this properly? It's like 20k of line changes over 30-40 commits. There's no proper solution to this problem yet.
One solution is to start from scratch again, using this branch as a reference, to reimplement in smaller PRs. I'm not sure this would actually save time overall though.
Same as before. Small PRs, accept that you won't ship a month of code in two days. Pair program with someone else so the review is just a formality.
The value of the review is _also_ for someone else to check if you have built the right thing, not just a thing the right way, which is exponentially harder as you add code.
Redoing the work as smaller PRs might help with readability, but then you get the opposite problem: it becomes hard to hold all the PRs in your head at once and keep track of the overall purpose of the change (at least for me).
IMO the real solution is figuring out which subset of changes actually needs human review and focusing attention there. And even then, not necessarily through diffs. For larger agent-generated changes, more useful review artifacts may be things like design decisions or risky areas that were changed.
If you find a big problem in commit #20 of #40, you'll have to potentially redo the last 20 commits, which is a pain.
You seem to be gated on your review bandwidth and what you probably want to do is apply backpressure - stop generating new AI code if the code you previously generated hasn't gone through review yet, or limit yourself to say 3 PRs in review at any given time. Otherwise you're just wasting tokens on code that might get thrown out. After all, babysitting the agents is probably not 'free' for you either, even if it's easier than writing code by hand.
Of course if all this agent work is helping you identify problems and test out various designs, it's still valuable even if you end up not merging the code. But it sounds like that might not be the case?
Ideally you're still better off, you've reduced the amount of time being spent on the 'writing the PR' phase even if the 'reviewing the PR' phase is still slow.
Get an LLM to generate a list of things to check based on those plans (and pad that out yourself with anything important to you that the LLM didn't add), then have the agents check the codebase file by file for those things and report any mismatches to you. As well as some general checks like "find anything that looks incorrect/fragile/very messy/too inefficient". If any issues come up, ask the agents to fix them, then continue repeating this process until no more significant issues are reported. You can do the same for unit tests, asking the agents to make sure there are tests covering all the important things.
i think we will need some kind of automated verification so humans are only reviewing the “intent” of the change. started building a claude skill for this (https://github.com/opslane/verify)
Code review is a skill, as is reading code. You're going to quickly learn to master it.
> It's like 20k of line changes over 30-40 commits.
You run it in a debugger and step through every single line along your "happy paths". You're building a mental model of execution while you watch it work.
> One solution is to start from scratch again, using this branch as a reference, to reimplement in smaller PRs. I'm not sure this would actually save time overall though.
Not going to be a time saver, but next time you want to take nibbles and bites, and then merge the branches in (with the history). The hard lesson here is around task decomposition, inline documentation (cross-referenced) and digestible chunks.
But if you get step debugging running and do the hard thing of getting through reading the code you will come out the other end of the (painful) process stronger and better resourced for the future.
This resonates with my experience, and it is also a refreshingly honest take: pushing back on heavy upfront process isn't laziness, it's just the natural engineer's drive to build things and feel productive.
TDD is a tool for working in small steps, so you get continuous feedback on your work as you go, and so you can refine your design based on how easy it is to use in practice. It’s “red green refactor repeat”, and each step is only a handful of lines of code.
TDD is not “write the tests, then write the code.” It’s “write the tests while writing the code, using the tests to help guide the process.”
Thank you for coming to my TED^H^H^H TDD talk.
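As a toy illustration of that rhythm (invented example, not from the comment above): red is a failing test for one small behavior, green is the minimal code to pass it, refactor tidies up with the test as a safety net.

```python
# Red: a failing test for the next tiny behavior.
def test_slugify_lowercases_and_hyphenates():
    assert slugify("Hello World") == "hello-world"

# Green: the minimal code that makes it pass.
def slugify(title):
    return title.lower().replace(" ", "-")

# Refactor: clean up under the test's protection, then repeat with the
# next behavior (stripping punctuation, collapsing whitespace, ...).
test_slugify_lowercases_and_hyphenates()
```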
I would like to emphasize that feedback includes being alerted to breaking something you previously had working in a seemingly unrelated/impossible way.
One example I have been experimenting with is using Learning Tests[1]. The idea is that when something new is introduced in the system, the agent must execute a high-value test to teach itself how to use this piece of code. Because these should be high leverage, i.e. they can really help anyone understand the code base better, they should be exceptionally well chosen for AIs to iterate with. But again, this is just the expert-human-judgement complexity shifted to identifying these for the AI to learn from. In code bases that add millions of LoC of new features in days, this would require careful work by the human.
[1] https://anthonysciamanna.com/2019/08/22/the-continuous-value...
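For anyone unfamiliar with the term: a learning test is a test written against a dependency to pin down how it actually behaves, so whoever (or whatever) reads the suite later learns the same lesson. A generic stdlib example, not taken from the linked post:

```python
import re

# Learning test: documents our understanding that re.split with a capturing
# group keeps the delimiters in the result -- behavior worth recording,
# because it surprises people on first read.
def test_re_split_keeps_captured_delimiters():
    assert re.split(r"(,)", "a,b,c") == ["a", ",", "b", ",", "c"]

test_re_split_keeps_captured_delimiters()
```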
Whenever I coded any serious solution as a technical co-founder, every single day there was a major new debate about the product direction. Though we made massive 'progress' and built out a whole new universe in software, we haven't yet managed to find product market fit. It's like constant tension. If the intelligence of two relatively intelligent humans with a ton of experience and complementary expertise isn't enough to find product-market-fit after one year, this gives you an idea about how high the bar is for an AI agent.
It's like the problem was that neither me nor my domain expert co-founder who had been in his industry for over 15 years had a sufficiently accurate worldview about the industry or human psychology to be able to produce a financially viable solution. Technically, it works perfectly but it just doesn't solve anyone's problem.
So just imagine how insanely smart AI has to be to compete in the current market.
Maybe you could have 100 agents building and promoting 100 random apps per day... But my feeling is that you're going to end up spending more money on tokens and domain names than you will earn in profits. Maybe deploy them all under the same domain with different subdomains? Not great for SEO... Still, the market for all these basic low-end apps is going to be extremely competitive.
If an agent runs unattended for hours, small errors compound quickly. Even simple misunderstandings about file structure or instructions can derail the whole process.
I've also given it explicit rules like "never use placeholder images, always generate real assets" — and it just... ignores them sometimes. Not always. Sometimes. Which is worse, because you can't trust it but you also can't not use it.
The 80% it writes is fine. The problem is you still have to verify 100% of it.
What's worked better for me is building verification into the workflow itself, like explicit test assertions the agent has to pass before it can claim "done," plus a rule that any API call must show a real response, not a mock. Basically treating the AI like a junior dev who needs guard rails, not a senior who just needs a code review.
maybe it still sends you to the same valley, but there's so many parameters and dimensions that i dont think its very likely without also being correct
LLMs don't actually have a reward system like some other ML models.
https://ui.adsabs.harvard.edu/abs/2025arXiv250214815C/abstra...
https://www.arxiv.org/abs/2509.23537
https://www.aristeidispanos.com/publication/panos2025multiag...
https://arxiv.org/abs/2305.14325
https://arxiv.org/abs/2306.05685
https://arxiv.org/abs/2310.19740v1
What he describes is like that. Just that the plan step is suggesting docs, not writing actual docs.
Seems things still haven't changed in half a century
https://www.cs.utexas.edu/~EWD/transcriptions/EWD02xx/EWD288...
Tests just catch the most simple mistakes, edge cases and some regressions.
Honestly, sometimes the harnesses, specs, some predefined structure for skills etc all feel over-engineering. 99% of the time a bloody prompt will do. Claude Code is capable of planning, spawning sub-agents, writing tests and so on.
Claude.md file with general guidelines about our repo has worked extraordinarily well, without any external wrappers, harnesses or special prompts. Even the MD file has no specific structure, just instructions or notes in English.
Edit: I even have a skill called release-test that does manual QA for every bug we've ever had reported. It takes about 10 hours to run but I execute it inside a VM overnight so I don't care.
i let it run overnight against a windows app i was working on, and that got it from mostly not working to mostly working.
the loop was
1. look at the code and specs to come up with tests
2. predict the result
3. try it
4. compare the prediction against the result
5. file a bug report, or call it a success
and then switch to bug fixing, and go back around again. Worked really well in geminicli with the giant context window
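That loop is simple enough to sketch in a few lines (agent calls stubbed out as plain callables; all names invented):

```python
# Predict-then-verify loop: the agent commits to an expected result *before*
# running the test, so a mismatch is evidence of a bug rather than an excuse
# to rationalize whatever happened.
def overnight_loop(propose_tests, predict, execute, file_bug, rounds=3):
    findings = []
    for _ in range(rounds):
        for test in propose_tests():            # 1. derive tests from code + specs
            expected = predict(test)            # 2. predict the result first
            actual = execute(test)              # 3. actually try it
            if actual == expected:              # 4. compare
                continue                        # 5b. call it a success
            findings.append(file_bug(test, expected, actual))  # 5a. file a bug
        # ...switch to bug fixing on `findings`, then go around again
    return findings
```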
To everyone who plan on automating themselves out of a job by taking the human element out- this is the endgame that management wants: replacing your (expensive and non-tax-optimized) labor with scalable Opex.
Even better though - external test suites. Recently made an S3 server, of which the LLM made quick work for an MVP. Then I found a Ceph S3 test suite that I could run against it and oh boy. Ended up working really well as TDD though.
You can have Gemini write the tests and Claude write the code. And have Gemini do review of Claude's implementation as well. I routinely have ChatGPT, Claude and Gemini review each other's code. And having AI write unit tests has not been a problem in my experience.
Not a rhetoric question. Trillion token burners and such.
The architecture we landed on: ingest goes through a certainty scoring layer before storage. Contradictions get flagged rather than silently stacked. Memories that get recalled frequently get promoted; stale ones fade.
It's early but the difference in agent coherence over long sessions is noticeable. Happy to share more if anyone's going down this path.
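For what it's worth, a toy version of that flag/promote/fade logic might look like this (all names invented; not the parent's actual implementation):

```python
import time

class MemoryStore:
    """Toy memory layer: contradictions get flagged on ingest, frequently
    recalled items get promoted, and stale items fade out via a TTL."""

    def __init__(self, contradiction_check, promote_after=3, ttl=3600):
        self.items = []
        self.contradiction_check = contradiction_check  # callable(new, old) -> bool
        self.promote_after = promote_after
        self.ttl = ttl

    def ingest(self, text, certainty):
        # Flag contradictions against existing memories instead of
        # silently stacking them.
        conflicts = [m for m in self.items
                     if self.contradiction_check(text, m["text"])]
        item = {"text": text, "certainty": certainty, "recalls": 0,
                "ts": time.time(), "flagged": bool(conflicts)}
        self.items.append(item)
        return item

    def recall(self, predicate):
        now = time.time()
        # Stale memories fade: drop anything past its TTL.
        self.items = [m for m in self.items if now - m["ts"] < self.ttl]
        hits = [m for m in self.items if predicate(m["text"])]
        for m in hits:
            m["recalls"] += 1
            m["ts"] = now  # recall refreshes recency
            if m["recalls"] >= self.promote_after:
                m["certainty"] = min(1.0, m["certainty"] + 0.1)  # promotion
        return hits
```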
How do you implement the scoring layer, and when and how is it invoked?
But there's a second problem underneath that one. Acceptance criteria are ephemeral. You write them before prompting, Playwright runs against them, and then where do they go? A Notion doc. A PR comment. Nowhere permanent. Next time an agent touches that feature, it's starting from zero again.
The commit that ships the feature should carry the criteria that verified it. Git already travels with the code. The reasoning behind it should too.
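One lightweight way to do that with plain git (a convention sketch, not an established standard): put the criteria in commit trailers, since trailers live in the final paragraph of the message and git can extract them later.

```shell
# Hypothetical convention: carry the acceptance criteria as git trailers on
# the commit that ships the feature, so they travel with the code.
# Trailers must all sit in the message's final paragraph to be parsed.
git commit -m "Add multi-address checkout" \
  -m "Acceptance-Criteria: user can save more than one shipping address
Acceptance-Criteria: checkout defaults to the last-used address
Verified-By: playwright e2e suite"

# Later, anyone (human or agent) can recover them straight from history:
git log -1 --format='%(trailers:key=Acceptance-Criteria,valueonly)'
```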
Outage is the easy failure mode. I can work around a service that's up 80% of the time, but is 100% correct. A service that's up 100% of the time but is 80% correct is useless.
That’s really putting the cart before the horse. How do you get to “merging 50 PRs a week” before thinking “wait, does this do the right thing?”
[1] https://code.claude.com/docs/en/devcontainer
If you want to try it just ask Claude to set it up for your project and review it after.
It will probably comply, and at least if it does change the tests you can always revert those files to where you committed them
You could probably make a system-level restriction so the software physically can't modify certain files, but I'm not sure how well that's going to fly if the program fails to edit it and there's no feedback of the failure.
With this approach you can enforce that Claude cannot access specific files. It's a guarantee and will always work, unlike a prompt or Claude.md, which is just a suggestion that can be forgotten or ignored.
This post has an example hook for blocking access to sensitive files:
https://aiorg.dev/blog/claude-code-hooks#:~:text=Protect%20s...
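The general shape of such a hook, as I understand the docs (assumptions: a `PreToolUse` hook wired to Edit/Write in `.claude/settings.json` receives the tool call as JSON on stdin, and a blocking non-zero exit stops the call with stderr fed back to Claude; see the linked post for the exact contract):

```python
# Sketch of a PreToolUse hook body that blocks edits to protected files.
# Wire it up in .claude/settings.json under hooks -> PreToolUse with a
# matcher like "Edit|Write". Paths and markers below are examples.
import json
import sys

PROTECTED = ("tests/", ".env", "migrations/")

def should_block(payload: dict) -> bool:
    """Return True if this Edit/Write targets a protected path."""
    path = payload.get("tool_input", {}).get("file_path", "")
    return any(marker in path for marker in PROTECTED)

def main() -> int:
    payload = json.load(sys.stdin)
    if should_block(payload):
        # Blocking exit: the tool call is refused and Claude sees this message.
        print("Blocked: that file is protected from edits", file=sys.stderr)
        return 2
    return 0
```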
One could even make zero-knowledge test development this way.
I've been building OctopusGarden (https://github.com/foundatron/octopusgarden), which is basically a dark software factory for autonomous code generation and validation. A lot of the techniques were inspired by StrongDM's production software factory (https://factory.strongdm.ai/). The autoissue.py script (https://github.com/foundatron/octopusgarden/blob/main/script...) does something really close to what others in this thread are describing with information barriers. It's a 6-phase pipeline (plan, review plan, implement, cold code review, fix findings, CI retry) where each phase only gets the context it actually needs. The code review phase sees only the diff. Not the issue, not the plan. Just the diff. That's not a prompt instruction, it's how the pipeline is wired. Complexity ratings from the review drive model selection too, so simple stuff stays on Sonnet and complex tasks get bumped to Opus.
On the test freezing discussion, OctopusGarden takes a different approach. Instead of locking test files, the system treats hand-written scenarios as a holdout set that the generating agent literally never sees. And rather than binary pass/fail (which is totally gameable, the specification gaming point elsewhere in this thread is spot on), an LLM judge scores satisfaction probabilistically, 0-100 per scenario step. The whole thing runs in an iterative loop: generate, build in Docker, execute, score, refine. When scores plateau there's a wonder/reflect recovery mechanism that diagnoses what's stuck and tries to break out of it.
The point about reviewing 20k lines of generated code is real. I don't have a perfect answer either, but the pipeline does diff truncation (caps at 100KB, picks the 10 largest changed files, truncates to 3k lines) and CI failures get up to 4 automated retry attempts that analyze the actual failure logs. At least overnight runs don't just accumulate broken PRs silently.
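The truncation step is simple to replicate if anyone wants it (a sketch with invented names; not the repo's actual code):

```python
# Cap a review diff: keep the N largest changed files, then cap line count
# and byte size, mirroring the 100KB / 10 files / 3k lines limits described.
def truncate_diff(file_diffs, max_bytes=100_000, max_files=10, max_lines=3000):
    """file_diffs: {path: diff_text}. Returns a bounded combined diff."""
    biggest = sorted(file_diffs.items(),
                     key=lambda kv: len(kv[1]), reverse=True)[:max_files]
    combined = "\n".join(text for _, text in biggest)
    capped_lines = "\n".join(combined.splitlines()[:max_lines])
    # Byte cap last; ignore a possibly-split trailing multibyte char.
    return capped_lines.encode()[:max_bytes].decode(errors="ignore")
```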
Also want to shout out Ouroboros (https://github.com/Q00/ouroboros), which comes at the problem from the opposite direction. Instead of better verification after generation, it uses Socratic questioning to score specification ambiguity before any code gets written. It literally won't let you proceed until ambiguity drops below a threshold. The core idea ("AI can build anything, the hard part is knowing what to build") pairs well with the verification-focused approaches everyone's discussing here. Spec refinement upstream, holdout validation downstream.
People are so enamored with how fast the 20% part is now and yes it’s amazing. But the 80% part by time (designing, testing, reviewing, refactoring, repairing) still exists if you want coherent systems of non-trivial complexity.
All the old rules still apply.
I have been asking these tools to build other types of projects where it is (seemingly?) much more difficult to verify without a human-in-the-loop. One example: I asked Codex to build a simulation of the solar system using a Metal renderer. It produced a fun working app quickly.
I asked it to add bloom. It looped for hours, failing. I would have to manually verify — because even from images — it couldn't tell what was right and wrong. It only got it right when I pasted a how-to-write-a-bloom-shader-pass-in-Metal blog post into it.
Then I noticed that all of the planet textures were rotating oddly every time I orbited the camera. Codex got stuck in another endless loop of "Oh, the lookAt matrix is in column major, let me fix that <proceeds to break everything>." or focusing (incorrectly) on UV coordinates and shader code. Eventually Codex told me what I was seeing "was expected" and that I just "felt like it was wrong."
When I finally realised the problem was that Codex had drawn the planets with back-facing polygons only, I reported the error, to which Codex replied, "Good hypothesis, but no"
I insisted that it change the culling configuration and then it worked fine.
These tools are fun, and great time savers (at times), but take them out of their comfort zone and it becomes real hard to steer them without domain knowledge and close human review.
Code Review: https://news.ycombinator.com/item?id=47313787
Don't get me wrong, I use agentic coding often, when I feel it's going to type it faster than me (e.g. a lot of scaffolding and filler code).
Otherwise, what's the point?
I feel the whole industry is having its "Look ma! no hands!" moment.
Time to mature up, and stop acting like sailing is going where the seas take you.
If you don’t trust the agent to do it right in the first place why do you trust them to implement your tests properly? Nothing but turtles here.
I don't think AI will ever solve this problem. It will never be more than a tool in the arsenal. Probably the best tool, but a tool nonetheless.
The thing is, LLMs are probabilistic, and the probability of an incorrect final output grows with both the number of turns taken and the number of agents run simultaneously. In practice this means you almost never end up with the desired result after a long loop.