I guess the main question I'm left with after reading this is "what good is a prototype, then?" In a few of the companies I've worked at there was a quarterly or biannual ritual called "hack week" or "innovation week" or "hackathon" where engineers form small teams and try to bang out a pet project super fast. Sometimes these projects get management's attention, and get "promoted" to a product or feature. Having worked on a few of these "promoted" projects, to the last they were unmitigated disasters. See, "innovation" doesn't come from a single junior engineer's 2AM beer and pizza fueled fever dream. And when you make the mistake of believing otherwise, what seemed like some bright spark's clever little dream turns into a nightmare right quick. The best thing you can do with a prototype is delete it.
Completely agree, I hate the “hackathon” for so many reasons, guess I’ll vent here too. All of this from the perspective of one frustrated software engineer in web tech.
First of all, if you want innovation, why are you forcing it into a single week? You very likely have smart people with very good ideas, but they’re held back by your number-driven bullshit. These orgs actively kill innovation by reducing talent to quantifiable rows of data.
A product cobbled together from shit prototype code very obviously stands out. It has various pages that don't quite look/work the same; cross-functional things that "work everywhere else" don't in some parts.
It rewards only the people who make good presentations, or who pick the "current hype thing" to work on. Occasionally something good that addresses real problems at least gets mentioned, but the hype thing will always win (if judged by your SLT).
Shame on you if the slop prototype is handed off to some team other than the hackathon presenters. Presenters take all the promotion points, then implementers have to sort out a bunch of bullshit code, very likely being told to just ship the prototype: "it works, you idiots, I saw it in the demo, just ship it." Which is so incredibly short-sighted.
I think the depressing truth is that your executives know it's all cobbled-together bullshit, but that it will sell anyway, so why invest time making it actually good? They all have their golden parachutes; what do they care about the suckers stuck on-call for the house of cards they were forced to build, despite possessing the talent to make it stable? All this stupidity happens over and over again, not because it is wise, or even the best way to do this; the truth is just a flaccid "eh, it'll work though, fuck it, let's get paid."
You touched on this but to expand on "numbers driven bullshit" a bit, it seems to me the biggest drag on true innovation is not quantifiability per se but instead how organizations react to e.g. having some quantifiable target. It leaves things like refactoring for maintainability or questioning whether a money-making product could be improved out of reach. I've seen it happen multiple times where these two forces conspire to arrive at the "eh, fuck it" place--like the code is a huge mess and difficult to work on, and the product is "fine" in that it's making revenue although customers constantly complain about it. So instead of building the thing customers actually want in a sustainable way we just... do nothing.
We have to do better than that before congratulating ourselves about all the wonderful "innovation".
> That said it is a living demo that can help make an idea feel more real. It is also enormously fun. Think of it as a delightful movie set.
[pedantry] It bothers me that the photo for "think of prototype PRs as movie sets" is clearly not a movie set but rather the set of the TV show Seinfeld. Anyone who watched the show would immediately recognize Jerry's apartment.
I'm not sure what you mean. Those two photos are very different. The floors are entirely different, the tables are entirely different, one of the chairs/couches is different, even the intercom and light switch are different.
2 months ago, after I started using Claude Code on my side project, within the space of days, I went from not allowing a single line of AI code into my codebase to almost 100% AI-written code. It basically codes in my exact style and I know ahead of time what code I expect to see so reviewing is really easy.
I cannot justify to myself writing code by hand when there is literally no difference in the output from how I would have done it myself. It might as well be reading my mind, that's what it feels like.
For me, vibe coding is essentially a 5x speed increase with no downside. I cannot believe how fast I can churn out features. All the stuff I used to type out by hand now seems impossibly boring. I just don't have the patience to hand-code anymore.
I've stuck to vanilla JavaScript because I don't have the patience to wait for the TypeScript transpiler. TS iteration speed is too slow. By the time it finishes transpiling, I can't even remember what I was trying to do. So you bet I don't have the patience to write by hand now. I really need momentum (fast iteration speed) when I code and LLMs provide that.
I don't mean to question you personally, after all this is the internet, but comments like yours do make the reader think: if he has 5x'ed his coding, was he any good to begin with? I guess what I'm saying is, without knowing your baseline skill level, I don't know whether to be impressed by your story. Have you become a super-programmer, or is it just cleaning up stupid stuff that you shouldn't have been doing in the first place? If someone is already a clear-headed, efficient, experienced programmer, would that person see anywhere near the benefits you have? Again, this isn't a slight on you personally; it's just that a reader doesn't really know how to place your experience into context.
> Some go so far as to say “AI not welcome here” find another project.
This feels extremely counterproductive and fundamentally unenforceable to me.
But it's trivially enforceable. Accept PRs from unverified contributors, look at them for inspiration if you like, but don't ever merge one. It's probably not a satisfying answer, but if you want or need to ensure your project hasn't been infected by AI generated code you need to only accept contributions from people you know and trust.
I wouldn't call it "vibe coded slop"; the models are getting way better and I can work with my engineers a lot faster.
I am the founder and a product person so it helps in reducing the number of needed engineers at my business. We are currently doing $2.5M ARR and the engineers aren't complaining, in fact it is the opposite, they are actually more productive.
We still prioritize architecture planning, testing and having a CI, but code is getting less and less important in our team, so we don't need many engineers.
> code is getting less and less important in our team, so we don't need many engineers.
That's a bit reductive. Programmers write code; engineers build systems.
I'd argue that you still need engineers for architecture, system design, protocol design, API design, tech stack evaluation & selection, rollout strategies, etc, and most of this has to be unambiguously documented in a format LLMs can understand.
While I agree that the value of code has decreased now that we can generate and regenerate code from specs, we still need a substantial number of experienced engineers to curate all the specs and inputs that we feed into LLMs.
We can (unreliably) write more code in natural English now. At its core it's the same thing: detailed instructions telling the computer what it should do.
They can focus on other things that are more impactful for the business rather than just slinging code all day; they can actually look at design and the product!
Maximum headcount for engineers is around 7, no more than that now. I used to have 20, but with AI we don't need that many for our size.
Yeah or start my own company since they're basically doing everything now it sounds like.
Someone barking orders at you to generate code because they are too stupid to be able to read it is not very fun.
These people hire developers because their own brains are inferior, and now they think they can replace them because they don't want to share the wages with them.
Management may see a churn of a few years as acceptable. If management makes $1M in that time, they won't care. "Once I get mine, I don't care."
Like my old CEO who moved out of state to avoid a massive tax bill, got his payout, became hands off, and let the company slide to be almost worthless.
Or at my current company, there is no care for quality since we're just going to launch a new generation of product in 3 years. We're doing things here that will CAUSE a ground-up rewrite. We're writing code that relies on undocumented features of the MCU, where the vendor has said 'we cannot guarantee it will always behave this way'. But our management cycles out every 3-4 years. Just enough time to kill the old, champion the new, get their bonus, and move on.
Bonuses are handed out every January. Like clockwork there's between 3-7 directors and above who either get promoted or leave in February.
I don't see how any business person would see any value in engineering that extends past their tenure. They see value in launching/delivering/selling, and are rolling the dice that we're JUST able to avoid causing a nationwide outage or bricking every device.
So AI is great... as long as I've 'gotten mine' before it explodes
> Please don't comment on whether someone read an article. "Did you even read the article? It mentions that" can be shortened to "The article mentions that".
Nice Jewish word, mostly meant to mock. Why would I care what a plugin that I don't even see in use has to say to my face (since I had to read this with all the interpretation potential and receptiveness available)? It's the same kind of inserted judgment that lingers, similar to "Yes, I will judge you if you use AI".
There’s nothing wrong with judgment. Judging someone’s character based on whether they use generative “AI” is a valid practice. You may not like being judged, but that’s another matter entirely.
Yep, if you churn out a bad change - AI or not - I'm going to be more careful with reviewing what you put out*. This is judgement, and it is a good thing - it helps us prioritise what is worth doing, and how much time should be spent on it.
If your attitude is consistently "idk, the AI made it" and you refuse to review it yourself, then: for 1, I am insulted that you think I should pick up your slack, and 2, I'm going to judge you and everything you put out even more harshly - for my own sanity, and to try to keep debt under control.
Judgement isn't a bad thing, it's how we decide good from bad. Pretending that it is, because it uniquely discriminates against bad practice, only proves to me that it's worth doubling down on that judgement.
* - I won't necessarily say/do anything different, but I am more careful - and I do start to look for patterns / ways to help.
This is not judgment so much as it is programming a community, and it perpetuates the opposite of correct judgment, since it inserts an emotion and an opinion into a collective mind and discourse (the headline alone, which might be all that a lot of people scan, is a tone-setter). It's going to cause reactions like the one you just had, at many points in time, used against people who decide to use modern tools. If Discourse wanted to start a discussion that might solve a problem, they could have used a better headline.
What exactly is the problem with having an opinion? People are allowed to have opinions. People working in a field are allowed and even expected to have opinions on that field’s current state and goings-on.
Your opinion, if I had to guess, is that generative “AI” can be good and useful. My opinion is that it’s an insult to humanity that causes considerable harm and should not be used. These are both valid opinions to have, although they disagree with each other.
Don’t fall into the trap of “I’m objectively correct, everyone else just has opinions”.
>There’s nothing wrong with judgment. Judging someone’s character based on whether they use generative “AI” is a valid practice. You may not like being judged, but that’s another matter entirely.
You and I know that using AI is a metric to consider when judging ability and quality.
The difference is that it's not judgment but a broadcast, an announcement.
In this case a snotty one from Discourse.
I mention that it lingers because I think that is a real psychological effect that happens.
Small announcements like this carry over into the future and flood any evaluation of yourself, which can be described as torture and sabotage, since it has an effect on the decisions you make, sometimes destroying things.
This is a problem everywhere now, and not just in code. It now takes zero effort to produce something, whether code or a work plan or “deep research” and then lob it over the fence, expecting people to review and act upon it.
It’s an extension of the asymmetric bullshit principle IMO, and I think now all workplaces / projects need norms about this.
It feels like reputation / identity are about to become far more critical in determining whether your contribution, of whatever form, even gets considered.
Perhaps it's time for Klout to rise from the ashes?
My music/YouTube algos are ruined because when I flag that I don't like the 100 AI songs/videos it presents me each day, the algos take it as me no longer liking those genres. Between me downrating AI music and AI history videos, YouTube now gives me about half a page of recommendations and then gives up. I'm now punished (my experience is worse) because YouTube is fine with hosting so much AI slop content and I chose to downrate it and try to curate it out of my feed. The way YouTube works today, it punishes you (or tries to train you not to do it) for flagging 'don't recommend channel' when recommended a channel of AI slop. Flag AI and YouTube will degrade your algo recommendations.
Anyone else feel like we're cresting the LLM coding hype curve?
Like a recognition that there's value there, but we're passing the frothing-at-the-mouth stage of replacing all software engineers?
My opinion swings between hype and hate every day. Yesterday all suggestions / edits / answers were hallucinated garbage, and I was ready to remove the Copilot plugin altogether. Today I was stuck on a really annoying problem for hours and hours. For shits and giggles I just gave Claude a stacktrace and a description and let it go ham. It produced an amazingly accurate thought train and found my issue, which was not what I was expecting at all.
I still don't see how it's useful for generating features and codebases, but as a rubber ducky it ain't half bad.
I've been skeptical about LLMs being able to replace humans in their current state (which has gotten marginally better in the last 18 months), but let us not forget that GPT-3.5 (the first truly useful LLM) was only 3 years ago. We aren't even 10 years out from the initial papers about GPTs.
> was only 3 years ago
That's one way of looking at it.
Another way to look at it is GPT3.5 was $600,000,000,000 ago.
Today's AIs are better, but are they $600B better? Does it feel like that investment was sound? And if not, how much slower will future investments be?
Another way to look at the $600B of improvement is to ask whether they actually used the $600B to improve it.
This just smells like classic VC churn and burn. You are given it and have to spend it. And most of that money wasn't actually money, it was free infrastructure. Who knows the actual "cost" of the investments, but my uneducated brain (while trying to make a point) would say it is 20% of the stated value of the investments. And maybe GPT-5 + the other features OpenAI has enabled are $100B better.
> And most of that money wasn't actually money, it was free infrastructure.
But everyone who chipped in $$$ is counting it against these top-line figures, as stock prices are based on $$$ specifically.
> but my uneducated brain (while trying to make a point) would say it is 20% of the stated value of the investments
An 80% drop in valuations as people snap back to reality would be devastating to the market. But that's the implication of your line here.
And yet, we're clearly way into the period of diminishing returns.
I'm sure there's still some improvements that can be made to the current LLMs, but most of those improvements are not in making the models actually better at getting the things they generate right.
If we want more significant improvements in what generative AI can do, we're going to need new breakthroughs in theory or technique, and that's not going to come by simply iterating on the transformers paper or throwing more compute at it. Breakthroughs, almost by definition, aren't predictable, either in when or whether they will come.
I feel like we need a different programming paradigm that's more suited to LLMs' strengths, one that enables a new kind of application. I.e., think of an application that's more analog, with higher tolerance for different kinds of user input.
A different way to say it: imagine if programming a computer were more like training a child or a teenager to perform a task that requires a lot of human interaction, where that interaction involves presenting data and making drawings.
I was extremely skeptical at the beginning, and therefore critical of what was possible as my default stance. Despite all that, the latest iterations of CLI agents which attach to LSPs and scan codebase context have been surprising me in a positive direction. I've given them tasks that require understanding the project structure and they've been able to do so. So for me the trajectory has been from skeptic to big proponent of using them, of course with all the caveats that at the end of the day, it is my code which will be pushed to prod. I never went through the trough of disillusionment, but am arriving at productivity and finding it great.
Well, when MS gives OpenAI free use of their servers and OpenAI calls it a $10 billion investment, then OpenAI uses up their tokens and MS books $10 billion in revenue, I think so, yes.
When people talk about the “AI bubble popping” this is what they mean. It is clear that AI will remain useful, but the “singularity is nigh” hype is faltering and the company valuations based on perpetual exponential improvement are just not realistic. Worse, the marginal improvements are coming at ever higher resource requirements with each generation, which puts a soft cap on how good an AI can be and still be economical to run.
What are you basing that on? Haiku 4.5 just came out and beats Sonnet 4 at a third the cost.
GPT-5 and GPT-5-codex are significantly cheaper than the o-series full models from OpenAI, but outperform them.
I won't get into whether the improvements we're seeing are marginal or not, but whether or not that's the case, these examples clearly show you can get improved performance with decreasing resource cost as techniques advance.
>When people talk about the “AI bubble popping” this is what they mean.
You mean that's what they have conceded so far to be what they mean. With every new model they start to see that they have to give up a little more.
Maybe, maybe not; it's hard to tell from articles like this from OSS projects what is generally going on, especially with corporate work. There is no such rhetoric at $job, but also, the massive AI investment seemingly has yet to move the needle. If it doesn't, they'll likely fire a bunch of people again and continue.
I think that happened when GPT-5 was released and pierced OpenAI's veil. While it's not a bad model, we found out exactly what Mr. Altman's words are worth.
It feels that way to me, too—starting to feel closer to maturity. Like Mr. Saffron here, saying “go ham with the AI for prototyping, just communicate that as a demo/branch/video instead of a PR.”
It feels like people and projects are moving from a pure “get that slop out of here” attitude toward more nuance, more confidence articulating how to integrate the valuable stuff while excluding the lazy stuff.
It's been less than a year and agents have gone from patently useless to very useful if used well.
"Useful if used well" as a thought has gone from meaning a "replace all developers" machine, to a "fresh out of college junior with perfect memory" bot, to a "will save a little typing if you type out all of your thoughts and babysit it" text box.
I get value from it everyday like a lawyer gets value from LexisNexis. I look forward to the vibe coded slop era like a real lawyer looks forward to a defendant with no actual legal training that obviously did it using LexisNexis.
The trajectory is a replace all developers trajectory, you're just in the middle of the curve wondering why you're not at the end of it.
The funny thing is you're clearly within the hyperbolic pattern that I've described. It could plateau, but denying that you're there is incorrect.
Where are you employed?
Why do you ask a stranger on the internet for PII?
I'm genuinely curious as to what's going through your mind and if people readily give you this.
I suspect you're asking dishonestly but I can't simply assume that.
Every single one of your posts from the past two weeks is hyping up AI or getting downvoted for being highly uninformed about every topic that isn't LLM-hype related. You talk like a marketer of AI, someone who works adjacent to the industry with a dependency on these tools being bought.
> Please don't post insinuations about astroturfing, shilling, brigading, foreign agents, and the like. It degrades discussion and is usually mistaken. If you're worried about abuse, email hn@ycombinator.com and we'll look at the data.
You should delete this comment.
> “I am closing this but this is interesting, head over to our forum/issues to discuss”
I really like the way Discourse uses "levels" to slowly open up features as new people interact with the community, and I wonder if GitHub could build in a way of allowing people to only be able to open PRs after a certain amount of interaction, too (for example, you can only raise a large PR if you have spent enough time raising small PRs).
This could of course be abused and/or lead to unintended restrictions (e.g. a small change in lots of places), but that's also true of Discourse and it seems to work pretty well regardless.
Mailing lists are used as a filter to raise the barrier to entry, to prevent people from contributing code that they have no intention of maintaining, leaving that to the project owners. GitHub, for better or worse, has made the barrier to entry much, much lower and made it significantly easier for people to propose changes and then disappear.
Essay is way more interesting than the title, which doesn't actually capture it.
The title seems perfectly engineered to get upvotes from people who don't read the article, which puts the article in front of more people who would actually read it (which is good because the article is, as you say, very interesting and worth sharing).
I don't like it but I can hardly blame them.
Agreed. Sometimes such rage/engagement-bait titles get changed on HN, but it's risky to do as a submitter because it's unclear when you are "allowed" to change the title. And I suppose if you want upvotes, why would you change the ragebait title?
Usually engagement-bait titles are cover for uninteresting articles, but yeah in this case it's way more interesting than the title to me anyway.
I guess it makes it even more obvious when people are discussing the title instead of the actual piece, which is routine on HN but not always obvious! Although to be fair, the title does describe one part of the piece - the part with the least original insight.
> You can usually tell a prototype that is pretending to be a human PR, but a real PR a human makes with AI assistance can be indistinguishable.
A couple of weeks ago I needed to stuff some binary data into a string, in a way where it wouldn't be corrupted by whitespace changes.
I wrote some Rust code to generate the string. After I typed "}" to end the method: (1) Copilot suggested a 100% correct method to parse the string back to binary data, and then (2) suggested a 100% correct unit test.
I read both methods, and they were identical to what I would write. It was as if Copilot could read my brain.
BUT: If I relied on Copilot to come up with the serialization form, or even know that it needed to pick something that wouldn't be corrupted by whitespace, it might have picked something completely wrong, that didn't meet what the project needed.
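For the curious, here's a minimal sketch of the kind of round trip I mean. The actual format and names in my project were different; hex encoding is just a hypothetical stand-in for "binary data in a string that survives whitespace changes", and this is not the code Copilot produced:

```rust
/// Encode bytes as a hex string: whitespace-safe, survives reformatting.
fn to_hex_string(data: &[u8]) -> String {
    data.iter().map(|b| format!("{:02x}", b)).collect()
}

/// Parse the hex string back to bytes (the kind of inverse Copilot suggested).
fn from_hex_string(s: &str) -> Result<Vec<u8>, String> {
    if s.len() % 2 != 0 {
        return Err("odd-length hex string".to_string());
    }
    (0..s.len())
        .step_by(2)
        .map(|i| u8::from_str_radix(&s[i..i + 2], 16).map_err(|e| e.to_string()))
        .collect()
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn round_trips_binary_data() {
        let data = vec![0x00, 0xff, 0x10, 0x42];
        assert_eq!(from_hex_string(&to_hex_string(&data)).unwrap(), data);
    }
}
```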
If one claims to be able to write good code with LLMs, it should be just as easy to write comprehensive e2e tests. If you don't hold your code to a high testing standard, then you were always going off 'vibes', whether they came from a silicon neural network or your human meatware biases.
Reviewing test code is arguably harder than reviewing implementation code, because tests are enumerated success and failure scenarios. Sometimes the LOC of the tests is an order of magnitude larger than that of the implementation code.
The biggest place I've seen AI created code with tests produce a false positive is when a specific feature is being tested, but the test case overwrites a global data structure. Fixing the test reveals the implementation to be flawed.
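A contrived sketch of that failure mode, with made-up names: the test only passes because it overwrites the shared global, so the default configuration the implementation actually runs with is never exercised, and fixing the test exposes the bug:

```rust
use std::sync::Mutex;

// Hypothetical global configuration the implementation reads at runtime.
static FEATURE_FLAGS: Mutex<Vec<String>> = Mutex::new(Vec::new());

fn discount_enabled() -> bool {
    FEATURE_FLAGS.lock().unwrap().iter().any(|f| f.as_str() == "discount")
}

fn apply_discount(price: u32) -> u32 {
    // Flaw: with the real (empty) default flags the discount never applies.
    if discount_enabled() { price - price / 10 } else { price }
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn applies_ten_percent_discount() {
        // The test overwrites the global data structure, so it passes for the
        // wrong reason; restore the real defaults and the flaw shows up.
        *FEATURE_FLAGS.lock().unwrap() = vec!["discount".to_string()];
        assert_eq!(apply_discount(100), 90);
    }
}
```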
Now imagine you get rewarded for shipping new feature and test code, but are derided for refactoring old code. The person who goes in to fix the AI slop is frowned upon, while the AI slop driver gets recognition for being a great coder. This dynamic, caused by AI coding tools, is creating perverse workplace incentives.
Shouldn't there be guidelines for open source projects where it is clearly stipulated that code submitted for review must follow the project's code format and conventions?
This is the thought that I always have whenever I see coding standards mentioned: not only should there be standards, they should be enforced by tooling.
That being said, a person should feel free to do what they want with their code. It's somewhat tough to justify the work of setting up that infrastructure on small projects, but AI PRs aren't likely to be a big issue for small projects.
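As one hedged illustration of "enforced by tooling": real projects would usually reach for rustfmt/clippy or their language's equivalent in CI, but even a convention check written as an ordinary test keeps reviewers out of the formatting business. The rule and paths below are made up for illustration:

```rust
// A tiny convention check that runs with the normal test suite and fails the
// build when a source file violates a simple rule (trailing whitespace here).
#[cfg(test)]
mod convention_tests {
    use std::fs;

    #[test]
    fn no_trailing_whitespace_in_sources() {
        for entry in fs::read_dir("src").expect("src directory exists") {
            let path = entry.expect("readable dir entry").path();
            if path.extension().and_then(|e| e.to_str()) == Some("rs") {
                let text = fs::read_to_string(&path).expect("readable source file");
                for (i, line) in text.lines().enumerate() {
                    assert!(
                        !line.ends_with(' ') && !line.ends_with('\t'),
                        "{}:{} has trailing whitespace",
                        path.display(),
                        i + 1
                    );
                }
            }
        }
    }
}
```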
> that code submitted for review must follow the project's code format and conventions
...that's just scratching the surface.
The problem is that LLMs make mistakes that no single human would make, and coding conventions should anyway never be the focus of a code review and should usually be enforced by tooling.
E.g. when reading/reviewing other people's code you tune into their brain and thought process - after reading a few lines of (non-trivial) code you know subconsciously what kind of 'programming character' this person has and what types of problems to expect and look for.
With LLM generated code it's like trying to tune into a thousand brains at the same time, since the code is a mishmash of what a thousand people have written and published on the internet. Reading a person's thought process via reading their code doesn't work anymore, because there is no coherent thought process.
Personally I'm very hesitant to merge PRs into my open source projects that are more than small changes of a couple dozen lines at most, unless I know and trust the contributor to not fuck things up. E.g. for the PRs I'm accepting I don't really care if they are vibe-coded or not, because the complexity for accepted PRs is so low that the difference shouldn't matter much.
Also, there are two main methods of reviewing. If you're in an org, everyone is responsible for their own code, so review is mostly about being aware of stuff and helping catch mistakes. In an OSS project, everything is under your responsibility, and there's a need to vet code closely. LGTM is not an option.
As if people read guidelines. Sure, they're good to have so you can point to them when people violate them, but people (in general) will not read them by default before contributing.
I’ve found LLM coding agents to be quite good at writing linters…
In a perfect world people would read and understand contribution guidelines before opening a PR or issue.
Alas…
Code format and conventions are not the problem. It's the complexity of the change without testing, thinking, or otherwise having ownership of your PR.
Some people will absolutely just run something, let the AI work like a wizard and push it in hopes of getting an "open source contribution".
They need to understand due diligence and to reduce the overhead on maintainers, so that maintainers don't have to review things before it's really needed.
It's a hard balance to strike, because you do want to make it easy on new contributors, but this is a great conversation to have.
The title doesn't do justice to the content.
I really liked the paragraph about LLMs being "alien intelligence":

> Many engineers I know fall into 2 camps, either the camp that find the new class of LLMs intelligent, groundbreaking and shockingly good. In the other camp are engineers that think of all LLM generated content as "the emperor's new clothes", the code they generate is "naked", fundamentally flawed and poison.

> I like to think of the new systems as neither. I like to think about the new class of intelligence as "Alien Intelligence". It is both shockingly good and shockingly terrible at the exact same time.

> Framing LLMs as "Super competent interns" or some other type of human analogy is incorrect. These systems are aliens and the sooner we accept this the sooner we will be able to navigate the complexity that injecting alien intelligence into our engineering process leads to.

It's a comparison I find compelling. The way they produce code and the way you have to interact with them really does feel "alien", and when you start humanizing them you start getting emotional when interacting with them, and that's not right. I mean, I get emotional and frustrated even when good old deterministic programs misbehave and there's some bug to find and squash or work around, but LLM interactions can bring the game to a whole new level. So we need to remember they are "alien". Some movements expected alien intelligence to arrive in the early 2020s; they might have been on the mark after all ;)
Isn't the intelligence of every other person alien to ourselves? The article ends with a need to "protect our own engineering brands" but how is that communicated? I found this [https://meta.discourse.org/t/contributing-to-discourse-devel...] which seems woefully inadequate. In practice, conventions are communicated through existing code. Are human contributors capable of grasping an "engineering brand" by working on a few PRs?
This is why at a fundamental level, the concept of AGI doesn't make a lot of sense. You can't measure machine intelligence by comparing it to a human's. That doesn't mean machines can't be intelligent...but rather that the measuring stick cannot be an abstracted human being. It can only be the accumulation of specific tasks.
I’m reminded of Dijkstra: “The question of whether machines can think is about as relevant as the question of whether submarines can swim.”
These new submarines are a lot closer to human swimming than the old ones were, but they’re still very different.
The problem with AI isn't new, it's the same old problem with technology: computers don't do what you want, only what you tell them.

A lot of PRs can be judged by how well they are described and justified, because the code itself isn't that important; it's the problem that you are solving with it that is.

People are often great at defining problems, AIs less so IMHO. Partially because they simply have no understanding, partially because they over-explain everything to a point where you just stop reading, and so you never get to the core of the problem. And even if you do, there's a good chance the AI misunderstood the problem and the solution is wrong in some more or less subtle way.

This is further made worse by the sheer overconfidence of AI output, which quickly erodes any trust that they did understand the problem.
> As engineers it is our role to properly label our changes.
I've found myself wanting line-level blame for LLMs. If my teammate committed something that was written directly by Claude Code, it's more useful to me to know that than to have the blame assigned to the human through the squash+merge PR process.
Ultimately somebody needs to be on the hook. But if my teammate doesn't understand it any better than I do, I'd rather that be explicit and avoid the dance of "you committed it, therefore you own it," which is better in principle than in practice IMO.
A bit of a brutal title for what's a pretty constructive and reasonable article. I like the core: AI-produced contributions are prototypes, belong in branches, and require transparency and commitment as a path to being merged.
Is it possible that some projects could benefit from triage volunteers?
There are plenty of open source projects where it is difficult to get up to speed with the intricacies of the architecture, which limits the ability of talented coders to contribute on a small scale.
There might be merit in having a channel for AI contributions that casual helpers can assess to see if they pass a minimum threshold before passing on to a project maintainer to assess how the change works within the context of the overall architecture.
It would also be fascinating to see how good an AI would be at assessing the quality of a set of AI generated changes absent the instructions that generated them. They may not be able to clearly identify whether the change would work, but can they at least rank a collection of submissions to select the ones most worth looking at?
At the very least, the pile of PRs counts as data about things people wanted to do. Even if the code is completely unusable, placing it into a pile somewhere might make it minable for the intentions of erstwhile contributors.
Maybe we need open source credit scores. PRs from talented engineers with proven track records of high quality contributions would be presumed good enough for review. Unknown, newer contributors could have a size limit on their PRs, with massive PRs rejected automatically.
The Forgejo project has been gently trying to redirect new contributors into fixing bugs before trying to jump into the project to implement big features (https://codeberg.org/forgejo/discussions/issues/337). This allows a new contributor to get into the community, get used to working with the codebase, do something of clear value... but for the project a lot of it is about establishing reputation.
Will the contributor respond to code-review feedback? Will they follow up on work? Will they work within the code of conduct and learn the contributor guidelines? All great things to figure out on small bugs, rather than after the contributor has done significant feature work.
We don't need more KYC, no.
Reputation building is not KYC. It is actually the thing that enables anonymity to work in a more sophisticated way.
>That said, there is a trend among many developers of banning AI. Some go so far as to say “AI not welcome here” find another project.
>This feels extremely counterproductive and fundamentally unenforceable to me. Much of the code AI generates is indistinguishable from human code anyway. You can usually tell a prototype that is pretending to be a human PR, but a real PR a human makes with AI assistance can be indistinguishable.
Isn't that exactly the point? Doesn't this achieve exactly what the whole article is arguing for?
A hard "No AI" rule filters out all the slop, and all the actually good stuff (which may or may not have been made with AI) makes it in.
When the AI assisted code is indistinguishable from human code, that's mission accomplished, yeah?
Although I can see two counterarguments. First, it might just be Covert Slop. Slop that goes under the radar.
And second, there might be a lot of baby thrown out with that bathwater. Stuff that was made in conjunction with AI, contains a lot of "obviously AI", but a human did indeed put in the work to review it.
I guess the problem is there's no way of knowing that? Is there a Proof of Work for code review? (And a proof of competence, to boot?)
> I guess the problem is there's no way of knowing that? Is there a Proof of Work for code review?
In a live setting, you could ask the submitter to explain various parts of the code. Async, that doesn’t work, because presumably someone who used AI without disclosing that would do the same for the explanation.
Based on interviews I've run, people who use AI heavily have no problem also using it during a live conversation to do their thinking for them there, too.
Personally, I would not contribute to a project that forced me to lie.
And from the point of view of the maintainers, it seems a terrible idea to set up rules with the expectation that they will be broken.
...YYyyeah, that says a lot about you, and nothing about the project in question.
"Forced you to lie"?? Are you serious?
If the project says "no AI", and you insist on using AI, that's not "forcing you to lie"; that's you not respecting their rules and choosing to lie, rather than just go contribute to something else.
Well, instead of saying "No AI" while accepting that people will lie undetectably, why not just say "AI only when you spend the time to turn it into a real, reviewed PR, which looks like X, Y, and Z", giving some actual tips on how to use AI acceptably? Which is what OP suggests.
The way we do it is to use AI to review the PR before a human reviewer sees it. Obvious errors, inconsistent patterns, weirdness, etc. are flagged before it goes any further. "Vibe coded" slop usually gets caught, but "vibe engineered" surgical changes that adhere to common patterns and standards, and have tests, get to be seen by a real live human for their normal review.
It's not rocket science.
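For what it's worth, the whole pre-review step can be a single CI script. Here's a minimal sketch, assuming the OpenAI Python client, a placeholder model name, and the PR diff piped in on stdin; the human review afterwards stays exactly as it was.

    import sys
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    GATE_PROMPT = (
        "Review this diff as a strict pre-screen before human review. "
        "List any obvious errors, inconsistencies with the surrounding patterns, "
        "dead code, or missing tests. If nothing is blocking, reply with exactly PASS.\n\nDIFF:\n"
    )

    def main() -> int:
        diff = sys.stdin.read()
        reply = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[{"role": "user", "content": GATE_PROMPT + diff[:20000]}],
        )
        verdict = (reply.choices[0].message.content or "").strip()
        if verdict == "PASS":
            return 0
        print(verdict)  # surfaced in the CI log for the author to fix
        return 1        # non-zero exit fails the check before a human looks

    if __name__ == "__main__":
        sys.exit(main())

Wire it up as something like: git diff origin/main...HEAD | python precheck.py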
Do you work at a profitable company?
An idea occurred to me. What if:
1. Someone raises a PR
2. Entry-level maintainers skim through it and either reject or pass higher up
3. If the PR has sufficient quality, it gets reviewed by someone who actually has merge permissions (a rough sketch of this hand-off follows)
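The hand-off in step 3 is a small API call. A hypothetical sketch against the GitHub REST API, where the triage/passed label and the MAINTAINERS list are made up for illustration and GITHUB_TOKEN is assumed to be in the environment:

    import os
    import requests

    API = "https://api.github.com"
    HEADERS = {"Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
               "Accept": "application/vnd.github+json"}
    MAINTAINERS = ["maintainer-with-merge-rights"]  # hypothetical login

    def escalate(owner: str, repo: str, number: int) -> None:
        # Mark the PR as having passed first-pass triage...
        requests.post(f"{API}/repos/{owner}/{repo}/issues/{number}/labels",
                      json={"labels": ["triage/passed"]},
                      headers=HEADERS, timeout=30).raise_for_status()
        # ...and request review from someone who can actually merge.
        requests.post(f"{API}/repos/{owner}/{repo}/pulls/{number}/requested_reviewers",
                      json={"reviewers": MAINTAINERS},
                      headers=HEADERS, timeout=30).raise_for_status()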
I guess the main question I'm left with after reading this is "what good is a prototype, then?" In a few of the companies I've worked at there was a quarterly or biannual ritual called "hack week" or "innovation week" or "hackathon" where engineers form small teams and try to bang out a pet project super fast. Sometimes these projects get management's attention, and get "promoted" to a product or feature. Having worked on a few of these "promoted" projects, to the last they were unmitigated disasters. See, "innovation" doesn't come from a single junior engineer's 2AM beer and pizza fueled fever dream. And when you make the mistake of believing otherwise, what seemed like some bright spark's clever little dream turns into a nightmare right quick. The best thing you can do with a prototype is delete it.
Completely agree, I hate the “hackathon” for so many reasons, guess I’ll vent here too. All of this from the perspective of one frustrated software engineer in web tech.
First of all, if you want innovation, why are you forcing it into a single week? You very likely have smart people with very good ideas, but they’re held back by your number-driven bullshit. These orgs actively kill innovation by reducing talent to quantifiable rows of data.
A product cobbled together from shit prototype code very obviously stands out. It has various pages that don't quite look/work the same, and cross-functional things that "work everywhere else" don't work in some parts.
It rewards only the people who make good presentations, or pick the “current hype thing” to work on. Occasionally something good that addresses real problems is at least mentioned but the hype thing will always win (if judged by your SLT)
Shame on you if the slop prototype is handed off to some other team than the hackathon presenters. Presenters take all the promotion points, then implementers have to sort out a bunch of bullshit code, very likely being told to just ship the prototype “it works you idiots, I saw it in the demo, just ship it.” Which is so incredibly short sighted.
I think the depressing truth is your executives know it's all cobbled-together bullshit, but that it will sell anyway, so why invest time making it actually good? They all have their golden parachutes; what do they care about the suckers stuck on-call for the house of cards they were forced to build, despite possessing the talent to make it stable? All this stupidity happens over and over again, not because it is wise, or even the best way to do this; the truth is just a flaccid "eh, it'll work though, fuck it, let's get paid."
You touched on this but to expand on "numbers driven bullshit" a bit, it seems to me the biggest drag on true innovation is not quantifiability per se but instead how organizations react to e.g. having some quantifiable target. It leaves things like refactoring for maintainability or questioning whether a money-making product could be improved out of reach. I've seen it happen multiple times where these two forces conspire to arrive at the "eh, fuck it" place--like the code is a huge mess and difficult to work on, and the product is "fine" in that it's making revenue although customers constantly complain about it. So instead of building the thing customers actually want in a sustainable way we just... do nothing.
We have to do better than that before congratulating ourselves about all the wonderful "innovation".
> That said it is a living demo that can help make an idea feel more real. It is also enormously fun. Think of it as a delightful movie set.
[pedantry] It bothers me that the photo for "think of prototype PRs as movie sets" is clearly not a movie set but rather the set of the TV show Seinfeld. Anyone who watched the show would immediately recognize Jerry's apartment.
It's not the set of the TV show, I believe, but a recreation.
https://nypost.com/2015/06/23/you-can-now-visit-the-iconic-s...
It looks a bit different wrt. the stuff on the fridge and the items in the cupboard
I'm not sure what you mean. Those two photos are very different. The floors are entirely different, the tables are entirely different, one of the chairs/couches is different, even the intercom and light switch are different.
In any case, though, neither one is a movie set.
I think we agree: it looks like the Seinfeld set, but it's not the original set, just something that looks very similar.
We're fixing this slop problem: engineers write rules that are enforced on PRs. It fixes the problem pretty well so far.
2 months ago, after I started using Claude Code on my side project, within the space of days, I went from not allowing a single line of AI code into my codebase to almost 100% AI-written code. It basically codes in my exact style and I know ahead of time what code I expect to see so reviewing is really easy.
I cannot justify to myself writing code by hand when there is literally no difference in the output from how I would have done it myself. It might as well be reading my mind, that's what it feels like.
For me, vibe coding is essentially a 5x speed increase with no downside. I cannot believe how fast I can churn out features. All the stuff I used to type out by hand now seems impossibly boring. I just don't have the patience to hand-code anymore.
I've stuck to vanilla JavaScript because I don't have the patience to wait for the TypeScript transpiler. TS iteration speed is too slow. By the time it finishes transpiling, I can't even remember what I was trying to do. So you bet I don't have the patience to write by hand now. I really need momentum (fast iteration speed) when I code and LLMs provide that.
I don't mean to question you personally (after all, this is the internet), but comments like yours do make the reader think: if he has 5x'ed his coding, was he any good to begin with? I guess what I'm saying is, without knowing your baseline skill level, I don't know whether to be impressed by your story. Have you become a super-programmer, or is it just cleaning up stupid stuff that you shouldn't have been doing in the first place? If someone is already a clear-headed, efficient, experienced programmer, would that person see anywhere near the benefits you have? Again, this isn't a slight on you personally; it's just that a reader doesn't really know how to place your experience into context.
> Some go so far as to say “AI not welcome here” find another project.
This feels extremely counterproductive and fundamentally unenforceable to me.
But it's trivially enforceable. Accept PRs from unverified contributors, look at them for inspiration if you like, but don't ever merge one. It's probably not a satisfying answer, but if you want or need to ensure your project hasn't been infected by AI generated code you need to only accept contributions from people you know and trust.
This is sad. The barrier of entry will be raised extremely high, maybe even requiring some real world personal connections to the maintainer.
Real world personal connections are how we establish trust. At some point you have to be able to trust the people you're collaborating with.
Well...just have AI review the PR to have it highlight the slop
/s
[flagged]
I wouldn't call it "vibe coded slop"; the models are getting way better and I can work with my engineers a lot faster.
I am the founder and a product person so it helps in reducing the number of needed engineers at my business. We are currently doing $2.5M ARR and the engineers aren't complaining, in fact it is the opposite, they are actually more productive.
We still prioritize architecture planning, testing and having a CI, but code is getting less and less important in our team, so we don't need many engineers.
> code is getting less and less important in our team, so we don't need many engineers.
That's a bit reductive. Programmers write code; engineers build systems.
I'd argue that you still need engineers for architecture, system design, protocol design, API design, tech stack evaluation & selection, rollout strategies, etc, and most of this has to be unambiguously documented in a format LLMs can understand.
While I agree that the value of code has decreased now that we can generate and regenerate code from specs, we still need a substantial number of experienced engineers to curate all the specs and inputs that we feed into LLMs.
> we can generate and regenerate code from specs
We can (unreliably) write more code in natural english now. At its core it’s the same thing: detailed instructions telling the computer what it should do.
Maybe the code itself is less important now, relative to the specification.
> the engineers aren't complaining, in fact it is the opposite, they are actually more productive.
More productive isn't the opposite of complaining.
I don't hear any either way.
If an engineer complains in the woods and nobody is around to hear them, did they even complain at all?
> and a product person
Tells me all I need to know about your ability for sound judgement on technical topics right there.
What does the spend on AI/LLM services look like per person? Do you track any dev/AI metrics on how usage is going across the company?
> so it helps in reducing the number of needed engineers at my business
> the engineers aren't complaining
You're missing a piece of the puzzle here, Mr business person.
I mean, our MRR and ARR are growing, so we must be doing something right.
WeWork thought that as well.
> reducing the number of needed engineers at my business
> code is getting less and less important in our team
> the engineers aren't complaining
Lays off engineers in favor of AI trained on other engineers' code, then says code is less important and the engineers aren't complaining.
Um, yes?
They can focus on other things that are more impactful in the business rather than just slinging code all day, they can actually look at design and the product!
Maximum headcount for engineers is around 7, no more than that now. I used to have 20, but with AI we don't need that many for our size.
> Maximum headcount for engineers is around 7, no more than that now. I used to have 20,
If I survived having 65% of my colleagues laid off you'd better believe I wouldn't complain in public.
BigTTYGothGF is right
I'd also be looking for a new job that values the skills I've spent a decade building.
I wonder if the remaining engineers' salary increased by the salary of the laid off coworkers'
> I wonder if the remaining engineers' salary increased by the salary of the laid off coworkers'
Never does.
Yeah or start my own company since they're basically doing everything now it sounds like.
Someone barking orders at you to generate code because they are too stupid to be able to read it is not very fun.
These people hire developers because their own brains are inferior, and now they think they can replace them because they don't want to share the wages with them.
Yeah I'm sure they aren't complaining because you'll just lay them off like the others.
I don't see how you could think 7 engineers would love the workload of 20 engineers, extra tooling or not.
Have fun with the tech debt in a few years.
That's the trouble I see with AI and management.
Management may see a churn of a few years as acceptable. If management makes $1M in that time, they won't care. "Once I get mine, I don't care."
Like my old CEO who moved out of state to avoid a massive tax bill, got his payout, became hands off, and let the company slide to be almost worthless.
Or at my current company there is no care for quality, since we're just going to launch a new generation of the product in 3 years. We're doing things here that will CAUSE a ground-up rewrite. We're writing code that relies on undocumented features of the MCU that the vendor has said 'we cannot guarantee it will always behave this way.' But our management cycles out every 3-4 years. Just enough time to kill the old, champion the new, get their bonus, and move on. Bonuses are handed out every January. Like clockwork, there are between 3 and 7 directors and above who either get promoted or leave in February.
I don't see how any business person would see value in engineering that extends past their tenure. They see value in launching/delivering/selling, and they're rolling the dice that we're JUST able to avoid causing a nationwide outage or bricking every device.
So AI is great... as long as I've 'gotten mine' before it explodes
Did you read the full article?
Of course I did, however:
> Please don't comment on whether someone read an article. "Did you even read the article? It mentions that" can be shortened to "The article mentions that".
https://news.ycombinator.com/newsguidelines.html
Nice Jewish word, mostly meant to mock. Why would I care what a plugin that I don't even see in use has to say to my face (since I had to read this with all the interpretive potential and receptiveness available)? It's the same kind of inserted judgment that lingers, similar to "Yes, I will judge you if you use AI".
There’s nothing wrong with judgment. Judging someone’s character based on whether they use generative “AI” is a valid practice. You may not like being judged, but that’s another matter entirely.
Yep, if you churn out a bad change - AI or not - I'm going to be more careful with reviewing what you put out*. This is judgement, and it is a good thing - it helps us prioritise what is worth doing, and how much time should be spent on it.
If your attitude is consistently "idk, the AI made it" and you refuse to review it yourself, then: 1, I am insulted that you think I should pick up your slack, and 2, I'm going to judge you and everything you put out even more harshly, for my own sanity and to try to keep debt under control.
Judgement isn't a bad thing, it's how we decide good from bad. Pretending that it is because it uniquely discriminates against bad practise only proves to me that it's worth doubling down on that judgement.
* - I won't necessarily say/do anything different, but I am more careful - and I do start to look for patterns / ways to help.
This is not judgment so much as it is programming a community, and it perpetuates the opposite of correct judgment, since it inserts an emotion and an opinion into a collective mind and discourse (the headline alone, which might be all that a lot of people scan, sets the tone). It's going to cause reactions like the one you just had, used at many points in time against people who decide to use modern tools. If Discourse wanted to start a discussion that might solve a problem, they could have used a better headline.
What exactly is the problem with having an opinion? People are allowed to have opinions. People working in a field are allowed and even expected to have opinions on that field’s current state and goings-on.
Your opinion, if I had to guess, is that generative “AI” can be good and useful. My opinion is that it’s an insult to humanity that causes considerable harm and should not be used. These are both valid opinions to have, although they disagree with each other.
Don’t fall into the trap of “I’m objectively correct, everyone else just has opinions”.
>There’s nothing wrong with judgment. Judging someone’s character based on whether they use generative “AI” is a valid practice. You may not like being judged, but that’s another matter entirely.
You and I know that using AI is a metric to consider when judging ability and quality.
The difference is that it's not judgment but a broadcast, announcement.
In this case a snotty one from Discourse.
I mention that it lingers because I think that is a real psychological effect that happens.
Small announcements like this carry over into the future and flood any evaluation of yourself, which can feel like torture and sabotage, since it affects the decisions you make, sometimes destroying things.
So people will be more likely to think twice before using Cursor, Copilot et al? Good. I think they should.
Your comparison to torture and sabotage is unfounded to the point of being simply bizarre.
> Nice jewish word
"Slop" doesn't seem to be Yiddish: https://www.etymonline.com/word/slop, and even if it was, so what?
Which word? Slop? I think it is from medieval old English if that is the word you are referring to.