simonsarris a day ago

> I intensely resent them diluting the word “deep” and the word “research” to mean stupidly skimming and summarising existing texts. I would probably use it twice as often if I didn’t have to play along with this degradation.

I suppose that's how I feel about calling LLMs "Artificial Intelligence" - it cheapens and degrades the goal. Broadly it feels like Marketing departments totally have the wheel with no pushback. Maybe it's always been so, though.

  • simianwords a day ago

    I would rather have deep research just be an enhanced RAG that thinks extra hard and uses many sources to answer my question briefly. Instead, what I get is shallow summarisation and a hodgepodge wall of text glued together arbitrarily in the form of a "report" that does not usually answer the question in my prompt.

    • cheevly a day ago

      So build it? God, when did HN become self-entitled unhackers? This site is filled with non-stop comments complaining about how whatever AI product can't do something right out of the box. So hack together a solution that DOES.

      • simianwords a day ago

        I like the attitude, but please remember that deep research from ChatGPT:

        - uses o3, which is not available to the public

        - uses a custom index of the web that an individual cannot possibly recreate

krupan a day ago

This is the best explanation of why I also don't use LLMs, and I'm grateful it has been put into words here. I love every reason and explanation given, except that, for me personally, I am still writing code. It's just that at this point in my career I either don't need help writing the code, or I have run into something so challenging and/or obscure that LLMs are no help at all!

  • thefz a day ago

    It has always been my understanding that the less competent one is, the more one finds LLMs useful.

  • Gigachad 20 hours ago

    I don’t use them to generate code, but I’ve found ChatGPT increasingly helpful for giving me ideas or working through things I’m stuck on.

    I’ve started regularly sharing my ideas with it and either getting back a “yeah that’s correct, here’s why” or a “there’s an alternative which might be better …”

    I’m actually shocked at how good it’s getting. Used it on the weekend to help set up a Terraria server. Yeah, I could have worked out that iptables was dropping the connection and changed the config myself. I’ve done it before. But having ChatGPT walk me through it was way faster.

  • cheevly a day ago

    You don't like using and building tools imbued with intelligence that you can control. Why?

    • krupan 7 hours ago

      The lack of control I feel when using them is one of the big reasons I don't like them.

    • duozerk a day ago

      There is no "intelligence" in LLMs; they're text predictors. As far as I can tell, LLM technology as a whole has limited applications in entertainment, and that's about it. "Hallucinations" (even that term is problematic, as it suggests there's an actual consciousness/person, or the seed of one, there), as well as other "failures" - in fact features inherent to how the tech works - make it irrelevant for basically all other use cases.

      Tech as an industry already had an atrocious reputation but the moment the insanely stupid "AI" bubble pops I suspect it'll get much worse. Ultimately it's pretty deserved, though. At this point the bullshit is so strong one almost wishes for a new AI winter.

alganet a day ago

Most of the times I've used it (other than to test how smart it is) were for encyclopedic prompts: "tell me about ..." kind of stuff.

For that, I discovered that local Kiwix dumps of Wikipedia and Wiktionary from different years are better: more informative, and with more privacy.

Maybe some day there will be a really simple, reproducible, free (as in GPL) model that does just that. Then I would use it very often, despite it not being as powerful as the commercial ones.

Copilot for coding is not bad, but it is too slow for me. I also feel it _wants_ to program in a certain way that sometimes actually gets in the way of my creative process. It's good for companies, where all workers must follow the same style. My coding thing (what I really do for passion, not money) is coming up with new, unexpected stuff, so it doesn't suit me very well.

ringeryless a day ago

Anyway, i enjoyed reading a sane human expressing sane views, for a change.

The Y Combinator sponsorship thing seems to put an insanely large thumb on every scale around here, so much so that the rules sort of allude to one being prohibited from biting any hands that may be feeding anyone up high...

I used to enjoy HN before it became nonstop LLM "news".

"Who's hiring" seems to only involve LLM-related enterprises, which seems rather short-sighted, IMO.

boh 15 hours ago

It's funny how the "I never use a smartphone" argument moves from one technology to another. AI hype is ridiculous, but it's also a technology that will probably be used by other people for things that you've decided not to use it for. It'll either be to their benefit or to their detriment, but they'll have it on their smartphones. Thinking about how it will be used will probably yield more insight than why you don't want to use it.

simianwords a day ago

>I don’t have many medical issues but would happily use it for niggling things or a second opinion. This is as much to do with the weakness of medicine as the strength of AI.

This mirrors my experience with both medicine and LLMs. Most doctors, in my experience, are so full of bias and God complex that it's actually a rational choice to use LLMs to diagnose myself.

alganet a day ago

You know what would be interesting?

Seeding a skeptic profile ("I am like this and that and I distrust LLMs...") into a prompt, asking what that profile's opinion would be about using LLMs, then posting the generated text on HN to watch who goes for a "me too" response.

I guess we humans need to be more aware of confirmation bias. Doing this would lead to a slight increase in all kinds of skepticism for whoever does it, which is good after all.

ringeryless a day ago

I finally found a real human on this website! The default pose here on HN can be rather eye-rollingly bad: "oh, you will FALL BEHIND, everyone is using LLMs and they provide advantages you cannot conceive of, it's an arms race, not a bubble", etc., ad nauseam.

I do not expect LLMs to take over even junior developer roles, let alone lead a coding revolution.

Pattern-ripoff machines likely have their uses, but the grandiose delusions, the FOMO, and the aggressive posturing by the LLM makers and their shrill shill army get really old.

We are not all entirely stupid, ya know.

voidhorse a day ago

I'm in the same boat.

At a broader level, I've been thinking recently about how biased our late-stage view of "intelligence" is: it's so anchored in specifically digital representation at this point. I think this stuff will get better. I also think we'll reach a turning point at which the sheer amount of generated content (slop or not) leads to a total system breakdown, and larger collectives of people will (re)discover the merits of analog living and of more varied forms of knowledge than just "linguistically mediated and available in the network".

I also think about the "dark spots" in the LLM space, and how its dual tendencies to (a) present the mean, and (b) dilute once-sophisticated, interactive learning into the skimming of summaries (the point the OP makes about the cheapening of "deep" and "research") will in fact usher in a sort of second dark age for humanity.

I was reading a bit of philosophy of science recently (Wartofsky) that argues that much of our building of theoretical knowledge happens through interaction. I worry that, as we increasingly mediate our building of thoughts and representations through "summary gists of the past", we diminish this crucial interactive component, and will ultimately be the worse for it.

TZubiri a day ago

>it confidently makes an appalling error within 5 minutes and I completely lose my appetite.

Everyone who uses LLMs has gone through this. Everyone knows they can make mistakes or hallucinate; people use them because of the times they are useful, not because of the times they are not. As with any tool, you can use it wrong; the trick is to learn how to use it to good effect (and the learning curve is short and trivial, it isn't rocket science).

>I like writing so much that reading and improving bad writing can be more effort than doing it myself.

Possibly a misconception based on how a non-user "interacts" with LLMs: by consuming slop. It's such a generic technology that you can use it a million ways; generating text for others to read is just a sliver of how it can be used.

>me already knowing the basics of many many things.

Oh, come off it. You are either aware of your ignorance or ignorant of your ignorance, and I think you may be in the latter camp. There's just so much knowledge out there; unless you are some infinitely flexible polymath, I doubt this.

>me not writing much code atm

I agree; I don't write much code with it. I do ask questions about software, and maybe request a one-off script ("write me a python script to split a file into n parts"), but I don't think writing code à la Cursor, or vibecoding, is professional.
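For what it's worth, that "one-off script" use case is also trivially verifiable. A minimal sketch of the kind of file splitter such a prompt might yield is below; the function name and the `.partN` suffix are my own choices for illustration, not anything a model actually produced:

```python
import os

def split_file(path, n):
    """Split the file at `path` into n byte-chunks named path.part0 .. path.part(n-1)."""
    size = os.path.getsize(path)
    chunk = size // n
    parts = []
    with open(path, "rb") as src:
        for i in range(n):
            # The last part absorbs the remainder so no bytes are lost.
            to_read = size - chunk * (n - 1) if i == n - 1 else chunk
            out_path = f"{path}.part{i}"
            with open(out_path, "wb") as dst:
                dst.write(src.read(to_read))
            parts.append(out_path)
    return parts
```

Concatenating the parts back together in order restores the original file, which is exactly the kind of quick sanity check you can run on whatever the model hands back.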

>me needing precision and high confidence to learn

Yup, if you need high precision, an LLM isn't the appropriate tool. That said, you can still verify the content; it's not like you trust 100% of what your algebra book or your teacher says anyway, you always verify. In any subject you can ask for a source and then look it up. It's not even a verification step; it's just something that you do. You don't ONLY use the LLM; you complement it with Wikipedia and other sources.

In general, this feels very outdated. You don't get any points for posting online about how the LLM got something wrong; while you were complaining that it got one answer wrong, someone else fiddled with the prompt and got the right answer.