dfxm12 3 hours ago

DDG's search assist is suggesting to me that: Recognizing bias can indicate a level of critical thinking and self-awareness, which are components of intelligence.

"Most users" should have a long, hard thought about this, in the context of AI or not.

zmmmmm 3 hours ago

I'm curious how much trained in bias damages in-context performance.

It's one thing to rely explicitly on the training data - then you are truly screwed and there isn't much to be done about it - in some sense, the model isn't working right if it does anything other than reflect accurately what is in the training data. But if I provide unbiased information in the context, how much does trained in bias affect evaluation of that specific information?

For example, if I provide it a table of people, their racial background and then their income levels, and I ask it to evaluate whether the white people earn more than the black people - are its error going to lean in the direction of the trained-in bias (eg: telling me white people earn more even though it may not be true in my context data)?

In some sense, relying on model knowledge is fraught with so many issues aside from bias, that I'm not so concerned about it unless it contaminates the performance on the data in the context window.

  • ryukoposting 7 minutes ago

    > I'm curious how much trained in bias damages in-context performance.

    I think there's an example right in front of our faces: look at how terribly SOTA LLMs perform on underrepresented languages and frameworks. I have an old side project written in pre-SvelteKit Svelte. I needed to do a dumb little update, so I told Claude to do it. It wrote its code in React, despite all the surrounding code being Svelte. There's a tangible bias towards things with larger sample sizes in the training corpus. It stands to reason those biases could appear in more subtle ways, too.

segmondy 2 hours ago

most people can't identify bias in real life, let alone in AI.

SoftTalker 4 hours ago

AIs/LLMs are going to reflect the biases in their training data. That seems intuitive.

And personally, I think when people see content they agree with, they think it's unbiased. And the converse is also true.

So conservatives might think Fox News is "balanced" and liberals might think it's "far-right"

  • prox 4 hours ago

    Yup it appears as neutral bias because (or when rather) it corresponds 1:1 with your belief system, which by default is skewed af. Unless you did a rigorous self inquiry and mapped your beliefs and thoroughly aware of them that’s gonna be nearly always true.

    • econ 3 hours ago

      Nah, the later is an example of the former.

  • mc32 2 hours ago

    Bias is different things though. If most people are cautious but the LLM is carefree, then that is a bias. Or if it recommends planting sorghum over wheat that is a different bias.

    In addition bias is not intrinsically bad. It might have a bias toward safety. That's a good thing. If it has a bias against committing crime, that is also good. Or a bias against gambling.

  • throwaway290 3 hours ago

    > And personally, I think when people see content they agree with, they think it's unbiased. And the converse is also true.

    > So conservatives might think Fox News is "balanced" and liberals might think it's "far-right"

    Article talks like when accidentally the vector for race aligns with emotion so it can classify a happy black personal as unhappy. Just because training dataset has lots of happy white people. It's not about subjective preference

    explain how "agreeing" is related

    • SoftTalker 2 hours ago

      It was mostly a tangential thought.

      People could of course see a photo of a happy black person among 1000 photos of unhappy black people and say that person looks happy, and realize the LLM is wrong, because people's brains are pre-wired to perceive emotions from facial expressions. LLMs will pick up on any correlation in the training data and use that to make associations.

      But in general, excepting ridiculous examples like that, if an LLM says something that a person agrees with, I think people will be inclined to (A) believe it and (B) not see any bias.

  • Theodores an hour ago

    Your comment has made me wonder what fun could be had in deliberately educated an LLM badly, so that it is Fox News on steroids with added flat-earth conspiracy nonsense.

    For tech, only Stack Overflow answers modded negatively would 'help'. As for medicine, a Victorian encyclopedia, from the days before germs were discovered could 'help', with phrenology, ether and everything else now discredited.

    If the LLM replied as if it was Charles Dickens with no knowledge of the 20th century (or the 21st), that would be pretty much perfect.

    • electroglyph an hour ago

      top men are already working on it, it's going to be called Grok 5

  • Y_Y 3 hours ago

    Reality has a well-known liberal bias

    - Stephen Colbert

  • BolexNOLA 4 hours ago

    > And personally, I think when people see content they agree with, they think it's unbiased. And the converse is also true.

    One only has to see how angry conservatives/musk supporters get at Grok on a regular basis.

WalterBright 3 hours ago

We're all biased, often unwittingly. But some tells for blatant bias:

* only facts supporting one point of view are presented

* reading the minds of the subjects of the article

* use of hyperbolic words

* use of emotional appeal

* sources are not identified

  • shikon7 2 hours ago

    But maybe your tells are also biased. If you're truly unbiased, then

    * any facts supporting another view are by definiton biased, and should not be presented

    * you have the only unbiased objective interpretation of the minds of the subjects

    * you don't bias against using words just because they are hyperbolic

    * something unbiased would inevitably be boring, so you need emotional appeal to make anyone care about it

    * since no sources are unbiased, identifying any of them would inevitably lead to a bias

DocTomoe 5 hours ago

If bias can only be seen by a minority of people ... is it really 'AI bias', or just societal bias?

> “In one of the experiment scenarios — which featured racially biased AI performance — the system failed to accurately classify the facial expression of the images from minority groups,”

Could it be that real people have trouble reading the facial expression of the image of minority groups?

  • ecocentrik 5 hours ago

    By "real people" do you mean people who are not members of those minority groups? Or are people who can "accurately classify the facial expression of images from minority groups" not "real people"?

    I hope you can see the problem with your very lazy argument.

    • lovemenot 5 hours ago

      AI are not real people. Obviously. Just look at the first line to see the intended line of argument.

      It's not about which people per se, but how many, in aggregate.

  • SpicyLemonZest 2 hours ago

    I guess I'm not sure what the point of the dichotomy is. Suppose you're developing a system to identify how fast a vehicle is moving, and you discover that it systematically overestimates the velocity of anything painted red. Regardless of whether you call that problem "AI bias" or "societal bias" or some other phrase that doesn't include the word "bias", isn't it something you want to fix?

  • Bnjoroge an hour ago

    what? do you think the facial expression of a person of color is significantly different from that of a white person?

7e 5 hours ago

According to research, white Americans report as happier than other groups. So I’m not sure there’s bias here, only unhappiness about that result, which AI appears to replicate via other sources.

  • serious8aller 3 hours ago

    That has no relevance to this study though. Did you just read the headline and go straight to the comment section?

eth0up 5 hours ago

Not to be glib, but...

Grok

  • jdiff 4 hours ago

    There are, simultaneously, groups of users who believe that Grok is also distorted by a far-left bias in its training data, as well as people who feel like Grok is in perfect, unbiased balance. I think it holds even for Grok that most users fail to accurately identify bias.

    • giancarlostoro 3 hours ago

      Grok had a moment where it was perfect, for some things for me, then a few months ago Elon wanted to do a major overhaul to Grok 3 and its been downhill since.

      Too many LLMs be scolding you over miniscule things. Like say a perfectly sane request: give me a regex that filters out the nword in an exhaustive fashion. Most LLMs will cry to me about how I am a terrible human and it will not say racist things. Meanwhile I'm trying to get a regex to stop others from saying awful things.

    • CuriouslyC an hour ago

      Grok is schizo because its pretraining data set leans left, and it's RL'd right.

    • eth0up 4 hours ago

      Agreed, mostly.

      Bias always feels wierd on the heads it falls upon, but is a very effective anesthetic when it falls on the heads of others.