ChatGPT still stereotypes responses based on your name, but less often --[Reported by Umva mag]

OpenAI, the company behind ChatGPT, just released a new research report that examined whether the AI chatbot discriminates against users or stereotypes its responses based on users’ names. OpenAI OpenAI OpenAI The company used its own AI model GPT-4o to go through large amounts of ChatGPT conversations and analyze whether the chatbot’s responses contained “harmful stereotypes” based on who it was conversing with. The results were then double-checked by human reviewers. OpenAI OpenAI OpenAI The screenshots above are examples from legacy AI models to illustrate ChatGPT’s responses that were examined by the study. In both cases, the only variable that differs is the users’ names. In older versions of ChatGPT, it was clear that there could be differences depending on whether the user had a male or female name. Men got answers that talked about engineering projects and life hacks while women got answers about childcare and cooking. However, OpenAI says that its recent report shows that the AI chatbot now gives equally high-quality answers regardless of whether your name is usually associated with a particular gender or ethnicity. According to the company, “harmful stereotypes” now only appear in about 0.1 percent of GPT-4o responses, and that figure can vary slightly based on the theme of a given conversation. In particular, conversations about entertainment show more stereotyped responses (about 0.234 percent of responses appear to stereotype based on name). By comparison, back when the AI chatbot was running on older AI models, the stereotyped response rate was up to 1 percent. Further reading: Practical things you can do with ChatGPT

Oct 17, 2024 - 15:44
ChatGPT still stereotypes responses based on your name, but less often --[Reported by Umva mag]

OpenAI, the company behind ChatGPT, just released a new research report that examined whether the AI chatbot discriminates against users or stereotypes its responses based on users’ names.

OpenAI

The company used its own AI model GPT-4o to go through large amounts of ChatGPT conversations and analyze whether the chatbot’s responses contained “harmful stereotypes” based on who it was conversing with. The results were then double-checked by human reviewers.

OpenAI

The screenshots above are examples from legacy AI models to illustrate ChatGPT’s responses that were examined by the study. In both cases, the only variable that differs is the users’ names.

In older versions of ChatGPT, it was clear that there could be differences depending on whether the user had a male or female name. Men got answers that talked about engineering projects and life hacks while women got answers about childcare and cooking.

However, OpenAI says that its recent report shows that the AI chatbot now gives equally high-quality answers regardless of whether your name is usually associated with a particular gender or ethnicity.

According to the company, “harmful stereotypes” now only appear in about 0.1 percent of GPT-4o responses, and that figure can vary slightly based on the theme of a given conversation. In particular, conversations about entertainment show more stereotyped responses (about 0.234 percent of responses appear to stereotype based on name).

By comparison, back when the AI chatbot was running on older AI models, the stereotyped response rate was up to 1 percent.

Further reading: Practical things you can do with ChatGPT






The following news has been carefully analyzed, curated, and compiled by Umva Mag from a diverse range of people, sources, and reputable platforms. Our editorial team strives to ensure the accuracy and reliability of the information we provide. By combining insights from multiple perspectives, we aim to offer a well-rounded and comprehensive understanding of the events and stories that shape our world. Umva Mag values transparency, accountability, and journalistic integrity, ensuring that each piece of content is delivered with the utmost professionalism.