
Sep 19, 2024 - 18:32
Panic over new ChatGPT that ‘reasons’ as expert warns of terror AI raids that steal cash from ‘huge numbers’ of victims --[Reported by Umva mag]

ARTIFICIAL intelligence that can “reason” could spark a terrifying new wave of cash-stealing scams.

A top security expert has told The Sun how new advancements to apps like ChatGPT could be exploited by online crooks.

Cyber-criminals are increasingly taking advantage of powerful AI to target hundreds of victims with complex scams (Credit: Getty)

Artificial intelligence chatbots are everywhere – and they’re able to make difficult jobs much easier for millions of people.

This month, OpenAI showed off its new ChatGPT o1 model, which is capable of “thinking” and “reasoning”.

“We’ve developed a new series of AI models designed to spend more time thinking before they respond,” OpenAI explained.

“They can reason through complex tasks and solve harder problems than previous models in science, coding, and math.”

It’s the latest major advancement to ChatGPT since the AI chatbot first launched in late 2022.

And it’s available now to paying ChatGPT subscribers as a limited early preview.

SCAM SPAM!

The Sun spoke to security expert Dr Andrew Bolster, who revealed how this kind of advancement could be a huge win for cyber-criminals.

“Large Language Models (LLMs) continue to improve over time, and OpenAI’s release of their ‘o1’ model is no exception to this trend,” said Dr Bolster, of the Synopsys Software Integrity Group, speaking to The Sun.

“Where this generation of LLMs excels is in how they go about appearing to ‘reason’.

“Where intermediate steps are done by the overall conversational system to draw out more creative or ‘clever’ appearing decisions and responses.

“Or, indeed to self-correct before expressing incorrect responses.”

He warned this brainy new system could be used for carrying out clever scams.
 
“In the context of cybersecurity, this would naturally make any conversations with these ‘reasoning machines’ more challenging for end-users to differentiate from humans,” Dr Bolster said.

“Lending their use to romance scammers or other cybercriminals leveraging these tools to reach huge numbers of vulnerable ‘marks’.”

Web users should always be wary of deals that are ‘too good to be true’ – Dr Andrew Bolster, Synopsys

OpenAI has released a new version of ChatGPT that offers ‘advanced reasoning’ (Credit: OpenAI)

He warned that crooks would be able to carry out lucrative scams cheaply “at scale for a dollar per hundred responses”.

PROTECT YOURSELF

So how do regular users stay safe?

The good news is that all the old rules for dodging online scams still apply.

“Web users should always be wary of deals that are ‘too good to be true’,” Dr Bolster told us.

“And [they] should always consult with friends and family members to get a second opinion.

“Especially when someone (or something) on the end of a chat window or even a phone call is trying to pressure you into something.”

AI IS SCARY – BUT DON'T GIVE UP JUST YET

Here’s what Sean Keach, The Sun’s Head of Technology and Science, has to say about AI scams…

Artificial intelligence is here to stay. There’s no doubt about it.

And no matter how many safeguards are put in place when designing AI, there’s no fool-proof way to stop it from being abused.

Tech companies are pouring money into making their AI systems safer – but nothing is perfect.

So as always, the responsibility will be on you and me to stay safe in this scary new world.

The main thing to remember is that AI-powered scams are often just more convincing and easier-to-carry-out versions of regular age-old cons.

So keep these quick tips in mind:

  • If a claim sounds unrealistic, there’s a good chance it is
  • Offers that look too good to be true probably are
  • Don’t give in to someone pressuring you to make a quick decision – especially if money is involved
  • Have a safe word or phrase for close friends and family members to verify you’re talking to the right person – and not a fake
  • Never click unsolicited links you’ve been sent. If you need to find a website, navigate to it manually and make sure it’s official
  • Don’t hand over any sensitive info online unless you’re 100% sure you’re talking to the right person – and there’s a good reason for it
  • If you’re being asked to pay for something with gift cards, it’s very likely a scam

Following these rules can stop the overwhelming majority of scams – whether they’re powered by AI or not.

SAFETY FIRST?

To combat the new ChatGPT being abused, OpenAI has fitted it out with a whole host of new safety measures.

“As part of developing these new models, we have come up with a new safety training approach that harnesses their reasoning capabilities to make them adhere to safety and alignment guidelines,” OpenAI said.

“By being able to reason about our safety rules in context, it can apply them more effectively.

Fraud losses facilitated by generative AI technologies are predicted to escalate to US$40 billion in the United States by 2027. This projection reflects a compound annual growth rate of 32% from US$12.3 billion in 2023 – EFTSure, Deepfake Statistics 2024 report
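For context, the report’s headline figures can be sanity-checked with a line of arithmetic: growing from US$12.3 billion in 2023 to US$40 billion in 2027 implies losses compounding by roughly a third each year, which is consistent with the report’s quoted 32% growth rate once rounding is allowed for. A minimal sketch:

```python
# Sanity-check the implied compound annual growth rate (CAGR) behind
# the projection: $12.3bn (2023) rising to $40bn (2027), i.e. 4 years.
start, end, years = 12.3, 40.0, 4

# CAGR formula: (end / start) ** (1 / years) - 1
cagr = (end / start) ** (1 / years) - 1

print(f"Implied CAGR: {cagr:.1%}")  # roughly 34% a year, close to the quoted 32%
```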

“One way we measure safety is by testing how well our model continues to follow its safety rules if a user tries to bypass them (known as ‘jailbreaking’).

“On one of our hardest jailbreaking tests, GPT-4o scored 22 (on a scale of 0-100) while our o1-preview model scored 84.”

INSIDE THE NEW CHATGPT O1

Here's how OpenAI says its new ChatGPT o1 works...

“We trained these models to spend more time thinking through problems before they respond, much like a person would,” OpenAI said in a blog post.

“Through training, they learn to refine their thinking process, try different strategies, and recognize their mistakes.

“In our tests, the next model update performs similarly to PhD students on challenging benchmark tasks in physics, chemistry, and biology.

“We also found that it excels in math and coding. In a qualifying exam for the International Mathematics Olympiad (IMO), GPT-4o correctly solved only 13% of problems, while the reasoning model scored 83%.

“Their coding abilities were evaluated in contests and reached the 89th percentile in Codeforces competitions. You can read more about this in our technical research post.

“As an early model, it doesn’t yet have many of the features that make ChatGPT useful, like browsing the web for information and uploading files and images.

“For many common cases GPT-4o will be more capable in the near term.

“But for complex reasoning tasks this is a significant advancement and represents a new level of AI capability.

“Given this, we are resetting the counter back to 1 and naming this series OpenAI o1.”

But although it’s safer, no AI system is foolproof – so stay vigilant when browsing the web, and be ready to spot these costly scams.




The following news has been carefully analyzed, curated, and compiled by Umva Mag from a diverse range of people, sources, and reputable platforms. Our editorial team strives to ensure the accuracy and reliability of the information we provide. By combining insights from multiple perspectives, we aim to offer a well-rounded and comprehensive understanding of the events and stories that shape our world. Umva Mag values transparency, accountability, and journalistic integrity, ensuring that each piece of content is delivered with the utmost professionalism.