Save yourself from a terrifying AI attack by asking about dinner – a simple question instantly stops bank-emptying swindle --[Reported by Umva mag]


Sep 20, 2024 - 16:56

ASKING a simple question about dinner could save you from a costly scam powered by artificial intelligence.

Cybersecurity experts are trying to arm The Sun readers with easy tricks to defend themselves against AI crime.

Getty
Experts say criminals can take advantage of AI to create more convincing scams – and target a greater number of victims

AI can be used to generate startlingly convincing human-like “deepfake” voices that say anything cyber-criminals want.

And artificial intelligence can even clone the voices of your friends, family, and loved ones in a matter of seconds.

Experts have warned The Sun over how fast and easy it is for cyber-criminals to abuse these AI tools to craft dangerous scams.

“AI is becoming increasingly sophisticated, the responses are increasingly accurate,” said security expert Adam Pilton, speaking to The Sun.

“The number of tools and resources we have access to has dramatically increased, from simple text responses, to pictures, audio, video and more.

“Deepfake creation tools are becoming more user-friendly and accessible, lowering the technical barrier for attackers,” said Adam, a senior cybersecurity consultant at CyberSmart.

He added that AI chatbots are now smarter than ever.

And AI “deepfakes” that create fraudulent audio or video content are more convincing too.

“We are seeing deepfakes that have more realistic facial expressions, lip movements, and voice synthesis,” Adam told The Sun.

“This will make them even harder to distinguish from real videos and audio.

“This means that undoubtedly it will become increasingly difficult to distinguish between chatbots, AI-generated voices, and AI-faked videos, as technology continues to improve rapidly.”

HOW TO BEAT THE AI DEEPFAKES

Thankfully there’s some hope of beating the AI.

Adam pointed out that tech companies are making tools to spot AI fakes.

That means it’ll be harder for fraudulent AI content to sneak past safeguards on popular apps.

“The simplest defence is to ask questions that only you would know” – Adam Pilton, cybersecurity expert

But no system is foolproof – so you’ll have to take your online safety into your own hands.

Adam explained that if you receive a phone or video call from someone you know asking for money or sensitive info, you’ll want to ask some quick questions.

It might even be as simple as asking about dinner.

“The simplest defence is to ask questions that only you would know,” Adam told us.

CREATE A SAFE WORD!

Here’s some advice for staying safe from The Sun’s Head of Technology and Science Sean Keach

Artificial intelligence is now very powerful – so smart that it can convincingly replicate the voices and faces of people you know.

That means criminals can use AI to trick you into handing over cash or info by posing as friends, family, or loved ones.

It’s easy to give up hope. Surely we’re all doomed, right?

Well, one of the best defences against this is very simple: a safe word.

Speak to your partner, for instance, and set up a simple safe word or phrase.

Try to pick something very strange and random – not the street you live on or your favourite band.

Then if you ever receive a call from them asking for money or info, ask them to tell you the safe word.

You’ll instantly be able to verify whether you’re talking to the real person.

If they say they’ve forgotten (genuinely, or because they’re a scammer), then try asking questions that only they’d know the answer to: specific memories that wouldn’t have been posted online.

And unless it’s urgent, just check with them in person if possible – or contact them directly by calling them yourself and verify that way instead.

“These questions shouldn’t be based on information that is generic such as the company you work for, the football team you support or even a secret word you’ve agreed to.

“It must be completely random and based on something only you and that person would know.

“So, with a family member, it could be what you ate for dinner the night before or a memory from a Christmas gone by.

“With a colleague, it could be an event which you first met at or who you spilt a drink on at the office party.

“No attacker or AI would be able to respond accurately to such a question.”

But he warned: “That is assuming you didn’t post a picture of your dinner on social media last night!”
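The safe-word advice above is essentially a simple challenge-response check: agree on a random secret in advance, then challenge the caller to repeat it. As a toy illustration only (the phrase and function name here are made up, and in real life the check happens out loud on the call, not in software), the idea can be sketched in a few lines of Python:

```python
import hmac

# Random phrase agreed with your partner in advance (hypothetical example).
# Per the advice above: not your street, not your team – something strange.
SAFE_WORD = "purple-walrus-staircase"

def caller_is_verified(response: str) -> bool:
    """Return True only if the caller's answer matches the agreed safe word."""
    # Normalise casing/whitespace, then compare in constant time.
    return hmac.compare_digest(response.strip().lower(), SAFE_WORD)

print(caller_is_verified("Purple-Walrus-Staircase"))  # True
print(caller_is_verified("the street you live on"))   # False
```

The point of the sketch is the shape of the check: the secret is set up over a trusted channel beforehand, and a deepfaked voice on an unexpected call cannot supply it – unless, as Adam warns, you have already posted it online.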



