
How ChatGPT could become a cybersecurity menace

Generative AI is also set to exacerbate the deepfake phenomenon, potentially wreaking havoc on social media.

ChatGPT can deliver an essay, computer code... or legal text, within seconds. — © AFP INDRANIL MUKHERJEE

What threat does generative AI pose to the business community and how might this be manifested? To understand such a threat, Digital Journal sought the opinion of Skybox Security’s Howard Goodman.

Goodman, Skybox Security’s Technical Director, predicts that in 2024 threat actors will weaponize generative AI to craft malware that evades detection and to fabricate highly convincing deepfakes.

How might this come to pass? According to Goodman: “In 2024, threat actors are poised to unleash a new breed of malware, empowered by generative AI to bypass conventional detection methods and adapt to evade security measures.”

One such variant has already been dubbed FraudGPT.

This ability to bypass normal defences is clearly a concern. As Goodman notes: “This evolution will usher in an era of intelligent malware capable of mimicking human interactions with unprecedented sophistication.”

Large language models are of particular concern. Here Goodman says: “Generative AI is also set to exacerbate the deepfake phenomenon, potentially wreaking havoc on social media and introducing the chilling prospect of ‘brain hacking.’”

Drawing this out for special attention, Goodman warns: “This sinister form of manipulation could exploit digital content to influence individuals’ thoughts and perceptions, posing a serious threat to societal stability and individual autonomy.”

However, generative AI can also be used to counterbalance some of these threats. For example, its ability to understand language-based data allows it to learn about the latest threats from online intelligence communities, empowering it to discover and respond to threats using simple voice prompts.

Goodman also looks at what is occurring at state level. In particular he finds that nations are set to intensify their AI investments to further their geopolitical goals, necessitating increased control over intellectual property and supply chain security through real-time monitoring.

Developments in certain parts of the world matter most. Goodman predicts: “As nations like China, Russia, and North Korea ramp up their investments in artificial intelligence, the potential for AI-powered threats to geopolitical stability looms large.”

Consequently, says Goodman: “This surge in AI development could lead to direct attacks on adversaries or indirect support for criminal syndicates utilizing AI-powered services. To mitigate these risks, it is imperative to strengthen intellectual property protections and bolster cybersecurity measures.”

As a further recommendation, Goodman puts forward: “Additionally, establishing real-time monitoring of supply chain processes, replacing periodic assessments, is crucial to gain full visibility and control over these critical channels.”

Written By

Dr. Tim Sandle is Digital Journal's Editor-at-Large for science news. Tim specializes in science, technology, environmental, business, and health journalism. He is additionally a practising microbiologist and an author, with interests in history, politics and current affairs.
