ChatGPT brings new cybersecurity risks

OpenAI launched ChatGPT in November 2022, impressing millions with its capabilities. However, concerns quickly arose about its potential to advance bad actors’ agendas, such as breaching advanced cybersecurity software. With a 38% global increase in data breaches in 2022, it’s critical to recognize AI’s growing impact and act accordingly. This article examines the new risks posed by ChatGPT’s widespread use, explores the training and tools cybersecurity professionals need to respond, and calls for government oversight to ensure AI usage doesn’t harm cybersecurity efforts.

ChatGPT is the most advanced language-based AI to date, able to converse seamlessly with users. For hackers, that makes it a game changer. Phishing is the most common IT threat in America, but most scams are easily recognizable; ChatGPT will allow hackers from all over the globe to bolster their phishing campaigns with near-fluent English. Cybersecurity leaders need to equip their IT teams with tools, geared specifically toward incoming “cold” emails, that can distinguish ChatGPT-generated text from human writing. “ChatGPT detector” technology already exists and is likely to advance alongside ChatGPT itself. IT infrastructure should integrate AI-detection software that automatically screens and flags AI-generated emails (a minimal sketch follows below). Employees should be routinely trained on the latest cybersecurity awareness and prevention skills, with specific attention paid to AI-supported phishing scams. The sector and the wider public must continue advocating for advanced detection tools, rather than only fawning over AI’s expanding capabilities.
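To make that screening step concrete, here is a minimal sketch of flagging inbound email text with an off-the-shelf AI-text detector. The Hugging Face checkpoint named below is a publicly released GPT-2 output detector; its “Real”/“Fake” labels, the 0.9 threshold, and the flag_if_ai_generated helper are illustrative assumptions, and a production mail gateway would need a detector kept current with newer models and tuned to its own false-positive tolerance.

```python
# Sketch: flag inbound "cold" emails that an AI-text detector scores as
# machine-generated. Model choice and threshold are assumptions, not a
# recommendation; a flag should route mail to human review, not block it.
from transformers import pipeline

detector = pipeline(
    "text-classification",
    model="openai-community/roberta-base-openai-detector",  # assumed checkpoint
)

def flag_if_ai_generated(email_body: str, threshold: float = 0.9) -> bool:
    """Return True when the detector confidently labels the text machine-written."""
    result = detector(email_body, truncation=True)[0]  # clip to model context
    return result["label"] == "Fake" and result["score"] >= threshold

if __name__ == "__main__":
    sample = "Dear valued customer, your account requires immediate verification..."
    print("Flag for review" if flag_if_ai_generated(sample) else "Pass through")
```

Because detectors of this kind lag behind the models they police, the flag is best treated as one phishing signal among many rather than a verdict.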

ChatGPT is proficient at generating code, but it’s programmed not to produce malicious code. Bad actors may nonetheless be able to trick the AI into doing so. For example, Israeli security firm Check Point recently discovered a thread on an underground hacking forum in which a hacker tested the chatbot’s ability to recreate malware strains. Cybersecurity pros need proper training and resources to respond to these ever-growing threats, AI-generated or otherwise. There’s also an opportunity to equip cybersecurity professionals with AI technology of their own to better spot and defend against AI-generated hacker code (see the sketch below). Beyond preventing ChatGPT-related threats, cybersecurity training should cover how ChatGPT can be an important tool in the cybersecurity professional’s arsenal. Software developers should look to build generative AI that’s potentially even more powerful than ChatGPT and designed specifically for human-staffed Security Operations Centers (SOCs).
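As one illustration of AI on the defender’s side, the following sketch asks a general-purpose chat model to triage a captured snippet for a SOC analyst. The model name, the prompt, the triage_snippet helper, and the encoded-PowerShell sample are all hypothetical; the point is the pattern of pairing a human analyst with a model, not a specific product.

```python
# Sketch: ask a chat model to explain whether a captured snippet looks
# malicious, as a first-pass hint for a human SOC analyst. Requires the
# `openai` package (v1+) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

def triage_snippet(code: str) -> str:
    """Return the model's brief assessment of a suspicious code snippet."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any capable chat model works
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a SOC assistant. Briefly assess whether the "
                    "following code is likely malicious, and explain why."
                ),
            },
            {"role": "user", "content": code},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    # Hypothetical encoded-PowerShell sample, truncated for illustration.
    print(triage_snippet("powershell -enc SQBFAFgA..."))
```

The model’s assessment here is a triage hint, not a determination; final judgment stays with the analyst.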

While there’s significant discussion around bad actors leveraging AI to hack external software, the potential for ChatGPT itself to be hacked is seldom discussed. If it were compromised, bad actors could disseminate misinformation from a source that’s typically seen as impartial. OpenAI has reportedly taken steps to have ChatGPT identify and avoid answering politically charged questions. However, if the AI were hacked and manipulated to provide seemingly objective but actually biased information, it could become a dangerous propaganda machine. This may necessitate enhanced government oversight of advanced AI tools and companies like OpenAI.

The Biden administration has released a “Blueprint for an AI Bill of Rights,” but the stakes are higher than ever with the launch of ChatGPT. We need oversight to ensure that OpenAI and other companies launching generative AI products regularly review their security features and reduce the risk of being hacked. New AI models should be required to meet minimum security measures before being open sourced. And more models are coming: Bing launched its own generative AI in early 2023, Meta is finalizing a powerful tool of its own, and other tech giants will follow. As people marvel at the potential of ChatGPT and the emerging generative AI market, oversight must keep pace.

We must reimagine the foundational base for AI, especially widely available tools like ChatGPT. Before a tool becomes available to the public, developers need to ask whether its capabilities are ethical and whether it has a foundational “programmatic core” that truly prohibits manipulation. How do we establish standards that require this, and how do we hold developers accountable for failing to uphold them? Organizations have instituted agnostic standards to ensure safe and ethical exchanges across different technologies, and it’s critical to apply the same principles to generative AI.

ChatGPT chatter is at an all-time high, and as the technology advances, technology leaders must think about what it means for their team, their company, and society as a whole. If they don’t, they’ll fall behind competitors in adopting and deploying generative AI to improve business outcomes, and they’ll fail to anticipate and defend against next-generation hackers who can manipulate the technology for personal gain. With reputations and revenue on the line, the industry must come together to put the right protections in place and make the ChatGPT revolution something to welcome, not fear.

Also explore our other blogs on OpenAI’s ChatGPT.
