From movies to reality: the pros and cons of ChatGPT
Now that ChatGPT, an artificial intelligence chatbot, is freely available, the sci-fi scenes of self-aware machines gleefully murdering the protagonist's family and friends, or trying to enslave or destroy all of humanity in a variety of ways, are bound to come to mind. This may be one of the reasons why people are sceptical of, and scared by, AI. Fortunately, we may (or may not) have a few years before T-1000 robots take over the world and some predecessor of a self-aware computer system like Skynet is born. But where does ChatGPT stand? Can anyone use it safely? Or is it already worth having doubts about chatting with an AI-based bot? Are ChatGPT and similar solutions risky?
Security risks of ChatGPT
ChatGPT, developed by the artificial intelligence research lab OpenAI, has created a huge buzz around the world. Because the chatbot has no independent knowledge (the answers it generates are based on its training data), it cannot yet be described as a good or a bad robot. For now, it is simply a tool that is effective in many ways.
Artificial intelligence is becoming ever more integrated into our lives as it learns to perform tasks that once required human intelligence. AI and ML (machine learning) together are transforming industries such as healthcare, finance, transport and even cybersecurity. That last field, however, also presents an excellent opportunity for cybercriminals. Let's look at some examples:
Before ChatGPT, the biggest giveaways of phishing emails were spelling and grammatical mistakes, but attackers can now use AI to write phishing emails that look genuine. They can become all too authentic.
ChatGPT can make cybercriminals' lives easier because, according to several articles, it can lie, impersonate anyone, write flawless text and create code. Attackers have always found ways to steal data using various tools and techniques, but now they have a whole new arena, a complete playground.
Nowadays, with the rise of clickbait journalism and social media, separating fake news from real news is a huge challenge. Filtering the news matters because some stories spread propaganda, while others, with the help of AI, can lead readers to malicious sites. For example, fake news about a natural disaster can lure unsuspecting users into sending donations to scammers. Naturally, those funds never reach the victims of the disaster.
Researchers say ChatGPT can help malware evolve. Their research suggests that malware authors can develop advanced applications, such as polymorphic viruses that easily change their own code to blend in and remain invisible.
ChatGPT can write text in a real person's voice and style in seconds. This ability to impersonate even famous people can lead to even bigger scams. We have all heard about the cryptocurrency scams that misused Elon Musk's name to swindle millions from good-faith amateur investors. Such scams become even more convincing when an AI chatbot writes them in Elon's style. It is a truly creepy skill that could enable fraud at scale.
Ransomware lets attackers lock up computer systems and extort money from victims. Many attackers don't write their own code but buy it from various developers via the dark web. According to researchers, ChatGPT can now successfully write malicious code capable of encrypting an entire system in a ransomware attack, so buying it from others is slowly becoming unnecessary.
A typical spam email takes only minutes to write and carries essentially harmless promotional content, but ChatGPT can speed up the process, and such spam can also carry malicious code or direct unprepared, careless users to malicious websites.
Business Email Compromise (BEC) is one of the most common attack methods. A fraudster sends an email to persuade someone within the organisation to share confidential company information or even transfer money. "Traditional" BEC attacks are easily detected by pattern recognition, but ChatGPT-written attacks can slip past security filters.
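To make the idea of pattern recognition concrete, here is a minimal sketch of the kind of keyword-based scoring a traditional BEC filter might use. The phrases, weights and threshold below are illustrative assumptions for this article, not the rules of any real security product:

```python
import re

# Illustrative indicators often associated with BEC-style emails.
# These phrases and weights are assumptions for demonstration only.
SUSPICIOUS_PHRASES = {
    r"\burgent wire transfer\b": 3,
    r"\bgift cards?\b": 2,
    r"\bdo not (tell|inform) anyone\b": 3,
    r"\bconfidential\b": 1,
}

def bec_score(email_text: str) -> int:
    """Return a simple risk score based on keyword patterns."""
    text = email_text.lower()
    return sum(
        weight
        for pattern, weight in SUSPICIOUS_PHRASES.items()
        if re.search(pattern, text)
    )

def is_suspicious(email_text: str, threshold: int = 3) -> bool:
    """Flag the email if its combined keyword score reaches the threshold."""
    return bec_score(email_text) >= threshold
```

The weakness is obvious: a fluent, AI-written message can simply avoid these trigger phrases altogether, which is exactly why static filters like this struggle against LLM-generated text.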
So there is reason to be concerned about the rapid spread of ChatGPT and similar solutions. But of course, these tools also have an upside.
What are these benefits?
Most of the time, the world is not simply black and white, so alongside the disadvantages ChatGPT also offers significant advantages. It is true that how it is used is up to the user, which opens new doors to cyber attacks, but the same holds for cybersecurity. ChatGPT can provide valuable information and recommendations to security teams, enabling them to make better-informed decisions. It can analyse large amounts of data at once and identify patterns that may not be immediately obvious to humans. It can spot trends and predict future cyber threats, helping security teams prepare and plan for potential attacks. Here are some examples:
Improved threat detection
Analysing large amounts of data and identifying potential cyber threats can help improve threat detection capabilities. By analysing data patterns, it can detect suspicious behaviour and anomalies that could indicate a cyber attack. ChatGPT can also help identify and categorise malware, phishing and other cyber threats, making it easier for security analysts to respond quickly and effectively.
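To show what "detecting anomalies in data patterns" can mean in practice, here is a minimal sketch of one classic technique, z-score outlier detection, applied to daily failed-login counts. The data and threshold are invented for illustration; real detection systems use far richer features than this:

```python
from statistics import mean, stdev

def find_anomalies(counts, threshold=2.5):
    """Flag values whose z-score (distance from the mean, in
    standard deviations) exceeds the threshold."""
    mu = mean(counts)
    sigma = stdev(counts)
    return [
        (i, value)
        for i, value in enumerate(counts)
        if sigma > 0 and abs(value - mu) / sigma > threshold
    ]

# Invented daily failed-login counts; the spike on day 6
# is the kind of anomaly that could indicate an attack.
daily_failed_logins = [12, 15, 11, 14, 13, 12, 240, 13, 14, 12]
print(find_anomalies(daily_failed_logins))  # → [(6, 240)]
```

An AI assistant adds value on top of a statistical flag like this by explaining *why* a spike is suspicious and suggesting what to check next.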
Faster incident response
Time is a critical factor during a cyber attack. Artificial intelligence can help security teams respond to attacks more quickly by analysing data in real time and recommending actions. It can generate automated responses to certain types of threat, freeing security analysts to focus on more complex ones, and it can deliver valuable information and recommendations in a short time.
Finding vulnerabilities
Cybersecurity professionals can also use ChatGPT-like solutions to discover vulnerabilities in a program. In cybersecurity, it is critical to fix security bugs before hackers exploit them.
Detecting errors in smart contracts
A smart contract is a self-executing program whose code embeds the terms of an agreement. Although ChatGPT was not designed to identify faults in smart contracts, it has already proven able to find them. Experts believe its ability to spot contract defects will only improve with further development.
Perform Network Mapper scans
Nmap (Network Mapper) is a useful tool for security auditing, penetration testing and vulnerability assessment. AI can complement it by interpreting the scan results and providing insights.
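Nmap itself does the scanning; where an AI assistant or a helper script adds value is in digesting the results. As a sketch, the snippet below parses Nmap's XML output format (what `nmap -oX` produces) and lists the open ports per host. The embedded XML is a fabricated, trimmed-down sample for illustration, not real scan data:

```python
import xml.etree.ElementTree as ET

# Fabricated sample of Nmap -oX output, trimmed to the relevant elements.
SAMPLE_XML = """<nmaprun>
  <host>
    <address addr="192.0.2.10"/>
    <ports>
      <port protocol="tcp" portid="22">
        <state state="open"/>
        <service name="ssh"/>
      </port>
      <port protocol="tcp" portid="80">
        <state state="open"/>
        <service name="http"/>
      </port>
      <port protocol="tcp" portid="23">
        <state state="closed"/>
        <service name="telnet"/>
      </port>
    </ports>
  </host>
</nmaprun>"""

def open_ports(nmap_xml: str):
    """Return (host, port, service) tuples for every open port."""
    root = ET.fromstring(nmap_xml)
    results = []
    for host in root.iter("host"):
        addr = host.find("address").get("addr")
        for port in host.iter("port"):
            if port.find("state").get("state") == "open":
                service = port.find("service").get("name")
                results.append((addr, port.get("portid"), service))
    return results

print(open_ports(SAMPLE_XML))
```

A summary like this is exactly the kind of structured input one could hand to a chatbot with a question such as "which of these services deserves a closer look?".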
Filling the skills gap
Cybersecurity training is critical for any organisation that wants to reduce the risk of cyber-attacks. Qualified employees are less likely to open malicious links or websites that can infect corporate systems with ransomware, Trojans and spyware. Artificial intelligence can help close the cybersecurity knowledge gap by providing concise, collated answers to help with preventative measures.
All of this sounds more reassuring. It suggests that ChatGPT and its peers are not just a new tool for cybercriminals but also for professionals focused on defence. So the cat-and-mouse game can continue as before, just with more modern tools, a bit like science fiction.
What does the future hold?
Many science-fiction fans enthusiastically welcome the rise of artificial intelligence, while the anti-technology camp worries about the impact of AI's spread on the future of humanity. However, you cannot stop technology from advancing; you can only set limits. Thanks to the popularity of ChatGPT, this kind of artificial intelligence is certainly opening new and unknown doors for everyone, with the potential to radically transform our future. For now, we have covered only the possibilities in cyberspace, but AI holds a lot of potential, and a lot of risk. The director and screenwriter James Cameron, or rather Schwarzenegger himself, might say: "I'll be back" with more updates.