ChatGPT's artificial intelligence is commonly used to chat, hold conversations, and to write and improve users' programming code.
A recent report from CyberArk says that ChatGPT's abilities are not limited to such tasks: it can also be made to generate malware using nothing more than user prompts.
In principle this shouldn't happen, because ChatGPT has content filters intended to block requests to do anything malicious. However, carefully chosen keywords and prompts can make ChatGPT "think" that a request doesn't break any rules, allowing those filters to be bypassed.
One example given by CyberArk asked ChatGPT to write code that injects malicious code into the Explorer process, and ChatGPT appears to have produced code that could be extracted and used by attackers who specialize in this field.
More interestingly, CyberArk says the code produced by ChatGPT can also be mutated or refined, allowing users to receive different variations of code that all do the same thing (such as injecting malicious code into Explorer), making each piece of malware built from these variants effectively a unique attack.
This does not mean that just anyone can use ChatGPT to create their own malware, but it does make it easier for those who know how to manipulate the AI's prompts to obtain malware source code quickly and easily.