The world’s artificial intelligence (AI) landscape has changed dramatically since OpenAI set off the chatbot revolution three years ago. Now nearly every service wants built-in AI features, and trillions of dollars are being poured into data centers in pursuit of the first artificial general intelligence (AGI). Last weekend, China’s Central Cyberspace Affairs Commission released draft regulations that AI chatbot services would have to comply with.
Among the proposed rules is a requirement that chatbots disclose to users that they are not human. This reminder must be repeated every two hours so that users do not come to feel they are talking to a real person — a problem already seen in Western countries, where some users have come to believe they are in romantic relationships with chatbots.
Operators would also have to give users the option to delete their accounts and to opt out of having their conversation texts used to train AI models. The dissemination of obscene, violent, and criminal content would be prohibited, and a further clause bars chatbots from encouraging users to engage in self-harm or suicide.
For now, all of these rules remain draft proposals: China's Central Cyberspace Affairs Commission is collecting public feedback until January 25, 2026.
