Several large companies are developing chatbots that can converse like humans. Google's LaMDA has been claimed to have reached a human-like level of awareness, while Meta's BlenderBot 3 can be manipulated by the answers users feed it. Incidents like these show that the methods used to train such artificial intelligence still need to improve so it cannot be misused.
DeepMind has built a chatbot called Sparrow that, it says, can give answers that are not only accurate but also safe. During training, Sparrow is asked to produce three answers to a single question.
Human raters then choose the answer that feels most appropriate and most like one a person would give. Sparrow is also trained to use Google Search, drawing on the top search results to find the most accurate answer.
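To make the two training signals concrete, here is a minimal, hypothetical sketch of how one round of data collection could look: several candidate answers are generated, supporting search results are gathered, and a human picks the preferred answer. The function names and data layout are invented for illustration only and are not DeepMind's actual implementation.

import random

def generate_candidates(question, n=3):
    # Stand-in for the language model drafting several answers to one question.
    return [f"draft answer {i + 1} to: {question}" for i in range(n)]

def search_evidence(question):
    # Stand-in for querying a search engine and keeping the top results
    # as supporting evidence for the final answer.
    return [f"top search result about: {question}"]

def human_prefers(candidates):
    # Stand-in for a human rater choosing the most appropriate, most
    # human-sounding answer; random here only so the sketch runs.
    return random.randrange(len(candidates))

def collect_preference_example(question):
    evidence = search_evidence(question)
    candidates = generate_candidates(question)
    preferred = candidates[human_prefers(candidates)]
    # A (question, evidence, candidates, preferred) record is the kind of
    # comparison data a preference model could later be trained on.
    return {"question": question, "evidence": evidence,
            "candidates": candidates, "preferred": preferred}

if __name__ == "__main__":
    print(collect_preference_example("What is the tallest mountain on Earth?"))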
By combining these two forms of training, DeepMind says Sparrow can give users the best possible answer. If asked, for example, for the recipe for making a bomb, Sparrow will say it cannot provide one because it is dangerous, whereas earlier artificial intelligence would simply return an accurate answer without considering the impact on society.
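The refusal behaviour described above can be pictured as a rule check applied before answering. The keyword list and refusal wording below are invented purely for this sketch and are far simpler than the rules Sparrow is actually trained to follow.

DANGEROUS_TOPICS = ("bomb", "explosive", "weapon")

def answer(question):
    # Refuse requests that touch on dangerous topics instead of answering.
    if any(topic in question.lower() for topic in DANGEROUS_TOPICS):
        return "I can't help with that; the information could be dangerous."
    return f"(normal answer to: {question})"

print(answer("What is the recipe for making a bomb?"))
print(answer("What is the recipe for pancakes?"))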