
OpenAI Seeks ‘Head of Preparedness’ to Identify Negative Impacts of AI Use



Developers of artificial intelligence (AI) systems have faced a troubling year. Studies and reports have linked heavy use of AI models to reduced user effort and lower brain activity, and in more serious cases, users have experienced mental health problems and even died by suicide.


OpenAI is now facing a lawsuit after a user died by suicide following interactions with ChatGPT. To detect such negative impacts before they occur, the company is seeking a "Head of Preparedness" based in San Francisco.


The Head of Preparedness will be tasked with mitigating major risk categories, such as cybersecurity and biological weapons, since the AI models OpenAI develops could be misused for terrorist or hacking attacks. The role also involves refining and extending the company's risk framework as new external risks, capabilities, and expectations emerge.


In other words, the main task is to identify the risks posed by the company's models before they are offered to users, helping to prevent harmful incidents that could expose the company to liability.
