Sam Altman's ouster from OpenAI ended yesterday when he returned to the company as CEO with a new board of directors. The real reason for his removal last weekend remains unclear, with several unconfirmed theories circulating. But Reuters reported this morning that before he was fired, researchers at OpenAI had warned the board about an artificial intelligence (AI) model they believed could threaten humanity.
OpenAI is developing an AI model called Q* (pronounced Q-Star) that can solve mathematical problems on its own. Rather than simply memorizing formulas, it appears to understand what problem it is trying to solve. Although the project is still in its early stages, researchers at OpenAI believe it has the potential to become an artificial general intelligence (AGI).
A warning letter about the possible dangers of Q* was sent to the board of directors before Altman was fired. Among the rumors circulating immediately after Altman's dismissal was that he had neglected safety in product development at OpenAI. Ilya Sutskever, OpenAI's chief scientist, is reported to have been involved in Altman's firing, though it is not known whether he wrote the warning letter.
A few days ago, researchers at Google DeepMind published a study proposing five levels of AGI: emerging, competent, expert, virtuoso, and superhuman. By that framework, OpenAI's ChatGPT already qualifies as an emerging-level AGI. This raises the question of what level Q* might reach.