The vast power of superintelligence could lead to the disempowerment of humanity or even human extinction. Currently, we don't have a solution for steering or controlling a potentially superintelligent AI, and preventing it from going rogue. Our goal is to build a roughly human-level automated alignment researcher.

From openai.com