
Is the Superalignment initiative by OpenAI capable of heroics?

  • OpenAI's Superalignment initiative, led by Sutskever and Leike, aims to align superintelligent AI with human intent, prioritizing safety and addressing social concerns through robust training and evaluation methods.
  • OpenAI dedicates significant resources to Superalignment, emphasizing transparency, collaboration, and community involvement, fostering a collective approach to ensure responsible development.
  • The initiative acknowledges the importance of balancing caution with reaping the potential benefits of superintelligence, and actively encourages researchers to contribute to this crucial effort.

Everybody knows about OpenAI by now. After pioneering generative text and generative art technology, they have been doing the same for ‘staying in the headlines’ tech. Even your parents know about them through WhatsApp forwards like ‘10 Ways to Stay Safe from the ChatGPT Conspiracy’.

While the latter may not be completely true, OpenAI has already become the popular kid in the startup ecosystem. And lately, that popularity has been weighing heavily on its co-founders’ consciences.

Whether it is the Senate hearing or its involvement in European regulation, OpenAI openly admits that it believes a future similar to that of the movie Terminator is possible unless something is done about it immediately.

Regardless, they are taking steps towards preventing the ‘Skynet’ situation. One such preventive measure they came up with is called Superalignment.

What is Superalignment?

Superalignment is led by noted OpenAI co-founder Ilya Sutskever and Head of Alignment Jan Leike. You can read about the initiative in detail, but for now it can be summarized as an effort to keep AI whose intelligence goes beyond human limits under control and aligned with its creators’ intent.

They believe they can develop training methods and evaluation techniques to identify problematic behavior, ensure the models’ interpretability, and address social concerns around such a supergenius AI.

They are putting in a significant amount of resources and time, roughly 20% of their compute, while inviting machine learning engineers and fellow researchers in the field to join the effort. They will also let other AI regulators use these resources in due time.

So what does this say about OpenAI? Are they being the Judge, the Jury, and the Executioner all by themselves? Is this the popular kid’s ploy to pit a bully against other kids that might threaten their popularity? Are they the real T-800 in the story? Are you too young to understand that reference?

Irrespective of these questions, let me introduce a new section in my blogs called ‘GPT-Grill’. There is nothing better than the irony of starting this ChatGPT-powered roast section with a blog on OpenAI.

The Top 3 roasts for this week are:

  • Because teaching AI to parallel park is just too easy. Let's jump straight to aligning superintelligence with human intent!
  • Because when it comes to managing the risks of superintelligence, we're confident that dedicating 20% of our computing and a few years will solve it. No big deal!
  • Teaching AI to align with human values, one "please don't destroy humanity" command at a time.

On one hand, OpenAI emphasizes the importance of providing evidence and arguments, and of involving the machine learning and safety community to collectively work towards solving the problem.

Their approach involves transparency, collaboration, and accountability rather than unilaterally making decisions or exercising unchecked power.

On the other hand, OpenAI, through its Superalignment initiative, has significant influence and control over the direction and development of AI alignment. They have dedicated substantial computing resources and assembled a team of experts to work on the problem.

While they may seek input from external experts and engage with the community, OpenAI ultimately has a considerable level of authority and decision-making power in shaping the direction of superalignment efforts.

While recognizing the complexity of the problem, OpenAI remains optimistic about making significant progress within four years. Personally speaking, that doesn’t appear far-fetched to me.

Just look at where AI development was in 2016 and where it is now, seven years later. A major concern for me, though, is how they plan to carry it forward.

A superintelligent AI is needed precisely to go beyond human capabilities and solve problems we haven’t been able to solve yet. It is understandable that prevention is better than cure, but how cautious does one need to be, and to what measure?

If, tomorrow, our AI algorithms are capable enough to work through the complexities of human anatomy and come up with a cure for cancer, will someone let that superintelligence achieve what it was meant to?

As I end this piece, I ponder some philosophy. Plato believed that fear arises from ignorance and a lack of knowledge, and that true knowledge and understanding dispel fear and lead to courage.



Edited by Shruti Thapa
