
Sam Altman and Greg Brockman respond to Jan Leike’s safety concerns


Sam Altman and Greg Brockman, both executives at OpenAI, have written a response to the AI safety concerns raised by Jan Leike, who resigned from the company this week. The pair said OpenAI is committed to “a very tight feedback loop, rigorous testing, careful consideration at every step, world-class security and harmony of safety and capabilities.”

Altman and Brockman said the company will continue safety research targeting different timelines and will keep collaborating with governments and other stakeholders to make sure nothing is missed on safety.

For context, Jan Leike co-led the Superalignment team with Ilya Sutskever; the team was formed less than a year ago to find ways to control super-intelligent artificial intelligence. Both men left the company this week, amid complaints that safety seemed to be taking a back seat to new advances.

OpenAI’s announcement is somewhat long and vague about the point it is trying to make. The last paragraph contains most of the substance; it reads:

“There is no proven playbook on how to navigate the path to AGI. We think empirical understanding can help inform the way forward. We believe in both delivering the tremendous benefits and working to reduce the serious risks; we take our role here very seriously and carefully consider feedback on our actions.”

Essentially, they seem to be saying that the best way to do safety testing is while actually developing a product, rather than trying to anticipate some hypothetical super-intelligent AI that could appear in the future.

Altman and Brockman’s full statement reads as follows:

We are truly grateful to Jan for all he has done for OpenAI, and we know he will continue to contribute to the mission externally. In light of the questions his departure raised, we wanted to explain a little about how we think about our overall strategy.

First, we raised awareness of the risks and opportunities of AGI so the world could better prepare for it. We have repeatedly demonstrated the amazing possibilities from scaling deep learning and analyzed their implications; called for international governance of AGI before such calls were popular; and helped pioneer the science of assessing AI systems for catastrophic risks.

Second, we have established the necessary foundations for the safe deployment of increasingly capable systems. It’s not easy to figure out how to make a new technology safe the first time. For example, our teams did a lot of work to bring GPT-4 to the world safely, and since then have continuously improved the model’s behavior and abuse monitoring in response to lessons learned from the deployment.

Third, the future is going to be harder than the past. We need to continue to raise our safety work to match the stakes of each new model. We adopted our Preparedness Framework last year to help systematize how we do this.

This seems like as good a time as any to talk about how we see the future.

As the models continue to become much more capable, we expect them to begin to integrate with the world more deeply. Users will increasingly interact with systems—consisting of multimodal models plus tools—that can take actions on their behalf, rather than talking to a single model with only text input and output.

We think such systems would be incredibly helpful and beneficial to people, and could be delivered safely, but it will require a huge amount of groundwork. This includes thoughtfulness about what they are connected to while training, solutions to hard problems like scalable oversight, and other new kinds of safety work. As we build in this direction, we’re not sure yet when we’ll hit our safety bar for releases, and that’s fine if it pushes back release timelines.

We know that we cannot imagine every possible future scenario. So we need to have a very tight feedback loop, rigorous testing, careful consideration at every step, world-class security and harmony of safety and capabilities. We will continue to conduct safety research targeting different timelines. We will also continue to collaborate with governments and many safety stakeholders.

There is no proven playbook on how to navigate the path to AGI. We think that empirical understanding can help inform the way forward. We believe in both delivering the tremendous benefits and working to reduce the serious risks; we take our role here very seriously and carefully consider feedback on our actions.

– Sam and Greg

Tell us in the comments what you think about the situation.

Source: X | Photo via Depositphotos.com
