ChatGPT Maker OpenAI Creates Safety and Security Committee: Members and What It Will Do
ChatGPT maker OpenAI has taken an important step forward: the company has formed a new committee, called the Safety and Security Committee, that will focus on safety and security issues.
The creation of the committee was announced on May 28, 2024. OpenAI says the committee will help ensure that its AI systems are developed responsibly, and its members are experts in various relevant fields.
OpenAI, which recently disbanded its “superalignment team” focused on long-term AI risks, formed the committee to make recommendations on the safety and security of OpenAI’s projects and operations. The company announced that the committee, formed by the OpenAI board, is led by directors Bret Taylor (Chair), Adam D’Angelo, Nicole Seligman and CEO Sam Altman.
OpenAI technical and policy experts Aleksander Madry (Head of Preparedness), Lilian Weng (Head of Safety Systems), John Schulman (Head of Alignment Science), Matt Knight (Head of Security) and Jakub Pachocki (Chief Scientist) will also join the committee. Over the next 90 days, this team will evaluate and develop the Microsoft-backed company’s processes and security measures. Following the review, OpenAI will share an update on the safety and security recommendations it adopts. OpenAI will also consult other safety, security and technical experts, including Rob Joyce, a former NSA cybersecurity official who advises OpenAI on security, and former Justice Department official John Carlin.
Who is on the committee?
The Safety and Security Committee has seven initial members. They come from backgrounds such as technology policy, ethics, and law enforcement.
One member is Marietje Schaake, who previously worked at Stanford University’s Cyber Policy Center. Another is Matt Olsen, the former head of the National Counterterrorism Center.
Kalinda Raina is also a member; she previously worked on privacy at Apple. Dario Floreano is director of the Laboratory of Intelligent Systems at the Swiss Federal Institute of Technology (EPFL).
The committee also has two professors, Gillian Hadfield of the University of Toronto and Jonathan Zittrain of Harvard Law School. The initial members also include Tomicah Tillemann, founder of the Biotechnology Policy Institute.
What will the committee do?
According to OpenAI, the main responsibilities of the Safety and Security Committee will be:
Advising OpenAI on potential risks and challenges related to advanced AI systems, including the risks of AI systems being misused by bad actors.
Providing guidance on safety best practices during the development of AI systems at OpenAI, with the goal of creating AI systems that are safe and consistent with human values.
Consulting on OpenAI’s plans for the safe and responsible deployment of AI systems, including protocols for AI security and integrity.
Reviewing OpenAI’s AI systems before public release, analyzing each system for potential security and safety vulnerabilities.
Investigating any AI-related incidents, whether at OpenAI or externally, analyzing root causes and recommending responses.
Advising OpenAI leadership on high-risk decisions involving AI safety and security tradeoffs.
The committee will meet regularly and provide written reports on its activities and recommendations. OpenAI says the committee’s guidance will be invaluable as AI capabilities become more advanced and powerful.
Why is this committee important?
Concerns are growing about potential risks from advanced AI systems. These include risks associated with safety, ethics and unintended negative impacts on society.
Experts warn that highly capable AI models could be misused by bad actors, enabling threats like automated misinformation campaigns and cyber attacks.
There are also concerns that very advanced AI may be difficult to control or align with human values. Some worry about scenarios where superintelligent AI systems pursue counterproductive or destructive goals.
With AI systems like ChatGPT demonstrating increasingly remarkable capabilities, it is important to get safety and security practices right before these systems become more powerful and widespread.
OpenAI is a cutting-edge AI research company, and the formation of a dedicated safety and security committee is an important step. It shows that OpenAI is taking potential AI risks seriously as it rapidly advances the field.
The committee brings together respected experts from various relevant backgrounds. Their oversight and input can help ensure that OpenAI’s development of transformative AI systems moves forward responsibly and safely.
The move coincides with the high-profile departures of key leaders, including co-founder and chief scientist Ilya Sutskever and superalignment team co-lead Jan Leike.
OpenAI begins training “its next frontier model”
The company also said it recently began training “its next frontier model” and that the model will bring the company to “the next level of capabilities on our path to AGI,” or artificial general intelligence.
“OpenAI has recently begun training its next frontier model and we anticipate the resulting systems to bring us to the next level of capabilities on our path to AGI. While we are proud to build and release models that are industry-leading on both capabilities and safety, we welcome a robust debate at this important moment,” OpenAI said in its announcement.
Earlier this month, OpenAI demonstrated GPT-4o, an AI model that is multimodal — meaning it lets people talk to ChatGPT on their smartphones the same way they talk to other voice assistants.