OpenAI Publishes Safety Documentation for GPT-5.5 Model
OpenAI on Thursday published the system card for GPT-5.5, its latest flagship language model, outlining the safety testing, capability benchmarks and risk mitigation measures applied before the model’s release.
The system card — a standardized safety document that OpenAI publishes alongside major model launches — provides a detailed accounting of how the company evaluated GPT-5.5 across categories including cybersecurity, biological risk, persuasion and autonomous behavior.
System cards have become a central piece of OpenAI’s public-facing safety infrastructure, serving as the primary technical disclosure for each new model’s risk profile. The documents are intended to give researchers, policymakers and the public visibility into what testing was conducted and what guardrails were put in place.
The GPT-5.5 card follows the format established with previous releases, covering the model’s performance on internal red-teaming exercises, external safety evaluations and automated testing suites. It also details the scores assigned to the model across the risk categories defined in OpenAI’s Preparedness Framework.
OpenAI has faced ongoing scrutiny from safety researchers and regulators over the adequacy of its pre-deployment testing. The company has committed to publishing system cards for all major model releases as part of its voluntary safety commitments made to the White House in 2023.
The release comes as the AI industry faces increasing pressure from lawmakers in the United States and European Union to provide greater transparency around model capabilities and limitations. Several proposed regulatory frameworks would mandate safety documentation similar to what OpenAI currently publishes voluntarily.
The full system card is available on OpenAI’s website. Independent researchers are expected to review the document in the coming weeks to assess whether the evaluations meet current best practices for frontier model safety testing.