New York State has enacted the Responsible AI Safety and Education Act (RAISE Act), a landmark law creating broad safety, transparency, and compliance obligations for developers of powerful artificial intelligence systems, called frontier models. Signed by Governor Kathy Hochul, the legislation is designed to ensure that companies building advanced AI address critical safety risks and report notable incidents, establishing enforceable standards ahead of the law’s effective date on January 1, 2027.
The RAISE Act targets only the largest AI developers whose models meet specific computational and cost thresholds. Under the agreed framework for implementation, the law will apply to entities with annual revenues over $500 million that develop, deploy, or operate frontier AI models in whole or in part within New York State. This revenue benchmark reflects New York’s effort to align its approach with California’s recently adopted Transparency in Frontier Artificial Intelligence Act (TFAIA) and to avoid overly burdensome thresholds based solely on compute costs.
Defining Frontier Models and Covered Developers
“Frontier models” under the RAISE Act are defined by compute and investment thresholds: models trained using more than 10^26 floating-point operations (FLOPs) and at substantial compute expense. Models produced through knowledge distillation, in which a smaller model inherits capabilities from a larger one, can also qualify if they meet the investment criteria. Accredited academic research institutions are generally excluded, provided they do not transfer rights in the models to commercial entities.
This definition emphasizes scope and potential technological impact rather than small or experimental AI work, reflecting a focus on systems with broad capabilities and influence. Companies making these systems available to New York residents, regardless of physical location, fall under the law’s jurisdiction.
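The applicability rules above can be sketched as a simple decision function. This is an illustrative reading of the thresholds the article reports (the $500 million revenue floor, the 10^26 FLOPs compute line, the distillation pathway, and the academic exclusion); the function and field names are assumptions, not statutory language.

```python
# Hypothetical sketch of the RAISE Act applicability test described above.
# Thresholds come from the article; all names are illustrative assumptions.
from dataclasses import dataclass

REVENUE_THRESHOLD_USD = 500_000_000   # annual revenue floor for covered developers
TRAINING_FLOPS_THRESHOLD = 10**26     # compute line for frontier models

@dataclass
class Model:
    training_flops: float
    distilled_from_frontier: bool = False
    meets_distillation_investment: bool = False

@dataclass
class Developer:
    annual_revenue_usd: float
    is_accredited_academic: bool = False
    transfers_rights_commercially: bool = False

def is_frontier_model(m: Model) -> bool:
    """A model qualifies via raw training compute, or via distillation
    from a frontier model if the investment criteria are met."""
    if m.training_flops > TRAINING_FLOPS_THRESHOLD:
        return True
    return m.distilled_from_frontier and m.meets_distillation_investment

def is_covered_developer(d: Developer, m: Model) -> bool:
    """Academic institutions are generally excluded unless they transfer
    rights in the model to commercial entities."""
    if d.is_accredited_academic and not d.transfers_rights_commercially:
        return False
    return d.annual_revenue_usd > REVENUE_THRESHOLD_USD and is_frontier_model(m)
```

Note that jurisdiction (making the model available to New York residents) is a separate question the sketch does not model.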
Mandatory Safety and Security Protocols
A core pillar of the RAISE Act is the requirement that covered developers create detailed safety and security protocols before deploying a frontier model. These protocols must outline technical and organizational measures designed to reduce risk, including administrative, cybersecurity, and operational safeguards engineered to mitigate potential critical harm. The statute defines critical harm as outcomes such as the death or serious injury of at least 100 people, or $1 billion or more in economic damage, connected to misuse or malfunction of a model.
Developers must publish a redacted version of these protocols for public transparency and submit them to both the New York Attorney General and the state’s Division of Homeland Security and Emergency Services. Unredacted protocols must be retained for at least five years after deployment. An annual review of safety plans is also mandated to account for evolving model capabilities and industry best practices.
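The statutory definition of critical harm reduces to a two-pronged threshold test, which can be expressed as follows. This is a plain restatement of the figures the article gives (100 deaths or serious injuries, or $1 billion in economic damage); the function signature is an assumption for illustration.

```python
# Illustrative check of the "critical harm" definition as described above:
# at least 100 deaths or serious injuries, or at least $1 billion in
# economic damage connected to misuse or malfunction of a model.
CASUALTY_THRESHOLD = 100
DAMAGE_THRESHOLD_USD = 1_000_000_000

def is_critical_harm(casualties: int, economic_damage_usd: float) -> bool:
    """True if either statutory prong of 'critical harm' is met."""
    return (casualties >= CASUALTY_THRESHOLD
            or economic_damage_usd >= DAMAGE_THRESHOLD_USD)
```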

Incident Reporting and Transparency Obligations
The RAISE Act requires rapid reporting of safety incidents linked to frontier models. Developers must notify the Attorney General and Homeland Security within 72 hours of discovering a safety incident or forming a reasonable belief that one occurred. Safety incidents include scenarios such as unauthorized access, leaks of model code or weights, autonomous behavior outside of user instructions, or a significant failure of protective controls that increases the risk of critical harm.
This reporting requirement is more stringent than similar provisions in California’s TFAIA, which allows a longer reporting window and has somewhat different definitions of reportable events. The accelerated timeframe in New York reflects a policy emphasis on prompt disclosure and state oversight.
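The 72-hour window described above is a simple deadline clock. A minimal sketch, assuming the trigger ("discovery or reasonable belief") collapses to a single timestamp; in practice, counsel would need to determine when that clock actually starts.

```python
# Minimal sketch of the 72-hour incident-reporting clock described above.
# The statutory trigger is simplified to one timestamp; names are illustrative.
from datetime import datetime, timedelta, timezone

REPORTING_WINDOW = timedelta(hours=72)

def reporting_deadline(discovered_at: datetime) -> datetime:
    """Latest time a safety-incident report may be filed."""
    return discovered_at + REPORTING_WINDOW

def is_report_timely(discovered_at: datetime, filed_at: datetime) -> bool:
    """True if the report was filed within the 72-hour window."""
    return filed_at <= reporting_deadline(discovered_at)
```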
Enforcement, Penalties, and Protections
Enforcement of the RAISE Act will be overseen by the New York Attorney General, who can pursue civil actions against companies that fail to comply. Penalties agreed upon in amended legislation could reach up to $1 million for a first violation and $3 million for subsequent violations, despite earlier drafts referencing higher sums. These fines aim to incentivize compliance without imposing destabilizing financial burdens on covered developers.
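The penalty tiers reported for the amended act ($1 million for a first violation, $3 million for each subsequent one) imply an upper bound on cumulative exposure. The cumulative-cap function below is an assumption for illustration only, not a statutory formula; the statute may cap or aggregate penalties differently.

```python
# Hedged sketch of the reported penalty tiers: up to $1 million for a
# first violation and up to $3 million for each subsequent violation.
# Treating these as a cumulative cap is an illustrative assumption.
FIRST_VIOLATION_CAP = 1_000_000
SUBSEQUENT_VIOLATION_CAP = 3_000_000

def max_civil_penalty(violation_count: int) -> int:
    """Upper bound on total civil penalties for a given violation count."""
    if violation_count <= 0:
        return 0
    return FIRST_VIOLATION_CAP + (violation_count - 1) * SUBSEQUENT_VIOLATION_CAP
```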
The statute also prohibits developers from knowingly making false or misleading statements in required documentation and includes whistleblower protections to prevent retaliation against employees reporting safety concerns to appropriate authorities.
Comparing to Other AI Safety Laws
The RAISE Act was designed in part to mirror aspects of California’s TFAIA and to avoid the pitfalls that prompted the veto of earlier California proposals, focusing on transparency and developer responsibility rather than technical mandates such as “kill switches” or licensing regimes. Its alignment with TFAIA’s revenue-based thresholds could help create a more coherent national compliance landscape, although federal preemption debates continue as policymakers weigh national versus state AI regulation.
Nonetheless, differences remain in harm thresholds, reporting timelines, and specific obligations. New York’s critical harm threshold, requiring documentation of harms involving at least 100 deaths or $1 billion in losses, and its public disclosure framework vary in key ways from California’s model, making parallel compliance strategies essential for organizations operating across both jurisdictions.
Gaps and Uncertainties
Some aspects of the RAISE Act’s final implementation remain uncertain. Chapter amendments will take effect in January 2026, but the specific treatment of provisions like annual independent audits and the precise scope of the newly established oversight office within the Department of Financial Services will require further rulemaking. Debate over federal preemption and constitutional challenges also looms large, as industry stakeholders prepare for potential litigation over state regulatory authority in AI governance.
The requirement to define and prevent “unreasonable risk of critical harm” pushes developers into complex risk assessment territory that lacks settled industry standards, leaving room for future regulatory interpretation and legal challenge.
