The United States does not have a federal AI law. What it has instead is a rapidly expanding constellation of state-level regulations, each with its own definitions, thresholds, disclosure requirements, and enforcement timelines. For enterprise AI deployers operating across multiple states, the resulting compliance landscape is not merely complex — it is structurally incoherent, demanding simultaneous adherence to laws that sometimes contradict each other in their fundamental assumptions about what AI risk looks like.
As of March 2026, at least 17 states have enacted AI-specific legislation, with another 23 states actively considering bills in their current legislative sessions. The two most consequential laws — Colorado's SB 24-205 and California's suite of AI transparency statutes — establish compliance frameworks that will shape enterprise AI governance for years to come. Meanwhile, the federal government has issued executive orders that attempt to preempt state authority without providing a clear alternative, creating a regulatory vacuum that enterprises must navigate largely on their own.
This guide breaks down the specific requirements, deadlines, and practical compliance steps that enterprise AI teams need to understand right now.
Colorado AI Act (SB 24-205): The Most Comprehensive State AI Law
What It Covers
Colorado's AI Act, signed into law in May 2024, is the first comprehensive state-level regulation of AI systems in the United States. Unlike narrower laws that target specific applications (hiring algorithms, facial recognition), SB 24-205 establishes a broad regulatory framework for any AI system that makes or substantially contributes to "consequential decisions" affecting consumers.
The law defines consequential decisions across six domains:
- Employment: Hiring, termination, promotion, compensation, and task allocation decisions
- Education: Admissions, grading, disciplinary actions, and accommodation determinations
- Financial services: Lending, insurance underwriting, credit scoring, and fraud detection
- Healthcare: Diagnosis support, treatment recommendations, coverage determinations, and cost calculations
- Housing: Rental screening, mortgage qualification, and property valuation
- Government services: Benefits eligibility, risk assessments, and resource allocation
Any AI system that operates in these domains — whether as the primary decision-maker or as a tool that informs human decisions — falls under the Act's jurisdiction.
Key Requirements for Deployers
Risk assessments are the cornerstone obligation. Every organization deploying a high-risk AI system must complete and document a comprehensive impact assessment that includes:
- A description of the system's purpose, intended use, and the specific decisions it influences
- An analysis of the known and foreseeable risks of algorithmic discrimination, including disparate impact on protected classes
- A description of the data used by the system, including the sources, collection methods, and any known biases
- The metrics used to evaluate the system's performance and fairness
- A description of the human oversight mechanisms in place, including who reviews AI outputs and how often
- A timeline for periodic reassessment, which must occur at minimum annually or whenever the system undergoes material modifications
Disclosure obligations require deployers to inform consumers, before the system is used, that an AI system will make or substantially inform a consequential decision about them. This disclosure must include:
- A plain-language description of the AI system and its role in the decision
- The type of data the system processes about the consumer
- Instructions for how the consumer can opt out or request a human review of the decision
- Contact information for submitting complaints or requesting additional information
Algorithmic auditing requirements mandate that deployers maintain records sufficient to enable third-party audits of the system's decision patterns. While the law does not currently mandate independent audits, it creates the documentation framework that makes such audits feasible — and many legal experts expect mandatory auditing requirements to follow in subsequent amendments.
Enforcement Timeline
The Colorado AI Act takes effect on February 1, 2026, but enforcement begins with a critical grace period. The Colorado Attorney General has exclusive enforcement authority and has signaled a compliance-first approach through June 2026: during this window, organizations that demonstrate good-faith compliance efforts will receive guidance rather than penalties.
Starting July 1, 2026, the Attorney General may pursue enforcement actions, including:
- Injunctive relief requiring organizations to modify or discontinue non-compliant AI systems
- Civil penalties of up to $20,000 per violation, with each affected consumer constituting a separate violation
- Mandatory disclosure of algorithmic audit results
For a large enterprise processing thousands of AI-influenced decisions daily, the potential liability exposure is substantial. An AI system affecting 10,000 consumers without proper disclosure could generate $200 million in penalties — a figure that demands immediate compliance investment.
California: Transparency Laws Already in Effect
The Transparency in Fake AI-Generated Content Act (TFAIA)
California's TFAIA, effective January 1, 2025, requires clear disclosure when AI-generated content is used in specific contexts, with particular emphasis on political communications, advertising, and consumer-facing content.
Key provisions:
- AI-generated images, audio, and video distributed in California must include machine-readable metadata identifying the content as AI-generated
- Political advertisements using AI-generated content must include a conspicuous visual disclosure stating "This content was generated or substantially modified by artificial intelligence"
- Social media platforms operating in California must provide detection tools that allow users to identify AI-generated content in their feeds
- Violations carry penalties of up to $50,000 per incident for commercial entities
For enterprises, the practical implication is that any AI-generated marketing material, customer communication, or public-facing content distributed to California residents must carry appropriate disclosures. This includes AI-generated product descriptions, customer service chatbot interactions, and automated email content.
AB 2013: Training Data Disclosure
California's AB 2013, also in effect since January 2025, requires AI developers to disclose information about the data used to train their models. While this law primarily targets AI model providers rather than enterprise deployers, it has significant downstream effects:
- Enterprise AI teams must verify that their AI vendors can provide training data documentation
- Organizations building custom models on proprietary data must maintain detailed data provenance records
- Any fine-tuning or retrieval-augmented generation system must document the sources, licenses, and consent status of all data used
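None of this documentation needs to start as anything elaborate. Below is a minimal sketch of a provenance record in Python; the schema and field names are illustrative conventions, not anything prescribed by AB 2013:

```python
from dataclasses import dataclass, field, asdict
from datetime import date
import json

@dataclass
class DataSourceRecord:
    """One entry in a training-data provenance log (illustrative schema)."""
    source_name: str          # e.g. an internal dataset or licensed corpus
    origin: str               # where the data came from (URL, vendor, system)
    license: str              # license or contract governing use
    consent_basis: str        # e.g. "contractual", "opt-in", "public"
    collected_on: date        # when the data was acquired
    known_biases: list[str] = field(default_factory=list)

record = DataSourceRecord(
    source_name="support_tickets_2024",
    origin="internal CRM export",
    license="proprietary",
    consent_basis="contractual",
    collected_on=date(2024, 11, 1),
    known_biases=["over-represents English-language customers"],
)

# Serialize for the audit file; dates are stored as ISO strings.
print(json.dumps(asdict(record), default=str, indent=2))
```

Captured consistently at ingestion time, records like this make vendor verification and fine-tuning documentation largely mechanical rather than a scramble at audit time.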
The enforcement mechanism is complaint-driven, with the California Attorney General empowered to investigate and issue penalties. Several major class-action law firms have publicly stated their intention to file suits under AB 2013, creating private enforcement pressure that may prove more consequential than government action.
SB 1047 — The Bill That Almost Was
It is worth noting what California chose not to enact. SB 1047, which would have imposed safety testing requirements and "kill switch" mandates on frontier AI models, was vetoed by Governor Newsom in September 2024. The veto message cited concerns about stifling innovation and establishing requirements that were not scientifically grounded. However, revised versions of the bill are circulating in the current legislative session, and many of SB 1047's provisions are expected to return in modified form. Enterprise teams should monitor California's 2026-2027 legislative calendar closely.
Federal Preemption: The Executive Order Landscape
Trump Administration's Approach
The current federal approach to AI regulation is defined by Executive Order 14179, signed in January 2025, which revoked the Biden-era AI safety executive order and replaced it with a framework emphasizing innovation and voluntary industry commitments over prescriptive regulation.
Key provisions of the current federal posture:
- Explicit skepticism toward state-level AI regulation, with the administration encouraging Congress to pass preemption legislation that would override state AI laws
- Voluntary commitments from major AI companies on safety testing, content provenance, and security practices — commitments that carry no legal enforcement mechanism
- Reduced funding for the National Institute of Standards and Technology (NIST) AI Safety Institute, which had been developing safety evaluation guidance building on NIST's AI Risk Management Framework
- Emphasis on national competitiveness, framing regulatory restraint as necessary to maintain US leadership over China in AI development
The Preemption Question
The central legal question is whether federal executive action can preempt state AI laws. The short answer is: not without legislation. Executive orders do not have the legal force necessary to override state statutes, and the current Congress has not passed comprehensive AI legislation.
Several preemption bills are pending, including:
- The AI LEAD Act, which would establish federal standards that preempt state laws for AI systems in interstate commerce
- The CREATE AI Act, which focuses on federal R&D funding and includes limited preemption provisions
- The American AI Innovation Act, a bipartisan compromise that would preempt state laws only for specific categories of AI applications
None of these bills are expected to reach a floor vote before late 2026 at the earliest, meaning state laws will remain the primary regulatory framework for enterprise AI compliance through the rest of this year.
Extraterritorial Pressure: The EU AI Act
The EU AI Act, whose obligations for general-purpose AI models took effect in August 2025 and whose high-risk system requirements follow in August 2026, exerts significant influence on US enterprises through extraterritorial application. Any organization that deploys AI systems affecting EU residents — including US companies with European customers, employees, or operations — must comply with EU requirements regardless of where the AI system is physically located.
For multinational enterprises, the practical effect is that EU standards become the de facto global compliance floor, since meeting EU requirements generally satisfies the less stringent US state-level laws. The EU framework includes:
- Risk classification of AI systems into unacceptable, high, limited, and minimal risk categories
- Conformity assessments for high-risk systems, including technical documentation and third-party auditing
- Transparency requirements that exceed California's TFAIA provisions
- Penalties of up to 7% of global annual turnover — significantly higher than any US state penalty
For a detailed analysis of EU AI Act compliance milestones and their enterprise implications, see our earlier coverage of the 2026 International AI Safety Report.
Other State Laws Enterprise Teams Must Track
Illinois: BIPA Amendments for AI Biometrics
Illinois' Biometric Information Privacy Act (BIPA) — already the most litigated privacy law in US history — received AI-specific amendments in 2025 that expand its scope to cover:
- AI systems that process biometric data, including facial recognition, voice recognition, gait analysis, and keystroke dynamics
- Informed consent requirements for any AI system that collects, stores, or analyzes biometric identifiers
- Extension of BIPA's existing private right of action to the new AI provisions, with statutory damages of $1,000 per negligent violation and $5,000 per intentional or reckless violation
- New retention limitations requiring organizations to delete biometric data within three years of the individual's last interaction with the AI system
That extended private right of action is particularly consequential. BIPA litigation has already generated over $6 billion in settlements since 2020, and the AI amendments are expected to produce a new wave of class-action filings targeting enterprise AI deployments that process employee or customer biometric data.
Texas: AI Advisory Council Recommendations
The Texas AI Advisory Council, established by HB 2060, published its initial recommendations in January 2026. While these recommendations do not carry the force of law, they signal the direction of future Texas AI legislation:
- Mandatory disclosure for AI systems used in hiring, lending, and insurance decisions
- Annual algorithmic impact assessments for high-risk AI applications
- A state-level AI registry requiring organizations to register AI systems that make consequential decisions about Texas residents
- Restrictions on AI-powered surveillance in public spaces, with exceptions for law enforcement
Texas is the second-largest state economy in the US, and legislation based on these recommendations could affect a significant portion of enterprise AI deployments nationally.
New York City: Local Law 144 Enforcement Updates
New York City's Local Law 144, which requires bias audits of automated employment decision tools (AEDTs), has been in enforcement since July 2023. Updated enforcement guidance issued in February 2026 clarifies several ambiguities that had created compliance uncertainty:
- Scope clarification: AI systems that rank, score, or filter candidates at any stage of the hiring process are covered, including resume screening tools, interview scheduling algorithms, and performance prediction models
- Audit standards: Bias audits must be conducted by an independent auditor with no financial relationship to the AI vendor
- Publication requirements: Audit results must be published on the employer's website for at least two years
- Enhanced penalties: Fines increased from $500 per violation to $1,500 per violation, with each candidate affected constituting a separate violation
Enterprise Compliance Checklist
1. AI System Inventory
Before you can comply with any regulation, you need a complete inventory of every AI system in your organization. This inventory should document:
- System name and vendor (or internal team if custom-built)
- Purpose and use case — what decisions does the system make or influence?
- Data inputs — what data does the system process, and where does it come from?
- Output and integration — where do the system's outputs go, and who acts on them?
- Jurisdictional exposure — which states and countries are affected by the system's decisions?
- Risk classification — does the system qualify as "high-risk" under any applicable regulation?
Most enterprises discover during this process that they have 3-5x more AI systems in production than their leadership teams realize, due to departmental deployments, embedded AI features in SaaS tools, and legacy automation systems that now incorporate AI components.
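Keeping the inventory as structured records, rather than a spreadsheet that drifts out of date, makes questions like "which systems need a Colorado impact assessment?" answerable in one line. A minimal sketch, with illustrative field names that are our own convention rather than anything a regulation prescribes:

```python
from dataclasses import dataclass

# Domains that Colorado SB 24-205 treats as "consequential decisions"
CONSEQUENTIAL_DOMAINS = {
    "employment", "education", "financial_services",
    "healthcare", "housing", "government_services",
}

@dataclass
class AISystemRecord:
    """One row in the enterprise AI system inventory (illustrative schema)."""
    name: str
    vendor: str                  # or internal team if custom-built
    use_case: str                # what decisions it makes or influences
    decision_domain: str         # e.g. "employment", "marketing"
    data_inputs: list[str]
    output_consumers: list[str]  # who or what acts on the outputs
    jurisdictions: set[str]      # states/countries affected by its decisions

    def is_high_risk(self) -> bool:
        # First-pass flag only; legal review makes the final classification.
        return self.decision_domain in CONSEQUENTIAL_DOMAINS

inventory = [
    AISystemRecord(
        name="resume-screener", vendor="AcmeHR", use_case="rank applicants",
        decision_domain="employment", data_inputs=["resumes"],
        output_consumers=["recruiting team"], jurisdictions={"CO", "NY", "CA"},
    ),
]

# Systems that likely need a Colorado-style impact assessment
flagged = [s.name for s in inventory if s.is_high_risk() and "CO" in s.jurisdictions]
print(flagged)  # ['resume-screener']
```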
2. Risk Assessment Framework
For each high-risk AI system identified in the inventory, conduct a risk assessment that addresses:
- Fairness analysis: Test the system's outputs for disparate impact across protected classes (race, gender, age, disability status). Use both statistical parity and equalized odds metrics; a short code sketch follows this list.
- Accuracy assessment: Measure the system's error rates, including false positive and false negative rates, across different population segments.
- Explainability review: Can the system's decisions be explained in terms that a non-technical person can understand? If not, what additional tooling or documentation is needed?
- Override mechanism: Is there a clear process for humans to override the system's recommendations? Is this process actually used in practice, or do human reviewers rubber-stamp AI outputs?
- Incident response: What happens when the system produces an incorrect or harmful output? Who is notified, and how quickly can the system be modified or disabled?
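To make the fairness metrics in the first item concrete, here is a minimal sketch of a statistical parity check and the true/false positive rate gaps that equalized odds compares. The four-fifths (0.8) threshold mentioned in the comment is a common convention from US employment-selection guidance, not a universal statutory test:

```python
def selection_rate(decisions: list[int]) -> float:
    """Fraction of favorable decisions, e.g. 'hired' = 1."""
    return sum(decisions) / len(decisions)

def statistical_parity_ratio(group_a: list[int], group_b: list[int]) -> float:
    """Ratio of selection rates; the 'four-fifths rule' compares this to 0.8."""
    rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

def tpr_fpr(decisions: list[int], labels: list[int]) -> tuple[float, float]:
    """True/false positive rates against ground-truth outcomes."""
    tp = sum(d and l for d, l in zip(decisions, labels))
    fp = sum(d and not l for d, l in zip(decisions, labels))
    positives = sum(labels)
    negatives = len(labels) - positives
    return tp / positives, fp / negatives

# Toy data: model decisions and true outcomes for two demographic groups
# (chosen so no group has zero positives or negatives).
dec_a, lab_a = [1, 1, 0, 1, 0, 1], [1, 1, 0, 0, 0, 1]
dec_b, lab_b = [1, 0, 0, 0, 0, 1], [1, 1, 0, 0, 0, 1]

print("parity ratio:", round(statistical_parity_ratio(dec_a, dec_b), 2))
# Equalized odds asks that TPR and FPR match across groups.
tpr_a, fpr_a = tpr_fpr(dec_a, lab_a)
tpr_b, fpr_b = tpr_fpr(dec_b, lab_b)
print("TPR gap:", round(abs(tpr_a - tpr_b), 2),
      "FPR gap:", round(abs(fpr_a - fpr_b), 2))
```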
3. Disclosure Templates
Prepare standardized disclosure notices for each category of AI system interaction. At minimum, you will need:
- Consumer-facing disclosure: For AI systems that interact directly with customers or make decisions about them
- Employee disclosure: For AI systems used in HR processes, performance management, or workplace monitoring
- Vendor disclosure: For AI systems that evaluate or make decisions about business partners, suppliers, or contractors
- Regulatory disclosure: For AI systems used in compliance, reporting, or audit processes
Each disclosure should be written in plain language at a 7th-grade reading level, should be available in the primary languages of the affected population, and should include clear instructions for opting out or requesting human review.
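The reading-level target is easy to regression-test before a notice ships. A minimal sketch using the open-source textstat package (one of several readability libraries; the threshold check is our own convention, not a regulatory formula):

```python
# pip install textstat
import textstat

DISCLOSURE = (
    "We use a computer program to help review your application. "
    "The program looks at the information you gave us. "
    "You can ask a person to review the decision instead. "
    "To do that, call the number at the bottom of this page."
)

# Flesch-Kincaid estimates the US school grade needed to read the text.
grade = textstat.flesch_kincaid_grade(DISCLOSURE)
print(f"Estimated grade level: {grade:.1f}")

# Fail a build or review step if the notice drifts above the target.
MAX_GRADE = 7.0
if grade > MAX_GRADE:
    raise ValueError(f"Disclosure reads at grade {grade:.1f}; simplify wording.")
```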
4. Vendor Evaluation Criteria
If you use third-party AI systems (and most enterprises do), your vendor contracts must address regulatory compliance:
- Training data documentation: Can the vendor provide AB 2013-compliant disclosure of training data sources?
- Bias audit results: Has the vendor conducted independent bias audits, and will they share the results?
- Data processing agreements: Where is your data processed, and does the vendor's data handling comply with applicable privacy laws?
- Indemnification: Will the vendor indemnify you for regulatory penalties arising from defects in their AI system?
- Right to audit: Do you have the contractual right to audit the vendor's AI system, or to commission a third-party audit?
- Update and modification notices: Will the vendor notify you before making material changes to the AI model or its training data?
5. Documentation and Record-Keeping
Across all regulations discussed in this guide, documentation is the common thread. Organizations that maintain comprehensive records of their AI systems, decisions, assessments, and disclosures will be in a fundamentally stronger compliance position than those that do not. Key documentation requirements include:
- Risk assessment reports, updated annually or upon material system changes
- Disclosure notices and evidence of delivery to affected individuals
- Bias audit results and remediation actions taken
- Training data provenance records
- System modification logs, including what changed, when, and why
- Incident reports documenting system failures, erroneous outputs, and corrective actions
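For the system modification log in particular, an append-only JSON Lines file is a workable starting point before investing in dedicated tooling. A minimal sketch, with illustrative field names:

```python
import json
from datetime import datetime, timezone

LOG_PATH = "model_change_log.jsonl"  # append-only; one JSON object per line

def log_modification(system: str, version: str,
                     change: str, reason: str, author: str) -> None:
    """Append one change record; earlier lines are never rewritten."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system": system,
        "new_version": version,
        "what_changed": change,
        "why": reason,
        "author": author,
    }
    with open(LOG_PATH, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

log_modification(
    system="resume-screener",
    version="2.4.0",
    change="retrained on 2025-H2 hiring data",
    reason="quarterly refresh per risk assessment schedule",
    author="ml-platform-team",
)
```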
How AI Platforms Simplify Compliance
The compliance requirements outlined above are demanding, but they become significantly more manageable when your AI infrastructure is designed with governance in mind. AI platforms that provide built-in audit trails, model documentation, and usage tracking eliminate much of the manual documentation burden that makes compliance costly and error-prone.
Specifically, platforms with integrated governance features offer:
- Automatic audit logging of every AI inference, including the input data, model version, output, and timestamp — creating the documentation trail that regulators require without manual record-keeping (see the sketch after this list)
- Model cards and system documentation that capture the technical specifications, training data, performance metrics, and known limitations of each AI model in use
- Usage tracking and analytics that show which AI systems are being used, by whom, how often, and for what purposes — enabling the AI system inventory that is the foundation of every compliance framework
- Access controls and approval workflows that ensure high-risk AI decisions receive appropriate human oversight before being acted upon
- Version control and change management that document every modification to AI models and workflows, with rollback capabilities if a change introduces compliance issues
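As a sense of scale for the audit-logging point above: a bare-bones version is just a wrapper around the inference call. This is a sketch, not a platform implementation; the field names and the print-based sink are assumptions, and a production system would write to durable, access-controlled storage:

```python
import functools
import hashlib
import json
from datetime import datetime, timezone

def audit_logged(model_version: str):
    """Decorator that records every inference: input digest, output, timestamp."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            result = fn(*args, **kwargs)
            record = {
                "timestamp": datetime.now(timezone.utc).isoformat(),
                "model_version": model_version,
                # Hash the input so the log is useful for audits
                # without becoming a second store of raw PII.
                "input_sha256": hashlib.sha256(
                    json.dumps([args, kwargs], default=str).encode()
                ).hexdigest(),
                "output": result,
            }
            print(json.dumps(record))  # stand-in for a durable audit sink
            return result
        return wrapper
    return decorator

@audit_logged(model_version="credit-scorer-1.3")
def score_applicant(features: dict) -> float:
    # Placeholder model; a real deployment would call the inference service.
    return 0.5 + 0.1 * features.get("years_employed", 0)

score_applicant({"years_employed": 3})
```

Hashing the input rather than storing it verbatim is a deliberate trade-off: the log can still prove what was processed and when, without itself becoming a privacy liability.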
Swfte's enterprise security and compliance features are designed around these principles, providing the governance infrastructure that makes regulatory compliance an operational capability rather than a periodic audit exercise. When compliance requirements evolve — and they will — organizations with platform-level governance can adapt their documentation, disclosures, and monitoring without rebuilding their compliance processes from scratch.
What Comes Next
The AI regulatory landscape will not simplify in 2026. If anything, the pace of state-level legislation is accelerating, with new bills introduced almost weekly during active legislative sessions. Enterprise AI teams should prepare for:
- Additional state laws modeled on the Colorado AI Act, with at least 5-8 states expected to pass comprehensive AI legislation by the end of 2026
- Sector-specific federal regulations from agencies like the SEC, FTC, EEOC, and HHS, which are developing AI-specific guidance within their existing regulatory authority
- International convergence around the EU AI Act framework, as Canada, Brazil, Japan, and South Korea finalize their own AI regulations
- Increased private litigation, particularly under Illinois BIPA and California consumer protection laws, where plaintiffs' attorneys see AI systems as the next major class-action frontier
The organizations that will navigate this environment most successfully are those that treat compliance not as a one-time project but as an ongoing operational capability — with the infrastructure, documentation, and governance processes to adapt as requirements evolve. Building that capability now, while the enforcement landscape is still taking shape, is significantly less expensive and disruptive than scrambling to comply after the penalties begin.
For additional guidance on building enterprise AI governance frameworks, see our analysis of enterprise AI governance and risk management and our enterprise AI security and compliance guide.