Key Takeaways
- California's DRE confirmed in March 2026 that licensees and supervising brokers bear full liability for AI-generated misrepresentations; the technology vendor is not in the chain of legal responsibility.
- Generative AI-related lawsuits grew 978% between 2021 and 2025, yet most real estate brokerages lack a documented AI governance policy that would provide any legal protection in a misrepresentation case.
- E&O coverage for AI-related claims is actively narrowing: AIG, Great American, and WR Berkley are pursuing AI exclusions, and Verisk's generative AI exclusion forms became available to carriers on January 1, 2026.
- California AB 723 creates a statutory liability split between broker-approved platforms and agent-selected tools, but the framework collapses whenever agents use AI writing tools the broker has never reviewed or approved.
- The gap between current industry guidance (review AI output) and what courts will eventually require (documented, auditable review processes) is where liability will concentrate as hallucination cases reach litigation.
The scenario is straightforward: an agent uses an AI copywriting tool to draft a listing description, the model hallucinates a "resort-style pool and spa" based on a misread photo, the listing goes live in the MLS, a buyer makes an offer partly on the strength of that amenity, and closes escrow. The pool doesn't exist. Under existing agency law, the answer to "who gets sued" is almost certainly the agent and the supervising broker, and their E&O carrier may decline to defend them.
This is not a hypothetical edge case. Buyers and agents have documented frustration with AI-generated listing descriptions containing fabricated features, from non-existent fireplaces to phantom views to invented square footage. The legal industry calls this a hallucination. Real estate law calls it material misrepresentation. And the existing framework for assigning liability was written entirely for humans.
What an AI Hallucination Actually Looks Like Inside a Real Estate Transaction
When a large language model drafts a property description, it does not "know" the property. It generates plausible-sounding text by predicting what typically follows what. Photographs and MLS data feed the model, but the model will confidently invent details it cannot verify from those inputs. Square footage, roof condition, appliance packages, proximity to transit, zoning classifications, and the presence of a pool are all fair game for fabrication.
NAR has warned directly that AI tools can generate information that sounds correct but is not, "including incorrect square footage, made-up property features, or inaccurate market details." The structural danger is that the output is polished, authoritative, and grammatically impeccable, which makes it far more dangerous than a handwritten typo. A buyer reviewing a beautifully formatted AI description has no signal that the "chef's kitchen with Wolf appliances" was inferred from a blurry listing photo.
The hallucination problem extends beyond marketing copy into automated valuations. AI-assisted AVMs increasingly feed automated listing tools, and when an AVM misclassifies a property as containing a finished basement or an accessory dwelling unit, those errors can cascade through the listing description with no human checkpoint in between.
The Legal Framework Was Written for Human Error — It Has No Clean Answer for Machine Confabulation
Under existing agency law, a real estate licensee who publishes a material misrepresentation about a property's physical condition is liable to the buyer regardless of intent. Most state licensing statutes follow the same logic: ignorance is not a defense when the agent had a duty to verify. The California DRE's March 2026 advisory makes this explicit: "responsibility under current law rests with the licensee and their responsible broker, not the technology provider."
That single sentence is the most consequential thing regulators have said about AI liability in this cycle, and most brokerages haven't operationalized it. It means an agent who pastes unreviewed AI output into MLS remarks cannot argue the hallucination was the model's fault. As the RASM advisory frames it: "You cannot hide behind the system that generated it. If it is in your MLS remarks, your flyer, your Facebook ad, or your website, you are responsible."
NAR's Code of Ethics Article 2 prohibits REALTORS from exaggerating, concealing, or misrepresenting pertinent facts about a property or transaction. NAR's updated 2026 Code extends this obligation explicitly to AI-generated content. There is no carve-out for machine error. What the law does not yet have is a coherent theory for allocating fault among the three parties who typically share the exposure: the AI platform vendor, the brokerage, and the individual agent. Courts will get there through litigation, and the industry will pay tuition on those cases.
E&O Insurance in the Age of AI: Why Your Policy Probably Has a Coverage Gap You Haven't Noticed
Generative AI-related lawsuits in the U.S. grew 978% between 2021 and 2025, according to Lexology's analysis of litigation trends. The insurance market has noticed. Three major carriers (AIG, Great American, and WR Berkley) have already sought regulatory approval to limit liability for claims arising from AI systems, according to Metropolitan Risk Advisory. Verisk's new generative AI exclusion forms became available to insurers on January 1, 2026.
Berkley's formulation deserves careful reading: an "absolute" AI exclusion eliminating coverage for any claim "based upon, arising out of, or attributable to" the use of artificial intelligence, including AI-generated content and failure to detect AI-produced materials. Under that language, an E&O claim arising from a hallucinated pool in a listing description would be excluded entirely, because the agent used an AI tool and failed to detect the error.
Many real estate brokerages are renewing E&O policies in 2026 without asking whether their carrier has adopted Verisk's exclusions. Standard professional liability policies, tech E&O, and commercial general liability each leave significant gaps when AI is the proximate cause of harm, and relying on "silent coverage" (coverage assumed because it isn't explicitly excluded) is no longer a viable posture. The Lexology analysis specifically flags this: silent AI coverage is a strategy carriers are actively dismantling.
The Chain of Liability: Platform, Brokerage, Agent — And Why Courts May Hold All Three
California's AB 723, effective January 1, 2026, creates the clearest statutory framework currently on the books. The WAV Group analysis identifies the key liability split: brokers bear supervisory responsibility for AI tools deployed on their behalf, while agents bear personal liability for tools they independently select and deploy. The law draws a line between broker-approved platforms and agent-controlled tools, with indemnification obligations flowing accordingly.
That framework works cleanly when an agent uses a brokerage-issued AI tool. It collapses when an agent brings in a third-party AI writing assistant the broker has never reviewed. In that scenario, the California DRE's supervision standard still applies: brokers must "reasonably supervise" AI tool usage by affiliated licensees, including establishing written policies and monitoring compliance. A brokerage without a written AI policy faces a strong argument that it failed its supervisory duty regardless of whether it knew the agent was using any particular tool.
The proptech platform itself occupies ambiguous ground. Most terms of service disclaim liability for AI-generated content and place verification responsibility on the user. Those disclaimers have not been tested in a real estate misrepresentation case. There is a credible plaintiff theory that a platform actively marketing its tool for MLS listing generation bears some duty to flag when model confidence is low or when it is inferring features not visible in the supplied imagery. Until a court rules on that theory, the agent and brokerage remain the primary defendants, and the platform collects its subscription fee from the waiting room.
What Industry Groups Are Warning (And What They're Not Saying Out Loud)
NAR's public guidance is appropriately cautious. The organization urges brokerages to establish AI use policies and insists that human review precede any client-facing AI output. The California DRE advisory from March 2026 is the most detailed regulatory document currently available to practitioners, and it is worth reading in full.
What neither NAR nor any state licensing authority has yet mandated is an AI audit trail: a documented log showing which parts of a listing were AI-generated, what human review occurred, and when that review was completed. That requirement is coming. The Colorado AI Act, effective June 30, 2026, mandates formal impact assessments for AI used in consequential decisions including housing transactions, with penalties exceeding $100,000 for repeat Fair Housing Act violations. The Neuhaus compliance analysis for 2026 identifies a five-part framework that anticipates this direction, but it remains voluntary practice, not regulatory mandate.
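No regulator has prescribed what such an audit trail should look like, but the concept is simple enough to sketch: one structured record per listing field, capturing whether AI touched it, which approved tool was used, what primary source it was checked against, and who signed off when. The following is a minimal illustration only; the class and field names are hypothetical, not drawn from any platform or regulation.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class ListingFieldAudit:
    """One audit record per listing field: who (or what) wrote it, and who verified it."""
    field_name: str                        # e.g. "remarks", "square_footage"
    ai_generated: bool                     # True if any part came from an AI tool
    ai_tool: Optional[str] = None          # approved tool used, if any
    primary_source: Optional[str] = None   # e.g. "county tax record, pulled 2025-04-12"
    reviewed_by: Optional[str] = None      # licensee who verified the content
    reviewed_at: Optional[datetime] = None

    def review_complete(self) -> bool:
        # A human-written field needs no AI review record; an AI-generated field
        # needs a named tool, a primary source, a reviewer, and a timestamp.
        if not self.ai_generated:
            return True
        return all([self.ai_tool, self.primary_source,
                    self.reviewed_by, self.reviewed_at])


def ready_to_publish(audits: list[ListingFieldAudit]) -> bool:
    """Gate: every AI-generated field must carry a completed review record."""
    return all(a.review_complete() for a in audits)
```

The point is not the code but the artifact it produces: a dated, attributable record of review that a brokerage could hand to a court, which is precisely what "be careful, review everything" does not generate on its own.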
The gap between what industry groups are warning (be careful, review everything) and what courts will eventually require (prove your review process was documented and adequate) is where liability will concentrate as the first hallucination cases reach verdict.
The Compliance Playbook That Doesn't Exist Yet — And What Firms Should Be Building Right Now
No comprehensive AI compliance framework for real estate transactions currently exists at the federal level. What brokerages need to build, before a hallucinated amenity becomes a headline verdict, is a documented AI governance structure with three operational components.
First, a written AI acceptable use policy specifying which tools are approved, what they can be used for, and that human factual verification against primary sources (inspection reports, seller disclosures, tax records) is mandatory before any AI output touches the MLS, a disclosure form, or a client communication. The policy should name the approved tools, not simply describe categories.
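A policy that names approved tools rather than describing categories is also a policy that can be enforced mechanically. A brief sketch of that idea, with purely illustrative tool names (they do not refer to real products):

```python
# Hypothetical policy data: each approved tool mapped to its permitted uses.
APPROVED_AI_TOOLS = {
    "listing-writer-pro": {"marketing_copy"},   # illustrative name only
    "photo-enhancer-x": {"image_editing"},      # illustrative name only
}

def tool_permitted(tool: str, use_case: str) -> bool:
    """True only if the tool is named in the policy AND approved for this use case."""
    return use_case in APPROVED_AI_TOOLS.get(tool, set())
```

An agent pasting output from an unlisted tool fails the check by default, which mirrors the legal posture the policy is meant to create: unapproved tools are the agent's personal liability, not the brokerage's.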
Second, updated independent contractor agreements explicitly assigning liability for unauthorized AI tool use to the agent, while creating indemnification obligations running back to the brokerage for tools the brokerage selects and deploys. Without that language, the default rule in most states makes the broker the responsible party for agent conduct regardless of who picked the tool.
Third, an E&O coverage audit conducted with a carrier or specialist broker who can confirm whether the current policy language excludes AI-related errors, whether Verisk's 2026 exclusion forms have been adopted, and what endorsements or standalone products are available to close the gap.
The pool that doesn't exist is a small example of a large structural exposure. The industry deployed AI at scale before the legal infrastructure caught up, and the liability is real, diffuse, and only partially insurable. Firms that treat this as a checkbox will discover its teeth in litigation. Firms that build governance infrastructure now will have documentation to show a court when that litigation arrives, and in this area of law, documented process is the difference between a defensible claim and a default judgment.
Frequently Asked Questions
Can a buyer sue the AI platform vendor directly if a hallucinated property feature causes financial harm?
Most AI platform terms of service disclaim liability for generated content and place verification responsibility squarely on the user. Those disclaimers have not been tested in a real estate misrepresentation case, but there is a credible plaintiff theory that platforms actively marketing tools for MLS listing generation bear some duty of care. Until courts rule on this, the agent and supervising broker remain the primary defendants.
Does California's AB 723 require disclosure when AI is used to write listing copy, or only for altered images?
AB 723, effective January 1, 2026, specifically targets digitally altered images in real estate advertising, requiring clear disclosure and access to unaltered originals, with violations constituting a misdemeanor. It does not yet mandate disclosure for AI-drafted written copy. However, the California DRE's March 2026 advisory makes clear that all advertising, including AI-generated text, must be truthful and accurate, with the licensee bearing full responsibility for factual verification regardless of the authorship tool.
If a brokerage has no written AI policy, does that create direct liability exposure?
The California DRE requires brokers to "reasonably supervise" AI tool usage by affiliated licensees, including establishing written policies and providing training. A brokerage without written policies faces a strong supervisory failure argument whenever an agent's AI tool produces a material misrepresentation. That failure of supervision creates direct broker liability independent of the agent's personal liability under agency law.
Are standard real estate E&O policies currently covering AI-related misrepresentation claims?
Many legacy policies still provide coverage because they predate AI exclusion language, but that window is closing rapidly. Verisk's generative AI exclusion forms became available to carriers on January 1, 2026, and AIG, Great American, and WR Berkley are actively pursuing AI liability limitations according to [Metropolitan Risk Advisory](https://www.metropolitanrisk.com/major-insurers-are-pulling-back-from-ai-liability/). Berkley's absolute AI exclusion eliminates coverage for any claim attributable to AI use, including failure to detect AI-produced errors, which would encompass an unreviewed hallucinated listing description.
What is the Colorado AI Act's relevance for real estate brokerages operating outside Colorado?
The Colorado AI Act, effective June 30, 2026, requires formal impact assessments for AI systems used in consequential decisions including housing transactions, and carries penalties exceeding $100,000 for repeat Fair Housing Act violations. Its significance beyond Colorado is as a legislative template: it is the first state law to mandate documented AI governance in housing contexts, and the compliance infrastructure it requires (audit trails, human oversight logs, impact assessments) previews what federal regulation or litigation-driven standards will eventually demand nationwide.