Enterprise Contract Stacks, Confidentiality, and Kovel: A Companion Analysis to the Report

This note complements the report and its Appendix G. Appendix G surveys the consumer-facing Terms of Service of leading frontier-model providers and shows that those terms generally do not accept fiduciary, agency, or loyalty-like duties. This note addresses a different contractual layer: the enterprise and commercial agreements these providers use for business, regulated, and other high-assurance deployments. That layer materially changes the confidentiality analysis. It strengthens, but does not fully resolve, the privilege position under Kovel and related doctrines. See the report's discussion of Observable Contractual Loyalty, Kovel, the functional-equivalent doctrine, and the contrast between general consumer SaaS and privilege-sensitive legal-tech contracting.

Methodology note. This analysis reviews publicly available standard enterprise terms, DPAs, help-center pages, and product/security documentation as of April 21, 2026. It does not evaluate non-public order forms, negotiated amendments, private BAAs, customer-specific security exhibits, or operational configurations. Provider documentation changes quickly, and negotiated terms may differ from the public baseline summarized here.

What these enterprise agreements are

For enterprise customers, the relevant agreement is often not the public consumer ToS at all. Instead, providers typically offer some combination of an enterprise or commercial services agreement, a data processing addendum (DPA), a business associate agreement (BAA) for eligible regulated uses, and published enterprise privacy, security, and retention commitments.

These instruments serve a different purpose than retail clickwrap. Consumer terms are designed for scale and broad risk allocation. Enterprise stacks are designed to answer different questions: who controls the data, whether the provider acts as a processor on the customer's behalf, whether the provider trains on customer data, what retention and deletion rules apply, what administrative and audit controls exist, whether data residency is available, and what additional contractual requirements apply for regulated or sensitive data. In short, these are fit-for-purpose contracts for enterprise deployment, not simply "better ToS."

That distinction matters for law firms, healthcare entities, and other professional users. For those users, the key question is not whether a public consumer AI service, without additional controls and contracting, is suitable for confidential matter work; for most professional uses it is not. The real question is whether the provider offers a commercial contract stack that is sufficiently bounded, confidential, instruction-driven, and operationally controllable to support the customer's professional duties. In healthcare, that often means BAA-scoped, HIPAA-eligible services. In legal practice, the analogous question is whether the contract stack supports client confidentiality and, in the stronger case, provides enough of the structure one would want for a Kovel-style privilege argument.

OpenAI

OpenAI's enterprise stack includes the OpenAI Services Agreement, the OpenAI Data Processing Addendum, published Enterprise Privacy commitments, and — for eligible healthcare use cases — a Business Associate Agreement. OpenAI states that enterprise offerings give customers ownership and control over business data, that OpenAI does not train its models on business data by default, and that ChatGPT Enterprise, ChatGPT for Healthcare, and ChatGPT Edu offer retention and administrative controls such as SAML SSO and feature/access management (https://openai.com/enterprise-privacy/). OpenAI's help documentation states that the API platform can be used with PHI only after the customer obtains a BAA from OpenAI, and that BAA requests are reviewed case by case (https://help.openai.com/en/articles/8660679-how-can-i-get-a-business-associate). The DPA states that OpenAI processes customer data on the customer's behalf and pursuant to the DPA and the agreement (https://openai.com/policies/data-processing-addendum/). (OpenAI Enterprise Privacy)

For confidentiality, this is a serious enterprise posture. It supports the argument that OpenAI is functioning as a bounded confidential service provider or processor, not as an uncontrolled public recipient of matter data. For privilege, however, the public contractual record remains incomplete. The Services Agreement expressly states that OpenAI and the customer "are not legal partners or agents but are independent contractors" (https://openai.com/policies/services-agreement/). A court that insisted on a more formal agency relation under Kovel could treat that clause as a substantial obstacle. A court that focused instead on functional equivalence, customer instructions, confidentiality, no-training commitments, and counsel-directed deployment could reach a different conclusion. OpenAI's enterprise stack therefore materially strengthens the confidentiality and functional-equivalent argument, but it does not provide the cleanest possible express-agency hook. (OpenAI Services Agreement)

Anthropic

Anthropic's enterprise stack includes its Commercial Terms, its incorporated DPA, enterprise admin/compliance features for Claude for Work, and a feature-scoped BAA program for certain HIPAA-ready services. Anthropic states that for Claude for Work the customer is the controller, Anthropic acts as a processor on the customer's behalf, and Anthropic processes data only as instructed by the customer to provide the service (https://support.claude.com/en/articles/9267385-does-anthropic-act-as-a-data-processor-or-controller). Anthropic also states that its DPA is automatically incorporated into its Commercial Terms (https://support.claude.com/en/articles/7996862-how-do-i-view-and-sign-your-data-processing-addendum-dpa). Anthropic documents enterprise audit logs, a Compliance API, and configurable custom retention controls for enterprise plans (https://support.claude.com/en/articles/9970975-access-audit-logs ; https://support.claude.com/en/articles/13015708-access-the-compliance-api ; https://support.claude.com/en/articles/10440198-configure-custom-data-retention-controls-for-enterprise-plans). Anthropic further states that BAAs may cover HIPAA-ready services, including use of its first-party API and Claude Enterprise plans, subject to feature-level and configuration limitations. The BAA does not cover Workbench/Console, Claude Free, Pro, Max, or Team plans, and certain beta features, third-party connectors, and specific API features are excluded or only conditionally covered (https://support.claude.com/en/articles/8114513-business-associate-agreements-baa-for-commercial-customers). (Claude Help Center — Data Processor or Controller)

On confidentiality, Anthropic's public enterprise posture is one of the strongest in this group. It is unusually clear about processor status, on-behalf-of processing, enterprise controls, and no-training treatment for commercial data. On privilege, the posture still stops short of a classical Kovel-style intermediary contract. Anthropic's public materials do not, on the record reviewed here, expressly position the company as the law firm's agent or as a privilege-preserving confidential intermediary in the way some legal-tech vendors do. That means Anthropic offers a strong confidentiality and functional-equivalent argument, but not a complete public contractual acceptance of the legal intermediary role itself. (Claude Help Center — Data Processor or Controller)

Google

Google's relevant enterprise stack spans Google Workspace, Google Cloud, the applicable data processing terms, and the relevant BAA/HIPAA materials. Google states in its Workspace with Gemini privacy materials that customer data in Google Workspace with Gemini remains within the organization, that prompts and generated output are not used to train models outside the customer's domain without permission, and that customer data is processed under the Google Cloud Data Processing Addendum (https://support.google.com/a/answer/15706919). Google also documents that Google Workspace and Cloud Identity can be used under a HIPAA BAA and maintains a list of HIPAA Included Functionality that includes Gemini app and Gemini in Workspace (https://support.google.com/a/answer/3407054 ; https://workspace.google.com/terms/2015/1/hipaa_functionality/). On the Cloud side, Google documents that Vertex AI Search and related RAG capabilities support HIPAA-compliant use under the appropriate BAA and that customer data used in Vertex AI Search is not used to train Google foundation models; Google also documents zero-data-retention options for some Vertex AI generative AI services (https://docs.cloud.google.com/generative-ai-app-builder/docs/compliance-security-controls ; https://docs.cloud.google.com/generative-ai-app-builder/docs/data-governance ; https://docs.cloud.google.com/vertex-ai/generative-ai/docs/vertex-ai-zero-data-retention). (Google Workspace with Gemini Privacy Hub)

Google's enterprise posture is therefore robust on the dimensions that matter most for confidentiality: customer-instruction processing, training restrictions, enterprise controls, a mature compliance architecture, and explicit HIPAA support for listed services. But the public materials reviewed here do not frame Google as the lawyer's agent or confidential intermediary for facilitating legal advice. Google's posture is best understood as that of a mature enterprise cloud and productivity provider with strong confidentiality and data-governance commitments. That substantially helps the confidentiality analysis and strengthens a functional-equivalent privilege argument. It is not the same thing as a public Kovel-style acceptance of a bounded legal-agency role. (Google Workspace with Gemini Privacy Hub)

As with the other providers, these commitments are product-, feature-, and configuration-specific; HIPAA and retention posture for a given deployment should be checked against the listed service and the enabled controls.

xAI

xAI's relevant enterprise stack consists of its Enterprise Terms of Service and Data Processing Addendum. xAI's DPA states that xAI acts as a processor, that the customer is the controller or processor as applicable, and that xAI will process personal data only in accordance with the customer's lawful documented instructions (https://x.ai/legal/data-processing-addendum). xAI's enterprise terms also establish a business-facing confidentiality framework. At the same time, xAI's enterprise terms expressly preserve a standard independent-contractor posture rather than an agency posture (https://x.ai/legal/terms-of-service-enterprise). (xAI Data Processing Addendum)

On the public record reviewed here, xAI's enterprise stack is materially more substantial than a consumer ToS and now includes a processor-oriented DPA, confidentiality provisions, a public BAA intake questionnaire for API/HIPAA use, Zero Data Retention for enterprise accounts, a 90-day in-app audit trail for Business Tier accounts, SAML SSO and role-based access, SOC 2 Type 2 compliance, and enterprise security/access controls (https://docs.x.ai/developers/faq/security ; https://x.ai/security/). xAI's Enterprise Terms also prohibit submission of PHI and other listed sensitive categories unless the customer contacts xAI and agrees to a separate Enterprise Customer Agreement or similar agreement and a BAA, as applicable; that makes the enterprise-agreement/BAA path the operative enabling path for healthcare deployments. The remaining distinction is that xAI's public materials are less specifically developed for law-firm, privilege-sensitive, or feature-by-feature HIPAA deployment analysis than the OpenAI, Anthropic, and Google materials reviewed above. That leaves xAI with a real but less fully documented confidentiality and privilege-support posture on the public record. (xAI Enterprise Terms of Service)

Confidentiality analysis

The importance of these enterprise stacks is clearest on confidentiality. A law firm, healthcare entity, or other professional user ordinarily needs more than a consumer disclaimer regime. It needs a contract stack that identifies the provider as processing data for the customer's purposes, limits secondary use, constrains retention, offers administrative and audit controls, and creates enforceable confidentiality obligations. OpenAI, Anthropic, and Google now plainly offer such stacks for at least some enterprise and regulated use cases, while xAI offers a processor-oriented enterprise framework with a less fully documented public posture for professional and regulated deployments. (OpenAI Enterprise Privacy)

That means the report's Appendix G remains correct as to the consumer baseline, but incomplete as a full market picture. There is now a separate contractual layer in the market for enterprise and regulated deployments. These are not best understood as "consumer terms with better security." They are a distinct class of commercial contract designed to make the provider usable where confidentiality, processor status, administrative control, and regulated-data handling are central to the service relationship.

Kovel and privilege analysis

Caveat. This is not a claim that processor terms, no-training commitments, enterprise controls, or BAAs by themselves establish attorney-client privilege or work-product protection. Privilege analysis remains jurisdiction- and fact-specific, and may turn on negotiated terms, the lawyer's supervision and necessity showing, the user's workflow, client expectations, and the actual use of the system. The AI-specific privilege and work-product case law discussed in the parent report is early and fast-moving as of April 2026.

The harder question is privilege. Under the report's account of Kovel and its functional-equivalent extensions, the strongest privilege-sensitive posture is one in which the third-party provider acts as a confidentiality-bound intermediary or agent helping the lawyer render effective legal advice. The enterprise stacks described above materially improve the factual predicate for confidentiality and may support functional-equivalence arguments more strongly than consumer terms do. They strengthen the argument for a reasonable expectation of confidentiality. They strengthen the argument that the provider is operating under the lawyer's or firm's instructions. They strengthen the argument that the provider's use of the data is limited to delivering the contracted service rather than to unrelated training or product-improvement uses.

But they do not fully solve the agency question. A court that strictly applied Kovel and demanded a more explicit agency relation could find the current public enterprise stacks insufficient, especially where the contract expressly disclaims agency, as OpenAI's and xAI's public business terms do. A court that emphasized functional equivalence, necessity, confidentiality, and counsel-directed use could conclude that modern processor-style safeguards are enough, especially where the lawyer — not the client — selects, configures, and supervises the system and the contract sharply limits secondary use. The law could evolve in that direction. The present position is therefore intermediate: the enterprise stacks improve the privilege argument substantially, but they do not eliminate the doctrinal uncertainty identified in the report. (OpenAI Services Agreement)

That uncertainty is commercially meaningful because some established legal-tech, e-discovery, and litigation-support SaaS vendors already contract in ways intended to support privilege-sensitive workflows — for example, through customer agreements that expressly acknowledge attorney-direction, confidentiality-preserving access, or work-product-oriented processing. That is the market pattern described in the report. The contrast is therefore not between "ordinary modern contracting" and some implausible legal ask. It is between two commercially available postures: one that offers confidentiality and process-control commitments only, and another that goes further and accepts a limited agent/intermediary role because that is what the customer market requires.

Normative recommendation: scoped agency as a design option

The point below is not that independent-contractor or no-agency clauses in provider enterprise terms are bad-faith drafting; they are standard commercial risk-management tools. The question is whether agentic and privilege-sensitive deployments would benefit from an additional, more specifically bounded contractual posture.

For high-assurance professional deployments, recognizing a limited agency or confidential-intermediary role is a narrow, market-adapted, and commercially reasonable option, not a general transfer of risk. It is a bounded, proportional, commercially intelligible acknowledgment of the function the provider is already performing when it processes confidential matter data solely on the customer's instructions to help the customer discharge professional duties. In that context, agency is not a broad assumption of open-ended liability; it is a bounded legal characterization of a bounded service role.

That observation is even more apt where the provider is not merely supplying storage or compute, but is supplying an AI agent designed to operate on behalf of the user — communicating, acting, retrieving, negotiating, and in some cases initiating transactions at the user's direction. In those settings, the practical reality of the product looks increasingly agentic even if the contract avoids the word. A provider that markets and supplies agentic functionality while disclaiming any cognizable agent role may create a mismatch between the product's operational role and the legal posture reflected in standard terms. The report's recommendation is that this mismatch is neither inevitable nor desirable. A provider can instead accept a limited, scoped, observable agency-like role — bounded by authorization controls, confirmation gates, audit logs, confidentiality restrictions, and explicit exclusions — and thereby give enterprise and professional users a more accurate and more useful legal fit for the work the system is actually being asked to do.
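
As an illustration only, the kind of scoped, observable posture described above can be made concrete at the deployment-configuration level. The sketch below is hypothetical: the type, field names, and values are not drawn from any provider's product or API and would need to reflect the actual negotiated terms and controls.

```typescript
// Hypothetical sketch of a scoped-agency deployment configuration.
// All names and values are illustrative, not any provider's actual API.

interface ScopedAgencyConfig {
  actsFor: "customer" | "end-user";   // on whose instructions the agent operates
  permittedActions: string[];         // explicit allow-list of agent actions
  confirmationGates: string[];        // actions that require human confirmation first
  excludedActions: string[];          // explicit exclusions (no implied authority)
  confidentiality: {
    trainingOnCustomerData: false;    // mirrors a contractual no-training commitment
    retentionDays: number;            // contractual retention limit
  };
  auditLog: {
    enabled: true;                    // observable: every agent action is logged
    recordPromptsAndActions: boolean; // what the log captures for later review
  };
}

// Example: a narrowly scoped configuration for a privilege-sensitive matter.
const matterConfig: ScopedAgencyConfig = {
  actsFor: "customer",
  permittedActions: ["summarize-documents", "draft-for-review"],
  confirmationGates: ["send-communication", "file-submission"],
  excludedActions: ["initiate-transaction"],
  confidentiality: { trainingOnCustomerData: false, retentionDays: 30 },
  auditLog: { enabled: true, recordPromptsAndActions: true },
};
```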

Static MVP


This is a toy demo showing conceptually how relevant fiduciary and other agency-related terms can be crafted to support and reflect the offerings of AI agent providers. In practice, the categories of services, promises, and caveats would be considerably more complex and more carefully drafted. For purposes of this demo, select duty modules and generate a copy-friendly Markdown stub. Everything runs in the browser; no form submission, analytics, or backend call is used.

Example language only. Not legal advice.

Use CONTRACT.md and AUTH_PREFS.md as canonical reference texts and adapt with counsel before production use.

Duty modules
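
As a purely illustrative sketch of what the demo's duty-module catalog could look like in client-side code, the TypeScript below defines a few hypothetical modules. The ids, titles, and clause text are placeholders; the canonical reference language lives in CONTRACT.md and AUTH_PREFS.md.

```typescript
// Hypothetical duty-module catalog for the demo. Ids, titles, and clause
// text are placeholders, not the canonical CONTRACT.md language.
export const DUTY_MODULES: Record<string, { title: string; clause: string }> = {
  confidentiality: {
    title: "Duty of Confidentiality",
    clause:
      "Provider will use Customer Data solely to provide the Services on Customer's documented instructions.",
  },
  scopedAgency: {
    title: "Scoped Agency",
    clause:
      "Provider acts as Customer's limited agent only for actions expressly authorized in AUTH_PREFS.md.",
  },
  auditability: {
    title: "Auditability",
    clause:
      "Provider will maintain an audit log of agent actions and make it available to Customer on request.",
  },
};
```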

Generated Markdown
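
A minimal sketch of the generation step, assuming the DUTY_MODULES catalog above and a page element with id generated-markdown (both hypothetical). Consistent with the demo's design, everything runs client-side with no form submission or backend call.

```typescript
import { DUTY_MODULES } from "./duty-modules"; // hypothetical path to the catalog sketched above

// Assemble the selected duty modules into a copy-friendly Markdown stub.
function generateStub(selected: string[]): string {
  const sections = selected
    .filter((id) => id in DUTY_MODULES)
    .map((id) => `## ${DUTY_MODULES[id].title}\n\n${DUTY_MODULES[id].clause}`);
  return ["# Duty Modules (example language only; not legal advice)", ...sections].join("\n\n");
}

// Render the stub into the demo's output panel; the user can then copy it out.
const output = document.querySelector("#generated-markdown");
if (output) {
  output.textContent = generateStub(["confidentiality", "scopedAgency"]);
}
```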

Copy this stub into your own drafting workflow.