Financial Services Spoke

AI for Investment Research Governance

The most-deployed and fastest-scaling AI category in financial services — and the one where adoption most consistently outruns supervision.

The Market Has Consolidated Faster Than Governance Has Caught Up

AlphaSense's scale is a proxy for industry-wide research-AI penetration

$500M
AlphaSense ARR (Oct. 2025)
7,000
Enterprise customers (April 2026)
90%
Of the S&P 100 are AlphaSense customers
80%
Of the top hedge funds use AlphaSense (vendor-reported)

The Adoption Pattern Behind the AlphaSense Number

BloombergGPT, S&P Global's Kensho, Hebbia, and several smaller competitors fill the rest of the market. The pitch is consistent: compress weeks of earnings-season research into hours; surface relevant signals across analyst transcripts, 10-Ks, alt-data, and expert calls; and generate structured research notes ready for portfolio managers.

The governance pattern is less consistent. Adoption has outrun supervisory framework design at most firms.

Where AI Research Delivers

Five high-fit use cases that are mature, reliable, and well-deployed

Use Case 1

Earnings-Call Summarization

AI listens to the call, extracts key topics, surfaces analyst-tagged questions.

Why it matters: Mature, reliable, well-deployed.

Use Case 2

10-K / 10-Q Parsing

Pulling specific disclosures, checking cross-quarter consistency, and flagging unusual changes.

Why it matters: Strong fit for AI workflow.

Use Case 3

Cross-Document Synthesis

Comparing competitor disclosures, building thematic baskets from filings and transcripts.

Use Case 4

Expert Call Summarization

Pulling key facts and disclosures from expert network calls.

Use Case 5

Alt-Data Ingestion

Web traffic, app downloads, satellite imagery, supply-chain signals — AI extracts and summarizes signals from unstructured data.

Coverage

Volume + Velocity

AI can scan 100 filings or 50 transcripts in the time an analyst can read three.

Why it matters: Productivity gain is real; supervision is what is missing.

Where AI Research Fails

Four failure modes that determine where the firm must require human-first review

Failure 1

Novel Situations

Anything outside the training distribution — restructurings, going-private transactions, novel financing structures — produces lower-quality output and higher hallucination rates.

Why it matters: Treat unusual deal structures as not AI-eligible by default.

Failure 2

Hallucinated Citations

Stanford RegLab's 2024 study found legal-research RAG tools hallucinated 17-33% of the time despite vendor "hallucination-free" claims. The same architectural failure mode applies to financial research RAG.

Why it matters: RAG reduces hallucination — does not eliminate it.
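A minimal first line of defense against fabricated citations is exact-match verification: confirm the quoted span actually appears in the retrieved source before it reaches a draft note. A sketch in Python (the `citation_supported` helper and the sample filing text are illustrative, not any vendor's API):

```python
import re

def normalize(text: str) -> str:
    """Lowercase and collapse whitespace so formatting differences don't mask a match."""
    return re.sub(r"\s+", " ", text).lower().strip()

def citation_supported(quote: str, source_text: str) -> bool:
    """Return True only if the quoted passage actually appears in the cited source.

    A match does NOT prove the claim is accurate in context; a miss is a hard
    stop that routes the citation to the analyst for manual review.
    """
    return normalize(quote) in normalize(source_text)

# Usage: screen every AI-generated citation before it enters a research note.
filing_text = "Revenue increased 12% year over year, driven by subscription growth."
assert citation_supported("revenue increased 12% year over year", filing_text)
assert not citation_supported("revenue increased 21% year over year", filing_text)
```

A check this crude only catches outright fabrication; paraphrased or out-of-context citations still require the human review step described above.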

Failure 3

Forward-Looking Judgment

AI cannot recognize that a metric, while accurately summarized, misrepresents the underlying commercial reality.

Why it matters: Strategic judgment does not ride on RAG.

Failure 4

MNPI Handling

AI tools that accept any input cannot enforce the firm's MNPI access controls.

Why it matters: Information barriers fail at the API boundary.

The MNPI Risk That Is Specific to Investment Research AI

Investment research operates under information walls. Analysts in different sectors have access to different non-public material. Material Non-Public Information (MNPI) controls — Chinese walls, restricted lists, watch lists, and information barriers — exist precisely to prevent the wrong people from seeing the wrong information.

Shadow AI breaks information walls instantly. An analyst pastes notes into ChatGPT; the prompts are retained vendor-side and may be used to train the model; the same model serves another analyst at a different firm or another seat at the same firm. Even if the information barrier between seats is correctly enforced inside the firm, the AI tool can create a leak path outside the firm.
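One way to make "information barriers fail at the API boundary" concrete is a pre-send gate that refuses to transmit any prompt naming a restricted-list issuer. A hypothetical sketch (the restricted list, issuer names, and `gate_prompt` helper are illustrative; a real control would sit in the firm's surveillance stack and match far more robustly):

```python
# Illustrative pre-send gate: block prompts that mention restricted-list
# issuers before they leave the firm's boundary. All names are hypothetical.
RESTRICTED_LIST = {"ACME", "Acme Corp"}  # issuers behind the wall for this seat

def gate_prompt(prompt: str, restricted: set[str]) -> str:
    """Raise rather than transmit if the prompt touches a restricted issuer."""
    hits = sorted(name for name in restricted if name.lower() in prompt.lower())
    if hits:
        raise PermissionError(f"Prompt references restricted issuers: {hits}")
    return prompt  # safe to forward to the external AI tool

# Usage: a clean prompt passes; a restricted-list mention is stopped at the gate.
gate_prompt("Summarize Q3 trends in cloud infrastructure spend", RESTRICTED_LIST)
try:
    gate_prompt("Draft a note on the Acme Corp going-private talks", RESTRICTED_LIST)
except PermissionError as blocked:
    print(blocked)
```

Substring matching is deliberately over-broad here: for MNPI controls, a false block that an analyst escalates is cheaper than a missed leak.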

How Research AI Fits the Regulatory Framework

Five existing regimes that each apply to AI-assisted research output

Privacy

Reg S-P (2024 Amendments)

AI vendors handling customer information are service providers. Due diligence and contractual terms required; 72-hour notification clock applies.

Supervision

FINRA Notice 24-09

The firm evaluates the AI tool before deployment and supervises its use across the lifecycle.

Recordkeeping

Rule 17a-4 and FINRA 4511

Research that gets transmitted is a record. Retention and audit trail required.

Communications

Rule 2210

Where research is communicated to retail investors, pre-publication review is required before content goes to clients.

Recommendations

Reg BI

Where research informs recommendations, those recommendations are subject to the best-interest standard.

Disclosure

Form ADV (for RIAs)

Material AI use in research processes is a Form ADV disclosure question following the SEC's Delphia AI-washing enforcement action.

Governance Controls Every Research AI Deployment Needs

Seven controls that align AI research with the firm's existing supervisory plumbing

1

Tool inventory

Every research AI tool in use, including shadow tools, with risk-tier classification.

2

Vendor due diligence files

Reg S-P service-provider documentation for every AI vendor.

3

MNPI controls

Contractual terms with the AI vendor on data retention, training, and segregation. Internal usage policy specifying permitted inputs.

4

Prompt and response retention

Capturing AI interactions for Rule 17a-4 and FINRA 4511 compliance.

5

Citation verification

For any research that cites sources, the analyst verifies the citation before incorporating it into a publication or recommendation.

6

Pre-publication review

Research that becomes retail communications flows through 2210 supervision.

7

Recommendation documentation

Reg BI documentation captures the AI's role in any recommendation chain.
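Control 4 (prompt and response retention) can be approximated in-house while a vendor capture solution is evaluated. A sketch, with hypothetical field names, of an append-only log entry whose hash chain makes after-the-fact tampering detectable; actual Rule 17a-4 compliance still requires a compliant storage medium and the rule's other conditions:

```python
import hashlib
import json
import time

def retention_record(user: str, tool: str, prompt: str, response: str,
                     prev_hash: str = "0" * 64) -> dict:
    """Build one append-only log entry for a single AI interaction.

    Chaining each entry to the hash of the previous one means editing or
    deleting any earlier record breaks every hash that follows it,
    approximating the tamper evidence that WORM storage provides.
    """
    entry = {
        "ts": time.time(),
        "user": user,
        "tool": tool,
        "prompt": prompt,
        "response": response,
        "prev_hash": prev_hash,
    }
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["hash"] = hashlib.sha256(payload).hexdigest()
    return entry

# Usage: capture every prompt/response pair, chained in order of occurrence.
first = retention_record("analyst1", "research-ai", "Summarize the Q3 10-Q", "...")
second = retention_record("analyst1", "research-ai", "Compare to prior year", "...",
                          prev_hash=first["hash"])
```

In practice each entry would be appended to write-once storage and the chain verified on a schedule, so a broken link surfaces in supervision rather than in an exam.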

AI for Investment Research Governance — FAQ

How widespread is AI in investment research?

AlphaSense reached $500M ARR in October 2025 and reports 7,000 enterprise customers including 90% of the S&P 100, ~70% of the S&P 500, and 80% of the top hedge funds. BloombergGPT, Kensho, and Hebbia fill the rest of the market. Adoption is mature; supervisory framework design at firms is uneven.

What is the MNPI risk with research AI?

AI tools that accept any input cannot enforce the firm's MNPI access controls (Chinese walls, restricted lists, watch lists). An analyst pasting notes into a consumer AI tool can create a leak path outside the firm's information barriers — even if the inside-firm barriers are correctly enforced. The compliance question is which AI tools have contractual terms strong enough to support MNPI handling.

Do investment research AI tools hallucinate?

Yes, though research AI vendors typically do not publish independent accuracy benchmarks. Stanford RegLab's peer-reviewed 2024 study found legal-research RAG tools hallucinated 17-33% of the time despite vendor "hallucination-free" claims. The same architectural failure mode applies to financial research RAG, meaning analysts must verify citations and material claims before incorporating them into publications.

What regulations apply to AI in investment research?

Multiple — Reg S-P (2024 amendments) governs the vendor as a service provider, FINRA Notice 24-09 requires firms to evaluate and supervise the tool, Rule 17a-4 and FINRA 4511 require prompt/response retention when transmitted, Rule 2210 governs pre-publication review where research becomes retail communications, and Reg BI applies where research informs recommendations.

Govern Your Firm's Research AI Use Before the Next Exam

Free Shadow AI Assessment inventories your research AI tools, audits vendor due diligence, and stress-tests your MNPI handling and recordkeeping.