GLBA + AI for Wealth Managers: Client Data Patterns
Wealth managers handle financial data covered by Gramm-Leach-Bliley. AI tools touching that data have specific rules to follow. Here's the practical map.
The Gramm-Leach-Bliley Act (GLBA) governs how financial institutions handle nonpublic personal information (NPI). RIAs, broker-dealers, and other financial institutions must follow GLBA Privacy and Safeguards Rules.
AI tools touching NPI need to fit within these rules.
This is not legal advice. Talk to your compliance officer.
The relevant rules
Privacy Rule. Requires firms to provide privacy notices and limits sharing of NPI with non-affiliated third parties.
Safeguards Rule. Requires firms to develop and maintain an information security program to protect NPI.
AI vendors are typically "service providers" under the Safeguards Rule. Firms are responsible for ensuring their service providers maintain appropriate safeguards.
Patterns that work
Enterprise AI vendors with Safeguards-aligned contracts. Specifically: a written contract requiring the vendor to maintain safeguards, terms prohibiting use of NPI for any purpose other than serving the firm, and breach notification provisions.
AI deployed within the firm's own infrastructure. Self-hosted models or cloud-deployed in the firm's account. NPI doesn't leave the firm's control.
AI on de-identified data only. Strip NPI before AI processing. Most analytics and pattern-recognition work can be done on de-identified data.
AI for non-NPI tasks. Internal communications, marketing, training, operations. GLBA applies to NPI specifically.
Patterns that don't work
Free or consumer AI for client data. No service provider contract. Terms don't meet Safeguards requirements.
AI services without breach notification clauses. GLBA requires notification of breaches. Your vendor's terms need to support that.
AI training on NPI. Most readings of the Privacy Rule prohibit using NPI to train models that will serve other clients. Use vendors with explicit no-training terms.
Unmanaged AI use by advisors. Advisors pasting client data into personal accounts on consumer AI services — the firm is exposed even when the advisor acted on their own.
The vendor management question
GLBA's Safeguards Rule requires written information security programs that include vendor management. Specifically:
- Due diligence on vendors
- Written contracts requiring safeguards
- Monitoring of vendor compliance
- Updates as the AI landscape changes
For each AI vendor: keep documentation showing you've done all four.
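One way to keep that documentation auditable is to track the four items as structured data rather than scattered files. A minimal sketch (the class and field names here are illustrative assumptions, not any standard schema):

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class AIVendorRecord:
    """Illustrative vendor-management record for one AI vendor."""
    vendor: str
    due_diligence_completed: Optional[date] = None   # diligence review date
    contract_requires_safeguards: bool = False       # written safeguards terms in place
    last_compliance_review: Optional[date] = None    # ongoing monitoring
    program_doc_updated: Optional[date] = None       # info-sec program update reflecting this vendor

    def documentation_gaps(self) -> list:
        """Return which of the four vendor-management items lack documentation."""
        gaps = []
        if self.due_diligence_completed is None:
            gaps.append("due diligence")
        if not self.contract_requires_safeguards:
            gaps.append("written safeguards contract")
        if self.last_compliance_review is None:
            gaps.append("compliance monitoring")
        if self.program_doc_updated is None:
            gaps.append("program documentation update")
        return gaps

# Hypothetical vendor: diligence done, the other three items still open.
record = AIVendorRecord(vendor="ExampleAI", due_diligence_completed=date(2024, 3, 1))
gaps = record.documentation_gaps()
```

A report of `documentation_gaps()` across the vendor inventory gives compliance a quick view of where the paper trail is thin.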
A specific compliant pattern: client communication drafting
Many RIAs want AI to draft client emails. Compliant pattern:
The AI is accessed through an enterprise tier with no-training and confidentiality terms. The AI sees only the relevant client data (account balance, performance, last meeting topics). The advisor reviews every output before sending. Logs show what data was sent to the AI and when.
Privacy notice has been provided to clients. The firm's information security program covers AI vendors.
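The logging piece of that pattern can be a thin wrapper around the vendor call. A minimal sketch, assuming a hypothetical `send_fn` stands in for whatever enterprise AI API the firm uses; note it logs field names sent, not field values, so the audit log itself doesn't become an NPI store:

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai_audit")

def draft_client_email(client_id: str, fields_sent: dict, send_fn) -> str:
    """Record which client-data fields go to the AI vendor and when,
    then call the drafting function. `send_fn` is a stand-in for the
    vendor API call."""
    audit_log.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "client_id": client_id,
        "fields_sent": sorted(fields_sent.keys()),  # names only, not values
    }))
    return send_fn(fields_sent)

# Hypothetical usage: the lambda stands in for the enterprise AI endpoint.
draft = draft_client_email(
    "client-123",
    {"account_balance": 250_000, "last_meeting": "Q3 rebalancing"},
    send_fn=lambda data: f"Draft referencing {len(data)} data points",
)
```

The advisor-review step happens after this returns: the draft goes to the advisor, never straight to the client.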
A specific compliant pattern: portfolio analytics
For analytics on client portfolios:
De-identify the data before AI processing (replace client IDs with random tokens). Run the analysis. Map results back to clients within the firm's environment, not at the AI vendor.
The AI vendor never sees identifiable client information. GLBA exposure is minimal.
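The tokenization step above can be sketched in a few lines. This is illustrative, not a full de-identification program (quasi-identifiers in the remaining fields still need review); the key property is that the token-to-client map never leaves the firm's environment:

```python
import secrets

def deidentify(records: list, id_field: str = "client_id"):
    """Replace client IDs with random tokens before AI processing.
    Returns the cleaned records (safe to send) and the token map
    (kept in-house for re-identifying results)."""
    token_map = {}   # token -> real client ID; stays inside the firm
    cleaned = []
    for rec in records:
        token = secrets.token_hex(8)
        token_map[token] = rec[id_field]
        cleaned.append({**rec, id_field: token})
    return cleaned, token_map

portfolios = [{"client_id": "C-001", "equity_pct": 62}]
cleaned, token_map = deidentify(portfolios)
# `cleaned` goes to the AI vendor; results come back keyed by token
# and are mapped to clients in the firm's own environment.
real_id = token_map[cleaned[0]["client_id"]]
```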
A specific pattern that's risky: AI-driven trading
Letting AI make trading decisions on client accounts introduces a different layer of regulatory exposure beyond GLBA. Fiduciary duty, advisor of record questions, suitability. Talk to compliance and counsel before going here.
The audit angle
SEC examiners increasingly ask about AI use in advisor exams. Expect requests for:
- A list of AI tools used
- A description of what data flows to each
- Vendor management documentation
- Information security program updates reflecting AI vendors
- Staff training records
Have these ready.
What advisors should NOT do
Use consumer AI services with client information. Even if the terms have improved, the patterns of use are problematic.
Trust the AI's outputs without human review. Especially anything that affects a client's account.
Leave AI out of the privacy notice. If your privacy practices have changed to include AI vendors, your notice needs to reflect that.
What firms should have
- AI vendor inventory
- Vendor agreements with GLBA-aligned terms
- AI use policy
- Staff training
- Updated information security program documentation
- Monitoring and audit logs
- Updated privacy notice if AI changes data flows materially
The bottom line
GLBA + AI is workable. The frame is vendor management and information security. Enterprise vendors with proper terms. Documented use. Monitored compliance.
The wealth firms getting it wrong are using consumer AI and hoping. The ones doing it right are operating like AI is just another service provider that happens to need careful vendor management.
Not legal advice. Talk to your compliance officer.
Want the full guide? Check out our deep-dive page for more context, FAQs, and resources.