AI Compliance Guide for Chicagoland Businesses

AI adoption creates real compliance obligations for businesses in financial services, legal, accounting, and other regulated industries across Chicago and Chicagoland. The rules governing data protection, professional responsibility, and vendor management all extend to AI systems that process your clients’ information. This guide answers the most important compliance questions for Chicagoland SMBs adopting AI. CelereTech builds compliance-first AI programs that address these requirements for regulated businesses throughout the Chicago area.

This guide is part of the CelereTech AI Resource Center for Chicago and Chicagoland businesses.

What compliance obligations does AI adoption create for small businesses?

AI adoption creates compliance obligations under any regulation that governs the data your AI system processes, including GLBA for financial data, state data privacy laws for consumer information, IRS safeguard rules for tax data, and professional responsibility rules for law and accounting firms. The core principle is that your compliance framework does not change because you are using AI; the same obligations extend to the AI tools themselves. Most businesses discover they need updated vendor agreements, data governance documentation, and employee policies before they can deploy AI in a compliant manner.

How does GLBA apply to AI systems used by financial firms?

The GLBA Safeguards Rule requires financial institutions to implement a written information security program covering all systems that access, transmit, or store customer financial information, including AI tools. This means AI vendors must be evaluated through vendor due diligence, covered by appropriate contractual controls, and included in your risk assessment. The amended Safeguards Rule, which required full compliance by June 2023, added specific requirements for vendor oversight, encryption, access controls, and incident response that apply to AI deployments handling customer financial data.

What do state data privacy laws require for AI use?

State privacy laws, including Illinois BIPA, the CCPA for businesses with California customers, and similar laws in other states, require that AI systems processing personal information meet consent, data minimization, and security requirements. Illinois BIPA specifically covers biometric data, including voice patterns used in voice cloning or authentication systems, and carries a private right of action with statutory damages. Businesses serving customers in multiple states should map which privacy laws apply to their AI deployments based on where customers are located.
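
As a rough illustration of that mapping exercise, the sketch below keys customer states to privacy laws that may apply. The law list and state assignments are illustrative assumptions, not legal advice; confirm actual coverage with counsel.

    # Illustrative mapping from customer states to privacy laws that may apply.
    # Entries are examples, not an exhaustive or authoritative list.
    STATE_PRIVACY_LAWS = {
        "IL": ["Illinois BIPA (biometric data)"],
        "CA": ["CCPA/CPRA"],
        "CO": ["Colorado Privacy Act"],
        "VA": ["Virginia Consumer Data Protection Act"],
    }

    def applicable_laws(customer_states):
        """Return the privacy laws potentially triggered by where customers live."""
        laws = set()
        for state in customer_states:
            laws.update(STATE_PRIVACY_LAWS.get(state, []))
        return sorted(laws)

    print(applicable_laws(["IL", "CA", "WI"]))
    # ['CCPA/CPRA', 'Illinois BIPA (biometric data)']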

How do IRS data safeguard rules apply to tax preparers using AI?

IRS Publication 4557 and the FTC Safeguards Rule require tax preparers to implement administrative, technical, and physical safeguards for all taxpayer data, including any AI system that processes or accesses that data. Tax preparers using AI to draft returns, analyze documents, or communicate with clients must ensure the AI vendor provides a data processing agreement, prohibits use of tax data for model training, and meets encryption and access control requirements. Using consumer AI tools with taxpayer data is a direct violation of IRS safeguard requirements.

What professional responsibility obligations do lawyers have regarding AI?

Lawyers have professional responsibility obligations under Model Rules of Professional Conduct to competently supervise AI tools used in client matters, protect client confidentiality, and disclose AI use where required by engagement terms or ethics rules. Competent supervision means understanding what the AI tool does with client data, verifying AI-generated work product for accuracy, and ensuring client confidential information does not enter tools that lack appropriate data protections. Illinois Rules of Professional Conduct apply these duties to Chicago-area attorneys, and bar associations across Illinois have issued ethics guidance on AI use.

What must be included in an AI vendor data processing agreement?

An AI vendor data processing agreement must specify: what data the vendor may access and process, that the vendor may not use your data for model training without explicit consent, data residency and storage location, breach notification timelines, data deletion procedures upon contract termination, and the security controls the vendor maintains. Enterprise AI vendors like Microsoft (for Copilot) provide standard DPAs that meet these requirements. Consumer AI vendors typically do not provide DPAs, which is why they are inappropriate for processing any business or client data.
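
One way to make that review repeatable is to track each required provision as a checklist item. The sketch below is a hypothetical review record in Python; the field names are assumptions to adapt to your own agreement templates.

    from dataclasses import dataclass, fields

    # Hypothetical DPA review record: one flag per provision listed above.
    @dataclass
    class DpaReview:
        vendor: str
        data_scope_defined: bool           # what data the vendor may access and process
        no_training_without_consent: bool  # inputs excluded from model training
        data_residency_specified: bool     # storage location and residency
        breach_notification_sla: bool      # notification timeline defined
        deletion_on_termination: bool      # data deletion on contract termination
        security_controls_documented: bool # the controls the vendor maintains

        def gaps(self):
            """List the provisions the agreement does not yet cover."""
            return [f.name for f in fields(self)
                    if isinstance(getattr(self, f.name), bool)
                    and not getattr(self, f.name)]

    review = DpaReview("ExampleAI Inc.", True, True, False, True, True, False)
    print(review.gaps())  # ['data_residency_specified', 'security_controls_documented']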

What AI governance documentation do regulators expect?

Regulators increasingly expect businesses to maintain a written AI governance program that includes: an inventory of AI systems in use, a risk assessment for each, vendor due diligence records, an AI acceptable use policy, employee training records, and an incident response procedure for AI-related events. This documentation requirement mirrors what the GLBA Safeguards Rule already requires for information security programs generally. Having this documentation in place before a regulatory exam or client audit demonstrates proactive governance.
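
A lightweight way to keep the AI inventory auditable is one structured record per system. The Python sketch below shows an assumed set of fields; a real program would adapt these to its own risk assessment and due diligence process.

    import json
    from dataclasses import dataclass, field, asdict

    # Hypothetical inventory entry for one AI system in the governance program.
    @dataclass
    class AiSystemRecord:
        name: str
        vendor: str
        data_categories: list        # e.g. ["client financial data", "email"]
        risk_level: str              # outcome of the risk assessment
        dpa_on_file: bool
        due_diligence_date: str      # date of the last vendor review
        approved_users: list = field(default_factory=list)

    inventory = [
        AiSystemRecord(
            name="Microsoft 365 Copilot",
            vendor="Microsoft",
            data_categories=["email", "documents", "calendar"],
            risk_level="medium",
            dpa_on_file=True,
            due_diligence_date="2025-01-15",
        ),
    ]

    # Export the inventory as documentation for an exam or client audit.
    print(json.dumps([asdict(r) for r in inventory], indent=2))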

What should an AI acceptable use policy include?

An AI acceptable use policy should define: which AI tools are approved for use, what data categories may and may not be entered into AI tools, employee responsibilities for verifying AI-generated output, how to report unauthorized AI tool use, and the consequences for policy violations. The most critical provision is the data classification rule — specifying that confidential client data, financial records, and personal information may only enter approved enterprise AI systems with appropriate controls. The policy should be updated as new AI tools are approved or new threats emerge.
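
The data classification rule can be enforced as well as written down. The hypothetical sketch below expresses it as policy-as-code that a pre-submission control could consult; the tool names and data categories are illustrative assumptions.

    # Illustrative policy-as-code: which data classifications may enter which tools.
    # Tool names and categories are assumptions; align them with your approved list.
    APPROVED_TOOLS = {
        "m365_copilot": {"public", "internal", "confidential"},  # enterprise DPA in place
        "enterprise_chatgpt": {"public", "internal"},
        "consumer_chatbot": set(),  # no approved data categories
    }

    def may_submit(tool: str, classification: str) -> bool:
        """Allow a submission only if the policy permits this data class in this tool."""
        return classification in APPROVED_TOOLS.get(tool, set())

    assert may_submit("m365_copilot", "confidential")
    assert not may_submit("consumer_chatbot", "internal")
    assert not may_submit("unknown_tool", "public")  # unapproved tools default to deny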

What are the compliance risks of using consumer AI tools in business?

Consumer AI tools such as free-tier ChatGPT, Google Gemini, or other tools without enterprise agreements create compliance risk by: processing business data without a data processing agreement, potentially using inputs for model training, lacking the access controls required by most compliance frameworks, and having no breach notification obligations. For regulated businesses, using consumer AI with client data is not a gray area — it is a direct violation of applicable safeguard rules. The practical answer is to standardize on approved enterprise tools and enforce the policy through training and technical controls.

How does the EU AI Act affect US businesses?

The EU AI Act affects US businesses that deploy AI systems used by EU-based customers or employees, or that develop AI systems marketed in the EU. For most Chicagoland SMBs serving local markets, the EU AI Act has limited direct applicability, though it is influencing the direction of US AI regulation and enterprise AI vendor practices. Businesses in industries with EU clients or operations should conduct a scoping analysis to determine whether EU AI Act obligations apply.

What is required for GLBA-compliant AI vendor due diligence?

GLBA-compliant vendor due diligence for AI tools requires: assessing the vendor’s security controls against your information security standards, reviewing the vendor’s data processing practices and agreement, evaluating the vendor’s incident response and breach notification procedures, and documenting the review. The due diligence must be performed before deployment and updated periodically, particularly if the vendor makes material changes to their AI system. CelereTech provides AI vendor due diligence as part of managed security programs for Chicagoland financial services firms.

How should financial advisors document AI use for compliance?

Financial advisors should document: which AI tools are used in the advisory process, how AI output is reviewed and supervised before use with clients, that client data entered into AI systems is covered by appropriate agreements, and any client disclosures made regarding AI use in their engagement. FINRA and SEC guidance indicates that AI use in client communications and recommendations should be subject to the same supervision requirements as other investment-related communications. Maintaining this documentation supports both regulatory compliance and client trust.

What are the compliance requirements for AI in accounting firms?

Accounting firms using AI must comply with IRS safeguard requirements for taxpayer data, AICPA privacy standards, and any state CPA licensing board guidance on technology use. The AICPA has issued guidance indicating that AI tools used with client data must meet confidentiality and security standards consistent with professional standards. Practically, this means approved enterprise tools with DPAs, not consumer AI, for any work involving client financial data.

How do professional ethics rules for attorneys apply to AI output?

Professional ethics rules require attorneys to verify the accuracy of all work product, including AI-generated drafts, briefs, research, and correspondence. Several courts have sanctioned attorneys who submitted AI-generated filings containing fabricated citations without verification. Beyond accuracy, confidentiality rules require that any AI tool used with client matter information provides contractual confidentiality protections equivalent to those required of human contractors.

What is AI data minimization and why does it matter for compliance?

AI data minimization means providing AI systems with only the data they need to perform their function, rather than granting broad access to all business data. This principle is required by most data privacy frameworks and reduces both breach risk and compliance exposure — if an AI tool cannot access data it has no reason to process, it cannot expose that data. Implementing data minimization for AI tools requires defining what data each tool needs access to and configuring permissions accordingly, typically through role-based access controls.
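
A minimal sketch of that role-based model, with assumed role and data-source names, is shown below: each AI integration is assigned a role, and a role reaches only the sources explicitly granted to it.

    # Minimal RBAC sketch for AI data minimization. Role and data-source names are
    # illustrative; production systems enforce this in the platform's permission model.
    ROLE_GRANTS = {
        "copilot_finance_team": {"finance_sharepoint", "finance_mailboxes"},
        "intake_chatbot": {"public_faq"},
    }

    def can_access(role: str, data_source: str) -> bool:
        """Deny by default: a role reaches only the sources explicitly granted to it."""
        return data_source in ROLE_GRANTS.get(role, set())

    assert can_access("intake_chatbot", "public_faq")
    assert not can_access("intake_chatbot", "finance_sharepoint")  # least privilege holds

The deny-by-default behavior is the property worth carrying over to real deployments: access that is not explicitly granted simply does not exist.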

How should a business handle an AI-related data breach for compliance?

An AI-related data breach triggers the same notification obligations as any other data breach involving the affected data type, including GLBA notification to regulators and customers, state breach notification laws, and any contractual notification obligations to clients. The AI vendor’s data processing agreement should define their breach notification timeline to you. A tested incident response plan that specifically addresses AI-related incidents is required under GLBA and should include the vendor escalation path.
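
Because several notification clocks start at discovery, it helps to compute every deadline at once. The sketch below is a hypothetical tracker; the day counts are placeholders, not legal deadlines, and must be replaced with the timelines from the applicable laws and contracts.

    from datetime import datetime, timedelta

    # Hypothetical deadline tracker. The day counts are placeholders, NOT legal
    # deadlines: take the real timelines from each law, regulation, and contract.
    NOTIFICATION_WINDOWS_DAYS = {
        "regulator": 30,           # placeholder; confirm against the applicable rule
        "affected_clients": 45,    # placeholder; confirm against state breach laws
        "contractual_parties": 3,  # placeholder; confirm against client contracts
    }

    def notification_deadlines(discovered_at: datetime) -> dict:
        """Compute each notification deadline from the moment of breach discovery."""
        return {party: discovered_at + timedelta(days=days)
                for party, days in NOTIFICATION_WINDOWS_DAYS.items()}

    for party, deadline in notification_deadlines(datetime(2025, 3, 1)).items():
        print(f"{party}: notify by {deadline:%Y-%m-%d}")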

Does using Microsoft 365 Copilot create compliance obligations?

Microsoft 365 Copilot operates within your Microsoft 365 tenant, meaning it accesses only data your organization already controls and is governed by Microsoft’s enterprise Data Processing Addendum. This architecture does not create new data exposure beyond what already exists in your M365 environment, but it does require that your existing M365 data governance, permissions, and access controls are properly configured before enabling Copilot. For regulated businesses, the pre-deployment governance review is the compliance obligation — ensuring only appropriate data is accessible to the Copilot system.
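
As one example of what that pre-deployment review can look like in practice, the sketch below uses the Microsoft Graph API to enumerate SharePoint sites and flag site-level permission grants for human review before Copilot is enabled. It assumes an access token from an app registration with sufficient Graph privileges; verify the required scopes and endpoint behavior against Microsoft's documentation.

    import requests

    # Hypothetical pre-Copilot check: list SharePoint sites and flag site-level
    # permission grants for human review. Assumes a Microsoft Graph access token
    # with sufficient privileges; verify required scopes in Microsoft's docs.
    GRAPH = "https://graph.microsoft.com/v1.0"

    def graph_get(token: str, path: str) -> list:
        resp = requests.get(f"{GRAPH}{path}",
                            headers={"Authorization": f"Bearer {token}"})
        resp.raise_for_status()
        return resp.json().get("value", [])

    def flag_sites_for_review(token: str) -> None:
        for site in graph_get(token, "/sites?search=*"):
            perms = graph_get(token, f"/sites/{site['id']}/permissions")
            if perms:  # any site-level grant deserves a look before enabling Copilot
                print(f"Review: {site.get('displayName')} "
                      f"({len(perms)} site-level permission grants)")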

How do I demonstrate AI compliance to clients and regulators?

Demonstrating AI compliance requires producing the governance documentation that shows your program: the AI inventory, vendor agreements, acceptable use policy, risk assessments, and training records. For client-facing demonstrations, the ability to articulate specifically how client data is protected in your AI environment, including contractual protections with vendors and technical access controls, is the most persuasive evidence. CelereTech provides compliance documentation support for Chicagoland firms that need to demonstrate AI governance to regulators or clients.

What compliance frameworks specifically address AI governance?

NIST has published the AI Risk Management Framework (AI RMF) as a voluntary governance standard that maps onto existing security and compliance programs. GLBA, CCPA, and sector-specific rules do not yet have AI-specific provisions but apply to AI through their general data protection requirements. The FTC has also issued guidance making clear that existing consumer protection laws apply to AI-generated content and AI-powered decision making, including in marketing and lending contexts.

How often should AI compliance programs be reviewed and updated?

AI compliance programs should be reviewed at least annually and whenever a material change occurs, such as adding a new AI tool, a vendor updating their AI system, a regulatory agency issuing new guidance, or a security incident involving an AI system. The pace of AI development and regulation means annual reviews are a minimum — quarterly reviews of the AI tool inventory and vendor posture are prudent for regulated businesses. CelereTech includes AI governance review in the managed IT programs delivered to Chicagoland financial and legal firms.

What is the Illinois Artificial Intelligence Video Interview Act and who does it affect?

The Illinois Artificial Intelligence Video Interview Act requires employers using AI to analyze applicant video interviews to: notify applicants before the interview, explain how AI is used, obtain consent, limit disclosure of the video recordings, and destroy recordings within 30 days of an applicant's request. Illinois was the first state to pass AI-specific employment law, and similar legislation is being enacted in other states. Chicago-area businesses using AI-assisted screening tools in hiring must comply and should review vendor agreements to confirm the vendor's AI analysis processes meet the Act's requirements.

How does AI compliance differ for small businesses vs. large enterprises?

Small businesses face the same underlying legal requirements as large enterprises when handling regulated data with AI tools, but have less capacity to build and maintain compliance programs. The practical difference is that SMBs should focus on a smaller set of high-impact controls — approved tool list, vendor agreements, acceptable use policy, and training — rather than attempting to build the full compliance infrastructure a large enterprise maintains. Partnering with a managed IT provider like CelereTech that includes AI compliance governance in its service program is the most cost-effective approach for Chicagoland SMBs.

Ready to Adopt AI Safely?

CelereTech helps Chicagoland businesses implement AI tools with the managed IT infrastructure, security controls, and compliance governance to support real deployment. Our Schaumburg-based team is ready to assess your AI readiness.

Call (847) 658-4800 or Book Your Free AI Readiness Consultation.