Generative AI in Legal Practice: Key Risks and What Lawyers Should Do
Question: What should legal practitioners keep in mind when using generative AI?
Answer: The use of generative AI in legal practice raises a number of core professional and compliance risks, particularly around confidentiality, privilege, and discovery. These risks are increasingly addressed by regulators and courts across jurisdictions.
Confidentiality and Client Information:
The most significant immediate legal risk associated with generative AI tools is the unauthorized disclosure of confidential client information. When lawyers input client data into third-party AI systems, particularly publicly available consumer tools rather than "closed" AI systems designed for enterprise use, doing so may constitute disclosure to a third party or a waiver of attorney-client privilege.
Regulators are converging on a consistent position:
United States: ABA Formal Opinion 512 (July 29, 2024) [1] emphasizes that lawyers must understand how AI tools handle data (storage, training, sharing, security) and must take reasonable steps to prevent unauthorized disclosure. Furthermore, explicit and informed client consent is needed prior to inputting information relating to the client representation into a generative AI tool.
United Kingdom: The SRA has issued compliance guidance (February 2026) [2] stressing that solicitors remain fully responsible for confidentiality, competence, and supervision when using AI tools.
European Union: GDPR obligations apply fully to AI use involving personal data. The European Data Protection Board [3] and the European Data Protection Supervisor [4] have issued guidance addressing data minimization, lawful basis, transparency, and lifecycle management.
Recent market practice reflects the risk posed by unauthorized disclosure of client information: Deloitte Australia reportedly instructed staff in February 2026 [5] to stop uploading confidential client data into publicly available AI tools, underscoring a broader move toward strict internal controls.
Attorney-Client Privilege and Work Product:
The use of AI tools raises fact-sensitive questions regarding whether communications, prompts, and outputs remain protected by privilege or the work product doctrine. Courts are beginning to address whether use of public AI systems constitutes disclosure to a third party.
Recent U.S. case law developments include:
* United States v. Heppner (S.D.N.Y., Feb. 2026), where the court held that AI-generated documents in that context were not protected by privilege or work product.
* Other courts have suggested that AI-assisted materials may be protected where generated under attorney supervision within confidential environments. Practical risk factors include the type of AI tool (public vs. enterprise), contractual safeguards, retention and training policies, and whether the communication involved legal advice.
Discovery and eDiscovery Implications:
AI tools create new categories of potentially discoverable electronically stored information, including prompts, chat histories, outputs, and metadata.
Key considerations include:
* AI-generated prompts and outputs may be subject to preservation obligations.
* Courts have ordered production of AI interaction logs in certain litigation contexts.
* Meet-and-confer protocols may need to address AI-assisted review and summarization workflows.
The consistent message across jurisdictions: lawyers remain fully responsible for accuracy, verification, and professional judgment.
Conclusion: Generative AI does not change core professional obligations—it amplifies them. Lawyers remain fully responsible for confidentiality, privilege, and accuracy, and should treat AI tools as third-party systems requiring the same level of scrutiny, control, and governance as any other external service provider.
Legal professionals should carefully control the use of publicly available AI tools, particularly for any client-related information, and instead rely on approved, enterprise-grade systems. Clear internal policies should govern when and how AI may be used, supported by appropriate training and supervision to ensure lawyers understand both the capabilities and limitations of these tools. Firms should also assess AI vendors’ data practices—including storage, training, and sharing—to safeguard confidentiality and privilege, and use AI only within controlled environments. Finally, prompts, outputs, and related data should be treated as potentially discoverable, with document retention and litigation practices updated accordingly.
[1] American Bar Association Standing Committee on Ethics and Professional Responsibility, Formal Opinion 512, July 29, 2024, https://www.americanbar.org/content/dam/aba/administrative/professional_responsibility/ethics-opinions/aba-formal-opinion-512.pdf
[2] Solicitors Regulation Authority, Compliance tips for solicitors regarding the use of AI and technology, updated February 9, 2026, https://www.sra.org.uk/solicitors/resources/innovate/compliance-tips-for-solicitors/
[3] European Data Protection Board, Opinion 28/2024 on certain data protection aspects related to the processing of personal data in the context of AI models, December 2024, https://www.edpb.europa.eu/system/files/2024-12/edpb_opinion_202428_ai-models_en.pdf
[4] European Data Protection Supervisor, Guidance on generative AI: strengthening data protection in a rapidly changing digital era, October 28, 2025, https://www.edps.europa.eu/data-protection/our-work/publications/guidelines/2025-10-28-guidance-generative-ai-strengthening-data-protection-rapidly-changing-digital-era_en
[5] Deloitte tells staff to stop uploading confidential data to ChatGPT, Financial Review, February 24, 2026, https://www.afr.com/companies/professional-services/deloitte-tells-staff-to-stop-uploading-confidential-data-to-chatgpt-20260224-p5o4zb