Description


Artificial intelligence has become ubiquitous, touching virtually every facet of business, government, and human interaction. AI simultaneously offers opportunities and risks that cannot and should not be ignored. AI, and generative artificial intelligence in particular, has become a magnet for investors and has the potential to reshape the global landscape and influence virtually every industry.

Given the rapid pace at which AI is advancing, no law or regulation can keep pace unless it is technology neutral and provides broad guidance rather than restrictions that attempt to address point-in-time challenges. In other words, laws and regulations must look to the future rather than focus on the past. While market participants have repeatedly urged policymakers to adopt responsible and fair laws and regulations to govern AI, it is unlikely, in the immediate term, that the US will be able to address all of the many issues raised by the use of AI. That means the US will continue to have a patchwork of laws, many of which will continue to lag behind advances in technology, and the gap between the current state of the law and what is needed will continue to widen.

This full-day, in-person CLE AI Institute is designed to provide much-needed clarity in four key areas:

The Use of Artificial Intelligence in AML/CFT Compliance

1. AML/CFT Use Cases
What are some of the use cases? Monitoring of Customer Behavior and Transactions to Detect Suspicious Activity; Improvement of the Quality of Reporting; Fraud Detection; Digital Identity Solutions; Screening Against Multiple Lists; Fuzzy Matching; More Productive Investigations; and Advanced Assessment of Hit Quality
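Two of the use cases above, screening against multiple lists and fuzzy matching, can be illustrated with a minimal sketch. The watchlist entries, names, and similarity threshold below are all hypothetical, and the example uses Python's standard-library difflib rather than any production screening tool; real AML/CFT systems use far more sophisticated matching.

```python
import difflib

# Hypothetical watchlist entries for illustration only.
WATCHLIST = ["Ivan Petrov", "Maria Gonzalez", "John A. Smith"]

def screen(name, threshold=0.85):
    """Return (entry, score) pairs for watchlist entries whose fuzzy
    similarity to `name` meets the threshold (0.85 is an arbitrary choice)."""
    hits = []
    for entry in WATCHLIST:
        # SequenceMatcher.ratio() scores string similarity between 0 and 1,
        # so minor spelling variants still score close to 1.
        score = difflib.SequenceMatcher(None, name.lower(), entry.lower()).ratio()
        if score >= threshold:
            hits.append((entry, round(score, 2)))
    return hits

print(screen("Jon A. Smith"))  # spelling variant still produces a hit
print(screen("Alice Brown"))   # no hit
```

The point of fuzzy matching in this context is that exact string comparison would miss the "Jon"/"John" variant entirely; tuning the threshold trades off false positives (more investigator workload) against false negatives (missed hits), which is why the list also includes "Advanced Assessment of Hit Quality."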

2. AI/ML Challenges
What are some of the Challenges? Underlying Data Used to Train AI Models; Complexity of AI Models; IT Systems and Modeling Approaches Supporting the Models; Human Capital; and Governance and Model Risk Management

What is Responsible AI? Ethical AI; Explainable AI; Effective AI Model Training; Synthetic Data Solutions; Updatability; Auditability; Tech Sprints; and Data Sharing

3. Regulating the Use of AI
How is AI regulated? Federal; State; International

The Use of Large Language Models and Generative Artificial Intelligence

1. How do Large Language Models work?
How do LLMs work under the hood?
What are their strengths and weaknesses? How are they trained and how can they be fine-tuned for specific applications?

2. Exploring applications of LLMs in the legal profession
The law is a discipline built on language. How can LLMs be leveraged to help provide legal services? What are the main applications where they can augment the abilities of human lawyers? What technical resources are needed to deploy LLMs in the law at scale?

3. What does the future hold?
What developments should we expect going forward?
What are the implications of multi-modal models?
What challenges remain open and where should research focus?
How do we best address challenges surrounding privacy, data security, and professional responsibility?

The Use of Artificial Intelligence in Commerce and Finance, including in Banking, Securities, Commodities, and Insurance

1. AI and Data Privacy and Protection
What are some of the practical strategies for navigating the complex intersection of AI technology and financial regulation?

2. AI and Algorithmic Transparency and Accountability
What are some of the real-world use cases?

3. Regulatory Compliance Challenges, Ethics, Risk Management Frameworks, and the Evolving Role of Regulators in Overseeing AI-driven Financial Services
What are some of the key ethical considerations such as bias detection and mitigation, fairness in algorithmic decision-making, and ensuring AI systems uphold consumer rights?

The Use of Artificial Intelligence in the New York Judiciary

1. Judicial Use of AI
How can judges use AI? Text generation, legal research, document summarization, case management, decision-making.
What are the pros and cons of that use? Efficiency, productivity, and bias detection versus inaccuracy, bias, and loss of trust.
Should judges be barred from using AI in certain cases? If so, what standards should guide that? Some say yes, as in criminal and family law contexts, because the room for error is extraordinarily low in light of the physical and familial liberty at stake.
Should AI decide cases? No, because it is too risky. Yes, depending on the case type and so long as the parties give informed consent.

2. Access to Justice
How can AI assist self-represented parties? Enhance pleadings and papers; make oral advocacy more persuasive.
What are some risks? A flood of litigation, including frivolous litigation.
Should self-represented parties be required to disclose that they have used AI in their advocacy? Yes, because they lack the professional obligations that attorneys have. No, because any party must still be candid with the court, and such a rule would create an inefficient double standard: one for attorneys and one for non-attorneys.

3. Evidence
What are some issues in this domain? Deepfakes, bias, and judicial understanding of the technology.
How can those risks be managed? Deepfake detection technology, technical advisors, new burdens of production and proof, etc.

Program Fee:
$299 Member | $399 Nonmember
Small Law Firm Member: $199 

 


Not a Member?
Join Now and save on this program & more.

Sponsorship Opportunities are Available Here!

Please Contact:
Yelena Balashchenko, Manager, Business Development & Sponsorships
(212) 382-6608
ybalashchenko@nycbar.org