Tags
Language
Tags
October 2024
Su Mo Tu We Th Fr Sa
29 30 1 2 3 4 5
6 7 8 9 10 11 12
13 14 15 16 17 18 19
20 21 22 23 24 25 26
27 28 29 30 31 1 2

Securing Generative AI

Posted By: IrGens
Securing Generative AI

Securing Generative AI
ISBN: 0135401801 | .MP4, AVC, 1280x720, 30 fps | English, AAC, 2 Ch | 3h 31m | 845 MB
Instructor: Omar Santos

The Sneak Peek program provides early access to Pearson video products and is exclusively available to subscribers. Content for titles in this program is made available throughout the development cycle, so products may not be complete, edited, or finalized, including video post-production editing.

Introduction

Securing Generative AI: Introduction

Lesson 1: Introduction to AI Threats and LLM Security

Learning objectives
1.1 Understanding the Significance of LLMs in the AI Landscape
1.2 Exploring the Resources for this Course - GitHub Repositories and Others
1.3 Introducing Retrieval Augmented Generation (RAG)
1.4 Understanding the OWASP Top-10 Risks for LLMs
1.5 Exploring the MITRE ATLAS (Adversarial Threat Landscape for Artificial-Intelligence Systems) Framework

Lesson 2: Understanding Prompt Injection Insecure Output Handling

Learning objectives
2.1 Defining Prompt Injection Attacks
2.2 Exploring Real-life Prompt Injection Attacks
2.3 Using ChatML for OpenAI API Calls to Indicate to the LLM the Source of Prompt Input
2.4 Enforcing Privilege Control on LLM Access to Backend Systems
2.5 Best Practices Around API Tokens for Plugins, Data Access, and Function-level Permissions
2.6 Understanding Insecure Output Handling Attacks
2.7 Using the OWASP ASVS to Protect Against Insecure Output Handling

Lesson 3: Training Data Poisoning, Model Denial of Service Supply Chain Vulnerabilities

Learning objectives
3.1 Understanding Training Data Poisoning Attacks
3.2 Exploring Model Denial of Service Attacks
3.3 Understanding the Risks of the AI and ML Supply Chain
3.4 Best Practices when Using Open-Source Models from Hugging Face and Other Sources
3.5 Securing Amazon BedRock, SageMaker, Microsoft Azure AI Services, and Other Environments

Lesson 4: Sensitive Information Disclosure, Insecure Plugin Design, and Excessive Agency

Learning objectives
4.1 Understanding Sensitive Information Disclosure
4.2 Exploiting Insecure Plugin Design
4.3 Avoiding Excessive Agency

Lesson 5: Overreliance, Model Theft, and Red Teaming AI Models

Learning objectives
5.1 Understanding Overreliance
5.2 Exploring Model Theft Attacks
5.3 Understanding Red Teaming of AI Models

Lesson 6: Protecting Retrieval Augmented Generation (RAG) Implementations

Learning objectives
6.1 Understanding the RAG, LangChain, Llama Index, and AI Orchestration
6.2 Securing Embedding Models
6.3 Securing Vector Databases
6.4 Monitoring and Incident Response


Securing Generative AI