ABSTRACT: As organizations rush to integrate Large Language Models (LLMs) into their core business processes, they face a critical dilemma: embrace the 66% productivity boost offered by generative AI, or mitigate the serious risks of data exfiltration and "shadow AI". This session provides a deep dive into the technical foundations of a robust generative AI system, moving beyond basic chat interfaces to a comprehensive enterprise architecture.
We will explore the fledgling LLM stack, identifying the critical trust boundaries between organizational tenants and the public internet. Attendees will learn:
The Risk Landscape: An analysis of the top threats, including prompt injection (OWASP LLM01), insecure output handling, and training data poisoning (an output-handling sketch follows this list).
Architectural Defenses: How to implement Retrieval-Augmented Generation (RAG) to maintain data accuracy and avoid the security pitfalls of fine-tuning on sensitive PII.
Data Governance: Strategies for applying fine-grained, role-based access controls to vector databases so that the AI serves information only to authorized users (see the role-filtered retrieval sketch after this list).
Operational Security: A "layered onion" approach to security, from inner-layer model hyperparameter tuning to outer-layer rate limiting and semantic caching (sketched after this list).
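To make the output-handling threat concrete, here is a minimal, illustrative Python sketch of treating model output as untrusted data: it escapes text before rendering and only dispatches actions from a fixed allowlist. The names ALLOWED_ACTIONS, render_model_output, and dispatch_model_action are hypothetical, not part of any particular framework.

```python
import html

# Illustrative allowlist: the only actions the application will ever execute,
# regardless of what the model asks for.
ALLOWED_ACTIONS = {"summarize", "translate", "classify"}

def render_model_output(raw_output: str) -> str:
    """Escape LLM output before embedding it in an HTML page, so injected markup stays inert."""
    return html.escape(raw_output)

def dispatch_model_action(requested_action: str) -> str:
    """Execute only allowlisted actions, never arbitrary strings the model emits."""
    if requested_action not in ALLOWED_ACTIONS:
        raise ValueError(f"Rejected unapproved action: {requested_action!r}")
    return f"executing {requested_action}"

if __name__ == "__main__":
    untrusted = '<script>alert("prompt injection payload")</script>'
    print(render_model_output(untrusted))      # rendered as text, not executed
    print(dispatch_model_action("summarize"))  # allowed action passes through
```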
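The RAG and data-governance points can be combined into one pattern: filter retrieval by the caller's roles before similarity ranking, so the model never sees chunks the user could not open directly. The following is a minimal sketch assuming an in-memory store rather than any specific vector database; Chunk, retrieve, and the role sets are illustrative placeholders.

```python
from dataclasses import dataclass, field

@dataclass
class Chunk:
    """One retrievable document chunk, tagged with the roles allowed to read it."""
    text: str
    embedding: list[float]
    allowed_roles: set[str] = field(default_factory=set)

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
    return dot / norm if norm else 0.0

def retrieve(query_embedding: list[float], store: list[Chunk],
             user_roles: set[str], top_k: int = 3) -> list[Chunk]:
    # Authorization filter first, then rank only the survivors by similarity.
    visible = [c for c in store if c.allowed_roles & user_roles]
    visible.sort(key=lambda c: cosine(query_embedding, c.embedding), reverse=True)
    return visible[:top_k]

if __name__ == "__main__":
    store = [
        Chunk("Q3 salary bands", [0.9, 0.1], {"hr"}),
        Chunk("Public product FAQ", [0.8, 0.2], {"hr", "engineering", "support"}),
    ]
    # A support agent's query never surfaces the HR-only chunk, however similar it is.
    for chunk in retrieve([0.85, 0.15], store, user_roles={"support"}):
        print(chunk.text)
```

Filtering before ranking (rather than after) is the key design choice: a post-ranking filter can still leak the existence or relevance of restricted content through which results get dropped.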
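For the outer layers, a small sketch of a gateway that enforces a per-user rate limit and answers near-duplicate questions from a semantic cache instead of re-calling the model. The embed and call_llm stubs, the window size, and the similarity threshold are placeholders, not a real provider or gateway API.

```python
import time

RATE_LIMIT = 5            # requests per user per window (placeholder value)
WINDOW_SECONDS = 60
SIMILARITY_THRESHOLD = 0.95

_request_log: dict[str, list[float]] = {}
_semantic_cache: list[tuple[list[float], str]] = []  # (query embedding, cached answer)

def embed(text: str) -> list[float]:
    """Placeholder embedding; a real deployment would call an embedding model."""
    return [float(len(text) % 7), float(sum(map(ord, text)) % 11)]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
    return dot / norm if norm else 0.0

def call_llm(prompt: str) -> str:
    """Placeholder for the actual model call behind the gateway."""
    return f"answer to: {prompt}"

def gateway(user_id: str, prompt: str) -> str:
    # Outer layer 1: fixed-window rate limit per user to throttle abusive or runaway clients.
    now = time.time()
    recent = [t for t in _request_log.get(user_id, []) if now - t < WINDOW_SECONDS]
    if len(recent) >= RATE_LIMIT:
        raise RuntimeError("rate limit exceeded")
    _request_log[user_id] = recent + [now]

    # Outer layer 2: semantic cache; return a stored answer for sufficiently similar queries.
    query_vec = embed(prompt)
    for cached_vec, cached_answer in _semantic_cache:
        if cosine(query_vec, cached_vec) >= SIMILARITY_THRESHOLD:
            return cached_answer
    answer = call_llm(prompt)
    _semantic_cache.append((query_vec, answer))
    return answer
```

One governance caveat worth noting: a semantic cache should be partitioned by tenant and role, otherwise it can replay one user's answer to another user who was never authorized to see the underlying data.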
Leave this session with a foundational framework for deploying AI that is not only innovative but also compliant, secure, and resilient.