AI & LLM Engineering for .NET Architects

Content Moderation: Azure AI Content Safety integration

Updated 5/4/2026

AI Content Safety

When you open your app to the world, you are responsible for what the AI says. Content moderation reduces the risk that the model surfaces hate speech, violence, or sexual content to your users.

1. Pre-filtering vs Post-filtering

A professional safety system has two layers:

  • Input Filtering: Checking the user's prompt before it even reaches the LLM. If a user asks "How do I make a bomb?", the request is blocked immediately.
  • Output Filtering: Checking the AI's response before the user sees it. If the AI goes off the rails, the system replaces the bad text with "I'm sorry, I cannot answer that."
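In .NET, both layers can share one severity check. Below is a minimal sketch using the Azure.AI.ContentSafety SDK; the `SafetyGate` wrapper, the default threshold of 2, and the endpoint/key values are my own illustrative choices, not part of the SDK:

```csharp
using System;
using System.Linq;
using System.Threading.Tasks;
using Azure;
using Azure.AI.ContentSafety;

public class SafetyGate
{
    private readonly ContentSafetyClient _client;

    public SafetyGate(string endpoint, string key) =>
        _client = new ContentSafetyClient(new Uri(endpoint), new AzureKeyCredential(key));

    // True when every category (Hate, SelfHarm, Sexual, Violence) scores
    // below the chosen severity threshold.
    public async Task<bool> IsSafeAsync(string text, int maxSeverity = 2)
    {
        Response<AnalyzeTextResult> result =
            await _client.AnalyzeTextAsync(new AnalyzeTextOptions(text));
        return result.Value.CategoriesAnalysis.All(c => (c.Severity ?? 0) < maxSeverity);
    }
}
```

The same gate runs twice: once on the user's prompt (input filtering) and once on the model's completion (output filtering) before it is shown to the user.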

2. Azure AI Content Safety

This is a specialized model that gives you a **Severity Score** for Hate, Self-Harm, Sexual, and Violence (internally 0-7, returned as 0, 2, 4, or 6 by default). It is much more accurate than simple keyword blocking, and the service's Prompt Shields feature can even detect "Jailbreak" attempts hidden in the input.
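A hedged sketch of reading those per-category scores with the .NET SDK (the endpoint and key are placeholders, and `userText` is assumed to hold the text under review):

```csharp
using System;
using Azure;
using Azure.AI.ContentSafety;

var client = new ContentSafetyClient(
    new Uri("https://<your-resource>.cognitiveservices.azure.com/"),
    new AzureKeyCredential("<your-key>"));

string userText = "Example text to analyze.";
Response<AnalyzeTextResult> analysis = client.AnalyzeText(new AnalyzeTextOptions(userText));

// One entry per category: Hate, SelfHarm, Sexual, Violence.
foreach (TextCategoriesAnalysis category in analysis.Value.CategoriesAnalysis)
    Console.WriteLine($"{category.Category}: severity {category.Severity}");
```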

3. Interview Mastery

Q: "How do you handle 'False Positives' in content moderation?"

Architect Answer: "Content safety is a balance between safety and utility. We use **Human-in-the-loop** for borderline cases. If a message is flagged as 'Level 2' (low risk), we might log it for review but still show it. If it's 'Level 5' (high risk), we block it. We also maintain an **Exception List** for internal users or specific technical domains (like medical or legal) where sensitive words might be legitimate."
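The thresholds in that answer can be encoded as a small, testable policy. This is an illustrative sketch: the `ModerationAction` enum, the level cut-offs, and the exception-list behavior are assumptions to tune per category and per domain, not part of any SDK:

```csharp
public enum ModerationAction { Allow, LogForReview, Block }

public static class ModerationPolicy
{
    // Illustrative thresholds: low severity passes, borderline goes to
    // human review, high severity is blocked outright.
    public static ModerationAction Decide(int severity, bool onExceptionList) =>
        (severity, onExceptionList) switch
        {
            (_, true)  => ModerationAction.LogForReview, // exception list: review, never hard-block
            (<= 1, _)  => ModerationAction.Allow,
            (<= 3, _)  => ModerationAction.LogForReview, // borderline: human-in-the-loop
            _          => ModerationAction.Block
        };
}
```

Keeping the policy in pure code like this (separate from the API call) makes the block/review/allow boundaries easy to unit-test and adjust without touching the safety client.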
