In production, your prompts are code. They should not be hard-coded strings in your C# files; they should be versioned, tested, and deployed like any other software artifact.
Store your prompts in a separate repository or a Prompt Management System (like Portkey or LangSmith). This allows your "AI Product Manager" to update a prompt's wording without requiring a full code deployment and restart.
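A minimal sketch of this separation, assuming a hypothetical in-memory prompt store (real systems like Portkey or LangSmith expose similar name-plus-version lookups via API):

```python
# Hypothetical prompt store: prompts live outside application source code,
# keyed by (name, version). In production this would be a repo or a
# Prompt Management System, not a dict.
PROMPTS = {
    ("summarize-ticket", "1.2.0"): {
        "template": "Summarize the support ticket below in one sentence:\n{ticket}",
    },
}

def get_prompt(name: str, version: str) -> str:
    """Fetch a prompt template by name and pinned version."""
    return PROMPTS[(name, version)]["template"]

# The application only pins a name and version; the prompt wording can
# change in the store without a code deployment or restart.
template = get_prompt("summarize-ticket", "1.2.0")
prompt = template.format(ticket="Login fails with error 500 after password reset.")
```

The key design point is that the application depends on the prompt's identity and version, never on its text.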
When you update a prompt, run it against a golden dataset (a fixed set of inputs paired with known-correct expected outputs). If the new prompt improves accuracy for one feature but breaks another, you catch it in CI/CD before your users do.
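The CI gate described above can be sketched as a simple regression check. Everything here is illustrative: `fake_model` stands in for a real LLM call, and the dataset and version strings are made up.

```python
# Golden dataset: fixed inputs with known-good expected outputs.
GOLDEN_DATASET = [
    {"input": "2 + 2", "expected": "4"},
    {"input": "capital of France", "expected": "Paris"},
]

def fake_model(prompt_version: str, text: str) -> str:
    # Stand-in for a real LLM call made with the given prompt version.
    answers = {"2 + 2": "4", "capital of France": "Paris"}
    return answers.get(text, "unknown")

def accuracy(prompt_version: str) -> float:
    """Fraction of golden-dataset cases the prompt answers correctly."""
    hits = sum(
        fake_model(prompt_version, case["input"]) == case["expected"]
        for case in GOLDEN_DATASET
    )
    return hits / len(GOLDEN_DATASET)

# CI gate: a candidate prompt must not score below the current baseline.
baseline = accuracy("1.2.0")
candidate = accuracy("1.3.0")
assert candidate >= baseline, "Prompt regression: block the deployment"
```

In a real pipeline this runs on every prompt change, exactly like a unit-test suite runs on every code change.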
Q: "Why is 'Semantic Versioning' important for AI Prompts?"
Architect Answer: "Because even a one-word change in a prompt can fundamentally change the JSON schema of the output. We use **SemVer** to track breaking changes in prompts: if a new prompt version returns a different object structure, all downstream microservices must know to update their parsers accordingly. Prompts are the 'contract' of the AI era."
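Applying SemVer mechanically, a major-version bump is the signal that the output contract changed. A small sketch of that rule (version numbers are invented for illustration):

```python
def parse_semver(version: str) -> tuple[int, int, int]:
    """Split 'MAJOR.MINOR.PATCH' into integer components."""
    major, minor, patch = (int(part) for part in version.split("."))
    return major, minor, patch

def is_breaking_change(old: str, new: str) -> bool:
    # A major-version bump means the prompt now returns a different
    # object structure, so downstream parsers must be updated.
    return parse_semver(new)[0] > parse_semver(old)[0]

# 1.4.2 -> 1.5.0: wording tweak, same JSON schema; safe to roll out.
# 1.5.0 -> 2.0.0: new object structure; consumers must update first.
print(is_breaking_change("1.4.2", "1.5.0"))  # False
print(is_breaking_change("1.5.0", "2.0.0"))  # True
```

The same check can gate a deployment pipeline: a major bump blocks rollout until every consumer has acknowledged the new schema.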