An LLM is a thinker, not a doer. Plugins (formerly Skills) are the "Hands" of the AI. They allow the model to reach out and touch your database, your file system, or your internal APIs.
To turn a C# method into a plugin function, you just decorate it with attributes:
using System.ComponentModel;   // for [Description]
using Microsoft.SemanticKernel; // for [KernelFunction]

[KernelFunction, Description("Fetches the weather for a city")]
public string GetWeather(string location) { ... }
The AI reads the **Description** to understand when it should call your function. Metadata is code!
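Putting it together, here is a minimal sketch of a plugin class and how it gets registered with the kernel. It assumes the Semantic Kernel 1.x NuGet package (`Microsoft.SemanticKernel`); the plugin name `"Weather"` and the stubbed return value are illustrative, not part of the library.

```csharp
using System.ComponentModel;
using Microsoft.SemanticKernel;

public class WeatherPlugin
{
    // The Description attributes become the metadata the LLM reads
    // when deciding whether (and how) to call this function.
    [KernelFunction, Description("Fetches the weather for a city")]
    public string GetWeather([Description("The city name")] string location)
        => $"31°C and sunny in {location}"; // stub; a real plugin would call a weather API

}

// Registration sketch: the kernel scans the class for [KernelFunction]
// methods and exposes them to the model as callable tools.
var builder = Kernel.CreateBuilder();
builder.Plugins.AddFromType<WeatherPlugin>("Weather");
Kernel kernel = builder.Build();
```

Once registered, the model can request `Weather.GetWeather` during a chat turn instead of guessing at the weather in text.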
Q: "What is 'Tool-Calling' and why is it better than simple text parsing?"
Architect Answer: "Tool-calling (or Function Calling) is when the LLM returns a structured JSON payload like `{ "function": "GetWeather", "args": { "location": "Dubai" } }` instead of plain text. This is safer than regex-parsing free text because the model is trained specifically to emit arguments matching the function's declared schema, and your host code can validate the payload before executing anything. It gives you a reliable bridge between non-deterministic AI output and deterministic C# business logic."
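The deterministic side of that bridge can be sketched without any AI SDK at all: parse the JSON payload, validate the function name, then route to real C# code. This is a hand-rolled illustration of what frameworks like Semantic Kernel do for you; the `ToolDispatcher` class and method names are hypothetical.

```csharp
using System;
using System.Text.Json;

public static class ToolDispatcher
{
    // The real business logic the model is allowed to invoke.
    public static string GetWeather(string location) => $"Sunny in {location}";

    // Takes the model's structured payload and turns it into a C# call.
    public static string Dispatch(string toolCallJson)
    {
        using var doc = JsonDocument.Parse(toolCallJson);
        var root = doc.RootElement;
        string function = root.GetProperty("function").GetString()!;
        JsonElement args = root.GetProperty("args");

        // Deterministic routing: only known functions run; anything else is rejected.
        return function switch
        {
            "GetWeather" => GetWeather(args.GetProperty("location").GetString()!),
            _ => throw new InvalidOperationException($"Unknown tool: {function}")
        };
    }

    public static void Main()
    {
        string payload = """{ "function": "GetWeather", "args": { "location": "Dubai" } }""";
        Console.WriteLine(Dispatch(payload)); // the model proposed the call; your code executed it
    }
}
```

The key design point: the LLM only *proposes* a call as data; your deterministic code decides whether and how to execute it.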