This is THE key to using LLM/agents effectively. And yes, prompting is a non-trivial skill.
A prompt is literally just the text instructions you send to the LLM/agent. Prompting is the act of providing a prompt.
Prompt engineering is the practice of figuring out how to write “good” prompts. We will explain what that means below.
Key concepts
How one prompts the LLM/agent can significantly impact the quality of the result generated.
The importance of prompting often feels counterintuitive, because it does not apply to humans in the same way. For humans, the exact wording of a task usually matters less, because we infer context or ask clarifying questions.
But LLM/agents do not do this by default. Every new conversation you start with the agent is a fresh one ("conversation" here refers to a new chat in, e.g., ChatGPT). The LLM/agent has no memory of your other conversations (assuming no memory feature is enabled; see the extension below), and thus lacks the context humans carry between interactions.
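To make the "fresh conversation" point concrete, here is a minimal sketch. The message format (a list of role/content entries) mirrors common chat APIs, but `fake_llm` is a hypothetical stand-in, not a real model call: the only thing the "model" can draw on is the history you send it.

```python
def fake_llm(messages):
    """Pretend LLM: it can only 'know' what is in `messages`."""
    seen = " ".join(m["content"] for m in messages)
    if "Python" in seen:
        return "You mentioned Python."
    return "I don't know which language you mean."

# Conversation A: context is carried by re-sending the full history.
history_a = [
    {"role": "user", "content": "I'm writing a Python script."},
    {"role": "user", "content": "Which language did I mention?"},
]
print(fake_llm(history_a))  # the "memory" comes from the history we sent

# Conversation B: a brand-new chat starts with an empty history,
# so nothing from conversation A is available.
history_b = [
    {"role": "user", "content": "Which language did I mention?"},
]
print(fake_llm(history_b))
```

The same mechanism underlies real chat interfaces: the appearance of memory within one conversation comes from re-sending the accumulated history on every turn, which is why a new chat starts from zero.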
There are LOTS of prompting materials/courses out there. See these if interested: