Prompt Engineering Guide
A comprehensive guide to prompt engineering with examples, strategies, and links to domain-specific guides.
Introduction
Prompt engineering is the practice of designing and refining inputs to AI systems — especially large language models (LLMs) — to guide them toward useful, accurate, and reliable outputs. It combines elements of programming, communication, and human–computer interaction.
This guide provides a foundation for prompt engineering: principles, techniques, and worked examples. It also links to domain-specific guides for code, images, system administration, and creative writing.
Why prompt engineering matters
AI models are general-purpose. Without guidance, they may produce vague, incomplete, or misleading outputs. A well-designed prompt can:
- Improve accuracy by clarifying the task
- Increase reliability by reducing ambiguity
- Save time by reducing back-and-forth corrections
- Control tone, style, and output format
Core principles
1. Clarity and specificity
Vague input → vague output.
- Bad: “Tell me about transformers.”
- Better: “Explain transformer architectures in deep learning, focusing on self-attention, in 300 words for a technical but non-expert audience.”
2. Context and framing
Adding roles or background improves focus.
- Example: “You are an experienced Python tutor. Explain list comprehensions to a beginner with examples.”
3. Incremental refinement
Start broad, then refine based on the output. Treat prompting as iterative.
4. Structure and formatting
Use explicit formatting instructions.
- Example: “Summarize this article in 5 bullet points, each under 15 words.”
5. Constraints and boundaries
Set limits on style, tone, or format.
- Example: “Answer in JSON with fields:
name
,description
,tags
.”
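When you constrain the output to JSON, it helps to validate the reply in code rather than trust it by eye. A minimal sketch, assuming a hypothetical model reply stored in `reply` (the string here is illustrative, not real model output):

```python
import json

# Hypothetical model reply to the JSON-constrained prompt above.
reply = '{"name": "Postgres", "description": "Relational database", "tags": ["sql", "oss"]}'

data = json.loads(reply)  # raises ValueError if the model broke the JSON constraint
missing = {"name", "description", "tags"} - data.keys()
assert not missing, f"model omitted fields: {missing}"
```

Parsing immediately turns a soft formatting request into a hard check: a malformed or incomplete reply fails loudly instead of slipping downstream.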
Common prompting techniques
Role prompting
- “Act as a career coach…”
- “Imagine you are a Unix sysadmin…”
See: System Administration Guide.
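In chat-style APIs, role prompting usually maps to a system message. A minimal sketch using the common message-dict shape (the exact schema varies by provider, and the sysadmin question is illustrative):

```python
# Role prompting as chat messages: the system message frames every later
# turn, so the role does not need repeating in each user message.
messages = [
    {"role": "system", "content": "You are an experienced Unix sysadmin."},
    {"role": "user", "content": "My cron job never runs. How do I debug it?"},
]
```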
Few-shot prompting
Provide examples to teach a pattern.
Translate the following into French:
Hello -> Bonjour
Good night -> Bonne nuit
Now: How are you? -> ?
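Few-shot prompts like the one above are easy to assemble programmatically from example pairs, which keeps the pattern consistent as examples change. A sketch (the pair list and query are taken from the translation example):

```python
# Build the few-shot translation prompt from (source, target) example pairs.
examples = [("Hello", "Bonjour"), ("Good night", "Bonne nuit")]
query = "How are you?"

lines = ["Translate the following into French:"]
lines += [f"{src} -> {tgt}" for src, tgt in examples]
lines.append(f"Now: {query} -> ?")
prompt = "\n".join(lines)
```

Keeping examples as data makes it trivial to swap, reorder, or add demonstrations without rewriting the prompt text.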
Chain-of-thought prompting
Encourage reasoning before the answer.
- “Explain your reasoning step by step before giving the final answer.”
See: Code Guide for debugging examples.
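When you ask for step-by-step reasoning, it is often useful to separate the reasoning from the final answer in code. One sketch: ask the model to end with a sentinel line such as "Final answer:" and parse on it. The sentinel and the sample reply are assumptions for illustration, not a standard:

```python
# Hypothetical chain-of-thought reply ending in an agreed sentinel line.
reply = (
    "Step 1: 17 x 3 = 51.\n"
    "Step 2: 51 + 9 = 60.\n"
    "Final answer: 60"
)

# rpartition splits on the LAST occurrence, so earlier mentions of the
# sentinel inside the reasoning do not break the parse.
reasoning, _, answer = reply.rpartition("Final answer:")
answer = answer.strip()
```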
Instruction hierarchy
Combine high-level goals with constraints.
- “Summarize this report for policymakers. Keep it under 200 words, highlight 3 risks, and end with one recommendation.”
Output shaping
Control tone and style.
- “Respond in a friendly, conversational tone.”
- “Write like an academic abstract.”
See: Creative Writing Guide.
Worked examples
Summarization
Prompt:
"Summarize this article in 5 bullet points. Each bullet under 12 words."
Outcome:
- AI adoption rising across industries
- Regulators focus on safety and transparency
- Enterprises prioritize data security
- LLM context windows expanding
- Open-source models gain traction
Comparison
Prompt:
"Compare Postgres and MySQL in a table with columns: Feature, Postgres, MySQL."
Outcome: table of differences in features, performance, replication, etc.
Step-by-step debugging
Prompt:
"Here is my error: TypeError: 'int' object is not iterable.
Explain the cause step by step, then fix the code."
Outcome: Explains the Python error and shows corrected code. (See Code Cheatsheet.)
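For reference, this is the kind of fix the prompt is asking for: the error arises when a `for` loop is given a bare integer, and wrapping it in `range()` supplies an iterable. A minimal sketch of the buggy line and its correction:

```python
total = 0
n = 5
# for x in n:        # TypeError: 'int' object is not iterable
#     total += x
for x in range(n):   # fix: range(n) yields 0..n-1, which is iterable
    total += x
```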
Creative ideation
Prompt:
"Generate 5 short story ideas about time travel mishaps in everyday life."
Outcome: Produces multiple story seeds. (See Creative Writing Cheatsheet.)
Image generation
Prompt:
"A cyberpunk street at night, neon signs, rainy reflections, cinematic wide shot."
Outcome: Produces stylized concept art. (See Image Prompt Cheatsheet.)
Best practices checklist
- Define the goal clearly
- Add context and role specification
- Set constraints on format, tone, or length
- Use examples when possible
- Iterate and refine
Pitfalls
- Overloading: too many instructions → confusing results
- Ambiguity: unclear wording → unpredictable outputs
- Unrealistic expectations: prompts can’t make the model know things it was never trained on
- Bias: prompts can reinforce stereotypes if phrased poorly