Claude Users Manual

As of 2026-05-03.

A practical, step-by-step guide to getting the most out of Claude — across the chat app, CoWork, and Claude Code. Includes copy-paste prompt templates for the most common goals.

🎈 ELI5

Claude is a really smart helper who can read, write, and think out loud. You type stuff to it, it types back. The magic isn't that it's smart — the magic is that you can tell it exactly what you want (and what "good" looks like) and it'll try to give you that.

There are three doors you can use to talk to Claude: Chat (just type and ask), CoWork (Chat, but for your whole team and connected to your tools), and Code (Claude living inside your computer's terminal so it can edit your real code files). Pick the door that matches the kind of work you're doing.

Getting started in 60 seconds

  1. Sign in at claude.ai with your email or Google account. Free, Pro, Max and Team plans all use the same chat interface.
  2. Pick the right surface for the job: Chat for conversations and one-off tasks, CoWork for team workspaces with shared skills/connectors, Claude Code for working in your terminal on real codebases.
  3. Tell Claude the goal, the audience, and the format. The single biggest jump in quality comes from saying "what good looks like" up front.
  4. Iterate, don't restart. Refine in the same conversation — Claude already has the context.

Which Claude should I use?

Claude Chat

claude.ai & mobile

  • Quick questions, writing, brainstorms
  • Research with web search
  • Document & PDF analysis
  • Generated artifacts (docs, code, charts)
  • Personal projects with custom instructions

Claude CoWork

Team workspace

  • Shared skills tuned for your team
  • Role-based plugins (eng, sales, design…)
  • Connectors to your tools (Slack, Drive…)
  • Background agents that watch dashboards
  • Knowledge stays inside the workspace

Claude Code

Terminal & IDE

  • Read/edit your real codebase
  • Runs commands, tests, builds
  • Multi-step coding tasks
  • Sub-agents, hooks, MCP servers
  • Plan mode & git worktrees

The five prompt fundamentals

Every great prompt — across every Claude surface — has at most five parts. Use the ones that apply.

Part | Purpose | Example phrase
Role | Frame Claude's perspective | "You are a senior staff engineer reviewing a junior PR."
Goal | What "done" looks like | "Produce a 1-page exec summary I can paste into Notion."
Context | Background & constraints | "Audience: non-technical execs. Tone: confident, plain English."
Inputs | The raw material | Pasted text, attached PDF, file reference, or URL.
Format | Shape of the output | "5 bullets, ≤15 words each, no preamble."
Rule of thumb: If your prompt is shorter than two sentences and the output disappoints, the problem is almost always missing format or audience. Add those two and try again before rephrasing.

Compact universal template

Universal Role: [who Claude is acting as] Goal: [the outcome you want] Audience: [who reads/uses this] Constraints: [length, tone, must-include, must-avoid] Format: [structure of the output] Input: """ [paste content / describe the situation] """
🎈 ELI5

Claude comes in three sizes — like three sizes of pizza. Opus is the giant one: best for hard puzzles, slowest, costs more. Sonnet is the medium one: perfect for most days, the everyday default. Haiku is the little one: super fast and cheap, great when the job is small and clear.

The numbers (4.7, 4.6, 4.5…) are version numbers — bigger numbers are newer and usually better. Always start with Sonnet. If the answer feels shallow on a hard task, jump up to Opus instead of asking the same thing in three different ways.

The current Claude lineup

As of 2026-05-03, the Claude 4.x family is the active generation. Three tiers — Opus, Sonnet, Haiku — span depth-of-reasoning vs. speed-and-cost. Pick by job, not by name.

Latest release: Claude Opus 4.7 (2026-04-16). New since the previous Opus: a large vision quality jump, an xhigh effort level for finer reasoning/latency control, Task budgets (public beta), file-system-based memory across sessions, and the /ultrareview slash command in Claude Code. See "What's new in Claude Opus 4.7" below.
About these dates: Dates marked † are approximate. Confirmed releases use exact dates from official Anthropic announcements. If you need a guaranteed date, check the Anthropic model release notes.
Model | API ID | Released | Best for | Status
Claude Opus 4.7 | claude-opus-4-7 | 2026-04-16 | Hardest reasoning, agentic tasks, long-horizon code work, advanced vision | Current flagship
Claude Opus 4.6 | claude-opus-4-6 | 2025-Q4 † | Powers Fast mode in Claude Code — same depth, faster output | Active (Fast mode)
Claude Sonnet 4.6 | claude-sonnet-4-6 | 2025-Q4 † | The everyday default — writing, coding, analysis, agents | Active
Claude Haiku 4.5 | claude-haiku-4-5-20251001 | 2025-10-01 | Fast classification, summaries at scale, cheap drafts, real-time UX | Active
Naming convention: Anthropic's API IDs follow the pattern claude-<tier>-<major>-<minor>[-YYYYMMDD]. The dated suffix pins a specific snapshot; the undated alias tracks the latest minor revision of that family.
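For example, claude-haiku-4-5-20251001 pins the Haiku 4.5 snapshot released on 2025-10-01, while the undated claude-haiku-4-5 tracks whatever the newest Haiku 4.5 revision is.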

Release timeline (chronological)

The full Claude history, from launch to today. Useful when looking at old code, picking up a deprecated app, or understanding capability jumps.

Date | Release | What changed
2023-03-14 | Claude 1.0 | First public release. Conversational assistant, ~9k token context.
2023-04-18 | Claude Instant 1 | Faster, cheaper sibling for high-volume workloads.
2023-07-11 | Claude 2 | 100k token context — first model that could read whole books.
2023-11-21 | Claude 2.1 | 200k context, fewer hallucinations, tool use beta.
2024-03-04 | Claude 3 family — Haiku, Sonnet, Opus | Three-tier lineup launches. Vision (image understanding) introduced.
2024-06-20 | Claude 3.5 Sonnet | First "smarter than 3 Opus, faster than 3 Sonnet" release. Artifacts launch in claude.ai.
2024-10-22 | Claude 3.5 Sonnet (v2) + Computer Use beta | Upgraded Sonnet plus first model that could control a desktop.
2024-11-04 | Claude 3.5 Haiku | Fast tier upgraded to 3.5-class quality.
2025-02-24 | Claude 3.7 Sonnet | Introduced extended thinking (visible reasoning) and stronger code performance. Claude Code launches in research preview.
2025-05-22 | Claude 4 — Opus 4 & Sonnet 4 | Major generation jump. Sustained agentic work, much better long-horizon coding.
2025-08-05 | Claude Opus 4.1 | Refinement release: better instruction-following, safer tool use.
2025-09-29 | Claude Sonnet 4.5 | Reasoning + coding lift. Becomes the new default for most workflows.
2025-10-01 | Claude Haiku 4.5 | Fast tier reaches near-Sonnet-4 quality at Haiku price.
2025-Q4 † | Claude Sonnet 4.6 & Opus 4.6 | Sonnet 4.6 takes over as default. Opus 4.6 powers Fast mode in Claude Code.
2026-04-16 | Claude Opus 4.7 | Current flagship. Major vision lift (98.5% visual-acuity vs 54.5% for 4.6, images up to ~3.75 MP). New xhigh effort level, Task budgets (public beta), file-system-based memory across sessions, /ultrareview in Claude Code, extended auto mode for Max users.

What's new in Claude Opus 4.7 (2026-04-16)

Opus 4.7 keeps the same pricing and tier as 4.6 but adds meaningful capability lifts. Highlights:

Area | What changed
Software engineering | Gains on difficult, long-running coding tasks. Better rigor and consistency over multi-hour agentic work.
Vision | Substantially improved image understanding. Supports images up to 2,576 px on the long edge (~3.75 MP). Internal visual-acuity benchmark: 98.5% for Opus 4.7 vs. 54.5% for Opus 4.6.
Reasoning controls | New xhigh effort level on top of the existing low/medium/high settings — finer control over the reasoning ↔ latency trade-off.
Task budgets (public beta) | Cap how much compute/tokens an agent can spend before stopping or asking for permission. Useful for long-horizon work where runaway cost is a risk.
Memory | Enhanced file-system-based memory across sessions — agents can persist and retrieve context between runs without bespoke plumbing.
Claude Code | New /ultrareview slash command (multi-agent cloud review of the current branch).
Max plan | Extended auto mode for Max users — longer autonomous runs.
Instruction following | Better adherence to format/constraint instructions, including for multimodal inputs.

Pricing

Model | Input | Output
Opus 4.7 | $5 per million tokens | $25 per million tokens
Opus 4.6 (for comparison) | $5 per million tokens | $25 per million tokens

Same headline price as 4.6 — but see the migration note below about token consumption.

Where you can use it

  • Claude products (claude.ai, mobile/desktop apps, CoWork, Claude Code)
  • Anthropic API
  • Amazon Bedrock
  • Google Cloud Vertex AI
  • Microsoft Foundry
Migration note (tokenizer change): Opus 4.7 ships with an updated tokenizer that consumes 1.0–1.35× as many tokens as 4.6, depending on content type. Higher effort levels (especially the new xhigh) also produce more output tokens. Plan for higher per-call cost than a literal price comparison would suggest, and re-run your evals — prompts tuned to earlier models may need adjustment.
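
To make that concrete, here is an illustrative back-of-the-envelope calculation using the top of that range (actual ratios vary by content type):

    100,000 input tokens on Opus 4.6:        100,000 × $5 / 1,000,000 = $0.50
    the same prompt on Opus 4.7 at 1.35×:    135,000 × $5 / 1,000,000 ≈ $0.68

That is roughly 35% more input spend at the same per-token price, before counting any extra output tokens from higher effort levels.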

Optimal prompts for Opus 4.7's new features

Use xhigh effort Use claude-opus-4-7 with effort=xhigh for this task. I'm reviewing an architectural decision and need your most rigorous reasoning, even if it takes longer. Please: 1. List every assumption you're making before you start. 2. Walk through alternatives you considered and rejected, with reasons. 3. Flag any place your confidence drops below "high". Decision under review: """ [paste the decision/spec] """
Set a task budget Goal: investigate the slow checkout endpoint and propose 3 fixes, ranked. Task budget: stop after 200,000 output tokens OR 30 minutes of wall time, whichever comes first. If you hit either limit, summarize where you are and ask me whether to continue. Don't skip the investigation phase to save budget — depth matters more than breadth here.
High-fidelity image analysis Attached image is a 3.5 MP screenshot of a complex dashboard. Analyze it at full resolution. I need: 1. Every numeric KPI shown, with its label, value, and pixel coordinates. 2. Every chart's title and a one-sentence read of its trend. 3. Anything in small text that looks like a footnote or caveat. Don't summarize — be exhaustive. Flag anything you can't read clearly rather than guessing.
Persistent agent memory You have file-system-based memory across sessions. Use it deliberately. At the end of each session, save: - Decisions we made and the reason - Open questions and who needs to answer them - Anything I asked you to remember explicitly Save to /memory/ as small named markdown files (one topic per file). At the start of each new session, scan that folder before answering and tell me what context you loaded.

How to pick a model

Pick Opus 4.7 when…

  • The task is genuinely hard reasoning — multi-step planning, novel synthesis.
  • Long-horizon agentic work (multi-hour coding, research with many sources).
  • The cost of a wrong answer is high (legal review, security analysis, architecture).
  • You're writing once and reading many times (a strategy doc, a system design).

Pick Sonnet 4.6 when…

  • You don't have a specific reason to escalate — this is the default.
  • Day-to-day coding, writing, editing, summarizing, brainstorming.
  • You want the best quality-per-second-and-dollar ratio.
  • Building agents and tool-using systems where speed matters.

Pick Haiku 4.5 when…

  • The task is repetitive (classify 10,000 tickets, tag 50,000 rows).
  • You need real-time latency (autocomplete, chat suggestions).
  • Simple transformations — extract fields, redact PII, format conversion.
  • Cost dominates the decision and the task isn't ambiguous.

Pick Opus 4.6 (Fast mode) when…

  • You're in Claude Code and want Opus-class depth without Opus-class latency.
  • You're iterating on a hard problem and want quick turnarounds.
  • Toggle with /fast in Claude Code.
The "escalate, don't rewrite" rule If Sonnet 4.6 gives you a shallow answer on the first well-formed prompt, don't spend three more attempts rephrasing — switch to Opus 4.7 and try once more. Time saved beats tokens saved.

Switching models

In Claude Chat (claude.ai)

  1. Click the model name at the top of the conversation (next to the chat title).
  2. Pick from the dropdown. Pro/Max/Team plans see all available models; free plans see a subset.
  3. Switching mid-conversation is fine — the new model inherits the full context. Useful for "draft with Sonnet, polish with Opus."

In Claude CoWork

  1. Same model picker as Chat — top of the conversation.
  2. Workspace admins can pin a default model and restrict which models teammates can switch to.
  3. Skills and agents can be configured to run on a specific model regardless of the user's pick (e.g. always use Haiku for the "tag this ticket" agent).

In Claude Code (CLI)

  1. Run-time flag: claude --model claude-opus-4-7
  2. Inside a session: use /model to change models without restarting.
  3. Fast mode toggle: /fast switches to Opus 4.6 (faster Opus). Toggle off to return.
  4. Per-subagent override: when launching a subagent, pass a model parameter to override.
  5. Project default: pin a model in .claude/settings.json:
    settings.json:
    { "defaultModel": "claude-sonnet-4-6" }

In the Anthropic API / SDK

Pass the model ID in the request body. Use a dated alias when you need pinned reproducibility, undated when you want auto-upgrades.

Python SDK
    from anthropic import Anthropic

    client = Anthropic()
    resp = client.messages.create(
        model="claude-opus-4-7",  # latest flagship
        max_tokens=1024,
        messages=[{"role": "user", "content": "Hello"}],
    )
    print(resp.content[0].text)
TypeScript SDK
    import Anthropic from "@anthropic-ai/sdk";

    const client = new Anthropic();
    const resp = await client.messages.create({
      model: "claude-sonnet-4-6",
      max_tokens: 1024,
      messages: [{ role: "user", content: "Hello" }],
    });
    console.log(resp.content[0].text);
Reproducibility tip: For production, pin to a dated alias like claude-haiku-4-5-20251001. The undated alias claude-haiku-4-5 auto-tracks the newest snapshot — great for always getting fixes, dangerous if you've calibrated prompts to a specific version.

API model IDs cheat sheet

Family | Latest alias | Pinned snapshot | Context window | Vision | Tool use
Opus 4.7 | claude-opus-4-7 | (per-snapshot — see API docs) | 200k tokens | Yes | Yes
Opus 4.6 | claude-opus-4-6 | (per-snapshot) | 200k tokens | Yes | Yes
Sonnet 4.6 | claude-sonnet-4-6 | (per-snapshot) | 200k tokens | Yes | Yes
Haiku 4.5 | claude-haiku-4-5 | claude-haiku-4-5-20251001 | 200k tokens | Yes | Yes

Calling the right thing

  • Always pass model — there is no implicit default that is safe to rely on.
  • Use messages, not the legacy complete endpoint, for any model 3.0+.
  • Enable prompt caching for repeated context (system prompts, large docs). It can cut cost 90%+ on repeat calls (see the sketch after this list).
  • Use the Files API for large attachments instead of base64-inlining them.
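
Here is a minimal Python sketch of prompt caching with the Anthropic SDK, assuming a large reusable system prompt. The file name and prompt text are placeholders, and the exact cache eligibility rules (minimum cacheable length, cache lifetime) are worth confirming against the current API docs.

    from anthropic import Anthropic

    client = Anthropic()

    # A large, stable document you reuse on every call is the part worth caching.
    style_guide = open("style_guide.md").read()  # placeholder file

    resp = client.messages.create(
        model="claude-sonnet-4-6",
        max_tokens=1024,
        system=[
            {
                "type": "text",
                "text": style_guide,
                # Mark this block as cacheable; later calls that start with the same
                # prefix can read it from the cache instead of re-processing it.
                "cache_control": {"type": "ephemeral"},
            }
        ],
        messages=[{"role": "user", "content": "Edit this paragraph for tone: ..."}],
    )
    print(resp.content[0].text)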

Deprecated & legacy models

If you see these in old code or docs, plan to migrate. They may still respond, but support windows shrink each generation.

Model | Released | Status as of 2026-05-03 | Migrate to
Claude 1 / Claude Instant 1 | 2023-03 / 2023-04 | Fully retired | Haiku 4.5
Claude 2 / 2.1 | 2023-07 / 2023-11 | Fully retired | Sonnet 4.6
Claude 3 Haiku / Sonnet / Opus | 2024-03-04 | Legacy — sunset announced | Same tier in 4.x
Claude 3.5 Sonnet (v1 & v2) | 2024-06 / 2024-10 | Legacy — sunset announced | Sonnet 4.6
Claude 3.5 Haiku | 2024-11-04 | Legacy | Haiku 4.5
Claude 3.7 Sonnet | 2025-02-24 | Active but superseded | Sonnet 4.6
Claude Opus 4 / Sonnet 4 | 2025-05-22 | Active but superseded | Opus 4.7 / Sonnet 4.6
Claude Opus 4.1 | 2025-08-05 | Superseded by 4.6 / 4.7 | Opus 4.7
Claude Sonnet 4.5 | 2025-09-29 | Superseded by 4.6 | Sonnet 4.6
Migration gotcha: Newer models are usually drop-in better, but token usage, formatting habits, and tool-call shapes can shift. Before swapping a model in production, run your eval set against the new one — don't rely on "it's a newer version, it'll be fine."

Optimal prompts for migrating between models

Migration audit I'm migrating from claude-sonnet-4-5 to claude-sonnet-4-6. Look at my codebase under `src/llm/` and find: 1. Every place a model ID is referenced (string literals or env vars). 2. Every prompt that depends on a specific format only the old model produced. 3. Any token-count assumptions that might break (the new model can be more or less verbose). Produce a checklist I can work through, with file:line for each item. Don't change anything yet — just the audit.
Side-by-side eval Use the claude-api skill to set up a small eval harness: - Read 20 example inputs from test/fixtures/eval-inputs.jsonl - Run each through both claude-sonnet-4-5 and claude-sonnet-4-6 - Score the outputs against test/fixtures/eval-rubric.md - Report a table with per-example win/tie/loss and an aggregate Use prompt caching for the system prompt to keep cost down.
🎈 ELI5

Claude Chat is the basic way to use Claude — type, get an answer. You can drag in pictures, PDFs, spreadsheets, or code files and Claude will read them. You can also turn on tools (like web search) to make Claude smarter for that one task.

Projects are folders for the work you do over and over. Stash your style guide or notes once, and every chat in that folder remembers them. Artifacts are the side panel where Claude builds documents, charts, or little apps you can keep.

Setup & the chat interface

  1. Open claude.ai in any modern browser, or install the desktop/mobile app.
  2. Start a new conversation. The big text box accepts text, drag-and-dropped files, and pasted images.
  3. Use the model picker (top of the chat) to switch between Opus (deepest reasoning), Sonnet (best general default), and Haiku (fastest, cheapest).
  4. Use Projects for any work you'll come back to — they let you stash files, custom instructions, and conversation history in one place.
  5. Toggle tools as needed: web search, code execution, computer use (in supported browsers), connectors. Tools off = faster, more private. Tools on = more capable.

Picking the right model

Model | When to use | Avoid for
Opus | Hard reasoning, multi-step planning, ambiguous spec, gnarly code, long docs | Quick lookups, casual chat (overkill, slower)
Sonnet | The default — writing, coding, analysis, most everyday work | Truly hard reasoning where Opus would do better
Haiku | Cheap classification, summarization at scale, quick rewrites, drafts | Anything novel or ambiguous — go up a tier
Practical default: Start in Sonnet. If after one good prompt the answer still feels shallow, escalate to Opus rather than rewriting your prompt three more times.

Projects: your reusable workspace

A Project is a folder of related conversations with persistent custom instructions and knowledge files. Use it for anything you do more than once.

  1. Create a project from the left sidebar → "Projects" → "New project."
  2. Add custom instructions — the durable role and rules Claude should follow every time you open a chat in this project.
  3. Upload knowledge files — style guides, brand docs, schemas, transcripts. Claude treats them as background reference, not a one-shot input.
  4. Start chats inside the project instead of from the home screen — they automatically inherit the instructions and knowledge.
Example: "Brand voice" project custom instructions
Project instructions You are a copy editor for ACME Co. Always use British spelling, sentence case for headlines, and the Oxford comma. Never use exclamation marks, em-dashes, or the words "leverage", "synergy", "delve". Default tone: confident, warm, plain. Read like a human, not a brochure. When I share copy, return three things in this order: 1) The fixed copy. 2) A 3-bullet diff explaining what changed and why. 3) One open question I should answer to make it sharper.

Artifacts: docs, code, and apps in a side pane

When Claude generates substantial content (a doc, a chart, an HTML mini-app, a piece of code), it pops it into an Artifact — a versioned panel beside the chat. You can edit it, ask Claude to revise it, and download it.

  • HTML/JS apps render live — great for prototypes, dashboards, calculators.
  • React components render with Tailwind preinstalled.
  • Markdown documents render formatted — useful for memos and one-pagers.
  • Mermaid & SVG render diagrams.
Tip: Ask explicitly: "Build this as a single-file HTML artifact with no external dependencies." That keeps it self-contained and easy to share.

Files, PDFs, and images

  1. Drag a file onto the chat box or click the paperclip. PDFs, Word, spreadsheets, CSVs, code files, and images all work.
  2. Be specific about what to do with the file. "Summarize" is weak — say "Pull every dollar amount and the page it appears on, return as a table."
  3. For long PDFs, tell Claude where to focus: "Only the financial statements section, pages 14-22."
  4. For images of UI or whiteboards, ask for transcription first, then analysis — it gives Claude a chance to "look" carefully.

Web search

Toggle web search on for anything time-sensitive, factual, or requiring citations. Without it, Claude works from its training data only.

Research prompt Research [topic] for me. I need: - 5 primary-source links (prefer official docs, papers, vendor blogs) - One paragraph (~80 words) of TL;DR - A "what changed in the last 6 months" section - A list of open questions worth asking an expert Skip marketing fluff. If a source contradicts another, flag it.

Skills

A Skill is a packaged set of instructions + helpers that Claude loads when relevant. Built-in skills handle PDFs, spreadsheets (xlsx), Word docs (docx), slides (pptx), and more. You don't have to invoke them — they trigger automatically when your task matches.

To create a custom skill, ask Claude in chat: "Use the skill-creator skill to build me a new skill that <does X>." Claude will scaffold the files; you save them to your skills folder.
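
For orientation, the scaffold is typically a small folder containing a SKILL.md whose frontmatter tells Claude when to load it. The layout below is an illustrative sketch (the field names and the meeting-notes example are assumptions); treat whatever skill-creator generates as the source of truth.

    meeting-notes/SKILL.md
    ---
    name: meeting-notes
    description: Turn raw meeting transcripts into a summary, action items, and open questions.
    ---
    When the user shares a transcript, return three sections in this order:
    1. Summary (5 bullets max)
    2. Action items (owner, action, due date)
    3. Open questions not resolved in the meeting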

Connectors & MCP

Connectors hook Claude up to outside tools — Google Drive, GitHub, Linear, Slack, your databases — using the open MCP (Model Context Protocol). Once connected, you can say things like "Find the spec doc for project Atlas in my Drive" without copy-pasting.

  1. Open Settings → Connectors.
  2. Pick a connector from the registry, or paste a custom MCP server URL.
  3. Authorize. Most use OAuth — log in once.
  4. Ask Claude what it can now do: "What's available from my connectors?"

Optimal prompts for Chat

Writing & editing

Edit my draft Act as a sharp editor. Don't rewrite — diagnose and prescribe. For the draft below, return: 1. The single biggest weakness, in one sentence. 2. Three specific edits with before → after. 3. One sentence I should consider cutting entirely, and why. Audience: [who reads this]. Tone target: [tone]. Draft: """ [paste draft] """
Tighten this Cut this text by 40% without losing meaning. Keep the voice. Return only the tightened version, no commentary. """ [paste text] """

Brainstorming

Diverge then converge Help me brainstorm [topic]. Round 1 — diverge: give me 12 ideas covering safe, weird, ambitious, and contrarian. Round 2 — converge: pick the top 3 by [criterion] and explain the trade-offs. Round 3 — sharpen: turn the winner into a one-paragraph pitch. Don't hedge. I want opinions.

Learning a new topic

Teach me Teach me [topic]. I already know [X, Y]. I don't know [Z]. Structure: 1. The one-sentence elevator definition. 2. The mental model (an analogy that maps to something I already know). 3. Three concrete examples of increasing complexity. 4. The two most common misconceptions and why they're wrong. 5. A 5-question self-test (no answers — I'll check myself). Skip filler. Be willing to be wrong if I push back.

Document analysis

PDF deep-read I've attached [doc name]. Treat it as the source of truth. Return, in this order: 1. The thesis in one sentence. 2. The five claims it actually defends, with the page they're argued on. 3. Any claim that's asserted but not supported. 4. Three questions I should ask the author to stress-test the argument. If something is unclear from the doc, say "not stated" — never guess.

Decision-making

Decision memo Help me decide between [option A] and [option B]. Context: [the situation, who's affected, the deadline]. Constraints: [budget, time, reversibility]. What I value most: [criteria, in order]. Produce a 1-page decision memo with: - Recommendation in the first line. - The 3 reasons that drove it. - The strongest counter-argument and your response. - What would have to be true for the other option to win.

Code in chat (one-offs)

Code snippet Write a [language] function that [does X]. Constraints: - Inputs: [types/shape] - Outputs: [types/shape] - Edge cases I care about: [list] - No external dependencies / use only the standard library. After the code, give me 3 test cases I can paste into a REPL.

Spreadsheet / data tasks

Clean my CSV Attached CSV has messy data. Clean it for analysis. Steps to perform: 1. Detect and list every quality issue (nulls, mixed types, dupes, weird dates). 2. Propose a fix for each, with the assumption you're making. 3. Apply the fixes and return a clean .xlsx artifact. 4. End with a "trust this output if…" caveat list. Never silently drop rows — quarantine them in a separate sheet.
🎈 ELI5

CoWork is Claude when you and your teammates share a kitchen together. Everyone uses the same recipes (skills), the same pantry (connectors to Slack, Drive, GitHub, Linear…), and the same family rules (the workspace style guide). What you do in the kitchen stays private to your team.

Best part: instead of typing "go check Slack and Drive and Linear," you can just ask one question and CoWork pulls from all those tools at once. Great for getting up to speed on a project, prepping for a meeting, or doing your standup.

What Claude CoWork is

CoWork is the team-shaped Claude — a workspace where your colleagues, your tools, and your custom skills sit together. Where Chat is "me + Claude," CoWork is "us + Claude": shared instructions, shared connectors, shared agents.

  • Everyone in the workspace inherits the same skills, connectors, and guidelines.
  • Role-matched plugins (engineering, sales, design, etc.) load the right tools per teammate.
  • Background agents can watch inboxes, dashboards, and PRs and ping the team.
  • Conversations can be made shareable so a colleague can pick up where you left off.

Workspace setup, step by step

  1. Open CoWork from the Claude app sidebar (or via the team admin invite link).
  2. Run the setup helper. In any chat, type: /setup-cowork. It walks you through plugin install, connector hookup, and a first skill.
  3. Pick your role when prompted (engineer, designer, PM, sales, support…). Role-matched plugins install automatically.
  4. Connect your tools. Hit "Add connector" and authorize the ones your team actually uses (Slack, Drive, GitHub, Linear, Notion, Salesforce, etc.).
  5. Pin or favorite the skills you'll use most so they're one click away.
  6. Drop a workspace-wide instructions doc (style guide, glossary, escalation policy). Everyone's chats inherit it.
Permissions matter: Connectors only see what your account can see. Adding a connector does not bypass an upstream permission — if you can't read a Drive file in your browser, Claude can't read it either.

Role-based plugins (the short list)

Role | Plugins typically installed | Try first
Engineering | GitHub, Linear, code review, security review | "Review my open PR and suggest 3 small improvements before I ship."
Product | Notion, Linear, user-research summarizer | "Compile my last 8 user interviews into themes with verbatim quotes."
Design | Figma reader, brand-voice skill, asset organizer | "Audit this Figma file for accessibility issues and write a fix list."
Sales | Salesforce, Gmail, call-notes summarizer | "Summarize this discovery call into next steps, blockers, and stakeholders."
Support | Zendesk/Intercom, knowledge-base search | "Draft a reply to this ticket using the KB. Cite the article."

Using and creating skills in CoWork

  1. Browse the workspace's skills tab to see what's installed.
  2. Trigger automatically — most skills load when your prompt matches their description (e.g. mention "deck" → pptx skill).
  3. Trigger explicitly by typing /<skill-name>. /review, /security-review, /init are common defaults.
  4. Build a custom skill with the skill-creator skill — it generates the markdown + helpers and saves them to your shared workspace skills.
  5. Promote a skill team-wide from the admin panel so every teammate gets it.

Sharing work with teammates

  • Share a chat — every conversation has a "Share" button that produces a read-only link or, if your admin allows it, a fork-and-edit link.
  • Hand off mid-task — paste the share URL into Slack and a teammate opens the same context Claude already has.
  • Spin off a task — when Claude flags a side issue worth its own session, accept the chip and it spawns a new isolated session that doesn't bloat the original.

Optimal prompts for CoWork

Project status & standups

Auto standup Pull my activity since yesterday 9am from Linear, GitHub, and Slack. Produce my standup in this exact format: - Yesterday: [3 bullets, what shipped or moved] - Today: [3 bullets, what I plan] - Blockers: [name + who can unblock me] Be terse. Skip anything with no progress.

Cross-tool research

Cross-tool research I'm joining the [project name] team next week. Get me up to speed. Pull from: - Notion: the project hub and last 5 weekly notes - Linear: open + recently closed issues - Drive: any doc with the project name in the title - Slack: the project channel, last 14 days Return: - 1-paragraph "what is this project" - 5 key decisions and when they were made - 5 names I should know and why - The 3 open risks, ranked Cite the source for each claim.

Sales / customer prep

Account brief I have a meeting with [account] tomorrow. Build me a brief. Pull from: - Salesforce: account, contacts, open opps, recent activity - Gmail: last 10 threads with anyone @[domain] - Drive: any doc with [account] in title or shared with them One page max: - Their current state and pain we know about - Who's in the room and what they care about - The 3 questions I should ask to advance the deal - One risk that could blow up the deal If a fact is fuzzy, mark it "(needs confirmation)".

Background agent

Standing watch Set up a scheduled task that runs every weekday at 8:30am. Action: scan our #incidents Slack channel and the Datadog "API latency" dashboard for anything new since the last run. If nothing notable: send me "All clear" in DM. If notable: send me a 3-bullet summary with links and a suggested first action. Use the schedule skill to set this up.
🎈 ELI5

Claude Code is Claude living inside your computer's terminal, where it can read your real project files, edit them, run tests, and even make git commits. It's like having a junior coder sitting next to you who never sleeps — but always asks before doing anything scary (deleting files, pushing to main, etc.).

The trick is to tell it what "done" looks like: "add a /healthz endpoint that returns 200 and the build SHA." Drop a CLAUDE.md file at the root of your repo with your project's rules and Claude reads it automatically. Plan mode makes Claude propose a plan before touching files — useful when the task is big.

Install & first run

  1. Install: in your terminal, run npm install -g @anthropic-ai/claude-code (or use the desktop app / IDE extensions for VS Code, JetBrains).
  2. Authenticate: run claude in any directory. The first run prompts you to log in via browser.
  3. cd into a real project (a git repo is best — Claude Code uses git for safety nets like worktrees and revert).
  4. Run claude and type a goal in plain English: "Add a /healthz endpoint that returns 200 and the build SHA."
  5. Approve actions as Claude proposes them — file edits, shell commands, package installs all surface for approval until you allowlist them.
Speed up your loop: Use /fewer-permission-prompts after a few sessions — it scans your transcripts and pre-approves the safe, repetitive things you keep clicking yes on.

The basic loop

Claude Code is conversational — type a goal, it proposes a plan + edits + commands, you approve, it iterates. The loop that works best:

  1. State the goal in one sentence, with the acceptance criterion. ("Login form validates email format and shows a clear error.")
  2. Let it explore first. If it dives in and edits, stop it and say "Read the relevant files first and propose an approach before changing anything."
  3. Approve one chunk at a time. Big batched diffs are harder to verify.
  4. Run the verification (tests, lint, typecheck, dev server). Tell Claude what counts as "done."
  5. Commit when green — explicitly ask Claude to commit. It won't commit unprompted.

CLAUDE.md — the project's persistent memory

Drop a CLAUDE.md file at the root of your repo and Claude Code reads it on every session. This is where you put the durable rules.

CLAUDE.md starter
    # Project: Acme API

    ## Stack
    - TypeScript + Bun, Hono framework, Postgres via Drizzle.
    - Tests: vitest. Lint: biome. Format on save.

    ## Conventions
    - All HTTP handlers live in `src/routes/`. One file per resource.
    - Never throw raw strings — use the `AppError` class in `src/errors.ts`.
    - Use `zod` schemas at the boundary; never validate inside business logic.

    ## Commands you can run unprompted
    - `bun test`, `bun run lint`, `bun run typecheck`

    ## Commands that require my approval
    - Any `bun run db:*` (DB migrations)
    - Any `git push`, `gh pr create`, or commits to `main`

    ## Style
    - Comments: only when the *why* is non-obvious. No "what" comments.
    - New files only when the change can't fit in an existing one.
Run /init: From a fresh checkout, the /init skill scans your repo and writes a starter CLAUDE.md tailored to it. Edit, commit, done.

Slash commands

Slash commands are reusable prompt recipes. Built-ins include:

Command | What it does
/init | Generate a CLAUDE.md for the current repo.
/review | Code-review the open changes or a referenced PR.
/security-review | Security-focused review of the diff on the current branch.
/ultrareview (new in 4.7) | Multi-agent cloud review of the current branch (or a referenced PR). User-triggered and billed; great pre-flight before merging.
/simplify | Look at recent changes and remove dead/duplicated/over-engineered code.
/loop <interval> <prompt> | Run a prompt repeatedly (e.g. /loop 5m check the deploy).
/schedule | Set up cron-scheduled remote agents.
/fewer-permission-prompts | Auto-allowlist the read-only commands you keep approving.
/update-config | Edit settings.json (permissions, hooks, env vars).

You can write your own. Drop a markdown file in .claude/commands/<name>.md with frontmatter and a prompt body. It becomes /<name> instantly.
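
As a sketch, a custom command file might look like the one below; the file name, frontmatter, and prompt body are illustrative, and $ARGUMENTS is replaced with whatever you type after the command.

    .claude/commands/fix-issue.md
    ---
    description: Reproduce and fix a GitHub issue by number
    ---
    Look up issue #$ARGUMENTS with `gh issue view`, reproduce it with a failing test,
    then fix the smallest amount of code that makes the test pass.
    Show me the diff and the test output before committing anything.

Saving that file makes /fix-issue available immediately; running /fix-issue 123 passes "123" in as $ARGUMENTS.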

Subagents

Subagents are specialized helpers Claude can call to keep your main context clean. Built-ins include Explore (fast read-only search), Plan (architect), general-purpose, and claude-code-guide. Use them when:

  • You need a broad codebase search ("where is auth checked?") — delegate to Explore.
  • You want an implementation plan before writing — delegate to Plan.
  • You want an independent code review on changes you just made.
Delegating well Use the Explore agent to find every place we issue a JWT and every place we verify one. Report file:line for each, plus whether HMAC or RSA is used. Don't propose changes — just locate and classify.

Skills inside Claude Code

Same skill model as Chat / CoWork — markdown files with a description that triggers them. Useful built-ins in Code:

  • simplify — review a diff for over-engineering and reduce it.
  • fewer-permission-prompts — reduce friction.
  • update-config — change settings.json safely.
  • keybindings-help — customize key shortcuts.
  • claude-api — for code that uses Claude's API.

MCP servers — connect Claude to your tools

MCP (Model Context Protocol) lets Claude Code talk to outside services — Postgres, Sentry, Linear, your custom internal API. Configure in .claude/settings.json under mcpServers, or use the registry to install a vetted one.
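
A minimal sketch of that settings entry, assuming a Postgres MCP server launched via npx (the server name, package, and env-var wiring are illustrative; follow the registry entry or the server's own docs for the exact command):

    {
      "mcpServers": {
        "postgres": {
          "command": "npx",
          "args": ["-y", "@modelcontextprotocol/server-postgres"],
          "env": { "DATABASE_URL": "${DATABASE_URL}" }
        }
      }
    }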

Add an MCP server Add the Postgres MCP server to my project settings, pointing at $DATABASE_URL. After it's connected, ask it for the schema of the `orders` table and confirm it has columns id, user_id, total_cents, created_at.

Hooks — make Claude Code follow your rules

Hooks are shell commands the harness runs at events: PreToolUse, PostToolUse, Stop, UserPromptSubmit, etc. Configure in settings.json.

Use a hook when you want a guarantee — Claude can be asked nicely in a prompt or CLAUDE.md to run lint, but only a hook will actually enforce it.

Auto-format hook (settings.json)
    {
      "hooks": {
        "PostToolUse": [
          {
            "matcher": "Edit|Write",
            "hooks": [
              {
                "type": "command",
                "command": "bun run format --write \"$CLAUDE_TOOL_FILE_PATH\""
              }
            ]
          }
        ]
      }
    }
Use the skill: Ask "Use the update-config skill to add a hook that runs bun test after every Edit/Write to a file in src/."

Plan mode & git worktrees

  • Plan mode — Claude proposes a plan and waits for your "go" before writing files. Trigger it with /plan or by asking "plan first, don't edit yet."
  • Worktrees — Claude can spin up an isolated worktree for risky changes so your working directory stays clean. Use when refactoring, trying two approaches, or sandboxing experiments.

Optimal prompts for Claude Code

Bug fixing

Bug repro & fix Bug: when I POST /orders with a missing `user_id`, the API returns 200 instead of 400. Reproduce with a failing test first, then fix the smallest amount of code that turns it green. Constraints: - Don't refactor unrelated code. - Add no new dependencies. - The test goes in `test/orders.spec.ts` next to similar cases. When done: show me the diff, the failing-then-passing test output, and one sentence on the root cause.

Feature work

Feature, plan-first Goal: add a /api/v1/usage endpoint that returns this month's request count per API key, scoped to the calling user. Step 1 — read: explore the auth middleware and the existing /api/v1/* routes. Step 2 — propose: a 5-bullet plan with file paths you'd touch. Stop and wait. Step 3 — implement only what I approve, in one cohesive commit. Step 4 — add a vitest covering: happy path, unauthenticated, wrong user. Acceptance: `curl -H "Authorization: Bearer $T" /api/v1/usage` returns 200 + JSON like `{ key_id, count, period_start, period_end }`.

Refactor

Targeted refactor The file src/billing/invoice.ts is 700 lines and three functions own most of it. Refactor it into smaller pieces. Rules: - No behavior changes — every existing test must still pass without edits. - Split by responsibility, not by line count. - Keep the public API of this module identical. - One commit per extracted module. Start by reading the file and the tests, then propose the split before editing.

Code review

Self-review before push I'm about to push my current branch. Review it as a senior reviewer would. Run: /review Then in your own words flag: - Anything that looks like dead code or a leftover debug. - Tests that test the implementation rather than behavior. - Risky assumptions that aren't documented. - One thing I should add a comment for (if any) — explaining the WHY. Don't suggest stylistic nits the formatter would catch.

Onboarding to a new repo

First day in this repo I just cloned this repo. Help me orient. 1. Run /init and propose a CLAUDE.md (don't write it yet — show me the draft). 2. Identify the entry points (HTTP, CLI, jobs). 3. Map the top-level directories to their purpose in one line each. 4. Find the test command and run a single test to confirm the toolchain works. 5. End with the 3 questions I should ask a teammate before changing anything.

Performance

Find the slow path Endpoint /api/v1/search is slow at p95. Diagnose without guessing. Steps: 1. Read the handler and any DB queries it triggers. 2. List the queries and whether they hit indexes. 3. Identify N+1s or full scans. 4. Propose the smallest change that would move p95 most. Do not implement yet. I want a 1-page diagnosis first with evidence (file:line), then I'll pick the fix.
🎈 ELI5

A prompt is just directions for Claude. Like asking a friend to help — the better your directions, the better the help. Tell Claude four things: WHO it is (a sharp editor? a senior coder?), WHAT you want done, WHO reads it (kids? execs?), and HOW the answer should look (5 bullets? a paragraph? a table?).

If the answer is bad, don't start over. Just say "redo that, but tighter and bossier" in the same chat. Claude already has the context — you just need to nudge it.

Use-case prompt library

Copy-paste-ready prompts for the most common goals. Edit the bracketed parts to fit your situation.

Writing & email

Email reply Reply to this email. Tone: warm, concise, no apology-for-apologies'-sake. Length: 4–6 sentences. End with one clear next step. If the thread mentions a specific commitment I made, restate it back so they know I read carefully. Output only the reply text — no preamble, no signature. Email: """ [paste email] """
Slack / DM rewrite Rewrite this Slack message: - 30% shorter - friendlier tone - keep the load-bearing facts - no apology-for-apologies'-sake Output only the rewritten version, no commentary. Original: """ [paste] """
Difficult message — 3 versions Help me write a [decline / push-back / disagree] message. Recipient: [who they are, relationship] What I need to communicate: [the message] Constraints: [keep relationship intact / be firm / be diplomatic] Draft three versions: gentle, neutral, firm. End each with one open question that invites them to engage further.
Meeting follow-up Turn these meeting notes into: 1. A 3-bullet summary I can paste into Slack 2. An action items list (owner — action — by when) 3. One open question that wasn't resolved Notes: """ [paste] """
LinkedIn post Draft a LinkedIn post about [topic]. Tone: confident, not braggy. Hook in the first line. 5–8 short paragraphs. End with a question that invites comments. No emojis, no hashtags. Avoid "thrilled to share" and "humbled".

Editing & feedback

Sharp editor Act as a sharp editor. Don't rewrite — diagnose and prescribe. For the draft below, return: 1. The single biggest weakness, in one sentence. 2. Three specific edits with before → after. 3. One sentence I should consider cutting entirely, and why. Audience: [who reads this]. Tone target: [tone]. Draft: """ [paste] """
Tighten by N% Cut this text by 40% without losing meaning. Keep the voice. Return only the tightened version, no commentary. """ [paste] """
Stress-test my argument Find the holes in this argument as a tough but fair critic would. Return: 1. The strongest counter-argument I haven't addressed. 2. The weakest claim I'm making, and why it's weak. 3. What evidence would change your mind. 4. One thing I should NOT change — what I'm getting right. Argument: """ [paste] """

Learning & research

Teach me [topic] Teach me [topic]. I already know [X, Y]. I don't know [Z]. Structure: 1. The one-sentence elevator definition. 2. The mental model (an analogy that maps to something I already know). 3. Three concrete examples of increasing complexity. 4. The two most common misconceptions and why they're wrong. 5. A 5-question self-test (no answers — I'll check myself). Skip filler. Be willing to be wrong if I push back.
Concept to analogy Find me the best analogy for explaining [concept] to [audience]. Return 3 candidates ranging from safe to creative. For each, name what the analogy gets right AND what it breaks. Pick your favorite and defend the choice.
Compare options Compare [option A] vs [option B] for [use case]. Return a 1-page brief: - The decision in one sentence - Top 3 dimensions where they differ - Best fit for each (when would you pick A, when B) - One scenario where neither is right and what is Skip the generic "depends on your needs" framing. Pick.

Decision-making

1-page decision memo Help me decide between [option A] and [option B]. Context: [situation, who's affected, deadline]. Constraints: [budget, time, reversibility]. What I value most: [criteria, in order]. Produce a 1-page decision memo with: - Recommendation in the first line - The 3 reasons that drove it - The strongest counter-argument and your response - What would have to be true for the other option to win
Pre-mortem Run a pre-mortem on this plan: [paste plan]. Imagine it's [N months] from now and the project failed. Write the failure post-mortem. List the top 5 root causes in order of likelihood, and what we could do today to prevent each. Be specific. "Insufficient communication" is a non-answer; "the design team and engineering used different definitions of done" is useful.
Devil's advocate Take the position that [my proposal] is wrong. Be the strongest advocate for the opposite view. Give me: 1. Three reasons my proposal is the wrong call. 2. A version of the opposite plan that you'd actually defend. 3. One question that, if answered honestly, would tell us who's right. Don't soften — I want the spicy version. I'll evaluate it on the merits.

Brainstorming

Diverge then converge Help me brainstorm [topic]. Round 1 — diverge: 12 ideas covering safe, weird, ambitious, contrarian. Round 2 — converge: pick the top 3 by [criterion] and explain trade-offs. Round 3 — sharpen: turn the winner into a one-paragraph pitch. Don't hedge. I want opinions.
Opposite of obvious What's the obvious answer to [question]? State it in one sentence. Now: what if the OPPOSITE is true? Make the strongest case for the counterintuitive answer. Don't strawman; defend it as if you believed it. End with: which one do you actually think is right, and why.

Coding

Bug repro & fix Bug: [describe the bug + what should happen]. Reproduce with a failing test first, then fix the smallest amount of code that turns it green. Constraints: - Don't refactor unrelated code. - No new dependencies. - Test goes in [path] next to similar cases. When done: show me the diff, the test output, and one sentence on the root cause.
Code review of a snippet Review this code as a senior reviewer would. Flag: 1. Anything broken or buggy 2. Anything that looks like dead code or leftover debug 3. Tests that test the implementation rather than behavior 4. Risky assumptions that aren't documented 5. One thing that needs a comment explaining the WHY Skip stylistic nits the formatter would catch. Code: """ [paste] """
Refactor for [property] Refactor [file/function] to [be smaller / be testable / remove duplication]. Rules: - No behavior changes — every existing test must still pass without edits. - Keep the public API identical. - Split by responsibility, not by line count. Start by reading the code and tests, then propose the split before editing.
Generate test cases For the function below, generate test cases covering: - Happy path - Edge cases (empty input, null, max size, off-by-one) - Error cases (what should throw, what should return safely) Use [framework]. Output the tests only, no commentary. Function: """ [paste] """
Regex / SQL from English Generate a [regex / SQL query] that [describes what it should do]. Then explain it line-by-line in plain English. Then give me 3 test cases — 2 that should match/return rows and 1 that shouldn't.

Data & analysis

Clean a CSV Attached CSV is messy. Clean it for analysis. Steps: 1. Detect and list every quality issue (nulls, mixed types, dupes, weird dates). 2. Propose a fix for each, with the assumption you're making. 3. Apply the fixes; return a clean version. 4. End with a "trust this output if…" caveat list. Never silently drop rows — quarantine them in a separate sheet.
Extract structured fields Extract structured fields from this text. Required fields: - [field 1] (type) - [field 2] (type) Output as JSON. For any field that's not stated in the text, return null. Never guess. Text: """ [paste] """
Find anomalies Look at this dataset. Identify the 5 most surprising rows or values. For each, explain why it's surprising and what plausible explanations exist (data error, real outlier, definition issue). Don't moralize about which explanation is right — just lay them out.

Documents

TL;DR a long doc TL;DR this document. Return: 1. The thesis in one sentence. 2. The 5 claims it actually defends. 3. Any claim that's asserted but not supported. 4. The single most surprising fact. 5. Three questions worth asking the author. If something is unclear, say "not stated" — never guess.
Find every claim Pull every factual claim from this document into a numbered list. For each, note: - The page/section it appears in - Whether the doc supports it (with evidence) or just asserts it - Confidence level (high / medium / low) Useful for fact-checking before sharing.
Outline → draft Turn this outline into a [first draft / one-pager / blog post]. Tone: [tone]. Length: [target]. Audience: [who]. Don't add new claims I didn't include in the outline. If a section is underspecified, leave a note like [needs example here] instead of inventing.

Creative & personal

Name brainstorm Brainstorm 12 names for [thing]. Cover these vibes: - 3 plain/descriptive (does what it says) - 3 evocative/metaphorical - 3 weird/punchy/short - 3 contrarian (intentionally counterintuitive) For each: a one-line rationale and one risk. Pick your top 3 and rank them.
Cold outreach email Draft a cold outreach email to [recipient role at company type]. Goal: [what I want them to do] What I bring: [my context, what's interesting] Constraint: ≤120 words, no buzzwords ("synergy," "leverage"), one specific reference to their work. End with one easy ask (e.g., 15-min call) — not "let me know if interested".
Travel itinerary Plan a [N-day] trip to [destination] for [N people, ages, interests]. Constraints: [budget, mobility, dietary, packed-vs-slow style]. Day-by-day: - Morning / afternoon / evening for each day - One must-do, one easy fallback - Estimated cost per day - One thing locals do that tourists miss Skip generic recommendations everyone already knows.


Patterns library

The "stop and ask" pattern — for ambiguous tasks

When the request has unknown unknowns, force Claude to surface them.

Stop & ask Before doing anything, list the 3 most important things you'd need to know to do this well that I haven't told you. Number them. Then propose the answer you'd assume by default for each. Don't start the work until I confirm.
The "two passes" pattern — for higher-quality output

One pass to draft, a second to critique its own draft. Quality usually jumps.

Two passes Pass 1: draft [the deliverable]. Don't optimize — just get it out. Pass 2: critique your own draft as a tough editor would. List the 3 weakest points specifically. Pass 3: rewrite, fixing those 3 things. Show only the final.
The "narrow-then-widen" pattern — for research

Avoids the "shallow Wikipedia summary" failure mode.

Narrow then widen Step 1 — narrow: pick the single most important sub-question inside [topic] and answer that one in depth (300 words, with 3 cited sources). Step 2 — widen: zoom out and place that answer in the context of the broader field in 5 bullets. Step 3 — gaps: what would I need to read or test to be confident? List 3.
The "constraint stack" pattern — for code

Stack constraints from "must hold" to "nice to have." Claude prioritizes correctly.

Constraint stack Build [thing]. Must hold (non-negotiable): - [hard requirement 1] - [hard requirement 2] Should hold (break only with explicit reason): - [strong preference 1] - [strong preference 2] Nice to have: - [optional 1] If any "must" conflicts with another, stop and ask — don't pick.
The "rubric" pattern — for evaluating output

Hand Claude the grading sheet up front.

Rubric-driven Produce [deliverable]. Then grade your own output against this rubric: - Clarity (1-5) - Specificity (1-5) - Length appropriateness (1-5) - Action-readiness — could the reader act on it without follow-up? (1-5) If any score is below 4, revise once and re-grade. Show only the final + scores.

Anti-patterns (what to stop doing)

Anti-pattern | Why it hurts | Do this instead
"Help me with this." | Claude has to guess what "help" means. You'll get a generic response. | State the deliverable: "Rewrite this email so it's 30% shorter and warmer in tone."
"Be creative." / "Be smart." | Vague adjectives don't constrain output. | Name a target: "Three options ranging from safe to ambitious. Each with one risk."
Asking again instead of correcting | Loses context — Claude restarts from scratch. | Say "Same task, but tighter and with no jargon" in the same conversation.
10-paragraph prompt for a simple task | Drowns the actual ask. Claude over-thinks. | Match prompt length to task complexity. One-line tasks deserve one-line prompts.
"Don't hallucinate." | It's a vibe, not a constraint — doesn't actually change behavior much. | "If you're not sure, write 'unknown' and explain what would resolve it."
Pasting 50-page docs without focus | Claude treats it all as equally important. | Tell it what to focus on: "Only the methodology section. Ignore the appendices."
Letting Claude Code edit before reading | You get plausible-looking changes that miss the actual codebase patterns. | "Read first, propose second, edit only after I approve."
Asking Claude to "make it better" with no criteria | "Better" is undefined — output drifts randomly. | "Make it 25% shorter while keeping the three numbered claims."

Universal rescue prompts

When something isn't working, paste one of these mid-conversation rather than starting over.

Diagnose & reset Pause. Tell me in 3 bullets why your last response missed what I wanted. Then propose a tighter prompt I could give you to fix it. Wait — don't try again yet.
Cut the fluff Redo your last response with no preamble, no caveats, no "I'd be happy to," no closing summary. Just the substance.
Show your work Walk me through your reasoning before the answer this time. I want to see the trade-offs you considered, even the ones you rejected.
Stronger opinion Drop the both-sides framing. Pick the option you'd choose if it were your own project, and defend it. I'll push back if I disagree.