OpenAI User's Manual

As of 2026-05-03

A practical, step-by-step guide to OpenAI's full lineup — every current model, every product surface (ChatGPT, the API, Codex), and copy-paste prompt templates for the most common goals.

🎈 ELI5

ChatGPT is a smart helper that can read, write, and answer questions. The same brain comes in different doors: ChatGPT (just type and ask), Business/Enterprise (ChatGPT for your whole team with your tools plugged in), and the API + Codex (the LEGO version you bolt into your own apps and code projects).

The magic isn't that it's smart — the magic is telling it exactly what you want (who it should be, what to make, who reads it, how it should look). Be specific and you'll get great answers.

Getting started in 60 seconds

  1. Sign in at chatgpt.com for the consumer app, or at platform.openai.com for API/developer access. Free, Plus, Pro, Business, Enterprise, and Edu plans share the same chat interface.
  2. Pick the right surface for the job: ChatGPT for everyday conversations and tasks, Business/Enterprise for team workspaces, the API for building products, Codex for cloud-based coding agents.
  3. Pick a model that matches the task — flagship (GPT-5.5) for hard reasoning, GPT-5 / GPT-5.4 for production defaults, Mini/Nano for speed and cost, gpt-image-2 for images, the Realtime API for voice.
  4. Tell the model what good looks like. Goal, audience, format. The single biggest jump in quality comes from saying these three things up front.

Which OpenAI surface should I use?

ChatGPT

chatgpt.com & mobile

  • Quick questions, writing, brainstorms
  • Vision, files, code, charts
  • Browsing & deep research
  • Voice mode
  • Custom GPTs and memory

ChatGPT Business / Enterprise

Team workspace

  • SSO, admin controls, audit logs
  • Shared connectors (Drive, GitHub, Slack…)
  • Shared GPTs across the team
  • Data retention controls — your data is not used for training
  • Higher rate limits and bigger context

API & Codex

platform.openai.com

  • Build products with any model
  • Tool use / function calling
  • Realtime API for voice
  • Batch API for 50% off async work
  • Codex for cloud-based coding agents
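The Batch API mentioned above works by uploading a JSONL file of request lines and collecting results asynchronously at the discount. A minimal sketch of building those lines — the helper name is this guide's invention, and it assumes the batched endpoint is /v1/responses (check the Batch docs for the endpoints your account supports):

```python
import json

def batch_lines(prompts, model="gpt-5"):
    """Build JSONL lines for a Batch API upload. Each line carries a unique
    custom_id so results can be matched back to the originating prompt."""
    return [
        json.dumps({
            "custom_id": f"task-{i}",
            "method": "POST",
            "url": "/v1/responses",
            "body": {"model": model, "input": p},
        })
        for i, p in enumerate(prompts)
    ]

# Write these lines to a .jsonl file, then upload it with
# client.files.create(purpose="batch") and start the batch job.
lines = batch_lines(["Summarize doc A", "Summarize doc B"])
```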

The five prompt fundamentals

Every great prompt — chat or API — has at most five parts. Use the ones that apply.

| Part | Purpose | Example phrase |
|---|---|---|
| Role | Frame the model's perspective | "You are a senior staff engineer reviewing a junior PR." |
| Goal | What "done" looks like | "Produce a 1-page exec summary I can paste into Notion." |
| Context | Background & constraints | "Audience: non-technical execs. Tone: confident, plain English." |
| Inputs | The raw material | Pasted text, attached file, URL, image. |
| Format | Shape of the output | "5 bullets, ≤15 words each, no preamble." |
Rule of thumb: If your prompt is shorter than two sentences and the output disappoints, the problem is almost always missing format or audience. Add those two and try again before rephrasing.

Compact universal template

Universal:

```
Role: [who the model is acting as]
Goal: [the outcome you want]
Audience: [who reads/uses this]
Constraints: [length, tone, must-include, must-avoid]
Format: [structure of the output]
Input: """
[paste content / describe the situation]
"""
```
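The universal template above can also be assembled mechanically. A small illustrative helper (the function name and layout are this guide's invention, not an OpenAI API):

```python
def build_prompt(role=None, goal=None, context=None, inputs=None, fmt=None):
    """Join whichever of the prompt parts apply, in order, skipping the rest."""
    labeled = [("Role", role), ("Goal", goal), ("Context", context), ("Format", fmt)]
    lines = [f"{label}: {value}" for label, value in labeled if value]
    if inputs:
        # Triple-quote the raw material so it can't be confused with instructions.
        lines.append('Input: """\n' + inputs + '\n"""')
    return "\n".join(lines)
```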
🎈 ELI5

OpenAI has lots of models — think of it like a tool drawer. GPT-5.5 is the big screwdriver: most powerful, newest, slowest. GPT-5 is the everyday hammer: what you usually grab. Mini and Nano sizes are tiny tools — cheap and fast for tiny clear jobs. The o-series (o3, o4-mini) are the math helpers — they think extra-hard but slowly, like solving homework one step at a time.

There are also helpers for pictures (gpt-image-2 draws), voice (Realtime API talks), and search-y "memory cards" (embeddings) you use behind the scenes. Pick the one that matches the job and the budget.

The current OpenAI lineup

As of 2026-05-03, OpenAI's text models are organised into three families: the GPT-5.x series (flagship and frontier), the GPT-4.1 family (production-recommended for most apps), and the o-series (deep reasoning). Plus image, audio, and embedding models.

Latest releases: GPT-5.5 and GPT-5.5 Pro rolled out 2026-04-23 to ChatGPT (Plus/Pro/Business/Enterprise) and Codex; API access is "coming very soon." ChatGPT Images 2.0 / gpt-image-2 launched 2026-04-21 — first OpenAI image model with native reasoning. GPT-5.4 shipped 2026-03-05 with native Computer Use, tool search, and a 1M+ context window — see What's new in GPT-5.4 below.
About these dates: Dates are pulled from OpenAI release notes and announcements where confirmed. Where a date is uncertain, it's marked †. Pricing is in USD per million tokens. Always confirm against platform.openai.com/docs/models before billing-sensitive decisions.

Frontier & flagship (GPT-5.x)

| Model | API ID | Released | Best for | Pricing (in / out) |
|---|---|---|---|---|
| GPT-5.5 (flagship) | gpt-5.5 | 2026-04-23 | Hardest reasoning, complex goals, multi-step tool use, professional work in ChatGPT & Codex | API "coming very soon" |
| GPT-5.5 Pro | gpt-5.5-pro | 2026-04-23 | Pro/Business/Enterprise users — extended thinking, longer autonomous runs | API forthcoming |
| GPT-5.4 | gpt-5.4 | 2026-03-05 | Frontier reasoning + coding (incorporates 5.3-Codex stack) + native Computer Use. 272K context standard, up to 1,050,000 via API/Codex; 128K max output | $2.50 / $15.00 (input doubles to $5.00 above 272K) |
| GPT-5.4 Pro | gpt-5.4-pro | 2026-03-05 | Premium GPT-5.4 variant for the most complex tasks — extended thinking, deeper reasoning | $30.00 / $180.00 |
| GPT-5.4 Mini | gpt-5.4-mini | 2026-03 † | Mid-tier 5.4 for high-volume product use cases (separate announcement) | See API pricing page |
| GPT-5.4 Nano | gpt-5.4-nano | 2026-03 † | Cheapest production model — bulk classification, simple transforms (separate announcement) | See API pricing page |
| GPT-5.3-Codex | gpt-5.3-codex | 2025-Q4 † | Most capable agentic coding model. ~25% faster than GPT-5.2 on coding benchmarks | See API pricing page |
| GPT-5.2 Instant | gpt-5.2-instant | 2025-Q4 † | Fast, grounded, measured tone — good for chat UX | See API pricing page |
| GPT-5 | gpt-5 | 2025-08-07 | The original GPT-5; 400K-token context; strong all-rounder for production | $1.25 / $10.00 |
| GPT-5 Mini | gpt-5-mini | 2025-08-07 | GPT-5 architecture at a budget — quality jump over GPT-4.1 Mini | $0.25 / $2.00 |

Production & long-context (GPT-4.1 family)

| Model | API ID | Released | Best for | Pricing (in / out) |
|---|---|---|---|---|
| GPT-4.1 | gpt-4.1 | 2025-04 † | Production-recommended replacement for GPT-4o. 1,000,000-token context window | $2.00 / $8.00 |
| GPT-4.1 Mini | gpt-4.1-mini | 2025-04 † | Mid-tier for high-volume product use cases | See API pricing page |
| GPT-4.1 Nano | gpt-4.1-nano | 2025-04 † | Cheapest in the GPT-4.1 family — bulk extraction, classification | $0.10 / $0.40 |

Reasoning (o-series)

| Model | API ID | Released | Best for | Pricing (in / out) |
|---|---|---|---|---|
| o3 | o3 | 2025-04 † | Multi-step reasoning, math proofs, complex debugging, scientific analysis | $2.00 / $8.00 |
| o4-mini | o4-mini | 2025-04 † | Cheaper reasoning at high volume; still beats GPT-4-class on hard tasks | See API pricing page |
| o1 | o1 | 2024-12-05 | Original reasoning model — superseded by o3 for most tasks but still in the API | See API pricing page |

Release timeline (chronological)

Useful when looking at old code, picking up a deprecated app, or understanding capability jumps.

| Date | Release | What changed |
|---|---|---|
| 2018-06 | GPT-1 | First transformer-based generative model from OpenAI. |
| 2019-02 | GPT-2 | Larger, more coherent — initially released in stages over safety concerns. |
| 2020-06 | GPT-3 | 175B parameters; introduced few-shot prompting at scale. |
| 2022-09 | Whisper | Open-source multilingual speech recognition. Trained on 680,000 hours of audio. |
| 2022-11-30 | ChatGPT (GPT-3.5) | The product that started the wave. Free chat interface. |
| 2023-03-14 | GPT-4 | Multimodal-capable, much stronger reasoning. Initially text-only in API. |
| 2023-09 | DALL-E 3 | Text-to-image, integrated into ChatGPT. |
| 2023-11-06 | GPT-4 Turbo + Custom GPTs + Assistants API | DevDay 2023 — 128K context, GPTs marketplace, first agent-style API. |
| 2024-01-25 | text-embedding-3-small & -large | 3rd-gen embeddings — 1536 / 3072 dim, with shrinkable dimensions parameter. |
| 2024-05-13 | GPT-4o | "Omni" — natively multimodal text/audio/vision in one model. Free-tier access. |
| 2024-07-18 | GPT-4o mini | Cheap, fast everyday model that replaced GPT-3.5 Turbo as the default. |
| 2024-09-12 | o1 (preview) | First reasoning-trained model — explicit chain-of-thought. |
| 2024-12-05 | o1 (general availability) | GA release, plus an o1-pro mode for ChatGPT Pro. |
| 2025-03 † | gpt-4o-transcribe / gpt-4o-mini-transcribe | Next-gen transcription beating Whisper on word-error rate. |
| 2025-04 † | GPT-4.1 family + o3 + o4-mini | 1M-token context for GPT-4.1; o3 supersedes o1 for most reasoning. |
| 2025-08-07 | GPT-5 + GPT-5 Mini | Major lift on math, code, finance, multimodal. 400K context. $1.25/$10. |
| 2025-08-28 | Realtime API (GA) | Speech-to-speech voice agents at low latency. |
| 2025-09-30 | Sora 2 | Text-to-video model with audio. iOS app, then Android two months later. |
| 2025-Q4 † | GPT-5.2 Instant + GPT-5.3-Codex | Faster default in ChatGPT; coding-specialised stack for Codex. |
| 2026-03-05 | GPT-5.4 + GPT-5.4 Pro | Frontier model unifying reasoning + coding (5.3-Codex stack) + native Computer Use. 75.0% on OSWorld-Verified (vs 47.3% for GPT-5.2; 72.4% human baseline). Tool search in API cuts token usage 47% with many MCP servers. 272K context standard, up to 1,050,000 via API/Codex. |
| 2026-03 † | GPT-5.4 Mini + GPT-5.4 Nano | Smaller siblings for high-volume and cheap-and-fast workloads. |
| 2026-03-11 | GPT-5.1 family retired | Instant, Thinking, and Pro variants removed from ChatGPT. |
| 2026-04-21 | ChatGPT Images 2.0 (gpt-image-2) | First OpenAI image model with native reasoning capabilities. |
| 2026-04-23 | GPT-5.5 + GPT-5.5 Pro | Current flagship; rollout to Plus/Pro/Business/Enterprise + Codex. API access coming. |
| 2026-04-26 | Sora app shut down | Mobile app discontinued; API to follow on 2026-09-24. |
| 2026-05-12 | DALL-E 2 & 3 retiring | Replaced by gpt-image-2. |

What's new in GPT-5.4 (2026-03-05)

GPT-5.4 is the first mainline OpenAI reasoning model that incorporates the frontier coding capabilities of GPT-5.3-Codex, plus a step change in agentic capability — native Computer Use without a separate specialist model. Highlights:

| Area | What changed |
|---|---|
| Native Computer Use (new) | First general-purpose OpenAI model that can take control of a computer — clicking, typing, navigating software using screenshots + mouse/keyboard commands. No specialised CUA model required. 75.0% on OSWorld-Verified (vs 47.3% for GPT-5.2 and 72.4% human baseline). |
| Reliability at scale | On ~30,000 HOA and property-tax portals: 95% success on first attempt, 100% within three attempts (vs ~73–79% with prior CUA models). Sessions ran ~3× faster using ~70% fewer tokens. |
| Tool search in API (new) | When given many tools, the model searches its toolset before deciding what to call. On 250 tasks from Scale's MCP Atlas benchmark with 36 MCP servers enabled, total token use dropped 47% while accuracy held. |
| Five-level reasoning effort | Finer control than the prior low/medium/high. Tune reasoning depth ↔ latency more precisely per request. |
| Vision fidelity | New "original" image input detail level — full-fidelity perception up to 10.24 megapixels total or 6,000 px max edge (whichever is lower). "High" detail now supports up to 2.56 megapixels total / 2,048 px max edge. |
| Coding | ~80% on SWE-bench Verified. Folds in the GPT-5.3-Codex coding training stack, so you don't have to swap models for hard coding work. |
| Context window | 272K standard, expandable to 1,050,000 (1M+) tokens via the API and Codex. Max output 128,000 tokens. |
| Variants | gpt-5.4-pro for the most demanding tasks. gpt-5.4-mini and gpt-5.4-nano announced separately for high-volume / low-cost workloads. |

Pricing

| Model | Input | Output |
|---|---|---|
| GPT-5.4 (≤272K input) | $2.50 per million tokens | $15.00 per million tokens |
| GPT-5.4 (above 272K input) | $5.00 per million tokens | $15.00 per million tokens |
| GPT-5.4 Pro | $30.00 per million tokens | $180.00 per million tokens |
| GPT-5 (for comparison) | $1.25 per million tokens | $10.00 per million tokens |
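For budgeting, the tiers above can be turned into a quick per-request estimator. A sketch under one stated assumption: the higher $5.00 rate applies to the entire input once it crosses 272K tokens (the pricing note is ambiguous on whether only the excess is surcharged, so confirm on the pricing page before relying on this):

```python
def gpt54_cost_usd(input_tokens: int, output_tokens: int, pro: bool = False) -> float:
    """Estimate one GPT-5.4 request's cost in USD from the table above.

    Assumption: the whole input bills at $5.00/MTok once it exceeds 272K
    tokens, not just the portion above the threshold.
    """
    if pro:
        in_rate, out_rate = 30.00, 180.00
    else:
        in_rate = 5.00 if input_tokens > 272_000 else 2.50
        out_rate = 15.00
    return (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000
```

For example, a 100K-token input with an 8K-token answer comes out to about $0.37, while a 300K-token input crosses the tier and bills its input at the doubled rate.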

Where you can use it

  • ChatGPT (Plus / Pro / Business / Enterprise)
  • OpenAI API (gpt-5.4, gpt-5.4-pro)
  • Codex (cloud and CLI)
Migration note: Computer Use is powerful but not free — sessions consume both screenshots (vision tokens) and reasoning tokens. Start with the cheapest task that lets you measure success rate, then scale. Pin to gpt-5.4 initially; only step up to gpt-5.4-pro when an eval shows the standard tier missing on your hardest tasks.

Optimal prompts for GPT-5.4's new features

Computer Use task:

```
You are a Computer Use agent.
Goal: log into the HOA portal at [URL], download the most recent statement
PDF for unit 4B, and save it to ~/Downloads as "hoa-4b-YYYY-MM.pdf".
Rules:
- Take a screenshot before each action.
- Read the screen carefully — never click anything you can't see.
- If you hit a CAPTCHA, STOP and ask me to solve it.
- If a step fails twice, STOP and report the screenshot + the error.
- Do NOT save passwords. If a "save password" prompt appears, dismiss it.
Budget: 30 actions max. Stop and summarize if you exceed it.
```
Tool search with many MCP servers:

```python
resp = client.responses.create(
    model="gpt-5.4",
    input=user_question,
    tools=ALL_36_MCP_SERVERS,  # let the model pick relevant ones
    tool_choice="auto",
    # tool_search is automatic in gpt-5.4 when many tools are provided —
    # no flag needed; expect substantially fewer tokens than gpt-5.2 here.
)
```
High-fidelity image perception:

```python
resp = client.responses.create(
    model="gpt-5.4",
    input=[
        {"role": "user", "content": [
            {"type": "input_text",
             "text": "Read every legible string on this dashboard screenshot. "
                     "Return a JSON list of {label, value, x, y} entries. "
                     "Mark anything you can't read confidently as '(unreadable)'."},
            {"type": "input_image",
             "image_url": "https://example.com/dashboard.png",
             "detail": "original"},  # NEW in gpt-5.4 — up to 10.24 MP
        ]},
    ],
)
```
Five-level effort control:

```python
# Cheap classification — minimal effort
quick = client.responses.create(
    model="gpt-5.4",
    input="Classify sentiment: 'The service was slow but the food was good.'",
    reasoning={"effort": "minimal"},
)

# Hard architectural review — maximum effort
deep = client.responses.create(
    model="gpt-5.4-pro",
    input=design_doc,
    reasoning={"effort": "xhigh"},  # five-level scale: minimal/low/medium/high/xhigh
)
```
1M-context analysis:

```python
resp = client.responses.create(
    model="gpt-5.4",
    instructions=(
        "Read the entire monorepo bundle. Identify every place we issue a "
        "JWT, every place we verify one, and any code path that bypasses "
        "verification. Return file:line for each, and flag the highest-risk "
        "bypass with a one-paragraph explanation."
    ),
    input=full_repo_bundle,  # up to ~1,050,000 tokens
)
# Note: input above 272K tokens is billed at $5.00/MTok rather than $2.50.
```

How to pick a model

Pick GPT-5.5 / GPT-5.5 Pro when…

  • The task is genuinely hard reasoning or strategy.
  • Long-horizon agentic work — coding sessions, multi-source research.
  • The cost of a wrong answer is high.
  • You want the best available model regardless of speed/cost.

Pick GPT-5.4 when…

  • You need native Computer Use (browser automation, desktop apps, portals).
  • You're orchestrating many tools / MCP servers — tool search keeps token use down.
  • You need a true 1M+ token context via API/Codex.
  • You want a single model that handles reasoning, coding (5.3-Codex stack), and agents.

Pick GPT-5 when…

  • You want a stable production default at $1.25 / $10 pricing.
  • Your prompts are already calibrated to GPT-5.
  • You don't need Computer Use or the 1M context window.
  • Latency and cost matter more than 5.4's bleeding-edge features.

Pick GPT-4.1 when…

  • You need a true 1M-token context window.
  • You're processing very long docs, codebases, transcripts.
  • Your stack was built around GPT-4o and you don't want to retune yet.

Pick GPT-5.4 Nano / GPT-4.1 Nano when…

  • The task is repetitive, unambiguous, and high-volume.
  • Cost dominates the decision.
  • Real-time UX where latency matters more than depth.

Pick o3 / o4-mini when…

  • The task rewards explicit step-by-step reasoning.
  • Math, formal logic, code debugging, scientific analysis.
  • You're OK with longer latency in exchange for accuracy.

Pick GPT-5.3-Codex when…

  • You're using Codex (cloud or CLI) for agentic coding.
  • Long-running build/refactor jobs that touch many files.
  • You need the strongest coding-specific tuning available.
The "escalate, don't rewrite" rule: If GPT-5 or GPT-5.4 gives a shallow answer to a well-formed prompt, escalate to GPT-5.5 instead of rephrasing three more times. Tokens beat hours.

Image, audio, and video models

Image generation

| Model | API ID | Released | Notes |
|---|---|---|---|
| ChatGPT Images 2.0 | gpt-image-2 | 2026-04-21 | First OpenAI image model with native reasoning. Better text rendering, layout control, and instruction-following than DALL-E 3. |
| gpt-image-1 | gpt-image-1 | 2025-04 † | Production image model in ChatGPT through 2025. |
| DALL-E 3 | dall-e-3 | 2023-09 | Retiring 2026-05-12. Migrate to gpt-image-2. |
| DALL-E 2 | dall-e-2 | 2022-04 | Retiring 2026-05-12. |

Speech & audio

| Model | API ID | Type | Notes |
|---|---|---|---|
| gpt-4o-transcribe | gpt-4o-transcribe | Speech → text | Released 2025-03 †. Lower word-error rate than Whisper. API-only. |
| gpt-4o-mini-transcribe | gpt-4o-mini-transcribe | Speech → text | Cheaper sibling of the above. |
| Whisper | whisper-1 | Speech → text | Released 2022-09. Open-source weights (MIT). Useful when you need to self-host. |
| gpt-4o-tts | gpt-4o-tts | Text → speech | Steerable voices. Pair with the Realtime API for interactive use. |
| Realtime API | gpt-4o-realtime-preview / gpt-realtime | Speech ↔ speech | GA 2025-08-28. Build voice agents end-to-end without separate STT/TTS. |

Video

| Model | API ID | Released | Status |
|---|---|---|---|
| Sora 2 | sora-2 | 2025-09-30 | App shut down 2026-04-26. API planned to be discontinued 2026-09-24. Plan migration paths if you depend on it. |

Embeddings

| Model | API ID | Released | Dim | Pricing |
|---|---|---|---|---|
| text-embedding-3-large | text-embedding-3-large | 2024-01-25 | 3072 (shrinkable) | $0.13 /M tokens |
| text-embedding-3-small | text-embedding-3-small | 2024-01-25 | 1536 (shrinkable) | $0.02 /M tokens |
| text-embedding-ada-002 (legacy) | text-embedding-ada-002 | 2022-12 | 1536 | Legacy — migrate to 3-small. |
Sizing tip: text-embedding-3-large with the dimensions parameter set to 1024 or 1536 usually outperforms text-embedding-3-small at full size — at lower storage cost. Benchmark before committing.
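The sizing tip works because text-embedding-3 vectors are trained so that a truncated prefix, once re-normalized, is still a usable embedding; this is the same operation the API's dimensions parameter performs server-side. A local sketch of the truncate-and-renormalize step plus cosine scoring (helper names are this guide's own):

```python
import math

def shrink(vec, dim):
    """Keep the first `dim` components and re-normalize to unit length."""
    head = vec[:dim]
    norm = math.sqrt(sum(x * x for x in head))
    return [x / norm for x in head]

def cosine(a, b):
    """Dot product; equals cosine similarity when both vectors are unit length."""
    return sum(x * y for x, y in zip(a, b))
```

Compare shrink(e1, 1024) against shrink(e2, 1024) in your retrieval benchmark at each candidate size before committing to a dimension.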

Switching models

In ChatGPT

  1. Click the model name at the top of the conversation.
  2. Pick from the dropdown. Free plans see a subset; Plus/Pro/Business/Enterprise see the full lineup.
  3. Switching mid-conversation is fine — the new model inherits the full context. Useful for "draft with GPT-5, polish with GPT-5.5."
  4. Some models have modes — e.g. GPT-5 has Auto / Fast / Thinking. Pick by what the task rewards.

In the API

Pass the model field in your request. Snapshots are dated (gpt-5-2025-08-07); aliases (gpt-5) auto-track the newest.

Python — switch models:

```python
from openai import OpenAI

client = OpenAI()
resp = client.responses.create(
    model="gpt-5",  # or "gpt-5.5" once API access opens
    input="Hello",
)
print(resp.output_text)
```

Deprecated & sunsetting models

| Model | Status | Migrate to |
|---|---|---|
| GPT-5.1 (Instant / Thinking / Pro) | Retired 2026-03-11 | GPT-5.5 / GPT-5.4 |
| GPT-4o | Superseded by GPT-4.1 / GPT-5 | GPT-4.1 (drop-in) or GPT-5 |
| GPT-4o mini | Superseded | GPT-5 Mini or GPT-4.1 Nano |
| GPT-4 Turbo | Legacy | GPT-4.1 |
| GPT-4 (original) | Legacy | GPT-4.1 / GPT-5 |
| GPT-3.5 Turbo | Legacy | GPT-4.1 Nano |
| o1 (preview & GA) | Active but superseded | o3 / o4-mini |
| DALL-E 2 / DALL-E 3 | Retiring 2026-05-12 | gpt-image-2 |
| Sora 2 app | Shut down 2026-04-26 | API still works until 2026-09-24 |
| text-embedding-ada-002 | Legacy | text-embedding-3-small |
Migration gotcha: Newer models can be more or less verbose than the one you're replacing. Token counts shift, format defaults shift, tool-call shapes shift. Run your eval set against the new model before swapping in production.
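The safest way to act on that gotcha is a tiny eval harness you run against both the old and the new model before switching aliases. A model-agnostic sketch (the harness shape is this guide's invention, not an OpenAI SDK feature): pass any ask(prompt) -> str wrapper and a list of (prompt, checker) cases:

```python
def pass_rate(ask, cases):
    """Run every (prompt, checker) case through `ask` and return the
    fraction of outputs the checker accepts."""
    passed = sum(1 for prompt, accept in cases if accept(ask(prompt)))
    return passed / len(cases)

# Compare two models by calling pass_rate twice with different wrappers, e.g.
# lambda p: client.responses.create(model="gpt-5", input=p).output_text
```

Swap the production alias only when the new model's pass rate meets or beats the old one on your own cases, not on public benchmarks.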
🎈 ELI5

ChatGPT is the basic door — type in the box, get an answer. You can drag in files, ask it to draw pictures, or even talk to it with your voice. Canvas is the side panel for editing long documents and code together — like Google Docs, but with a smart helper.

Custom GPTs are like saved bookmarks for the way you like to talk to it. Build one with your style guide or instructions, give it a name, and use it again and again. Memory lets ChatGPT remember stuff about you across chats so you don't have to repeat yourself.

Setup & the ChatGPT interface

  1. Sign in at chatgpt.com or install the desktop / mobile app.
  2. Start a new chat. The big input accepts text, drag-and-dropped files, pasted images, audio recordings.
  3. Pick a model from the top selector. Plus/Pro/Business/Enterprise see the full lineup.
  4. Toggle tools as needed: web search, image generation, code execution (Python sandbox), Canvas, voice mode, deep research.
  5. Use Projects to bundle related chats with shared instructions and files.

Modes & tools

  • Auto — ChatGPT picks the right model and reasoning depth for your prompt. The default in 2026.
  • Fast — quick, low-latency responses. Good for chat-style back-and-forth.
  • Thinking / Pro — the model takes longer and reasons more. Use for hard problems.
  • Search — browses the web, returns cited sources.
  • Deep Research — agentic, longer-running research that produces a structured report.
  • Image — calls gpt-image-2 to generate or edit pictures.
  • Voice — full duplex speech-to-speech via the Realtime API.

Canvas — long-form writing & code

Canvas is a side panel for editing documents and code with the model. Like Claude's Artifacts, but with inline edit suggestions you can accept or reject.

  1. Open Canvas from the tools menu, or just ask: "Open this in canvas."
  2. Highlight a passage and ask for a tightening, a tone shift, or a translation — only that span changes.
  3. Ask for inline comments ("Mark every place I'm being too vague") instead of a rewrite.
  4. Export when done — copy/paste to your final destination.

Files & vision

  1. Drag a file onto the chat box. PDFs, Word, spreadsheets, code files, images, and short audio clips work.
  2. Tell the model what to do with it. "Summarize" is weak — say "Pull every dollar amount and the page it appears on, return as a table."
  3. For long docs, tell it where to focus: "Only the financial statements section, pages 14-22."
  4. For images / screenshots, ask for transcription first, then analysis.

Search & Deep Research

Search returns cited results in a single response. Deep Research kicks off an agentic, multi-source investigation that takes several minutes and produces a structured report.

Deep research:

```
Run deep research on [topic]. I need:
- 8–12 primary sources (vendor docs, papers, official blogs preferred)
- A 200-word executive summary
- A "what changed in the last 6 months" section
- A list of open questions worth asking an expert
- Source quality flagged — rate each link as primary, secondary, or weak
Do not pad with marketing copy. Where sources contradict each other, surface it.
```

Memory & projects

  • Memory stores facts about you across chats — preferences, recurring projects, names of teammates. Manage it in Settings → Personalization → Memory.
  • Projects are folders of related chats with shared custom instructions and files. Use one per ongoing initiative.
  • Custom instructions in a project override your global ones — useful for switching context (work voice vs. personal voice).

Custom GPTs

A Custom GPT is a packaged ChatGPT — a name, instructions, optional knowledge files, optional API actions — that anyone with the link can use.

  1. Open the GPT builder from the sidebar → "Explore GPTs" → "Create."
  2. Tell the builder what the GPT does in plain English. It drafts the system prompt for you.
  3. Add knowledge files — style guides, schemas, FAQs.
  4. Add Actions if it needs to call your API (paste an OpenAPI spec).
  5. Publish as private, link-only, or to the GPT Store.

Voice mode

  1. Tap the voice icon. Phone, desktop, or web. Standard voice = turn-taking; Advanced voice = continuous, with interruptions.
  2. Pick a voice in Settings → Voice.
  3. Use it for thinking-out-loud tasks — driving brainstorms, talking through code, language practice.
  4. Switch to text mid-conversation — context carries over.

Optimal prompts for ChatGPT

Writing & editing

Edit my draft:

```
Act as a sharp editor. Don't rewrite — diagnose and prescribe.
For the draft below, return:
1. The single biggest weakness, in one sentence.
2. Three specific edits with before → after.
3. One sentence I should consider cutting entirely, and why.
Audience: [who reads this]. Tone target: [tone].
Draft: """
[paste draft]
"""
```

Brainstorming

Diverge then converge:

```
Help me brainstorm [topic].
Round 1 — diverge: give me 12 ideas covering safe, weird, ambitious, and contrarian.
Round 2 — converge: pick the top 3 by [criterion] and explain the trade-offs.
Round 3 — sharpen: turn the winner into a one-paragraph pitch.
Don't hedge. I want opinions.
```

Learning a topic

Teach me:

```
Teach me [topic]. I already know [X, Y]. I don't know [Z].
Structure:
1. The one-sentence elevator definition.
2. The mental model (an analogy that maps to something I already know).
3. Three concrete examples of increasing complexity.
4. The two most common misconceptions and why they're wrong.
5. A 5-question self-test (no answers — I'll check myself).
```

Document analysis

PDF deep-read:

```
I've attached [doc name]. Treat it as the source of truth.
Return, in this order:
1. The thesis in one sentence.
2. The five claims it actually defends, with the page they're argued on.
3. Any claim that's asserted but not supported.
4. Three questions I should ask the author to stress-test the argument.
If something is unclear from the doc, say "not stated" — never guess.
```

Image generation

Image with reasoning:

```
Use gpt-image-2. Generate a [scene/object/illustration].
Style: [art direction — photorealism, flat illustration, 90s magazine cover, etc.]
Composition: [framing — close-up, wide shot, isometric, top-down]
Must include (legible): [exact text or signage, if any]
Avoid: [what should not appear]
Aspect ratio: [16:9 / 1:1 / 4:5].
After generating, check the result against the "must include" list and
regenerate if anything is missing or misspelled.
```

Code in chat (one-offs)

Code snippet:

```
Write a [language] function that [does X].
Constraints:
- Inputs: [types/shape]
- Outputs: [types/shape]
- Edge cases I care about: [list]
- No external dependencies / use only the standard library.
After the code, give me 3 test cases I can paste into a REPL.
```

Spreadsheet / data tasks

Clean my CSV:

```
Attached CSV has messy data. Clean it for analysis.
Steps to perform:
1. Detect and list every quality issue (nulls, mixed types, dupes, weird dates).
2. Propose a fix for each, with the assumption you're making.
3. Apply the fixes and return a clean .xlsx download.
4. End with a "trust this output if…" caveat list.
Never silently drop rows — quarantine them in a separate sheet.
```

Decision-making

Decision memo:

```
Help me decide between [option A] and [option B].
Context: [the situation, who's affected, the deadline].
Constraints: [budget, time, reversibility].
What I value most: [criteria, in order].
Produce a 1-page decision memo with:
- Recommendation in the first line.
- The 3 reasons that drove it.
- The strongest counter-argument and your response.
- What would have to be true for the other option to win.
```
🎈 ELI5

Same ChatGPT, but for your whole company. Your bosses can decide who sees what, your data stays private (it's not used to train the AI), and your team can share helpers (custom GPTs) so nobody starts from scratch.

You can plug in your work tools — Drive, Slack, GitHub, Salesforce, Notion — so ChatGPT can read them and pull info into your chats. Great for getting up to speed on a project or prepping for a customer call.

What ChatGPT Business / Enterprise gives you

  • SSO & admin controls — central provisioning, audit logs, group-level permissions.
  • Data privacy — your prompts and outputs are not used to train OpenAI models.
  • Higher rate limits and bigger context than consumer plans.
  • Shared connectors — Drive, Slack, GitHub, SharePoint, Box, Outlook, Confluence, and more.
  • Shared GPTs — your team builds custom GPTs that everyone can use.
  • Workspace memory and analytics — usage reports for the org.

Setting up a workspace

  1. Admin creates the workspace at chatgpt.com/admin.
  2. Configure SSO (Okta, Azure AD, Google, etc.) and SCIM if you want auto-provisioning.
  3. Invite teammates in bulk by domain or individually.
  4. Pin a default model and decide which models are available org-wide.
  5. Drop a workspace-wide instructions doc (style guide, glossary, escalation policy). Every chat inherits it.
Permissions matter: Connectors only see what your account can see. Adding a connector does not bypass an upstream permission — if you can't read a Drive file in your browser, ChatGPT can't read it either.

Connectors & data

Common connectors in 2026:

| Connector | What it unlocks |
|---|---|
| Google Drive / OneDrive / Box | Search and read files inside a chat without copy-paste. |
| Slack / Teams | Pull recent threads from a channel as context. |
| GitHub | Read PRs, issues, file contents. |
| Salesforce / HubSpot | Surface accounts, contacts, opportunities. |
| Linear / Jira / Asana | Read tickets, summarize backlogs. |
| Confluence / Notion | Treat your team wiki as searchable context. |
| Outlook / Gmail | Read recent threads, draft replies (you send). |

Shared GPTs

  1. Build a GPT the same way as in consumer ChatGPT.
  2. Publish to your workspace instead of "Anyone with the link."
  3. Pin to the sidebar for the team.
  4. Govern with admin tools — turn off external GPTs in settings if you want to lock things down.

Optimal prompts for Business / Enterprise

Cross-tool research

Onboarding brief:

```
I'm joining the [project name] team next week. Get me up to speed.
Pull from:
- Confluence / Notion: the project hub and last 5 weekly notes
- Linear / Jira: open + recently closed tickets
- Drive: any doc with the project name in the title
- Slack: the project channel, last 14 days
Return:
- 1-paragraph "what is this project"
- 5 key decisions and when they were made
- 5 names I should know and why
- The 3 open risks, ranked
Cite the source for each claim.
```

Sales prep

Account brief:

```
I have a meeting with [account] tomorrow. Build me a brief.
Pull from:
- Salesforce: account, contacts, open opps, recent activity
- Outlook/Gmail: last 10 threads with anyone @[domain]
- Drive: any doc with [account] in title or shared with them
One page max:
- Their current state and pain we know about
- Who's in the room and what they care about
- The 3 questions I should ask to advance the deal
- One risk that could blow up the deal
If a fact is fuzzy, mark it "(needs confirmation)".
```

Knowledge-grounded support reply

KB-grounded reply:

```
Draft a reply to this customer ticket using only our internal KB connector.
Rules:
- Cite the KB article ID/title for every factual claim.
- If the answer isn't in the KB, say "not covered — escalate to T2".
- Tone: warm, concise, no jargon, no apologizing for apology's sake.
- End with one suggested next step the customer can take.
Ticket: """
[paste ticket]
"""
```
🎈 ELI5

The API is the LEGO version of ChatGPT — you bolt the AI brain into your own apps with code. You get an API key (a secret password), pick a model, and call it from Python or JavaScript. The Responses API is the modern way to do this; function calling lets the AI call your code (look up a price, send an email, search a database) and feed the answer back.

Codex is a coding-specialist version. Connect it to a GitHub project and tell it "add this feature." It writes the code in a branch and opens a pull request — like having an extra teammate who never sleeps. There's also a CLI version that runs on your laptop.

Account & API keys

  1. Sign in at platform.openai.com. You can use the same account as ChatGPT.
  2. Create an API key under "API keys." Use named, scoped keys per project — never reuse.
  3. Set a usage limit in Settings → Limits before you start. Cheap insurance against runaway costs.
  4. Add a payment method and watch the first day's usage closely.
  5. Pick an SDK: openai (Python) or openai (Node/TS) are the official ones.
Security: Never check API keys into git. Use env vars or a secret manager. Revoke and rotate if a key ever leaves your machine.
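One way to act on that tip is a tiny fail-fast check at startup, so a missing key becomes a clear error instead of a confusing 401 mid-request. This is a sketch (the official SDK already reads OPENAI_API_KEY from the environment on its own; the helper name is ours):

```python
import os

def require_api_key(env=os.environ) -> str:
    """Fail fast if OPENAI_API_KEY isn't set, instead of a 401 later."""
    key = env.get("OPENAI_API_KEY")
    if not key:
        raise RuntimeError(
            "OPENAI_API_KEY is not set. Export it or load it from a secret manager."
        )
    return key
```

Call it once at process start; passing a dict instead of os.environ makes it easy to unit-test.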

First API call

Python — first call

pip install openai
export OPENAI_API_KEY="sk-..."
python - <<'PY'
from openai import OpenAI

client = OpenAI()
resp = client.responses.create(
    model="gpt-5",
    input="Write a haiku about debugging at 2am.",
)
print(resp.output_text)
PY
Node — first call

npm install openai
export OPENAI_API_KEY="sk-..."
node -e "
import('openai').then(async ({ default: OpenAI }) => {
  const client = new OpenAI();
  const resp = await client.responses.create({
    model: 'gpt-5',
    input: 'Write a haiku about debugging at 2am.',
  });
  console.log(resp.output_text);
});
"

Responses API — the modern shape

The responses endpoint is OpenAI's unified API for text, images, audio, and tool use. Prefer it over the older chat.completions endpoint for new projects.

System + user with output controls

from openai import OpenAI

client = OpenAI()
resp = client.responses.create(
    model="gpt-5",
    instructions=(
        "You are a precise technical editor. Reply in markdown only. "
        "If unsure, say 'not stated'."
    ),
    input=[
        {"role": "user", "content": "Summarize the attached doc in 5 bullets."},
    ],
    max_output_tokens=600,
    temperature=0.2,
)
print(resp.output_text)

Tool use & function calling

Define functions the model can call; the API returns structured arguments, your code executes the function, and you feed the result back. This is how you build agents that touch real systems.

Function calling (Python)

from openai import OpenAI

client = OpenAI()
tools = [{
    "type": "function",
    "name": "get_weather",
    "description": "Get current weather for a city.",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}]
resp = client.responses.create(
    model="gpt-5",
    input="What's the weather in Lagos right now?",
    tools=tools,
)
# Inspect resp.output for a function_call item, run your function, then send
# the result back in a follow-up call as a function_call_output item.
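The second leg of the loop, executing the call and returning the result, can be sketched like this. The get_weather body is a hypothetical stand-in, and the function_call / function_call_output item shapes follow the Responses API tool-calling flow; verify the exact field names against the current SDK reference:

```python
import json

# Hypothetical local implementation of the tool declared above.
def get_weather(city: str) -> dict:
    return {"city": city, "temp_c": 31, "conditions": "humid"}

TOOL_IMPLS = {"get_weather": get_weather}

def run_tool_calls(output_items: list) -> list:
    """Execute each function_call item and build the function_call_output
    items to send back on the follow-up request."""
    results = []
    for item in output_items:
        if item.get("type") != "function_call":
            continue
        fn = TOOL_IMPLS[item["name"]]          # dispatch to your code
        args = json.loads(item["arguments"])   # model-provided arguments
        results.append({
            "type": "function_call_output",
            "call_id": item["call_id"],
            "output": json.dumps(fn(**args)),
        })
    return results

# Follow-up call (sketch; requires a live API key):
# follow_up = client.responses.create(
#     model="gpt-5",
#     previous_response_id=resp.id,
#     input=run_tool_calls([item.model_dump() for item in resp.output]),
#     tools=tools,
# )
# print(follow_up.output_text)
```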

Realtime API — voice agents

A bidirectional WebSocket / WebRTC stream for speech-to-speech experiences, generally available since 2025-08-28. Use it for phone bots, voice assistants, and language tutors.
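On the wire, Realtime sessions exchange JSON events over the socket. Here is a minimal sketch of building a session.update event; the event and field names follow the Realtime API event reference, but treat them as something to check against the current docs, and the connection snippet assumes the third-party websockets package:

```python
import json

def session_update(instructions: str, voice: str = "alloy") -> str:
    """Build a session.update event to configure the live session.
    Field names per the Realtime API event reference; verify before shipping."""
    return json.dumps({
        "type": "session.update",
        "session": {"instructions": instructions, "voice": voice},
    })

# Connecting (sketch; needs `pip install websockets` and a live key):
# import os, websockets
# url = "wss://api.openai.com/v1/realtime?model=gpt-realtime"
# headers = {"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"}
# async with websockets.connect(url, additional_headers=headers) as ws:
#     await ws.send(session_update("You are a concise voice agent."))
```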

When NOT to use Realtime If you're doing one-shot transcription and don't need turn-by-turn latency, gpt-4o-transcribe + gpt-4o-mini-tts is simpler and cheaper.

Voice agent prompt

Voice agent system prompt You are a voice agent. Your output will be spoken aloud. Therefore: - Use short sentences. No bullet lists, headers, or markdown. - Avoid em-dashes, parentheticals, and acronyms not pronounced as words. - If the user interrupts, stop talking immediately and listen. - When given a list, mention "I'll list three items" first, then read them. - If you're going to take a long action, say "one moment" before doing it. Persona: [warm, concise, professional]. Domain: [your domain]. Out of scope: [what you don't do]. When out of scope, say what you can do instead.

Batch API — 50% off, 24-hour turnaround

Submit a JSONL file of requests; results come back within 24 hours at half price. Perfect for evals, classification at scale, embedding back-fills, nightly summarization.

Batch request file

# requests.jsonl
{"custom_id":"row-1","method":"POST","url":"/v1/responses","body":{"model":"gpt-5-mini","input":"Classify: 'I love this!' → positive/negative/neutral"}}
{"custom_id":"row-2","method":"POST","url":"/v1/responses","body":{"model":"gpt-5-mini","input":"Classify: 'Worst experience ever.' → positive/negative/neutral"}}

# upload + create batch
batch_file = client.files.create(file=open("requests.jsonl", "rb"), purpose="batch")
client.batches.create(
    input_file_id=batch_file.id,
    endpoint="/v1/responses",
    completion_window="24h",
)
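Once submitted, you poll the batch until it finishes and then download the output file. A small helper, using the batches.retrieve and files.content methods from the official Python SDK (the 60-second default is our choice, not a recommendation):

```python
import time

def wait_for_batch(client, batch_id: str, poll_seconds: int = 60) -> str:
    """Poll a batch until it finishes, then return the output JSONL text."""
    while True:
        batch = client.batches.retrieve(batch_id)
        if batch.status == "completed":
            return client.files.content(batch.output_file_id).text
        if batch.status in ("failed", "expired", "cancelled"):
            raise RuntimeError(f"batch ended with status: {batch.status}")
        time.sleep(poll_seconds)
```

Each line of the returned JSONL carries the custom_id you set, so you can join results back to your input rows.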

Codex — the cloud coding agent

Codex is OpenAI's agentic coding product. It can run as a cloud agent (parallel tasks against your GitHub repo) or as a local CLI (similar to Claude Code).

  1. Connect a repo at chatgpt.com/codex (Plus/Pro/Business/Enterprise).
  2. Pick a model — GPT-5.3-Codex is the coding-specialized variant; GPT-5.5 / GPT-5.4 also work.
  3. Give it a task in plain English. It clones the repo, makes changes in a branch, runs your tests, and opens a PR for review.
  4. Review and merge like any human PR.

Codex CLI (local)

  1. Install: npm install -g @openai/codex (or follow the platform docs).
  2. Authenticate with your OpenAI account.
  3. Run inside a project: codex and type your goal.
  4. Approve actions as it proposes file edits and shell commands.
  5. Configure per-project rules in a codex.md at repo root (similar to CLAUDE.md).
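As a feel for what per-project rules look like, here is an illustrative codex.md sketch; every rule below is an invented example, so write your own to match your repo:

```markdown
# codex.md (illustrative example)
- Run `npm test` before proposing any commit.
- Never touch files under migrations/ without asking first.
- Prefer small, single-purpose PRs; no new dependencies without approval.
```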

Optimal prompts for the API & Codex

Structured extraction

JSON schema output

resp = client.responses.create(
    model="gpt-5-mini",
    input=f"Extract structured fields from this resume:\n\n{resume_text}",
    # In the Responses API, structured output lives under text.format
    # (the older chat.completions endpoint used response_format).
    text={
        "format": {
            "type": "json_schema",
            "name": "Resume",
            "schema": {
                "type": "object",
                "properties": {
                    "name": {"type": "string"},
                    "email": {"type": "string"},
                    "years_experience": {"type": "integer"},
                    "skills": {"type": "array", "items": {"type": "string"}},
                },
                "required": ["name", "skills"],
            },
        },
    },
)
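On the consuming side, the schema-constrained result still arrives as a JSON string in resp.output_text. A sketch of parsing it with a local required-field re-check as a belt-and-braces safety net (field names match the schema above; the helper name is ours):

```python
import json

REQUIRED = ("name", "skills")

def parse_resume(output_text: str) -> dict:
    """Parse the model's JSON and re-check required fields locally."""
    data = json.loads(output_text)
    missing = [field for field in REQUIRED if field not in data]
    if missing:
        raise ValueError(f"missing required fields: {missing}")
    return data
```

The API enforces the schema at generation time; the local check just turns any surprise into a loud error instead of a silent downstream KeyError.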

Codex feature work

Codex — plan-first feature Goal: add /api/v1/usage endpoint returning this month's request count per API key, scoped to the calling user. Step 1 — read: explore auth middleware and existing /api/v1/* routes. Step 2 — propose: a 5-bullet plan with file paths you'd touch. Stop and wait. Step 3 — implement only what I approve, in one cohesive PR. Step 4 — add tests covering: happy path, unauthenticated, wrong user. Acceptance: `curl -H "Authorization: Bearer $T" /api/v1/usage` returns 200 + JSON like `{ key_id, count, period_start, period_end }`.

Long-context summarization (GPT-4.1)

1M-token context

resp = client.responses.create(
    model="gpt-4.1",  # 1,000,000-token context
    instructions=(
        "Summarize the entire transcript. Output a markdown brief with: "
        "TLDR (2 sentences), Decisions Made, Open Questions, Action Items "
        "(with owners). Cite line numbers when quoting."
    ),
    input=f"<transcript>\n{long_transcript}\n</transcript>",
)

Reasoning model (o3) for hard analysis

o3 — multi-step reasoning

resp = client.responses.create(
    model="o3",
    input=(
        "We're seeing an O(n^2) pattern in p99 latency above 10K rows. "
        "Here's the relevant code:\n\n" + code +
        "\n\nReason about: where the quadratic comes from, the smallest "
        "change that fixes it without breaking the API, and what we'd "
        "lose by switching to a different data structure."
    ),
    # o-series models reason internally; you'll get a more thorough answer
    # but with longer latency.
)

Cost-aware routing

Two-pass routing

# Cheap first pass — does this even need the flagship?
triage = client.responses.create(
    model="gpt-5-mini",
    input=f"Reply with only 'simple' or 'hard' for this query:\n{query}",
).output_text.strip()

model = "gpt-5.5" if triage == "hard" else "gpt-5"
final = client.responses.create(model=model, input=query)
🎈 ELI5

A prompt is directions for the AI. The better the directions, the better the help. Tell it four things: WHO it should be (a sharp editor? a patient teacher?), WHAT you want done, WHO reads the answer, and HOW the answer should look (5 bullets? a paragraph? a JSON object?).

If the answer is bad, don't start over. Say "redo that, but tighter and bossier" or "drop the both-sides framing — pick one and defend it." It already has the context — just steer it.

Use-case prompt library

Copy-paste-ready prompts for the most common goals. Edit the bracketed parts to fit your situation.

Writing & email

Email reply Reply to this email. Tone: warm, concise, no apologizing for the sake of apologizing. Length: 4–6 sentences. End with one clear next step. If the thread mentions a specific commitment I made, restate it back so they know I read carefully. Output only the reply text — no preamble, no signature. Email: """ [paste email] """
Slack / DM rewrite Rewrite this Slack message: - 30% shorter - friendlier tone - keep the load-bearing facts - no apologizing for the sake of apologizing Output only the rewritten version, no commentary. Original: """ [paste] """
Difficult message — 3 versions Help me write a [decline / push-back / disagree] message. Recipient: [who they are, relationship] What I need to communicate: [the message] Constraints: [keep relationship intact / be firm / be diplomatic] Draft three versions: gentle, neutral, firm. End each with one open question that invites them to engage further.
Meeting follow-up Turn these meeting notes into: 1. A 3-bullet summary I can paste into Slack 2. An action items list (owner — action — by when) 3. One open question that wasn't resolved Notes: """ [paste] """
LinkedIn post Draft a LinkedIn post about [topic]. Tone: confident, not braggy. Hook in the first line. 5–8 short paragraphs. End with a question that invites comments. No emojis, no hashtags. Avoid "thrilled to share" and "humbled".

Editing & feedback

Sharp editor Act as a sharp editor. Don't rewrite — diagnose and prescribe. For the draft below, return: 1. The single biggest weakness, in one sentence. 2. Three specific edits with before → after. 3. One sentence I should consider cutting entirely, and why. Audience: [who reads this]. Tone target: [tone]. Draft: """ [paste] """
Tighten by N% Cut this text by 40% without losing meaning. Keep the voice. Return only the tightened version, no commentary. """ [paste] """
Stress-test my argument Find the holes in this argument as a tough but fair critic would. Return: 1. The strongest counter-argument I haven't addressed. 2. The weakest claim I'm making, and why it's weak. 3. What evidence would change your mind. 4. One thing I should NOT change — what I'm getting right. Argument: """ [paste] """

Learning & research

Teach me [topic] Teach me [topic]. I already know [X, Y]. I don't know [Z]. Structure: 1. The one-sentence elevator definition. 2. The mental model (an analogy that maps to something I already know). 3. Three concrete examples of increasing complexity. 4. The two most common misconceptions and why they're wrong. 5. A 5-question self-test (no answers — I'll check myself). Skip filler. Be willing to be wrong if I push back.
Concept to analogy Find me the best analogy for explaining [concept] to [audience]. Return 3 candidates ranging from safe to creative. For each, name what the analogy gets right AND what it breaks. Pick your favorite and defend the choice.
Compare options Compare [option A] vs [option B] for [use case]. Return a 1-page brief: - The decision in one sentence - Top 3 dimensions where they differ - Best fit for each (when would you pick A, when B) - One scenario where neither is right and what is Skip the generic "depends on your needs" framing. Pick.

Decision-making

1-page decision memo Help me decide between [option A] and [option B]. Context: [situation, who's affected, deadline]. Constraints: [budget, time, reversibility]. What I value most: [criteria, in order]. Produce a 1-page decision memo with: - Recommendation in the first line - The 3 reasons that drove it - The strongest counter-argument and your response - What would have to be true for the other option to win
Pre-mortem Run a pre-mortem on this plan: [paste plan]. Imagine it's [N months] from now and the project failed. Write the failure post-mortem. List the top 5 root causes in order of likelihood, and what we could do today to prevent each. Be specific. "Insufficient communication" is a non-answer; "the design team and engineering used different definitions of done" is useful.
Devil's advocate Take the position that [my proposal] is wrong. Be the strongest advocate for the opposite view. Give me: 1. Three reasons my proposal is the wrong call. 2. A version of the opposite plan that you'd actually defend. 3. One question that, if answered honestly, would tell us who's right. Don't soften — I want the spicy version. I'll evaluate it on the merits.

Brainstorming

Diverge then converge Help me brainstorm [topic]. Round 1 — diverge: 12 ideas covering safe, weird, ambitious, contrarian. Round 2 — converge: pick the top 3 by [criterion] and explain trade-offs. Round 3 — sharpen: turn the winner into a one-paragraph pitch. Don't hedge. I want opinions.
Opposite of obvious What's the obvious answer to [question]? State it in one sentence. Now: what if the OPPOSITE is true? Make the strongest case for the counterintuitive answer. Don't strawman; defend it as if you believed it. End with: which one do you actually think is right, and why.

Coding

Bug repro & fix Bug: [describe the bug + what should happen]. Reproduce with a failing test first, then fix the smallest amount of code that turns it green. Constraints: - Don't refactor unrelated code. - No new dependencies. - Test goes in [path] next to similar cases. When done: show me the diff, the test output, and one sentence on the root cause.
Code review of a snippet Review this code as a senior reviewer would. Flag: 1. Anything broken or buggy 2. Anything that looks like dead code or leftover debug 3. Tests that test the implementation rather than behavior 4. Risky assumptions that aren't documented 5. One thing that needs a comment explaining the WHY Skip stylistic nits the formatter would catch. Code: """ [paste] """
Refactor for [property] Refactor [file/function] to [be smaller / be testable / remove duplication]. Rules: - No behavior changes — every existing test must still pass without edits. - Keep the public API identical. - Split by responsibility, not by line count. Start by reading the code and tests, then propose the split before editing.
Generate test cases For the function below, generate test cases covering: - Happy path - Edge cases (empty input, null, max size, off-by-one) - Error cases (what should throw, what should return safely) Use [framework]. Output the tests only, no commentary. Function: """ [paste] """
Regex / SQL from English Generate a [regex / SQL query] that [describes what it should do]. Then explain it line-by-line in plain English. Then give me 3 test cases — 2 that should match/return rows and 1 that shouldn't.

Data & analysis

Clean a CSV Attached CSV is messy. Clean it for analysis. Steps: 1. Detect and list every quality issue (nulls, mixed types, dupes, weird dates). 2. Propose a fix for each, with the assumption you're making. 3. Apply the fixes; return a clean version. 4. End with a "trust this output if…" caveat list. Never silently drop rows — quarantine them in a separate sheet.
Extract structured fields Extract structured fields from this text. Required fields: - [field 1] (type) - [field 2] (type) Output as JSON. For any field that's not stated in the text, return null. Never guess. Text: """ [paste] """
Find anomalies Look at this dataset. Identify the 5 most surprising rows or values. For each, explain why it's surprising and what plausible explanations exist (data error, real outlier, definition issue). Don't moralize about which explanation is right — just lay them out.

Documents

TL;DR a long doc TL;DR this document. Return: 1. The thesis in one sentence. 2. The 5 claims it actually defends. 3. Any claim that's asserted but not supported. 4. The single most surprising fact. 5. Three questions worth asking the author. If something is unclear, say "not stated" — never guess.
Find every claim Pull every factual claim from this document into a numbered list. For each, note: - The page/section it appears in - Whether the doc supports it (with evidence) or just asserts it - Confidence level (high / medium / low) Useful for fact-checking before sharing.
Outline → draft Turn this outline into a [first draft / one-pager / blog post]. Tone: [tone]. Length: [target]. Audience: [who]. Don't add new claims I didn't include in the outline. If a section is underspecified, leave a note like [needs example here] instead of inventing.

Creative & personal

Name brainstorm Brainstorm 12 names for [thing]. Cover these vibes: - 3 plain/descriptive (does what it says) - 3 evocative/metaphorical - 3 weird/punchy/short - 3 contrarian (intentionally counterintuitive) For each: a one-line rationale and one risk. Pick your top 3 and rank them.
Cold outreach email Draft a cold outreach email to [recipient role at company type]. Goal: [what I want them to do] What I bring: [my context, what's interesting] Constraint: ≤120 words, no buzzwords ("synergy," "leverage"), one specific reference to their work. End with one easy ask (e.g., 15-min call) — not "let me know if interested".
Travel itinerary Plan a [N-day] trip to [destination] for [N people, ages, interests]. Constraints: [budget, mobility, dietary, packed-vs-slow style]. Day-by-day: - Morning / afternoon / evening for each day - One must-do, one easy fallback - Estimated cost per day - One thing locals do that tourists miss Skip generic recommendations everyone already knows.

Patterns library

The "stop and ask" pattern — for ambiguous tasks
Stop & ask Before doing anything, list the 3 most important things you'd need to know to do this well that I haven't told you. Number them. Then propose the answer you'd assume by default for each. Don't start the work until I confirm.
The "two passes" pattern — for higher-quality output
Two passes Pass 1: draft [the deliverable]. Don't optimize — just get it out. Pass 2: critique your own draft as a tough editor would. List the 3 weakest points specifically. Pass 3: rewrite, fixing those 3 things. Show only the final.
The "narrow-then-widen" pattern — for research
Narrow then widen Step 1 — narrow: pick the single most important sub-question inside [topic] and answer that one in depth (300 words, with 3 cited sources). Step 2 — widen: zoom out and place that answer in the context of the broader field in 5 bullets. Step 3 — gaps: what would I need to read or test to be confident? List 3.
The "constraint stack" pattern — for code
Constraint stack Build [thing]. Must hold (non-negotiable): - [hard requirement 1] - [hard requirement 2] Should hold (break only with explicit reason): - [strong preference 1] Nice to have: - [optional 1] If any "must" conflicts with another, stop and ask — don't pick.
The "rubric" pattern — for evaluating output
Rubric-driven Produce [deliverable]. Then grade your own output against this rubric: - Clarity (1-5) - Specificity (1-5) - Length appropriateness (1-5) - Action-readiness — could the reader act on it without follow-up? (1-5) If any score is below 4, revise once and re-grade. Show only the final + scores.

Anti-patterns (what to stop doing)

Anti-pattern | Why it hurts | Do this instead
"Help me with this." | Generic input → generic output. | State the deliverable: "Rewrite this email so it's 30% shorter and warmer."
"Be creative." / "Be smart." | Vague adjectives don't constrain output. | Name a target: "Three options ranging from safe to ambitious. Each with one risk."
Asking again instead of correcting | Loses context — model restarts from scratch. | "Same task, but tighter and with no jargon" in the same conversation.
10-paragraph prompt for a simple task | Drowns the actual ask. | Match prompt length to task complexity.
"Don't hallucinate." | It's a vibe, not a constraint. | "If you're not sure, write 'unknown' and explain what would resolve it."
Pasting 50-page docs without focus | Model treats it all as equally important. | "Only the methodology section. Ignore the appendices."
Using GPT-5.5 for everything | Slow + expensive when GPT-5 / GPT-5.4 would do. | Default to GPT-5; escalate when an answer disappoints.
Ignoring temperature for structured output | Drift between calls breaks downstream parsing. | Set temperature 0.0–0.2 + use json_schema response format.

Universal rescue prompts

When something isn't working, paste one of these mid-conversation rather than starting over.

Diagnose & reset Pause. Tell me in 3 bullets why your last response missed what I wanted. Then propose a tighter prompt I could give you to fix it. Wait — don't try again yet.
Cut the fluff Redo your last response with no preamble, no caveats, no "I'd be happy to," no closing summary. Just the substance.
Show your work Walk me through your reasoning before the answer this time. I want to see the trade-offs you considered, even the ones you rejected.
Stronger opinion Drop the both-sides framing. Pick the option you'd choose if it were your own project, and defend it. I'll push back if I disagree.