How discovery actually works (no hand-waving)
- Metadata drives invocation. ChatGPT decides when to call your connector based on the conversation and on your tool names, descriptions, and parameter docs. Treat this metadata like product copy you iterate and test.
- Suggestions in chat are real. Users can invoke your app by name, and ChatGPT may suggest it when it's relevant to the conversation, so precision and clarity in metadata matter.
- Quality gates affect visibility. OpenAI states that higher quality (design + functionality) can lead to more prominent featuring once submissions open. Build toward that bar.
A practical framework to optimize metadata (and prove it works)
Use this repeatable loop from OpenAI’s guidance:
- Research use cases → write “golden prompts.” Draft representative prompts for your target tasks; you’ll use them later to tune recall (how often your app is selected when it should be) and precision (how often it’s selected when it shouldn’t).
- Author concise, actionable metadata (a code sketch follows this list).
- Name: literal and specific (e.g., “Create Shipping Labels”).
- Description: “Use this when…” + outcomes + key constraints.
- Parameter docs: plain language, map 1:1 to your JSON Schema.
This is exactly what OpenAI’s Optimize Metadata guide prescribes.
- Test in Developer Mode. Connect your HTTPS /mcp endpoint, run your prompt set, and inspect request/response pairs (or use the API Playground for raw traces). Track recall and precision numerically.
- Iterate and re-measure. Hill-climb your copy based on the misses you observe, without overfitting to a single prompt.
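To make the metadata advice concrete, here is a minimal sketch of a server with one well-documented tool, using the @modelcontextprotocol/sdk TypeScript package and Express in the SDK's stateless Streamable HTTP pattern. The create_shipping_label tool, its parameters, and the copy are illustrative examples, not from OpenAI's docs:

```ts
import express from "express";
import { z } from "zod";
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StreamableHTTPServerTransport } from "@modelcontextprotocol/sdk/server/streamableHttp.js";

// One tool, with metadata written the way the Optimize Metadata guide suggests:
// a literal name, a "Use this when…" description with outcomes and constraints,
// and parameter docs that map 1:1 to the JSON Schema the model sees.
function buildServer(): McpServer {
  const server = new McpServer({ name: "shipping-labels", version: "1.0.0" });
  server.registerTool(
    "create_shipping_label",
    {
      title: "Create Shipping Labels",
      description:
        "Use this when the user wants a printable shipping label for a parcel. " +
        "Returns a label PDF URL. Requires a destination address and package " +
        "weight; does not schedule a pickup.",
      inputSchema: {
        destination: z.string().describe("Full destination address, one line"),
        weight_kg: z.number().describe("Package weight in kilograms"),
        service: z
          .enum(["ground", "express"])
          .optional()
          .describe("Shipping speed; defaults to ground"),
      },
    },
    async ({ destination, weight_kg, service }) => ({
      content: [
        {
          type: "text",
          text: `Label created: ${destination}, ${weight_kg} kg, ${service ?? "ground"}`,
        },
      ],
    })
  );
  return server;
}

// Expose the server at /mcp over Streamable HTTP in stateless mode:
// a fresh server/transport pair per request, following the SDK's own example.
const app = express();
app.use(express.json());
app.post("/mcp", async (req, res) => {
  const server = buildServer();
  const transport = new StreamableHTTPServerTransport({ sessionIdGenerator: undefined });
  res.on("close", () => {
    transport.close();
    server.close();
  });
  await server.connect(transport);
  await transport.handleRequest(req, res, req.body);
});
app.listen(3000);
```

Run your golden prompts against this endpoint in Developer Mode and log every selection decision; those logs feed the recall and precision tally sketched later in this piece.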
Content & UX signals that influence discovery (and conversion)
- Design for chat, not a web port. Components run in a sandboxed iframe, communicate via window.openai, and should be compact; keep fullscreen for deep review only. This aligns with OpenAI's design guidelines (a component sketch follows this list).
- Map UI to schema. Every field in your UI should correspond to a parameter the model can supply, which improves successful tool calls after discovery. (This is implicit in the Apps SDK reference and metadata guidance.)
- Safety and clarity win. Clear labels and explicit write-action confirmations are required—and they build trust, which reduces abandonment once your app is invoked.
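Here is what "compact by default, fullscreen for review" can look like in a bare-bones component. It assumes the window.openai bridge the Apps SDK docs describe; the toolOutput shape is hypothetical (matching the shipping-label sketch above), and requestDisplayMode is assumed to be the escalation hook, so verify both against the current reference:

```ts
// Runs inside the sandboxed iframe ChatGPT hosts for your component.
// Assumed bridge surface (verify against the Apps SDK reference):
//   window.openai.toolOutput: the tool call's structured result
//   window.openai.requestDisplayMode: ask for fullscreen when needed

declare global {
  interface Window {
    openai: {
      // Hypothetical output shape for the create_shipping_label tool above.
      toolOutput: { labelUrl: string; destination: string } | null;
      requestDisplayMode: (args: { mode: "inline" | "fullscreen" }) => Promise<unknown>;
    };
  }
}

function render(): void {
  const root = document.getElementById("root")!;
  const out = window.openai.toolOutput;
  if (!out) {
    root.textContent = "No label yet.";
    return;
  }
  // Compact by default: one status line and one action, no web-page chrome.
  root.innerHTML = `
    <p>Label ready for ${out.destination}.</p>
    <a href="${out.labelUrl}" target="_blank" rel="noopener">Download PDF</a>
    <button id="expand">Review details</button>`;
  // Reserve fullscreen for deep review only, per the design guidelines.
  document.getElementById("expand")!.addEventListener("click", () => {
    void window.openai.requestDisplayMode({ mode: "fullscreen" });
  });
}

render();

export {}; // keep this file a module so `declare global` is legal
```

The design choice to note: the component renders one line of status and one action inline, and only escalates on an explicit user click.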
Pre-submission checklist (what review teams expect)
OpenAI’s App developer guidelines outline the bar for listing; start assembling these now while Apps SDK is in preview:
- Privacy policy (public), data minimization, and adherence to sensitive-data restrictions.
- Accurate tool names/descriptions/labels (especially marking writes; see the annotation sketch after this list) and a support contact.
- A stable build; “beta/trial” experiences are not acceptable for listing.
- Expect submissions to open later this year; plan a hardening window accordingly.
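On "marking writes": MCP defines behavior annotations on tools, and setting them honestly is the natural way to label write actions. Continuing the server sketch from earlier (the delete_saved_address tool is hypothetical, and whether ChatGPT keys its confirmation dialogs off these exact hints is an assumption to verify against the guidelines):

```ts
// Inside buildServer(), alongside create_shipping_label. The MCP spec's
// ToolAnnotations let clients distinguish reads from destructive writes.
server.registerTool(
  "delete_saved_address",
  {
    title: "Delete Saved Address",
    description:
      "Use this when the user asks to remove a saved address. Deletion is permanent.",
    inputSchema: {
      address_id: z.string().describe("ID of the saved address to delete"),
    },
    annotations: {
      readOnlyHint: false,   // this tool modifies state
      destructiveHint: true, // and the change is not recoverable
      idempotentHint: true,  // repeating the same call has no further effect
    },
  },
  async ({ address_id }) => ({
    content: [{ type: "text", text: `Deleted saved address ${address_id}` }],
  })
);
```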
Evidence you should capture (so you can prove quality)
- Discovery metrics: precision/recall across your golden prompts before and after metadata edits (a tally sketch follows this list).
- End-to-end traces: Developer Mode or API Playground screenshots showing correct tool selection and parameters.
- Safety gates: screenshots of write-action confirmation dialogs and audit notes mapping to Security & Privacy controls.
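The metrics arithmetic is small enough to live in your test harness. A sketch assuming you log one record per golden-prompt run, pairing ground truth (should your app have been selected?) with what Developer Mode actually showed:

```ts
// One record per golden-prompt run.
interface PromptRun {
  prompt: string;
  shouldSelect: boolean; // ground truth: your app is the right tool here
  wasSelected: boolean;  // observed: ChatGPT actually called your app
}

function discoveryMetrics(runs: PromptRun[]): { precision: number; recall: number } {
  const truePos = runs.filter((r) => r.shouldSelect && r.wasSelected).length;
  const selected = runs.filter((r) => r.wasSelected).length;
  const relevant = runs.filter((r) => r.shouldSelect).length;
  return {
    // Of the times your app was called, how often was it the right call?
    precision: selected === 0 ? 1 : truePos / selected,
    // Of the times it should have been called, how often was it?
    recall: relevant === 0 ? 1 : truePos / relevant,
  };
}

// Example: two correct selections, one false positive, one miss.
console.log(
  discoveryMetrics([
    { prompt: "ship this package to Berlin", shouldSelect: true, wasSelected: true },
    { prompt: "print a return label", shouldSelect: true, wasSelected: true },
    { prompt: "what's the weather?", shouldSelect: false, wasSelected: true },
    { prompt: "make a label for 2 kg to Oslo", shouldSelect: true, wasSelected: false },
  ])
); // → { precision: 0.667, recall: 0.667 }
```

Re-run the same prompt set after each metadata edit and keep the before/after pairs; those are the numbers reviewers and stakeholders can check.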
Launch plan (source-aligned)
- Build & test now in Developer Mode (Apps SDK is in preview). Connect your MCP server over HTTPS and validate flows.
- Optimize metadata using the loop above until your discovery KPIs stabilize.
- Prepare submission assets (policy links, screenshots, verified developer info) so you’re ready when OpenAI opens the queue later this year.
- Design for featuring. Revisit the design guidelines and tighten UX polish; higher-quality apps may be featured in directory and conversation surfaces.
How we help
We ship contract-first MCP tools, craft metadata that converts, and run a measurement loop (Developer Mode + API Playground) until your discovery KPIs hit target, then package everything for submission against OpenAI's App developer guidelines. Every tactic above comes straight from OpenAI's official docs and product posts.