Stop Sequences

Stop sequences are strings passed as parameters in sampling requests to control where an AI model stops generating. They prevent the model from continuing past a desired endpoint (e.g., stopping after a single sentence, or before it hallucinates a new user turn).

Behavior in MCP

When a client receives a sampling/createMessage request from a server, the request can include a stopSequences parameter, which the client passes through to the underlying LLM call.
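As a sketch, a sampling/createMessage request carrying stop sequences might look like the following. The field names follow the MCP sampling schema; the transport layer and the surrounding client logic are omitted, and the specific message text and stop strings are illustrative assumptions.

```python
import json

# Hypothetical sampling/createMessage request, as a server might send it
# over JSON-RPC. The params shape mirrors the MCP sampling schema.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "sampling/createMessage",
    "params": {
        "messages": [
            {
                "role": "user",
                "content": {"type": "text", "text": "Summarize this in one sentence."},
            }
        ],
        "maxTokens": 100,
        # Generation halts as soon as the model emits any of these strings.
        "stopSequences": ["\n\n", "User:"],
    },
}

print(json.dumps(request, indent=2))
```

The client receiving this request is expected to honor stopSequences when it invokes its model, so the server never sees text generated past a stop string.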

Use Cases

- Bounding structured output, such as stopping JSON generation at a closing brace.
- Preventing the model from hallucinating additional conversation turns (e.g., a new "User:" line).
- Reducing token consumption and latency by ending generation once the needed information has been produced.

Configuration

In most MCP-enabled clients, stop sequences are applied by the host's inference engine to the sampling requests it receives from servers, rather than being configured separately by the user.
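Conceptually, honoring a stop sequence means the host cuts the output at the earliest occurrence of any stop string. This is a simplified sketch of that behavior: real inference engines halt generation token by token rather than post-processing a full completion, and the function name is hypothetical.

```python
def apply_stop_sequences(text: str, stop_sequences: list[str]) -> str:
    """Truncate `text` at the earliest occurrence of any stop sequence.

    Simplified model of what a host's inference engine does when it
    honors the stopSequences parameter of a sampling request.
    """
    cut = len(text)
    for stop in stop_sequences:
        idx = text.find(stop)
        if idx != -1:
            cut = min(cut, idx)
    return text[:cut]

completion = "The capital of France is Paris.\n\nUser: What about Spain?"
print(apply_stop_sequences(completion, ["\n\nUser:"]))
# → The capital of France is Paris.
```

Note that the stop sequence itself is excluded from the returned text, which matches the default behavior of most sampling APIs.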

Questions & Answers

What are "Stop Sequences" in the context of MCP sampling?

Stop sequences are specific character patterns that signal an AI model to terminate its text generation. They are passed as parameters during a sampling request to ensure output is contained within desired boundaries.

Why are stop sequences useful when an agent is generating structured data?

When generating formats like JSON, a stop sequence such as a closing brace (`}`) can prevent the model from appending unnecessary text or hallucinated data outside the intended structure.
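Because many sampling APIs exclude the stop sequence from the returned text, stopping at `}` often means re-appending the brace to obtain valid JSON. The helper below is a hypothetical illustration of that pattern for a flat (non-nested) object; with nested objects, the first `}` closes an inner object, so this simple cut would not work.

```python
import json

def stop_after(text: str, stop: str) -> str:
    """Keep everything up to and including the first occurrence of `stop`.

    Hypothetical helper: models the effect of a `}` stop sequence on a
    flat JSON object, with the stop string retained in the output.
    """
    idx = text.find(stop)
    return text if idx == -1 else text[: idx + len(stop)]

# Raw completion where the model kept chatting after the JSON object.
raw = '{"name": "Ada", "role": "engineer"}\nSure! Let me know if you need more.'
bounded = stop_after(raw, "}")
print(bounded)             # → {"name": "Ada", "role": "engineer"}
print(json.loads(bounded)["name"])  # the trailing chatter is gone, so this parses
```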

How do stop sequences contribute to the efficiency of AI applications?

By ending text generation as soon as the relevant information has been produced, stop sequences reduce token consumption, which lowers costs and reduces response latency for the end user.
