
What are MCP servers, really? The boot sequence nobody writes about.

Every article on the first page of Google defines an MCP server the same way: a service that exposes tools, resources, and prompts to an LLM over the Model Context Protocol. All correct, all abstract. None of them show you what happens when the process actually boots. Terminator's MCP agent runs roughly 150 lines of pre-handshake code before it accepts its first request: orphan cleanup, port retry loops, panic hooks, Windows UTF-8 fixes, optional parent-PID auto-destruct, transport-specific lifecycle setup. That's the operational shape of an MCP server, and it's open source, so let's walk through it.

Terminator · 11 min read · Open source, MIT
  • main.rs line 151: kill_previous_mcp_instances runs before the MCP handshake
  • Port 17373 has a 5-attempt retry loop with a 2000ms grace window
  • A custom panic hook redirects to stderr so stdout stays JSON-RPC clean
  • Three transport modes (stdio, SSE, streamable HTTP) in one binary

The short answer, then the interesting answer

Short answer: MCP servers are processes that speak JSON-RPC 2.0 over stdio or HTTP, implement list_tools and tools/call, and expose a named catalogue of capabilities an LLM can invoke. Every article says that.

Interesting answer: MCP servers are long-lived processes, which means almost everything that matters about running them is what happens around the protocol. They can be orphaned, they can fight each other over ports, they can panic mid-call, they can leak when the editor that spawned them crashes. Terminator's MCP agent handles every one of those cases explicitly, and you can read the code. Each of the sections below pulls directly from crates/terminator-mcp-agent/src/main.rs.
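To make the short answer concrete, here is a minimal sketch of the kind of frame an MCP client writes to a stdio server's stdin. The exact id and params are illustrative, not taken from a real session:

```rust
// A minimal JSON-RPC 2.0 request of the kind an MCP client sends over the
// stdio transport (sketch; the id and exact params are illustrative).
const TOOLS_LIST: &str =
    r#"{"jsonrpc":"2.0","id":1,"method":"tools/list","params":{}}"#;

fn main() {
    // On stdio, frames like this are the only bytes allowed on the pipe;
    // the server's reply on stdout must be equally clean JSON.
    assert!(TOOLS_LIST.contains(r#""method":"tools/list""#));
    println!("{TOOLS_LIST}");
}
```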

Terminator's MCP server boot, by the numbers

Each number below comes from the current open-source implementation. Line counts are wc -l on the actual files. Retry counts and timeouts are constants in main.rs.

  • 941 lines in main.rs
  • 17373: the port it verifies after cleanup
  • 5 port-bind retry attempts
  • 3 transport modes in one binary

What actually happens when the server boots

When an editor spawns Terminator's MCP server, the process does not immediately start accepting MCP traffic. It does these eight things first, in this order:

1. Args parsed, transport picked

clap reads --transport, --port, --host, --auth-token, --watch-pid, --enforce-single-instance. Before any of that matters, the agent runs boot hygiene.

2. Kill previous instances

Scan the system process list with sysinfo. For every terminator-mcp-agent or terminator-bridge-service that isn't us, decide by mode: enforce_single kills all copies, default kills only copies whose parent died.

3. Wait and verify port 17373

After any kill, sleep 2000ms for the OS to release ports. Then try TcpListener::bind up to 5 times, 1-second gap between attempts. Log each retry.

4. Install the panic hook

Override the default panic behavior so that every panic payload writes to stderr only. stdout is off-limits for anything that isn't a valid JSON-RPC frame.

5. Windows UTF-8 fix, non-blocking

Spawn cmd /c chcp 65001 with CREATE_NO_WINDOW, without waiting on the result. On some Azure VMs chcp takes 6+ seconds; waiting would break health checks.

6. Init telemetry, Sentry, PostHog, logging

Now that the process is safe, set up observability: log_capture, sentry guard, OpenTelemetry, execution_logger, posthog startup event. The binary also logs its own build date, size, and git hash.

7. Optionally spawn the PID watcher

If --watch-pid was passed and we're on Windows, tokio::spawn a loop that polls is_process_alive every second and calls exit(0) when the parent dies.

8. Finally: serve the transport

Now match on TransportMode. Stdio pipes the DesktopWrapper into rmcp::serve(stdio()). SSE starts an SseServer on the bound port. Http sets up a StreamableHttpService with a shared DesktopWrapper behind an Arc<RwLock>. Only now does the MCP handshake happen.

What the boot actually prints

Launch the agent from a terminal and watch a leftover instance die. Notice how the protocol handshake (the MCP part) doesn't start until the cleanup is done.


The uncopyable part: kill_previous_mcp_instances

This is the function that makes Terminator's agent safe to launch repeatedly from an editor that crashes. Every MCP server needs something like it, but almost none ship with it. Open crates/terminator-mcp-agent/src/main.rs at line 151 and you'll find exactly this shape.


Two modes fall out of the enforce_single boolean. In default mode the agent is polite: it only kills other copies whose parent PID is already dead. Another active editor keeps its own agent. In --enforce-single-instance mode (production), it kills every other copy on the machine, no exceptions. The port retry loop at the bottom is what makes the operation safe: the OS needs ~2 seconds to actually release port 17373 after SIGKILL, and the retry verifies we got it.
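The decision logic is small enough to sketch with plain data. This is not the crate's code, just the rule the prose describes; `ProcInfo` is a hypothetical stand-in for what a process scan with sysinfo reports:

```rust
// Sketch of the kill decision described above. `ProcInfo` is a stand-in for
// what sysinfo reports; only the rule matters here, not the scanning.
struct ProcInfo {
    name: &'static str,
    pid: u32,
    parent_alive: bool,
}

const TARGETS: [&str; 2] = ["terminator-mcp-agent", "terminator-bridge-service"];

/// Returns true if `p` should be killed by a running agent with pid `self_pid`.
fn should_kill(p: &ProcInfo, self_pid: u32, enforce_single: bool) -> bool {
    if p.pid == self_pid || !TARGETS.contains(&p.name) {
        return false; // never kill ourselves or unrelated processes
    }
    if enforce_single {
        true // production mode: no other copies, period
    } else {
        !p.parent_alive // default mode: only orphans whose parent died
    }
}

fn main() {
    let orphan = ProcInfo { name: "terminator-mcp-agent", pid: 42, parent_alive: false };
    let other_editor = ProcInfo { name: "terminator-mcp-agent", pid: 43, parent_alive: true };
    // Default mode is polite: another active editor keeps its agent.
    assert!(should_kill(&orphan, 1, false));
    assert!(!should_kill(&other_editor, 1, false));
    // --enforce-single-instance kills every other copy.
    assert!(should_kill(&other_editor, 1, true));
}
```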

Why stdout is sacred, and how the panic hook protects it

On the stdio transport, the MCP protocol frames JSON-RPC 2.0 messages and writes them to stdout. The client parses stdout as JSON. One stray print, one panic stack trace, one library that thinks it can log to stdout, and the connection dies. Terminator installs a panic hook that fully redirects every panic payload to stderr. On Windows, it also fixes the console code page to UTF-8 so unicode in the accessibility tree doesn't arrive mangled.


The chcp 65001 call is deliberately non-blocking: the comment in the source explains that on some Azure VMs the call can take 6 seconds, which would break the health check window. Fire-and-forget is the right trade-off.
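A std-only sketch of a stderr-only hook shows the pattern; the real hook at main.rs line 270 will differ in detail:

```rust
use std::panic;

// Sketch of a stderr-only panic hook. On the stdio transport stdout carries
// JSON-RPC frames, so every panic payload must go to stderr instead.
fn install_panic_hook() {
    panic::set_hook(Box::new(|info| {
        // Location and payload are best-effort; both land on stderr only.
        let location = info
            .location()
            .map(|l| format!("{}:{}", l.file(), l.line()))
            .unwrap_or_else(|| "unknown".into());
        eprintln!("panic at {location}: {info}");
    }));
}

fn main() {
    install_panic_hook();
    // Simulate a tool handler panicking without killing the process:
    let result = panic::catch_unwind(|| panic!("handler bug"));
    assert!(result.is_err());
    println!("stdout stayed clean"); // only deliberate protocol-safe output
}
```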

Everything the agent does before accepting its first request

This is what the generic "MCP server" articles skip. When the LLM sends its first tools/list, all of this has already happened.

Boot-time checklist

  • kill orphaned copies of itself (default: only if their parent is dead)
  • enforce single-instance mode when launched with --enforce-single-instance
  • sleep 2000ms and retry binding port 17373 up to 5 times
  • install a panic hook that writes to stderr so stdout stays JSON-RPC clean
  • on Windows, non-blocking chcp 65001 to force UTF-8 console output
  • create a Job Object so bun and node workers die with the agent
  • initialize Sentry, OpenTelemetry, execution_logger, PostHog startup event
  • stamp GIT_HASH, GIT_BRANCH, BUILD_TIMESTAMP into logs via build.rs rustc-env
  • if --watch-pid set, spawn a 1-second tokio poller that exits when parent dies
  • only after all of that, start the chosen transport and accept the first request

Three transports, one server binary

Most reference MCP servers only support stdio. Terminator ships three transport modes and the lifecycle differs meaningfully between them. The key detail: the DesktopWrapper state (recorder, in-progress sequences, focus history) has to persist across requests in HTTP mode, which is why Arc<RwLock<Option<DesktopWrapper>>> shows up only there.
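The HTTP-mode state sharing can be sketched with std types alone. `DesktopWrapper` here is a stand-in struct, not the crate's real type; the point is that one `Arc<RwLock<..>>` outlives every request handler:

```rust
use std::sync::{Arc, RwLock};
use std::thread;

// Stand-in for the real DesktopWrapper: some state that must survive
// between tool calls (recorder, focus history, in-progress sequences).
#[derive(Default)]
struct DesktopWrapper {
    focus_history: Vec<String>,
}

fn main() {
    // One shared wrapper, lazily initialized, shared by all HTTP requests.
    let shared: Arc<RwLock<Option<DesktopWrapper>>> = Arc::new(RwLock::new(None));

    // Two "requests" on different threads, mutating the same state.
    let handles: Vec<_> = ["app-a", "app-b"]
        .into_iter()
        .map(|app| {
            let state = Arc::clone(&shared);
            thread::spawn(move || {
                let mut guard = state.write().unwrap();
                guard
                    .get_or_insert_with(DesktopWrapper::default)
                    .focus_history
                    .push(app.to_string());
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }

    // State persisted across requests: both focus events are recorded.
    let guard = shared.read().unwrap();
    assert_eq!(guard.as_ref().unwrap().focus_history.len(), 2);
}
```

A per-request wrapper would lose the recorder and focus history between calls, which is exactly what the shared lock avoids.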


Three inputs, one dispatcher, four OS surfaces

Whatever transport you pick, the requests all land in the same dispatch_tool function, and the tool handlers fan back out to the OS. That's why the server is one binary and not three.

From transport to OS surface

[Diagram: the stdio, SSE, and streamable HTTP transports all funnel into dispatch_tool, which fans out to four OS surfaces: Windows UIA, macOS AX, browser JS, and shell / bun / node workers.]

Six things the definition pages never mention

An MCP server is a long-lived OS process

Not a lambda. Not a request handler. A process that stays alive between tool calls because the state inside it (recorder, focus, cancellation tokens, Windows UIA handles) outlives any single request. That changes everything about its boot sequence.

Boot hygiene is mandatory

Terminator's MCP agent kills its own orphans before it opens a socket. Two modes: kill every copy (enforce_single_instance) or kill only copies whose parent is dead (default).

Port 17373 has a retry loop

After cleanup it sleeps 2000ms, then tries to bind 127.0.0.1:17373 up to 5 times with 1-second gaps. If it can't bind after 5 tries it logs a warning but continues. Belt and braces.
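The retry loop is plain std networking. A sketch of the pattern, with the counts and gaps matching the prose (the real code logs through its tracing setup rather than eprintln):

```rust
use std::net::TcpListener;
use std::thread::sleep;
use std::time::Duration;

// Sketch of the post-cleanup port verification: try to bind, and if the OS
// hasn't released the port yet, wait a second and try again, up to 5 times.
fn verify_port(addr: &str, attempts: u32) -> Option<TcpListener> {
    for attempt in 1..=attempts {
        match TcpListener::bind(addr) {
            Ok(listener) => return Some(listener),
            Err(e) => {
                eprintln!("bind attempt {attempt}/{attempts} failed: {e}");
                if attempt < attempts {
                    sleep(Duration::from_secs(1)); // 1-second gap between tries
                }
            }
        }
    }
    None // the real agent logs a warning and continues anyway
}

fn main() {
    // Port 0 asks the OS for any free port, so this succeeds first try here;
    // the agent itself binds 127.0.0.1:17373.
    let listener = verify_port("127.0.0.1:0", 5).expect("no free port");
    println!("bound {}", listener.local_addr().unwrap());
}
```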

stdout is sacred on stdio transport

stdio MCP servers speak JSON-RPC 2.0 over stdout. A stray println or panic message and the client sees malformed JSON. Terminator's panic hook explicitly redirects every panic payload to stderr so stdout stays protocol-clean.

Child processes get killed with the server

When the MCP server receives Ctrl+C, it calls child_process::kill_all() which reaches into a Windows Job Object set up at boot and terminates every bun/node worker it spawned. No orphaned transpilers.

--watch-pid means auto-destruct

Pass --watch-pid <editor_pid> and the agent spawns a tokio task that checks is_process_alive(pid) every second. When the editor dies, the agent calls std::process::exit(0). No leaked automation processes.
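The watcher pattern can be sketched with a plain thread and an injected aliveness check. The real agent uses a tokio task and a Win32 PID probe; the closure and callback here are illustrative stand-ins:

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;
use std::thread;
use std::time::Duration;

// Sketch of the --watch-pid loop: poll an aliveness check, and when it
// fails, run the shutdown action (std::process::exit(0) in the real agent).
fn watch_parent(
    alive: impl Fn() -> bool + Send + 'static,
    on_dead: impl FnOnce() + Send + 'static,
) {
    thread::spawn(move || {
        while alive() {
            thread::sleep(Duration::from_millis(10)); // 1000ms in the real agent
        }
        on_dead();
    });
}

fn main() {
    let parent_alive = Arc::new(AtomicBool::new(true));
    let exited = Arc::new(AtomicBool::new(false));

    let pa = Arc::clone(&parent_alive);
    let ex = Arc::clone(&exited);
    watch_parent(
        move || pa.load(Ordering::SeqCst),
        move || ex.store(true, Ordering::SeqCst),
    );

    // Simulate the editor dying; the watcher notices within one poll.
    parent_alive.store(false, Ordering::SeqCst);
    thread::sleep(Duration::from_millis(100));
    assert!(exited.load(Ordering::SeqCst));
    println!("watcher fired on parent death");
}
```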

Generic MCP tutorial server vs Terminator's production agent

Almost every "build your first MCP server" tutorial skips everything on the right-hand column. You can ship a working MCP server in an afternoon without any of it. You will regret that the first time your editor crashes mid-call.

  • What happens before the first request? Tutorial server: the docs don't say; reference servers usually just start a listener. Terminator: kill_previous_mcp_instances() runs, with two modes: kill orphans only (default) or kill every other copy (--enforce-single-instance).
  • Port contention on restart. Tutorial server: silent bind failure; the client sees an unclear disconnect. Terminator: 2000ms grace, then 5 retry attempts binding 127.0.0.1:17373, with a log line on each attempt.
  • What happens when a tool handler panics? Tutorial server: a panic or stray print can leak into stdout, the JSON-RPC stream corrupts, the client dies. Terminator: a custom panic hook writes to stderr only; stdout is reserved for protocol bytes, no matter what.
  • Transport modes in one binary. Tutorial server: most reference MCP servers support stdio, period. Terminator: stdio, SSE, and streamable HTTP, selected by a --transport flag with per-mode lifecycles.
  • What if the parent editor dies? Tutorial server: the MCP server leaks and stays in the process list forever. Terminator: --watch-pid <parent> spawns a tokio task that polls is_process_alive() every 1s and calls std::process::exit(0) when the parent goes away (Windows).
  • Unicode on Windows consoles. Tutorial server: the default code page is IBM437, so accessibility-tree output corrupts. Terminator: a non-blocking chcp 65001 runs at boot; UTF-8 everywhere without blocking startup.
  • Binary traceability. Tutorial server: a hash of the binary, maybe. Terminator: build.rs stamps GIT_HASH, GIT_BRANCH, and BUILD_TIMESTAMP into the binary via rustc-env, logged on every boot.
941 lines of main.rs (terminator-mcp-agent/src/main.rs) handle boot hygiene, panic isolation, and transport lifecycles before the MCP handshake ever starts.

The actual calls that run before your first tool call

Every chip below is a real function, hook, or syscall that executes in the first second of the agent's life.

  • kill_previous_mcp_instances()
  • TcpListener::bind 127.0.0.1:17373
  • std::panic::set_hook → eprintln only
  • chcp 65001 (non-blocking)
  • init_job_object()
  • init_telemetry() + init_execution_logger()
  • --watch-pid tokio poller (1s)
  • StreamableHttpService + Arc<RwLock>
  • SseServer::serve
  • stdio() transport + service.waiting()
  • child_process::kill_all()
  • build.rs → GIT_HASH, GIT_BRANCH, BUILD_TIMESTAMP

Run one yourself

Wire Terminator's MCP server into Claude Code, then watch your startup logs to see every boot step we walked through fire in real time.


Frequently asked questions

What are MCP servers in one sentence?

MCP servers are long-lived OS processes that speak the Model Context Protocol (JSON-RPC 2.0 over stdio, SSE, or streamable HTTP) and expose a catalog of named tools an LLM can call. In practice, they are not abstract 'services' but real processes on your machine with startup sequences, memory, and lifecycle concerns. Terminator's MCP agent is a concrete example: a Rust binary that runs 150+ lines of boot hygiene (kill orphans, verify port 17373, install a panic hook, UTF-8 fix, start telemetry) before it accepts its first list_tools request.

Why does an MCP server need to kill previous instances of itself?

Because MCP clients spawn MCP servers as child processes. When an editor like Claude Code or Cursor crashes, restarts, or loses its connection, the child MCP servers can be orphaned: still running, still holding ports, still hanging onto automation handles. Terminator's agent is especially prone to this because it holds OS-level UIAutomation resources. So main.rs line 151's kill_previous_mcp_instances function scans for other copies of terminator-mcp-agent and terminator-bridge-service, skips itself, and kills the rest. Default mode only kills processes whose parent is dead (so other active editors keep their agents). Pass --enforce-single-instance and it kills every other copy regardless.

What's special about the panic hook in Terminator's MCP server?

Stdio MCP servers communicate by writing JSON-RPC 2.0 frames to stdout. Anything non-JSON on stdout breaks the protocol: the client sees malformed data and disconnects. Rust's default panic handler already targets stderr, but nothing stops a dependency, a custom hook, or a stray print from writing elsewhere, so Terminator installs its own panic hook at main.rs line 270 that writes only to stderr, never to stdout. Every panic payload and every location line goes to stderr. This is one of those details that never appears in the MCP spec but is essential if you want your server to survive a single handler bug without killing the whole session.

Why does Terminator's MCP agent run chcp 65001 on Windows, and why non-blocking?

Windows consoles default to code page IBM437, which mangles UTF-8 output. The accessibility tree from Windows UIA is full of unicode (app names, element properties), so Terminator forces the console to UTF-8 via cmd /c chcp 65001. It uses std::process::Command::spawn without waiting for completion because, per the code comment, on some Azure VMs chcp can take 6+ seconds and a blocking call would miss the health check window. So the fix is fire-and-forget. By the time any real work starts, the console is already UTF-8.
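The fire-and-forget pattern is just spawn without wait. A sketch, Windows-gated so it compiles everywhere (the real call also sets CREATE_NO_WINDOW via creation_flags, omitted here for brevity):

```rust
use std::time::Instant;

// Sketch of the fire-and-forget UTF-8 fix: spawn() returns as soon as the
// child is launched, and we never call wait(), so a slow `chcp` (6+ seconds
// on some Azure VMs) cannot delay startup. A no-op off Windows.
fn force_utf8_console() {
    #[cfg(windows)]
    {
        use std::process::{Command, Stdio};
        let _ = Command::new("cmd")
            .args(["/C", "chcp", "65001"])
            .stdout(Stdio::null())
            .stderr(Stdio::null())
            .spawn(); // dropped Child handle: nobody waits on it
    }
}

fn main() {
    let start = Instant::now();
    force_utf8_console();
    // Returns immediately whether or not the child is still running.
    assert!(start.elapsed().as_secs() < 2);
    println!("startup not blocked");
}
```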

What does the --watch-pid flag do?

It turns the MCP server into an auto-destructing process. When you pass --watch-pid <parent_pid>, main.rs spawns a tokio task that calls is_process_alive(pid) every 1000ms (Windows-only, via PROCESS_QUERY_LIMITED_INFORMATION with a fallback to PROCESS_QUERY_INFORMATION). The moment that parent PID stops responding, the agent calls std::process::exit(0). This exists because editors can crash without cleaning up their children; --watch-pid ensures the MCP server doesn't linger after the editor it was spawned for is gone.

How many transport modes does a single MCP server support?

Depends on the server. The reference SDK servers are usually stdio only. Terminator's MCP agent supports three, selected by a --transport flag: Stdio (default, used by local editors), Sse (Server-Sent Events, for legacy web integrations), and Http (streamable HTTP, for remote clients and load-balanced deployments). The three have different lifecycles: stdio blocks on service.waiting(), SSE blocks on Ctrl+C then calls child_process::kill_all(), and HTTP uses a shared Arc<RwLock<Option<DesktopWrapper>>> so recorder state persists across requests. Same dispatch_tool, same tools, three wildly different process shapes.

Is there a port MCP servers are expected to use?

There is no reserved port in the MCP spec. Servers pick whatever they want. Terminator's MCP agent defaults to 127.0.0.1:3000 for SSE and HTTP transports and uses 17373 for internal coordination (that's the port it verifies after cleanup). You can override host and port via --host and --port. If you're running multiple MCP servers behind a reverse proxy, you'll end up assigning them ports yourself anyway.

What's the difference between an MCP server and a REST API from the operator's perspective?

A REST API is usually stateless and designed to run behind a load balancer with N replicas. An MCP server on stdio is a single process owned by a single editor session, and its internal state (focus restoration, active recorders, in-progress workflows, cancellation tokens for long-running clicks) has to survive between tool calls. That's why an MCP server cares about boot hygiene in a way a REST API usually doesn't: one process per session, holding real OS resources, means you really don't want two copies fighting. Terminator's enforce_single_instance flag exists specifically for production deployments where exactly one agent per machine is required.

What else does build.rs do besides extracting tool names?

Three things on top of the tool extraction. It runs git rev-parse HEAD to capture the commit hash into GIT_HASH, git rev-parse --abbrev-ref HEAD for the branch into GIT_BRANCH, and chrono::Utc::now().to_rfc3339() for BUILD_TIMESTAMP. All three get injected via cargo:rustc-env so the final binary carries its own provenance. Every boot logs those three values along with the binary path, modification time, and file size. When a user reports a bug, you can ask 'what was the Git commit line in your startup logs?' and narrow the source revision instantly.
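The mechanism is just printed cargo directives: each `cargo:rustc-env=KEY=VALUE` line a build script emits becomes an `env!("KEY")` value in the compiled crate. A hedged sketch of that shape, with a fallback for builds outside a git checkout (helper names are illustrative, and the chrono timestamp is omitted to stay std-only):

```rust
use std::process::Command;

// Each println! of a `cargo:rustc-env=KEY=VALUE` line in a build.rs makes
// env!("KEY") available to the crate being built.
fn rustc_env(key: &str, value: &str) -> String {
    format!("cargo:rustc-env={key}={value}")
}

// Run a git command, falling back to "unknown" when git is missing or the
// build happens outside a repository.
fn git(args: &[&str]) -> String {
    Command::new("git")
        .args(args)
        .output()
        .ok()
        .filter(|o| o.status.success())
        .map(|o| String::from_utf8_lossy(&o.stdout).trim().to_string())
        .unwrap_or_else(|| "unknown".to_string())
}

fn main() {
    // In a real build.rs these lines are consumed by cargo, not a human.
    println!("{}", rustc_env("GIT_HASH", &git(&["rev-parse", "HEAD"])));
    println!("{}", rustc_env("GIT_BRANCH", &git(&["rev-parse", "--abbrev-ref", "HEAD"])));
    assert_eq!(rustc_env("GIT_HASH", "abc123"), "cargo:rustc-env=GIT_HASH=abc123");
}
```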

Why would I care about any of this if I just want to use an MCP server?

If you only ever use one AI editor on one machine and never restart it, you won't care. The moment you run Claude Code and Cursor at the same time, or restart your editor in the middle of an automation, or try to run Terminator in a CI pipeline or on a shared Windows VM, you'll hit every one of these concerns. The difference between an MCP server that 'works on my machine' and one that holds up in production is exactly this boot-hygiene layer: orphan cleanup, port retries, panic isolation, lifecycle flags. It isn't glamorous but it's the majority of the code that separates a demo from a product.

Want a desktop agent that survives the editor crashing?

Terminator is the MCP server that made us care about every line on this page. It gives your AI coding assistant the ability to control every app on your desktop, not just write code. Open source, MIT licensed, a single npx command to install.

Install Terminator