Treating MCP like an API creates security blind spots

In this Help Net Security interview, Michael Yaroshefsky, CEO at MCP Manager, discusses how Model Context Protocol’s (MCP) trust model creates security gaps that many teams overlook and why MCP must not be treated like a standard API. He explains how misunderstandings about MCP’s runtime behavior, governance, and identity requirements can create exposure. With MCP usage expanding across organizations, well-defined controls and a correct understanding of the protocol become necessary.

What aspects of MCP’s trust model are most misunderstood right now, and can you share a real example where implementers made incorrect assumptions?

Many people hold an erroneous (and dangerous) assumption that communication between MCP servers and clients is essentially the same as API-based transactions. However, MCP and APIs are incredibly different, especially when it comes to your security posture. It’s dangerous to think otherwise. 

APIs generally don’t cause arbitrary, untrusted code to run in sensitive environments. MCP does though, which means you need a completely different security model. LLMs treat text as instructions, they follow whatever you feed them. MCP servers inject text into that execution text. For example, “what tools exist? What are the descriptions for these tools?” 

That text can influence LLM behavior. Further, unlike APIs where you can use a specific API version, you can’t review and pin trusted versions within an MCP environment. Upon each connection, your MCP client will receive the latest published metadata provided by the MCP server. In other words, MCP provides runtime-provided text that you have no way to inspect. While an MCP server may seem benign upon initial connection, there’s the latent possibility for a trusted MCP server to inject malicious context in the future. That would be called a rug pull. These risks are unique to MCP, and they require specialized solutions that ordinary API security frameworks cannot provide. 

Security professionals might also erroneously assume that they can trust all clients registering with their MCP servers, this is why the MCP spec is updating. MCP builders will have to update their code to receive the additional client identification metadata, as dynamic client registration and OAuth alone are not always enough. 

Another trust model that is misunderstood is when MCP users confuse vendor reputation with architectural trustworthiness. Ever since the MCP spec began supporting streamable HTTP transport, reputable SaaS vendors could easily publish MCP servers that users can then run by any local- or cloud-based MCP client. However, teams shouldn’t assume that first-party servers from reputable companies are immune to security vulnerabilities. 

For example, researchers have also uncovered prompt injection vulnerabilities with GitHub’s MCP server and Atlassian’s servers in May and June of this year. There was also a report about Microsoft Copilot still being at risk of prompt injection as well. So, you can’t assume that these servers are all safe.

Lastly, and most importantly, MCP is a protocol (not a product). And protocols don’t offer a built-in “trust guarantee.” Ultimately, the protocol only describes how servers and clients communicate through a unified language. MCP does not solve authentication and identity management, enterprise operations (e.g., audit trails, observability, compliance) and infrastructure (e.g., hosting, error handling, rate limiting). 

Organizations are beginning to deploy large numbers of MCP servers internally. What governance blind spots appear when MCP becomes a widespread integration fabric, and can you describe a case where poor governance created operational or security issues?

Organizations often lack centralized MCP observability and controls, leaving more room for vulnerabilities to emerge outside the purview of security team members. Many organizations don’t even have an internal MCP registry, which is table stakes for setting up processes to approve and govern MCP servers. 

When companies don’t have processes to approve and monitor MCP servers, shadow MCP and server sprawl both happen. With shadow MCP, employees introduce servers that IT knows nothing about (and wouldn’t approve). IT/security teams also can’t monitor that server’s security in the long run (e.g., if a server starts out fine but becomes vulnerable later on, they’d never know someone internally was using that, even if they became aware of this server having a vulnerability). Server sprawl happens when duplicative, unnecessary, or unused MCP servers create an ever-expanding attack surface.

MCP gateways allow companies to have an internal registry, which mitigates both shadow MCP and server sprawl. Internal registries make it clear to employees how to get approvals, allowing IT teams to provision tools and provision servers to teams.

We’ve onboarded a large number of teams that want to create MCP gateways after having poor governance wreck havoc in their organization. I’ve seen security leaders who felt burnt after teams deployed MCP servers locally without sandboxing, or using insecure token storage, access control, and scoping practices. Local MCP servers are especially dangerous, because they may have access to sensitive on-device credentials or files, there could be bearer or API tokens in an MCP.json file (which is concerning because they’re production-access tokens sitting on a machine). So, any vulnerability that can read files could suck those up and send them somewhere nefarious. 

These are the kinds of issues that security teams either encounter or foresee that causes them to seek an MCP governance solution. Because ultimately, poor governance gives rise to inconsistency of deployment methods, auth processes, and identity management, which can introduce further, wide-ranging risks, and make your MCP ecosystem even more difficult to provision, observe, and fortify.

As more models gain the ability to call MCP tools, the risk of unauthorized agents or spoofed contexts grows. What steps should organizations take to verify that both the MCP server and the invoking model are authentic, and what protections are still missing from the specification?

Firstly, organizations should create a review and approval process for adding all MCP clients and servers. This will help protect them from supply chain risks, and it can reduce the likelihood of team members inadvertently introducing malicious clients and servers into the organization. 

Security conscious organizations should also insist that all MCP servers use OAuth 2.1 with Proof Key for Code Exchange (PKCE) and harden their approach by ensuring that they use regularly rotated, finely-scoped, and securely stored tokens. OAuth is the recommended (but not required) auth flow in the MCP spec because other, more basic auth flows aren’t always time-scoped, which can give access for longer than any IT professional would want. It can be risky to use bearer tokens (instead of OAuth) because they’re often stored in plain text on a machine, which can be used for nefarious purposes if a local MCP server is compromised. 

Risks can also emerge from the names of tools within MCP servers. If tool names are too similar, the AI model can become confused and select the wrong tool. Malicious actors can exploit this in an attack vector known as Tool Impersonation or Tool Mimicry. The attacker simply adds a tool within their malicious server that tricks the AI into using it instead of a similarly named legitimate tool in another server you use. This can lead to data exfiltration, credential theft, data corruption, and other costly consequences. 

Implementing and mandating the use of an MCP gateway in your organization provides a solution to most of these risks, as it enables you to:

  • Create and manage your organization’s server and client registry
  • Standardize and ensure the robustness of all MCP auth flows
  • Ensure proper token rotation
  • Create allowlists and blocklists for MCP servers, tools, and clients
  • Add namespaces to tools to assist the AI model in selecting the correct tool
Where do you think practitioners underestimate the operational effort required to run MCP securely? Is it observability, key management, server hardening, or something else, and what examples have you seen where teams were caught off guard?

Teams underestimate how much work it takes to implement strong access controls and permission boundaries when using MCP. In addition, the way that most enterprise companies handle identity management and authorization doesn’t always fit into what MCP requires for safe, secure, and scalable deployment. 

For example, the MCP specification relies upon processes like dynamic client registration (DCR) to register the MCP client with a server. Not all engineers are familiar with DCR because not all auth flows require it. But more importantly, enterprises don’t want anonymized auth flows or shared “service accounts” to access systems, data, applications, and other resources.

Enterprises we’ve worked with want MCP to plug into their existing identity management infrastructure. They also want real identities attached to both human users and AI agents, along with policies and control. 

However, implementing even the most basic level of identity and permissions management for MCP servers is a very heavy lift. In addition, there are a lot of flashy (and very dangerous) attack vectors with cool names that get more attention. Identity management, on the other hand, is complex, tricky, and continuously changing, as the capabilities of AI models, use cases for MCP, and the MCP specification itself all evolve. This is why identity management often gets overshadowed and overlooked.

I’ll sound like a broken record here but that’s where MCP gateways come in. When assessing an MCP gateway, ensure that it offers proper identity management. In addition, you’ll want a gateway that allows teams to provision gateways in such a way that requires each user accessing the gateway to use their own personal credentials for the MCP servers. This prevents the overuse or abuse of shared credentials or “bot” / “service” accounts that may provide too much access and not enough auditability.

What do you see as the most significant governance challenge as MCP adoption expands across industries, and which emerging best practice do you expect to become standard within the next year?

Regulatory compliance will become an increasingly important governance challenge as MCP adoption expands across industries and jurisdictions. Using MCP servers creates real risks around data security, protection, and privacy. 

If organizations don’t have strict, granular access controls and guardrails against sensitive data use and exfiltration, then they will face internal pressure to implement them, along with external pressure. Organizations may need to create or implement measures to comply with near-future legislation that specifically addresses the use of personal data, financial information, health records, and other highly regulated data by AI models. That will include safeguards to prevent data from being accessed by AI or ways to provide auditable logs of AI’s access and actions based on this information.

I think any security professional who has come into contact with MCP servers and begun to consider the implications for their organization will have concluded that an MCP gateway is a non-negotiable, essential tool they need to deploy, secure, manage, and monitor MCP servers. The best parallel may be how nearly all organizations have robust protections around corporate email, including strong multi-factor authentication requirements, anti-spam, anti-phishing, and audit logs. While you could use email without a platform offering these features, and for many years teams did, it’s an unnecessary risk, and nearly all organizations now use sophisticated email software with these capabilities. MCP governance platforms will become similarly ubiquitous as the ecosystem matures, so it’s just a question of when companies will adopt MCP governance capabilities.

In terms of other best practices, larger organizations will likely adopt policy-based access controls early on in their MCP adoption. Taking a policy-based approach is a more scalable, secure, and granular way to control access to resources and permissions that fits better with the unpredictable ways that agentic AI uses MCP servers and resources.

Lastly, many organizations are already deploying MCP servers as internal services, hosted in their own cloud. This shift towards managed MCP deployments will increase, and you’ll see fewer purely local or remote MCP deployments, at least within enterprises.

Continue Reading