109 comments

  • antirez · 10 hours ago
    Don't focus on what *you* prefer: it does not matter. Focus on what tool the LLM requires to do its work in the best way. MCP adds friction; imagine doing the work yourself using the average MCP server. However, skills alone are not sufficient if you want, for instance, to give LLMs the ability to instrument a complicated system. Work in two steps:

    1. Ask the LLM to build a tool, under your guidance and specification, to do a specific task. For instance, if you are working with embedded systems, build some monitoring interface that allows you, with a simple CLI, to debug the app as it is working, set breakpoints, spawn the emulator, and restart the program from scratch in a second by re-uploading the live image and resetting the microcontroller. This is just an example; I bet you get what I mean.

    2. Then write a skill file where the usage of the tool from step 1 is explained.

    Of course, for simple tasks you don't need the first step at all. For instance, it does not make sense to have an MCP to use git. The agent knows how to use git: git is comfortable for you to use manually, and it is likewise good for the LLM. Similarly, if you always estimate the price of running something with AWS, instead of an MCP with service discovery and pricing that needs to be queried in JSON (would you *ever* use something like that?), write a simple .md file (using the LLM itself) with the prices of the things you use most commonly. This is what you would love to have, and this is what the LLM wants. For complicated problems, instead, build the dream tool you would build for yourself, then document it in a .md file.
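    A skill file for step 2 might look something like this (tool name and commands invented for illustration; the frontmatter follows the Claude Code skills convention):

```markdown
---
name: mcu-debug
description: Instrument and debug firmware on the attached microcontroller via the mcu-debug CLI
---

# mcu-debug skill

Use the `mcu-debug` CLI (built in step 1) to work on the live target:

- `mcu-debug flash image.bin` re-uploads the live image and resets the MCU
- `mcu-debug break main.c:120` sets a breakpoint in the running firmware
- `mcu-debug emu` spawns the emulator against the current image
- Run `mcu-debug --help` for the full command list before guessing flags
```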
    • prohobo · 8 hours ago
      I feel like the MCP conversation conflates too many things, and everyone has strong assumptions that aren't always correct. The fundamental issue is one-off vs. persistent access across sessions:

      - If you need to interact with a local app in a one-off session, use the CLI.

      - If you need to interact with an online service in a one-off session, use its API.

      - If you need to interact with a local app in a persistent manner, and that app provides an MCP server, use it.

      - If you need to interact with an online service in a persistent manner, and that service provides an MCP server, use it.

      Whether the MCP server is implemented well is a whole other question. A properly configured MCP explains to the agent how to use it without too much context bloat. Not using a proper MCP for persistent access, and instead trying to describe the interaction yourself with skill files, just doesn't make any sense. The MCP owner should be optimizing the prompts to help the agent use it effectively.

      MCP is the absolute best and most effective way to integrate external tools into your agent sessions. What are the arguments against that statement?
      • CuriouslyC · 5 hours ago
        MCP is less discoverable than a CLI. You can have detailed, progressive disclosure for a CLI via --help and subcommands.

        MCPs need to be wrapped to be composed.

        MCPs need to implement stateful behavior; shell + CLI gives it to you for free.

        MCP isn't great; its main value is that it has uptake, it's structured, and it's "for agents." You can wrap/introspect MCP to do lots of neat things.
        • Eldodi · 2 hours ago
          "MCP is less discoverable than a CLI" -> not true anymore with tool search. The progressive-discovery and context-bloat issues of MCP were MCP client implementation issues, not MCP issues.

          "MCPs need to be wrapped to be composed." -> Also not true anymore. Claude Code or Cowork can chain MCP calls, and any agent using bash can also do it with mcpc.

          "MCPs need to implement stateful behavior, shell + cli gives it to you for free." -> Having a shell + CLI running seems like a lot more work than adding a sessionId to an MCP server. And OAuth is a lot simpler to implement with MCP than with a CLI.

          MCP's biggest value today is that it's very easy to use for non-technical users. And a lot of developers seem to forget that most people are not tech-savvy CLI power users.
        • prohobo · 5 hours ago
          "MCP is less discoverable than a CLI" - that doesn't make any sense in terms of agent context. Once an MCP is connected, the agent should have full understanding of the tools and their use before even attempting to use them. In order for the agent to even know about a CLI, you need to guide the agent towards it - manually, every single session, or through a "skill" injection - and it needs to run the CLI commands to check them.

          "MCPs need to implement stateful behavior" - also doesn't make any sense. Why would an MCP need to implement stateful behavior? It is essentially just an API for agents to use.
          • coldtea · 1 hour ago
            > "MCP is less discoverable than a CLI" - that doesn't make any sense in terms of agent context. Once an MCP is connected the agent should have full understanding of the tools and their use, before even attempting to use them. In order for the agent to even know about a CLI you need to guide the agent towards it - manually, every single session, or through a "skill" injection - and it needs to run the CLI commands to check them.

            Knowledge about any MCP is not something special inherent in the LLM; it's just an agent-side thing. When it comes to the LLM, it's just some text injected into its prompting, just like a CLI would be.
          • CuriouslyC · 5 hours ago
            If you have an API with thousands of endpoints, that MCP description is going to totally rot your context and make your model dumb, and there's no mechanism for progressive disclosure of parts of the tool's abilities, like there is for CLIs, where you can do something like:

              tool --help
              tool subcommand1 --help
              tool subcommand2 --help
              man tool | grep "thing I care about"

            As for stateful behavior, say you have the Google Docs or email MCP. You want to search org-wide for docs or emails that match some filter, make it a data set, then do analysis. To do this with MCP, the model has to write the files manually after reading however many KB of input from the MCP. With a CLI it's just "tool >> starting_data_set.csv".
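            The progressive disclosure described here falls out of any ordinary CLI framework; a minimal Python sketch with an invented `tool` command:

```python
import argparse

# Top-level parser: `tool --help` shows only the subcommand names,
# not every flag of every subcommand.
parser = argparse.ArgumentParser(prog="tool", description="Example service CLI")
sub = parser.add_subparsers(dest="command")

# Each subcommand documents itself: `tool search --help` reveals its
# flags only when the agent actually asks for them.
search = sub.add_parser("search", help="search org-wide for matching items")
search.add_argument("--filter", help="filter expression")
search.add_argument("--format", choices=["csv", "json"], default="csv")

export = sub.add_parser("export", help="dump results to a file")
export.add_argument("path", help="output file, e.g. starting_data_set.csv")

# The agent discovers the surface area incrementally instead of having
# the whole schema injected into its context up front.
print(parser.format_help())
```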
          • prohobo · 5 hours ago
            This is a design problem, and not something necessarily solved by CLI --help commands.

            You can implement progressive disclosure in MCP as well by implementing those same help commands as tools. The MCP should not provide thousands of tools, but the minimum set of tools to *help the AI use the service*. If your service is small, you can probably distill the entire API into MCP tools. If you're AWS, then you provide tools that *document* the API progressively.

            Technically, you could have an AWS MCP provide *one tool* that guides the AI on how to use specific AWS services through search/keywords and some kind of cursor logic.

            The entire point of MCP is *inherent knowledge of a tool* for agentic use.
          • BeetleB · 3 hours ago
            > that MCP description is going to totally rot your context and make your model dumb, and there's no mechanism for progressive disclosure of parts of the tool's abilities

            Completely false. I was dealing with this problem recently (a few tools, consuming too many tokens on each request). MCP has a mechanism for dynamically updating the tools (or tool descriptions):

            https://code.claude.com/docs/en/mcp#dynamic-tool-updates

            We solved it by providing a single, bare-bones tool: it provides a very brief description of the types of tools available (1-2 lines). When the LLM executes that tool, all the tools become available. One of the tools is to go back to the "quiet" state.

            That first tool consumes only about 60 tokens. As long as the LLM doesn't need the tools, it takes almost no space.

            As others have pointed out, there are other solutions (e.g. having all the tools, each with a 1-line description, but having a "help" tool to get the detailed help for any given tool).
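            A toy model of that quiet-gateway pattern (tool names invented; a real server would register these with the MCP SDK and emit a tools/list_changed notification on each switch):

```python
# Toy model of the "quiet gateway" pattern: one cheap tool is advertised
# until the LLM opts in, then the full toolset is exposed.

FULL_TOOLS = {
    "list_tickets": "List tickets matching a filter",
    "update_ticket": "Update fields on a ticket",
    "export_csv": "Dump query results to CSV",
}

class GatewayServer:
    def __init__(self):
        self.expanded = False

    def list_tools(self):
        """What the client sees; the quiet state costs only a few tokens."""
        if not self.expanded:
            return {"enable_tools": "Ticket tools available: call to enable"}
        return {**FULL_TOOLS, "disable_tools": "Return to the quiet state"}

    def call(self, name):
        if name == "enable_tools":
            self.expanded = True   # real server: send tools/list_changed here
        elif name == "disable_tools":
            self.expanded = False
        return self.list_tools()

server = GatewayServer()
print(list(server.list_tools()))          # only the gateway tool
print(list(server.call("enable_tools")))  # full toolset after opt-in
```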
          • medbrane · 2 hours ago
            > there's no mechanism for progressive disclosure of parts of the tool's abilities

            In fact there is: https://platform.claude.com/docs/en/agents-and-tools/tool-use/tool-search-tool

            If the special tool search tool is available, then a client does not load the descriptions of the tools in advance, but only for the ones found via the search tool. But it's not widely supported yet.
          • kordlessagain · 3 hours ago
            Nobody said anything about an API with thousands of endpoints. Does that even exist? I've never seen it. Wouldn't work on it if I had seen it. Such is the life of a strawman argument.

            Further, isn't a decorator in Python (like @mcp.tool) the easy way to expose what is needed to an API, even if all we are doing is building a bridge to another API? That becomes a simple abstraction layer, which most people (and LLMs) get.

            Writing a CLI for an existing API is a fool's errand.
              • CuriouslyC · 3 hours ago
                Cloudflare wrote a blog post about this exact case. The cloud providers and their CLIs are the canonical example, so 100% not a strawman.
              • locknitpicker · 2 hours ago
                > Writing a CLI for an existing API is a fool's errand.

                I don't think your opinion is reasonable or well grounded. A CLI app can be anything, including a script that calls curl. With a CLI app you can omit a lot of noise from the context: things like authentication, request and response headers, status codes, response body parsing, etc. You call the tool, you get a response, done. You'd feel foolish to waste tokens parsing irrelevant content that a deterministic script can handle very easily.
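                As a sketch of how little such a wrapper needs to expose (the endpoint and token handling here are hypothetical), the agent-facing surface can be a single function that returns only the parsed body:

```python
import json
import os
import urllib.request

def fetch(url: str):
    """Call the API and return only the parsed body. Auth, headers, and
    status handling stay in here, out of the model's context."""
    token = os.environ.get("API_TOKEN", "")   # never shown to the LLM
    req = urllib.request.Request(
        url, headers={"Authorization": f"Bearer {token}"}
    )
    with urllib.request.urlopen(req) as resp:
        return parse_body(resp.read())

def parse_body(raw: bytes):
    # The only thing the agent ever sees: the response body as data.
    return json.loads(raw)
```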
            • fennecbutt · 5 hours ago
              > man tool | grep "thing I care about"

              Isn't the same true of filtering tools available through MCP?

              The MCP argument to me really seems like people arguing about tabs and spaces. It's all whitespace, my friends.
      • xyzzy123 · 8 hours ago
        My main complaint with MCP is that it doesn't compose well with other tools or code. If I want to pull 1000 Jira tickets and do some custom analysis, I can do that with a CLI or API just fine, but not with MCP.
        • prohobo · 8 hours ago
          Right, that feels like something you'd do with a script and some API calls.

          MCP is more for back-and-forth communication between agent and app/service, or for providing tool/API awareness during *other* tasks. An MCP for Jira would let the AI know it *can* grab tickets from Jira when needed while working on other things.

          I guess it's more like: the MCP isn't for us - it's for the agent to decide when to use.
          • xyzzy123 · 6 hours ago
            I just find that CLI tools scale naturally from tiny use cases (view 1 ticket) to big use cases (view 1000 tickets), and I don't have to have two ways of doing things.

            Where I DO see MCPs getting actual use is when the auth story for something (looking at you, Slack, Gmail, etc.) is so gimped that, basically, regular people can't access data via CLI in any sane or reasonable way. You have to do an OAuth dance involving app approvals that are specifically designed to create a walled garden of "blessed" integrations.

            The MCP provider then helpfully pays the integration tax for you (how generous!) while ensuring you can't do inconvenient things like, say, bulk exporting your own data.

            As far as I can tell, that's the _actual_ sweet spot for MCPs. They're sort of a technology of control, providing you limited access to your own data, without letting you do arbitrary compute.

            I understand this can be considered a feature if you're on the other side of the walled garden, or you're interested in certain kinds of enterprise control. As a programmer, however, I prefer working in open ecosystems where code isn't restricted because it's inconvenient to someone's business model.
            • hadlock · 32 minutes ago
              > while ensuring you can't do inconvenient things like say, bulk exporting your own data

              I think this is the key; I want my analysts to be able to access the 40% of the database they need to do their job, but not the other 60%, the parts that would allow them to dump the business-secrets part of the DB and start up a business across the street. You can do this to some extent with roles etc., but MCP in some ways is the data firewall, your last line of protection/auth.
          • michaelbuckbee · 4 hours ago
            MCPs are for documentation. CLI -> API is for interaction.
        • somnium_sn · 2 hours ago
          Give the model a REPL and let it compose MCP calls, either by using tool calls' structured output, doing string processing, or piping results to a fast, cheap model to produce structured output.

          This is the same as a CLI. Bash is nothing but a programming language, and you can take the same approach by giving the model JavaScript and having it call MCP tools and compose them. If you do that, you can even throw in composing with CLIs as well.
        • insin · 7 hours ago
          You can make it compose by also giving the agent the necessary tools to do so.

          I encountered a similar scenario using Atlassian MCP recently, where someone needed to analyse hundreds of Confluence child pages from the last couple of years which all used the same starter template. I gave the agent a tool to let it call any other tool in batch and expose the results for subsequent tools to use as inputs, rather than dumping them straight into the context (e.g. another tool which gives each page to a sub-agent with a structured output schema and a prompt with extraction instructions, or piping the results into a code execution tool).

          It turned what would have been hundreds of individual tool calls filling the context with multiple MBs of raw Confluence pages into a couple of calls returning low hundreds of KBs of relevant JSON the agent could work with further.
          • __alexs · 5 hours ago
            The agent *cannot* compose MCPs.

            What it *can* do is call multiple MCPs, dumping tons of crap into the context, and then separately run some analysis on that data.

            Composable MCPs would require some sort of external sandbox in which the agent can write small bits of code to transform and filter the results from one MCP to the next.
            • csallen · 3 hours ago
              This is confusing to me. What is composability if not calling a program, getting its output, and feeding it into another program as input? Why does it matter if that output is stored in the LLM's context, or in a file, or ephemerally?

              Maybe I'm misunderstanding the definition of composability, but it sounds like your issue isn't that MCP isn't composable, but that it's wasteful because it adds data from interstitial steps to the context. But there are numerous ways to circumvent this.

              For example, it wouldn't be hard to create a tool that just runs an LLM, so when the main LLM convo calls this tool it's effectively a subagent. This subagent can do work, call MCPs, store their responses in its context, and thereby feed that data as input into other MCPs/CLIs, and continue in this way until it's done with its work, then return its final result and disappear. The main LLM will only get the result, and its context won't be polluted with intermediary steps.

              This is pretty trivial to implement.
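              The subagent-as-a-tool idea is mostly a small loop of glue; a sketch with a stub planner standing in for the real LLM call (tool names invented):

```python
# Sketch of a "subagent" tool: the inner loop makes tool calls and
# accumulates intermediary results in ITS private history; only the
# final summary is returned to the outer conversation.

def run_subagent(task, tools, model=None):
    history = [task]               # the subagent's private context
    while True:
        step = (model or plan)(history, tools)
        if step["action"] == "finish":
            return step["result"]  # only this reaches the main agent
        history.append(tools[step["action"]](*step.get("args", ())))

def plan(history, tools):
    """Stub planner standing in for a real LLM call: fetch once, then
    summarize. A real subagent would send `history` to the model."""
    if len(history) == 1:
        return {"action": "fetch_pages"}
    return {"action": "finish", "result": f"{len(history[-1])} pages analysed"}

tools = {"fetch_pages": lambda: ["page-%d" % i for i in range(200)]}
print(run_subagent("analyse confluence pages", tools))  # 200 pages analysed
```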
            • somnium_sn · 2 hours ago
              Give the model an interpreter like mlua and let it write code to compose MCP calls together. This is a well-established method.

              It's the equivalent of calling CLIs in bash, except mlua is a sandboxed runtime while bash is not.
            • insin · 5 hours ago
              At the level of the agent, it knows nothing about MCP; all it has is a list of tools. It can do anything the tools you give it let it do.
              • __alexs · 4 hours ago
                It cannot do "anything" with the tools. Tools are very constrained, in that the agent must insert the tool call into its context, and it can only receive the response of the tool directly back into its context.

                Tools themselves also cannot be composed in any SOTA models. Composition is not a feature the tool schema supports, and they are not trained on it.

                Models obviously understand the general concept of function composition, but we don't currently provide the environments in which this is actually possible outside of highly generic tools like Bash or sandboxed execution environments like https://agenttoolprotocol.com/
            • hrimfaxi · 5 hours ago
              They can already do this, no? MCPs regularly dump their results to a text file, and other tools (CLI or otherwise) filter it.
          • losvedir · 6 hours ago
            But in the context of this discussion, Atlassian has a CLI tool, acli. I'm not quite following why that wouldn't have worked here. As a normal CLI you have all the power you need over it, and the LLM could have used it to fetch all the relevant pages and save them to disk, sample a couple to determine the regular format, and then write a script to extract what they needed, right? Maybe I don't understand the use case you're describing.
            • insin · 5 hours ago
              Not all agents are running in your CLI, or even in any CLI, which is why people are arguing past each other all over the topic of MCP.

              I implemented this in an agent which runs in the browser (in our internal equivalent of ChatGPT or Claude's web UI), connecting directly to Atlassian MCP.
          • xyzzy123 · 7 hours ago
            Hmm, but you can't write a standard MCP (e.g. batch_tool_call) that calls other MCPs, because the protocol doesn't give you a way to know what other MCPs are loaded in the runtime with you, or any means to call them? Or have I got that wrong?

            So I guess you had to modify the agent harness to do this? Or I guess you could use... mcp-cli... ??
            • jmcodes · 5 hours ago
              I don't maintain this anymore, but I experimented with this a while back: https://github.com/jx-codes/lootbox

              Essentially you give the agent a way to run code that calls MCP servers; then it can use them like any other API.

              Nowadays small bash/bun scripts and an MCP gateway proxy get me the same exact thing.

              So yeah, at some level you do have to build out your own custom functionality.
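              The run-code-that-calls-MCP approach reduces to exposing tool stubs inside an interpreter; a minimal sketch with invented tool names (the `exec` here is a toy illustration, not a real sandbox):

```python
# "Code mode" in miniature: instead of one context-polluting tool call
# per item, the agent submits a snippet that composes tools and returns
# only the final value.

def run_snippet(source, tool_api):
    env = dict(tool_api)   # the only names the snippet may touch
    exec(source, {"__builtins__": {}}, env)   # toy sandbox; not real isolation
    return env.get("result")

tool_api = {
    "search_tickets": lambda q: [{"id": i, "open": i % 2 == 0} for i in range(10)],
    "count": len,
}

# A snippet the agent might write: filter 10 ticket payloads and surface
# a single number instead of 10 raw results.
snippet = "result = count([t for t in search_tickets('bug') if t['open']])"
print(run_snippet(snippet, tool_api))  # 5
```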
      • Eldodi · 2 hours ago
        There was a great presentation at the MCP Dev Summit last week explaining MCP vs CLI vs Skills vs Code Mode: https://www.figma.com/deck/H6k0YExi7rEmI8E6j6R0th/MCP-Dev-Summit---MCP-vs-Code-Mode-vs-Skills?node-id=4375-984
      • mbesto · 3 hours ago
        The way I see it is more like this:

        - Skills help the LLM answer the "how" of interacting with APIs/CLIs from your original prompt

        - The API is what actually sends/receives the interaction/request

        - The CLI is the actual doing / instruction set of the interaction/request

        - MCP helps the LLM understand what is available from the CLI and API

        They are all complementary.
      • mbreese · 5 hours ago
        I think a lot of the MCP arguments conflate MCP the protocol with how we currently discover and use MCP tool servers. There's a lot of overhead and friction right now in how MCP servers are called and discovered by agents, but there's no reason why it has to be that way.

        Honestly, an agent shouldn't really care how it's getting an answer, only that it's getting an answer to the question it needs answered. Whether that's a skill, API call, or MCP tool call shouldn't really matter all that much to the agent. The rest is just how it's configured for the users.
      • addandsubtract · 7 hours ago
        Meanwhile, I'm using MCP for the LLM to look up up-to-date documentation and not hallucinate APIs.
      • Aperocky · 4 hours ago
        It's like saying it is very safe and nice to drive an F-150 with half a ton of water on the truck bed.

        How about driving the same truck without that half ton of water?
      • JamesSwift · 4 hours ago
        Hard disagree. APIs and CLIs have been THOROUGHLY documented for human consumption for years, and guess what: the models have that context already. Not only the docs but actual in-the-wild use. If you can hook up auth for an agent, using any random external service is generally accomplished by just saying "hit the API".

        I wrap all my APIs in small bash wrappers that are just curl with automatic session handling, so the AI only needs to focus on querying. The only thing in the -h for these scripts is a note that it is a wrapper around curl. I haven't had a single issue with AI spinning its wheels trying to understand how to hit the downstream system. No context bloat needed, and no reinventing the wheel with MCP when the API already exists.
      • noodletheworld · 8 hours ago
        > MCP is the absolute best and most effective way to integrate external tools into your agent sessions

        Nope.

        The best way to interact with an external service is an API.

        It was the best way before, and it's the best way now.

        MCP doesn't scale, and it has a bloated, unnecessarily complicated spec.

        Some MCP servers are good; but *in general* a new, bad way of interacting with external services is not the best way of doing it, and the assertion that it is, *in general*, best is what I refer to as "works for me" Kool-Aid.

        ...because it probably *does* work well for you.

        ...because you are using a few, good, MCP servers.

        However, that doesn't scale, for all the reasons listed by the many detractors of MCP.

        It's not that it *can't be used effectively*; it's that *in general* it is a solution that has been incompetently slapped on by many providers who don't appreciate how to do it well, *and* even then, it scales badly.

        It is a bad solution for a solved problem.

        Agents have made the problem MCP was solving obsolete.
        • brabel · 7 hours ago
          You haven't actually done that, have you? If you had, you would immediately understand the problems MCP solves on top of just trying to use an API directly:

          - Easy tool calling for the LLM, rather than having to figure out how to call the API based on docs only.

          - Authorization can be handled automatically by MCP clients. How are you going to give a token to your LLM otherwise?? And if you do, how do you ensure it does not leak the token? With MCP, the token is only usable by the MCP client, and the LLM does not need to see it.

          - Lots more things MCP lets you do, like bundle resources and let the server request out-of-band input from users which the LLM should not see.
          • thepasch · 2 hours ago
            > easy tool calling for the LLM rather than having to figure out how to call the API based on docs only

            I think the best way to run an agent workflow with custom tools is to use a harness that allows you to just, like, *write custom tools*. Anthropic expects you to use the Agent SDK with its "in-process MCP server" if you want to register custom tools, which sounds like a huge waste of resources, particularly in workflows involving swarms of agents. This is abstraction for the sake of abstraction (or, rather, market share).

            Getting the tool built in the first place is a matter of pointing your agent at the API you'd like to use and just having it write the tool. It's an easy one-shot even for small OSS models. And then you know *exactly* what that tool does. You don't have to worry about some update introducing a breaking change in your provider's MCP service, and you can control every single line of code. Meanwhile, every time you call a tool registered by an MCP server, you're trusting that it does what it says.

            > authorization can be handled automatically by MCP clients. How are you going to give a token to your LLM otherwise??

            Env vars or a key vault.

            > And if you do, how do you ensure it does not leak the token?

            Env vars or a key vault.
          • bitexploder · 3 hours ago
            An authnz-aware egress proxy that also puts guard rails on MCP behavior?
        • prohobo · 8 hours ago
          Let's say I made a calendar app that stores appointments for you. It's local, installed on your system, and the data is stored in some file in ~/.calendarapp.

          Now let's say you want all your Claude Code sessions to use this calendar app, so that you can always say something like "ah yes, do I have availability on Saturday for this meeting?" and the AI will look at the schedule to find out.

          What's the best way to create this persistent connection to the calendar app? I think it's obviously an MCP server.

          In the calendar app I provide a built-in MCP server that gives the following tools to agents: read_calendar and update_calendar. You open Claude Code, connect to the MCP server, and configure it to connect to the MCP for all sessions - and you're done. You don't have to explain what the calendar app is, when to use it, or how to use it.

          Explain to me a better solution.
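          For concreteness, the server side of that could be little more than two registered tools; a toy sketch in plain Python (the dict registry here stands in for the MCP SDK, and only the tool names come from the comment):

```python
import json
from pathlib import Path

# Toy stand-in for the calendar app's MCP server: two tools plus the
# descriptions the agent sees at connect time. A real implementation
# would register these with the MCP SDK instead of a plain dict.
TOOLS = {}

def tool(description):
    def register(fn):
        TOOLS[fn.__name__] = {"fn": fn, "description": description}
        return fn
    return register

@tool("List appointments, optionally filtered by an ISO date (YYYY-MM-DD)")
def read_calendar(date=None, store=Path.home() / ".calendarapp"):
    entries = json.loads(store.read_text()) if store.exists() else []
    return [e for e in entries if date is None or e["date"] == date]

@tool("Add an appointment with an ISO date and a title")
def update_calendar(date, title, store=Path.home() / ".calendarapp"):
    entries = read_calendar(store=store)
    entries.append({"date": date, "title": title})
    store.write_text(json.dumps(entries))
    return entries
```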
          • frotaur · 7 hours ago
            Why couldn't the calendar app expose the read_calendar and update_calendar functionality in an API, and have a skill 'use_calendar' that describes how to use the above?

            Then the minimal skill descriptions are always in the model's context, and whenever you ask it to add something to the calendar, it will know to fetch that skill. It feels very similar to the MCP solution to me, but with potentially less bloat and no obligation to deal with MCP. I might be missing something, though.
            • prohobo · 7 hours ago
              Why would I do that if the MCP already handles it? The MCP exposes the API with those tools; it explains what the calendar app is and when to use it.

              Connected MCP tools are also always in the model's context, and it works for any AI agent that supports MCP, not just Claude Code.
              • noodletheworld · 7 hours ago
                > The MCP exposes the API with those tools, it explains what the calendar app is

                So does an API and a text file (or hell, a self-describing API).

                Which is more complex and harder to maintain, update and use?

                This is a solved problem.

                The world doesn't need MCP to reinvent a solution to it.

                If we're gonna play the ELI5 game, why does MCP define a UI as part of its spec? Why does it define a bunch of different resource types of which *only tools* are used by most servers? Why did it not have an auth spec at launch? Why are there so many MCP security concerns?

                These are not idle questions.

                They are indicative of the "more featurrrrrres" and "lack of competence" that went into designing MCP.

                Agents running in a sandbox, with normal, standard RBAC-based access control or, for complex operations, standard stateful CLI tooling like the Azure CLI, are fundamentally better.
                • prohobo · 7 hours ago
                  How would the AI know about the calendar app unless you make the text file and attach it to the session?

                  Self-describing APIs require probing through calls; they don't tell you what you need to know *before* you interact with them.

                  MCP servers are very simple to implement, and the developers of the app/service maintain the server, so you don't have to create or update skills with an incomplete understanding of the system.

                  Your skill file is going to drift from the actual API as the app updates. You're going to have to manage it, instead of the developers of the app. I don't understand what you're even talking about.
                  • noodletheworld · 7 hours ago
                    > MCP servers are very simple to implement

                    ...

                    > Let's say I made a calendar app that stores appointments for you. It's local, installed on your system,

                    > and the developers of the app/service maintain the server so you don't have to create or update skills

                    ...

                    > I don't understand what you're even talking about.

                    You certainly do not.
                    • prohobo · 6 hours ago
                      You do understand that what you're describing is essentially a proto-MCP implementation, right? Except with more manual work involved.
                      • throwanem · 6 hours ago
                        This has devolved into "MCP is web scale." https://youtu.be/b2F-DItXtZs
                        • prohobo · 2 hours ago
                          You're clearly very intelligent and a real software engineer; maybe you can explain where I'm wrong?
                          • throwanem · 2 hours ago
                            Sure thing! That probably won't take more than a couple of years at 10-20 hours a week of tutelage, and although my usual rate for consulting of any stripe is $150 an hour, for you I'm willing to knock that all the way down to just $150 an hour.
                            • prohobo · 1 hour ago
                              Just give us a taste of what we'd be paying for? I'm sure you're an expert, but before I commit to 2+ years of consultation I'd like to see your approach.
                              • throwanem · 1 hour ago
                                I've already pointed this out as the silly, purposeless argument it's become. (Or *more* become.) Even I at this point can't figure out who is advocating what or why, other than for the obvious ego reasons. You're bikeshedding at each other and wasting all the time and effort it requires, because no one else is enjoying it any more than *you* two are: if anything you have left your audience more confused than we began, but I see I repeat myself.

                                Show me you can stop doing that, and I'll happily mediate a *technical* version of this conversation that proceeds respectfully from the two of you each making a clear and concise statement of your design thesis, and what you see as its primary pros and cons.

                                For that I'll take a flat $150 for up to 4 hours. I usually bill by the 15-minute increment, but obviously we would dispense with that here, and ordinarily I would not, of course, offer such a remarkable discount. But it doesn't really take $150 worth of effort to remind someone that he should take better care to distinguish his engineering judgment from his outraged insecurity.
                                • prohobo · 13 minutes ago
                                  I don't get it; you joined this thread to call me an idiot with a meme, and now you're talking about being a neutral arbiter for a technical discussion that I supposedly ruined.

                                  More than anything, I'm getting frustrated with HN discussions, because people just insinuate that I'm stupid instead of making substantive arguments about how what I'm saying is wrong.

                                  Are we performing for an audience or having a discussion?
                • raincole7 hours ago
                  &gt; So does an API and a text file (or hell, a self describing api).<p>That sounds great. How about we standardize this idea? We can have an endpoint to tell the agents where to find this text file and API. Perhaps we should be a bit formal and call it a protocol!
                  • bavell5 hours ago
                    &gt; How about we standardize this idea? We can have an endpoint to tell the agents where to find this text file and API<p>Good news! It&#x27;s already standardized and agents already know where to find it!<p><a href="https:&#x2F;&#x2F;code.claude.com&#x2F;docs&#x2F;en&#x2F;skills" rel="nofollow">https:&#x2F;&#x2F;code.claude.com&#x2F;docs&#x2F;en&#x2F;skills</a>
                • edgyquant6 hours ago
                  [dead]
              • juped4 hours ago
                Why would you put a second, jankier API in front of your API when you could just use the API?
          • saberience6 hours ago
            You realize you can just create your own tools and wire them up directly using the Anthropic or OpenAI APIs etc?<p>It&#x27;s not a choice between Skills or MCP, you can also just create your own tools, in whatever language you want, and then send in the tool info to the model. The wiring is trivial.<p>I write all my own tools bespoke in Rust and send them directly to the Anthropic API. So I have tools for reading my email, my calendar, writing and search files etc. It means I can have super fast tools, reduce context bloat, and keep things simple without needing to go into the whole mess of MCP clients and servers.<p>And btw, I wrote my own MCP client and server from the spec about a year ago, so I know the MCP spec backwards and forwards, it&#x27;s mostly jank and not needed. Once I got started just writing my own tools from scratch I realised I would never use MCP again.
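As a sketch of how little wiring this takes (the tool name, schema, and calendar stub are all invented for illustration): you declare a JSON-schema tool description of the kind the Anthropic Messages API accepts in its `tools` parameter, and dispatch locally when the model requests a call:

```python
import json

# Illustrative wiring: tool name, schema, and calendar stub are made up.
# This dict is the shape the Anthropic Messages API accepts in `tools=[...]`.
TOOLS = [
    {
        "name": "read_calendar",
        "description": "Return calendar events for a given ISO date.",
        "input_schema": {
            "type": "object",
            "properties": {"date": {"type": "string", "description": "YYYY-MM-DD"}},
            "required": ["date"],
        },
    }
]

def read_calendar(date: str) -> str:
    # Stand-in implementation; a real tool would query your calendar store.
    return json.dumps([{"date": date, "title": "standup", "time": "09:30"}])

DISPATCH = {"read_calendar": read_calendar}

def handle_tool_use(name: str, tool_input: dict) -> str:
    """Run the tool the model asked for and return a string result to feed back."""
    if name not in DISPATCH:
        return json.dumps({"error": f"unknown tool {name!r}"})
    return DISPATCH[name](**tool_input)
```

When the API response contains a `tool_use` content block, you call `handle_tool_use(block["name"], block["input"])` and send the result back as a `tool_result` message; that loop is essentially the whole client for the local case.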
          • redsocksfan454 hours ago
            [dead]
      • pavelbuild8 hours ago
        [dead]
    • 1minusp3 hours ago
Feels to me like the toolchain for using LLMs in various tasks is still in flux (I interpret all of this as &quot;stuff in different places like .md files or skills or elsewhere that is appended to the context window&quot;; I hope that is correct). Shouldn&#x27;t this overall process be standardized&#x2F;automated? That is, use some self-reflection to figure out patterns that are then dumped into the optimal place, like a .md file or a skill?
      • jpadkins1 hour ago
Too early for standardization; resist the urge. Let a bunch of ideas flow, then watch the Darwinian process as the best setup is found. Then standardize.
    • morgaesis3 hours ago
      This is my life motto. Progressive exploration, codifying, use your codified workflows.<p>&gt; for each desired change, make the change easy (warning: this may be hard), then make the easy change - Kent Beck<p><a href="https:&#x2F;&#x2F;x.com&#x2F;KentBeck&#x2F;status&#x2F;250733358307500032" rel="nofollow">https:&#x2F;&#x2F;x.com&#x2F;KentBeck&#x2F;status&#x2F;250733358307500032</a>
    • BatteryMountain6 hours ago
This is exactly what I do too. Works very well. I have a whole bunch of scripts and CLI tools that Claude can use; most of them were built by Claude too. I very rarely need to use my IDE because of this, as I&#x27;ve replicated some of the JetBrains refactorings so Claude doesn&#x27;t have to burn tokens to do the same work. It also turns a 5 minute Claude session into a 10 second one, as the scripts&#x2F;tools are purpose made. It&#x27;s really cool.<p>edit: just want to add, I still haven&#x27;t implemented a single MCP-related thing. Don&#x27;t see the point at all. REST + Swagger + codegen + Claude + skills&#x2F;tools works fine enough.
      • smusamashah40 minutes ago
&gt; I&#x27;ve replicated some of Jetbrains refactorings<p>How? JetBrains in a Java code base is amazing and very thorough on refactors. I can reliably rename, change signatures, move things around, etc.
      • evanmoran1 hour ago
        This is a great idea. Did you happen to release the source for this? I run into this all the time!
    • gitgud6 hours ago
      &gt; <i>For instance it does not make sense to have an MCP to use git.</i><p>What if you don’t want the AI to have any write access for a tool? I think the ability to choose what parts of the tool you expose is the biggest benefit of MCP.<p>As opposed to a READ_ONLY_TOOL_SKILL.md that states “it’s important that you must not use any edit API’s…”
      • Majromax3 hours ago
        Anyone who&#x27;s ever `DROP TABLE`d on a production rather than test database has encountered the same problem in meatspace.<p>In this context, the MCP interface acts as a privilege-limiting proxy between the actor (LLM&#x2F;agent) and the tool, and it&#x27;s little different from the standard best practice of always using accounts (and API keys) with the minimum set of necessary privileges.<p>It might be easier in practice to set up an MCP server to do this privilege-limiting than to refactor an API or CLI-tool, but that&#x27;s more an indictment of the latter than an endorsement of the former.
      • NiloCK4 hours ago
        Just as easy to write a wrapper to the tool you want to restrict. You ban the restricted tool outright, and the <i>skill</i> instructs on usage of the wrapper.<p>Safer than just giving an <i>instruction</i> to use the tool a specific way.
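For example, a hypothetical read-only wrapper around git (the whitelist and script are invented for illustration): the raw `git` binary is banned outright, and the skill points the agent at this script instead.

```python
import subprocess
import sys

# Hypothetical read-only git wrapper; the subcommand whitelist is
# illustrative, not a standard tool.
READ_ONLY = {"status", "log", "diff", "show", "branch", "blame", "grep"}

def run(argv: list) -> int:
    """Pass read-only subcommands through to git; refuse everything else."""
    if not argv or argv[0] not in READ_ONLY:
        print(f"refused: only {sorted(READ_ONLY)} are allowed", file=sys.stderr)
        return 1
    return subprocess.run(["git", *argv]).returncode

# As a script, you would wire this up with: raise SystemExit(run(sys.argv[1:]))
```

The enforcement happens in code before anything touches the repository, so it holds even if the model ignores its instructions.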
    • siva77 hours ago
      &gt; MCP adds friction, imagine doing yourself the work using the average MCP server.<p>Why on earth don&#x27;t people understand that MCP and skills are complementary concepts, why? If people argue over MCP v. Skills they clearly don&#x27;t understand either deeply.
      • robot-wrangler3 hours ago
        &gt; clearly don&#x27;t understand either deeply<p>No appetite for that. The MCP vs Skills debate has gradually become just a proxy war for the camps of AI skeptics vs AI boosters. Both sides view it as another chance to decide about more magic vs less, <i>in absolute terms</i>, without doing the work of thinking about anything situational. Nuance, questions, reasoning from first principles, focusing on purely engineering considerations is simply not welcome. The extreme factions do tend to agree that it might be a good idea to attack the middle though! There&#x27;s no changing this stuff, so when it becomes tiresome it&#x27;s time to just leave the HN comment section.
      • bavell5 hours ago
        They&#x27;re complementary but also have significant overlap. Hence all the confusion and strong opinions.
      • _pdp_7 hours ago
I won&#x27;t be surprised if MCP servers start shipping skills. They already ship prompts and other things exposed as resources. It is not even difficult to do with the current draft, as skills can be exposed by convention without protocol changes.<p>A future version of the protocol could easily expose skills so that MCP servers can act as hubs.
        • radiospiel3 hours ago
          Doesn&#x27;t it already? <a href="https:&#x2F;&#x2F;modelcontextprotocol.io&#x2F;specification&#x2F;2025-11-25&#x2F;server&#x2F;prompts" rel="nofollow">https:&#x2F;&#x2F;modelcontextprotocol.io&#x2F;specification&#x2F;2025-11-25&#x2F;ser...</a>
          • _pdp_3 hours ago
            these are prompts - similar yes - but not the same
      • insin7 hours ago
        The more things change in tech, the more they stay the same.<p>The shoe is the sign. Let us follow His example!<p>Cast off the shoes! Follow the Gourd!
    • jFriedensreich2 hours ago
If your LLM sees any difference between a local skill and a remote MCP, that&#x27;s a leak in your abstraction and a shortcoming of the agent harness, and it should not influence how we build these systems for devs and end users. The way this comment thinks about building for agents would lead to a hellscape.
      • mikestorrent1 hour ago
        Do you know who you&#x27;re responding to?<p>&gt; a difference between local skill and remote MCP<p>A local skill is a text file with a bunch of explanations of what to do and how, and what pitfalls to avoid. An MCP is a connection to an API that can perform actions on anything. This is a pretty massive difference in terms of concept and I don&#x27;t think it can be abstracted away. A skill may require an MCP be available to it, for instance, if it&#x27;s written that way.<p>Antirez&#x27; advice is what I&#x27;ve been doing for a year: use AI to write proper, domain-specific tools that you and it can then use to do more impressive things.
    • tomaytotomato9 hours ago
Although the author is coming from a place of security and configuration being painful with Skills, I think the future will be a mix of MCPs, Agents and Skills. Maybe even a more granularly defined unit below a skill: a command...<p>These commands would be well defined and standardised, maybe with a hashed value that could be used to ensure re-usability (think Docker layers).<p>Then I just have skills called:<p>- github-review-slim:latest<p>- github-review-security:8.0.2<p>MCPs will still be relevant for those tricky monolithic services or weird business processes that aren&#x27;t logged or recorded on metrics.
      • senordevnyc3 hours ago
        Commands are already a thing, but they&#x27;re falling out of favor because a user can just invoke a skill manually instead.
    • fny4 hours ago
      This is covered well in the article too. See &quot;The Right Tool for the Job&quot; and &quot;Connectors vs. Manuals.&quot;<p>Perhaps the title is just clickbait. :)
    • richardlblair3 hours ago
      I&#x27;ve found makefiles to be useful. I have a small skill that guides the LLM towards the makefile. It&#x27;s been great for what you&#x27;re talking about, but it&#x27;s also a great way to make sure the agent is interacting with your system in a way you prefer.
    • neya6 hours ago
      &gt; Focus on what tool the LLM requires to do its work in the best way.<p>I completely agree with you. There was a recent finding that said Agents.md outperforms skills. I&#x27;m old school and I actually see best results by just directly feeding everything into the prompt context itself.<p><a href="https:&#x2F;&#x2F;vercel.com&#x2F;blog&#x2F;agents-md-outperforms-skills-in-our-agent-evals" rel="nofollow">https:&#x2F;&#x2F;vercel.com&#x2F;blog&#x2F;agents-md-outperforms-skills-in-our-...</a>
      • yunwal5 hours ago
        How do you shut off particular api calls with an agents.md?
        • neya4 hours ago
          I personally use tool calling for APIs, so really not sure (I don&#x27;t use agents.md per se, I directly stuff info into the context window)
    • ReDeiPirati8 hours ago
&gt; Don&#x27;t focus on what you prefer: it does not matter. Focus on what tool the LLM requires to do its work in the best way.<p>I noticed that LLMs will tend to work by default with CLIs even if there&#x27;s a connected MCP, likely because a) CLIs are overrepresented in their training data, and b) CLIs are more composable and inspectable by design, making them a better choice in tool selection.
    • the_axiom5 hours ago
This comment just assumes skills are better without dealing with any of the arguments presented.<p>Low-quality troll.
    • federicosimoni4 hours ago
      [dead]
    • eblair8 hours ago
      [dead]
  • tow2110 hours ago
This argument always sounds like two crowds shouting past each other.<p>Are you a solo developer, are you fully in control of your environment, are you focused on productivity and extremely tight feedback loops, do you have a high tolerance for risk: you should probably use CLIs. MCPs will just irritate you.<p>Are you trying to work together with multiple people at organizational scale and alignment is a problem; are you working in a range of environments which need controls and management, do you have a more defensive risk tolerance ... then by the time you wrap CLIs into a form that is suitable, you will have reinvented a version of the MCP protocol. You might as well just use MCP in the first place.<p>Aside - yes, MCP in its current iteration is fairly greedy in its context usage, but that&#x27;s very obviously going to be fixed with various progressive-disclosure approaches as the spec develops.
    • theshrike794 hours ago
      In an organisation we can’t limit MCP access. It’s all or nothing. Everything the user can touch, the MCP can touch.<p>We can trust humans not to do stupid things. They might accidentally delete maybe two items by fat-fingering the UI.<p>An Agent can delete a thousand items in a second while doing 30 other things.<p>With bespoke CLI tools we can configure them so that they cannot access anything except specific resources, limiting the possible blast radius considerably.
      • tow212 hours ago
        (everything I write about MCP means &quot;remote MCP&quot; by the way. Local MCP is completely pointless)<p>MCP provides you a clear abstracted structure around which you can impose arbitrary policy. &quot;identity X is allowed access to MCP tool Y with reference to resource pool Z&quot;. It doesn&#x27;t matter if the upstream MCP service provides that granularity or not, it&#x27;s architecturally straightforward to do that mapping and control all your MCP transactions with policies you can reason about meaningfully.<p>CLI provides ... none of that. Yes, of course you can start building control frameworks around that and build whatever bespoke structures you want. But by the time you have done that you have re-invented exactly the same data and control structures that MCP gives you.<p>&quot;Identity X can access tool Y with reference to resource pool Z&quot;. That literally is what MCP is structured to do - it&#x27;s an API abstraction layer.
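A toy sketch of that policy mapping (identities, tool names, and pools are all invented for illustration), showing how little machinery the &quot;identity X, tool Y, resource pool Z&quot; check needs once the abstraction exists:

```python
# Toy policy layer: each (identity, tool) pair maps to its permitted
# resource pools; anything not listed is denied by default.
POLICY = {
    ("alice", "jira.create_issue"): {"team-a"},
    ("alice", "jira.read_issue"): {"team-a", "team-b"},
}

def allowed(identity: str, tool: str, pool: str) -> bool:
    """'Identity X can access tool Y with reference to resource pool Z'."""
    return pool in POLICY.get((identity, tool), set())
```

A real deployment would back this with your IdP and audit logging, but the point stands: the MCP tool boundary gives you a well-defined place to hang the check.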
      • jjice4 hours ago
        &gt; In an organisation we can’t limit MCP access.<p>Why not? I&#x27;d imagine that you could grant specific permissions upon MCP auth. Is the issue that the services you&#x27;re using don&#x27;t support those controls, or is it something else?
        • theshrike793 hours ago
          I haven’t seen a single major MCP provider that would let us limit access properly<p>Miro, Linear, Notion etc… They just casually let the MCP do anything the user can and access everything.<p>For example: Legal is never letting us connect to Notion MCP as is because it has stuff that must NEVER reach any LLM even if they pinky swear not to train with our stuff.<p>-&gt; thus, hard deterministic limits are non-negotiable.
          • pjm3313 hours ago
            it&#x27;s straightforward to spin up a custom MCP wrapper around any API with whatever access controls you want<p>the only time i reach for official MCP is when they offer features that are not available via API - and this annoys me to no end (looking at you Figma, Hex)
            • BeetleB2 hours ago
              Indeed, ever since MCPs came out, I would always either wrap or simply write my own.<p>I needed to access Github CI logs. I needed to write Jira stories. I didn&#x27;t even bother glancing at any of the several existing MCP servers for either one of them - official or otherwise. It was trivial to vibe code an MCP server with <i>precisely</i> the features I need, with the appropriate controls.<p>Using and auditing an existing 3rd party MCP server would have been more work.
            • theshrike792 hours ago
              That’s what we’re doing, but it’s annoying. Why can’t they just let us limit access for the official MCP easily?
              • jjice2 hours ago
                Agreed. Sounds like a failure of the services, but not MCP. Can&#x27;t believe in 2026 we don&#x27;t have better permissions on systems like this.
      • dec0dedab0de1 hour ago
        maybe make an mcp that has whatever limitations you need baked in?
      • morgaesis3 hours ago
        &gt; We can trust humans not to do stupid things. <i>hold my beer</i><p>I can definitely delete a thousand items with a typo in my bash for loop&#x2F;pipe. You should always defend against stupid or evil users or agents. If your documents are important, set up workflows and access to prevent destructive actions in the first place. Not every employee needs full root access to the billing system; they need readonly access to their records at most.
        • theshrike793 hours ago
          These people aren’t doing bash loops, they’re regular non-technical people who just want to use an AI Agent to access services and aggregate data.<p>If people accidentally delete stuff, they tend to notice it and we can roll back. If an agent does a big whoops, it’s usually BIG one and nobody notices because it’s just humming away processing stuff with little output.<p>An accountant might have access to 5 different clients accounts, they need to do their work. They can, with their brain, figure out which one they’re processing and keep them separate.<p>An AI with the same access via MCP might just decide to “quickly fix” the same issue in all 5 accounts to be helpful. Actually breaking 7 different laws in the process.<p>See the issue here?<p>(Yes the AI is approved for this use; that’s not the problem here)
          • BeetleB1 hour ago
&gt; These people aren’t doing bash loops, they’re regular non-technical people who just want to use an AI Agent to access services and aggregate data.<p>Over the last few months, this pattern of discussion has become pervasive on HN.<p>Point.<p>Counterpoint.<p>(Not finding a flaw with the counterpoint) &quot;Yeah, but most people aren&#x27;t smart enough to do it right.&quot;<p>I see it in every OpenClaw thread. I see it here now.<p>I also saw it when agents became a thing (&quot;Agents are bad because of the damage they can do!&quot;) - yet most of us have gotten over it and happily use them.<p>If your organization is letting &quot;regular non-technical&quot; people download&#x2F;use 3rd party MCPs without understanding the consequences, the problem isn&#x27;t with MCP. As others have pointed out in this thread, you can totally have as secure an MCP server&#x2F;tool as a sandboxed CLI.<p>Having said that, I simply don&#x27;t understand your (and most others&#x27;) examples of how CLI is really any different. If the CLI tool is not properly sandboxed, it&#x27;s as damaging as an unsecured MCP. Most regular non-technical people don&#x27;t know how to sandbox. Even where I work, we&#x27;re told to run certain agentic tools in a sandboxed environment. Yet they haven&#x27;t set it up to <i>prevent</i> us from running the tools without the sandbox. If my coworker massively screws up, does it make sense for me to say &quot;No, CLI tools are bad!&quot;?
      • arondeparon3 hours ago
        [dead]
    • joshwarwick159 hours ago
      Context usage is a client problem - progressive disclosure can be implemented without any spec changes (Claude&#x2F;code has this built in for example). That being said the examples for creating a client could be massively expanded to show how to do this well
    • exossho9 hours ago
Agree. I don&#x27;t get this discussion anyway; those are two different things, and actually they work well together.
    • pavelbuild8 hours ago
      [dead]
  • bloppe6 minutes ago
    Every CLI can be expressed as an API and vice versa. Thus every skill can be expressed as an MCP server and vice versa. Any argument about the technical or practical merits of one over the other is willfully ignoring the fact that you can <i>always</i> use exactly the same patterns in one vs. the other.<p>So it&#x27;s really all about availability or preference. Personally, I don&#x27;t think we needed a whole new standard with all its complexities and inevitable future breaking changes etc.
  • plandis15 hours ago
I could not agree less with the author. I don’t want APIs, I want agents to use the same CLI tooling I already use that is locally available. If my agents are using CLI tooling anyway, there is no need to add an extra layer via MCP.<p>I don’t want remote MCP calls; I don’t even want remote models, but that’s cost prohibitive.<p>If I need to call an API, a skill with existing CLI tooling is more than capable.
    • stingraycharles11 hours ago
      I often just put direct curl commands in a skill, the agent uses that, and it works perfectly for custom API integrations. Agents are perfectly capable of doing these types of things, and it means the LLM just uses a flexible set of tools to achieve almost anything.
      • notpushkin11 hours ago
        I think this is the best of both worlds. Design a sane API (that is easy to consume for both humans and agents), then teach the agents to use it with a skill.<p>But I agree with the author on custom CLI tooling. I don’t want to install another opaque binary on my machine just to call some API endpoints.
        • stingraycharles10 hours ago
          Obviously opaque binaries are hardly an improvement over MCP, but providing a few curl + jq oneliners to interact with a REST API works great in my experience. Also means no external scripts, just a single markdown file.
    • TheTaytay14 hours ago
      I keep getting hung up on securely storing and using secrets with CLI vs MCP. With MCP, you can run the server before you run the agent, so the agent never even has the keys in its environment. That way. If the agent decides to install the wrong npm package that auto dumps every secret it can find, you are less likely to have it sitting around. I haven’t figured out a good way to guarantee that with CLIs.
      • Aperocky13 hours ago
A CLI can just be an RPC call to a daemon; the exact same pattern applies. In fact, my most important CLI-based skills work like this... a CLI by itself is limited in usefulness.
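A minimal sketch of that CLI-to-daemon pattern (the socket path and message shapes are illustrative, assuming a Unix socket): the daemon holds the credential and uses it on the caller's behalf; the thin CLI only forwards JSON and returns the result, so the agent's environment never contains the secret.

```python
import contextlib
import json
import os
import socket
import threading
import time

SOCKET_PATH = "/tmp/demo-agent-rpc.sock"  # illustrative path

def daemon() -> None:
    """Long-running process that holds the credential; the agent never sees it."""
    secret = "s3cr3t-api-key"  # would come from a keychain in real life
    with contextlib.suppress(FileNotFoundError):
        os.unlink(SOCKET_PATH)  # clear a stale socket from a previous run
    srv = socket.socket(socket.AF_UNIX)
    srv.bind(SOCKET_PATH)
    srv.listen(1)
    conn, _ = srv.accept()
    request = json.loads(conn.recv(4096))
    # Here the daemon would use `secret` to call the real API on the
    # caller's behalf, then return only the result, never the credential.
    conn.sendall(json.dumps({"ok": True, "echo": request["args"]}).encode())
    conn.close()
    srv.close()

def cli_call(args: list) -> dict:
    """What the thin CLI does: forward its arguments, return the daemon's answer."""
    client = socket.socket(socket.AF_UNIX)
    for _ in range(40):  # wait briefly for the daemon to come up
        try:
            client.connect(SOCKET_PATH)
            break
        except (FileNotFoundError, ConnectionRefusedError):
            client.close()
            client = socket.socket(socket.AF_UNIX)
            time.sleep(0.05)
    client.sendall(json.dumps({"args": args}).encode())
    return json.loads(client.recv(4096))
```

The agent only ever invokes the CLI side; even a fully context-poisoned agent has nothing to exfiltrate from its own process.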
        • linkregister11 hours ago
          In other words, a wrapper around an MCP that&#x27;s less verbose.
          • throwup2387 hours ago
            MCP is a wrapper around <i>it</i>. The CLI-daemon RPC pattern is much older and is used all over the place in modern systems.
          • otabdeveloper43 hours ago
            &quot;MCP&quot; here is not needed.
      • usrbinbash10 hours ago
And in a skill, I can store the secret in the skill itself, or in secure storage the skill accesses, and the agent never gets to see the secret.<p>Sure, if I want my agents to use naked curl on the CLI, they need to know secrets. But that&#x27;s not how I build my tools.
        • lukewarm7078 hours ago
What stops the agent from echoing the secure storage?<p>What I see is that you give it a password manager, it thinks, &quot;oh, this doesn&#x27;t work, let me read the password&quot;, and of course it sends it off to OpenAI.
          • usrbinbash2 hours ago
&gt; what stops the agent from echoing the secure storage?<p>The fact that it doesn&#x27;t see it and cannot access it.<p>Here is how this works, highly simplified:<p><pre><code>def tool_for_privileged_stuff(context: comesfromagent):
    creds = _access_secret_storage(framework.config.storagelocation)
    response = do_privileged_stuff(context.whatagentneeds, creds)
    return response  # the agent will get this, which is a string
</code></pre>This, in a much more complex form, runs in my framework. The agent gets told that this tool exists. It gets told that it can do privileged work for it. It gets told how `context` needs to be shaped. (When I say &quot;it gets told&quot;, I mean the tool describes itself to the agent; I don&#x27;t have to write this manually, of course.)<p>The agent never accesses the secrets storage. The tool does. The tool then uses the secret to do whatever privileged work needs doing. The secret never leaves the tool, and is never communicated back to the agent. The agent also doesn&#x27;t need to, and indeed cannot, give the tool a secret to use.<p>And the &quot;privileged work&quot; the tool CAN invoke does not include talking to the secrets storage on behalf of the agent.<p>All the info, and indeed the <i>ability</i> to talk to the secrets storage, belongs to the framework the tool runs in. The agent cannot access it.
            • mvkg45 minutes ago
              If the tool fails for some reason, couldn&#x27;t an overly eager agent attempt to fix what&#x27;s blocking it by digging into the tool (e.g. attaching a debugger or reading memory)? I think the distinction here is that skill+tool will have a weaker security posture since it will inherently run in the same namespaces as the agent where MCP could impose additional security boundaries.
          • jgilias8 hours ago
            OpenAI is not the worst it could or would send it to.
      • seriousmountain9 hours ago
        [dead]
      • pavelbuild8 hours ago
        [dead]
    • zaphirplane10 hours ago
This has been hashed to death and back. MCP allows a separation between the agent and the world: at its most basic, not giving the agent your token, or changing an HTTP header, or forcing a parameter.<p>Well, yes, you don’t need those things all the time, and who knows if the inventor of MCP had this idea in mind, but here we are.
      • Aperocky4 hours ago
The separation is being oversold as if only MCP can do it, which is laughable. Any CLI can trivially do exactly what MCP does in terms of separation.
    • woeirua14 hours ago
      Ok, but there are still many environments where an LLM will not have access to a CLI. In those situations, skills calling CLI tools to hook into APIs are DOA.
      • egeozcan14 hours ago
What are the advantages of an environment that doesn&#x27;t have access to a CLI? You end up having to run and maintain your own server, or pay someone else to maintain one, just so the AI has access to tools. Can&#x27;t you just run the AI on that server?
        • DrJokepu14 hours ago
          The advantage is that I can have it in my pocket.
          • Aperocky13 hours ago
            gateway agent is a thing for many months now (and I don&#x27;t mean openclaw, that&#x27;s grown into a disaster security wise). There are good, minimal gateway agents today that can fit in your pocket.
          • patates10 hours ago
            Why can&#x27;t you have the agent running on its own server&#x2F;vm in your pocket?
        • daemonologist14 hours ago
          Obvious example is a corporate chatbot (if it&#x27;s using tools, probably for internal use). Non-technical users might be accessing it from a phone or locked-down corporate device, and you probably don&#x27;t want to run a CLI in a sandbox somewhere for every session, so you&#x27;d like the LLM to interface with some kind of API instead.<p>Although, I think MCP is not really appropriate for this either. (And frankly I don&#x27;t think chatbots make for good UX, but management sure likes them.)
          • nostrebored14 hours ago
            Why are they not calling APIs directly with strictly defined inputs and outputs like every other internal application?<p>The story for MCP just makes no sense, especially in an enterprise.
            • ok_dad14 hours ago
              MCP is an API with strictly defined inputs and outputs.
              • nostrebored13 hours ago
                This is obviously not what it is. If I give you APIGW would you be able to implement an MCP server with full functionality without a large amount of middleware?
                • ok_dad9 hours ago
I’ve implemented an MCP tool-calling client for my application, alongside OAuth for it. It was hard, but no harder than anything else similar. I implemented a client for inference with the OpenAI API spec for general inference providers, and it was similarly hard. MCP SDKs help make it easy; MCP servers are dead simple. Clients are the hard part, IMO.<p>MCP is basically just an RPC API that uses HTTP and JSON, with some other features useful for AI agents today.
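To illustrate the &quot;RPC API that uses HTTP and JSON&quot; point: an MCP `tools/call` exchange is roughly this JSON-RPC 2.0 shape (abbreviated, with a hypothetical `get_weather` tool; see the MCP spec for the full envelope and capability negotiation):

```python
import json

# A tools/call request, as it would appear on the wire (abbreviated).
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_weather",          # hypothetical tool name
        "arguments": {"city": "Berlin"},
    },
}

# A successful response carries the tool output as content blocks.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {"content": [{"type": "text", "text": "12C, cloudy"}]},
}

wire = json.dumps(request)
```

Everything else in the protocol (initialization, resources, prompts, auth) layers on top of this same request/response framing.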
                • victorbjorklund10 hours ago
If I gave you that, could you implement GraphQL from scratch without a large amount of middleware? Or are we now saying GraphQL APIs are not APIs?
                • notpushkin11 hours ago
                  Sorry, could you rephrase that?
              • oblio11 hours ago
                Does MCP support authentication, SSO?
                • ok_dad9 hours ago
                  Yes it’s literally just standard OAuth that’s defined in the MCP spec. I spent this week implementing an auth layer for my app’s MCP client gateway.
                • notpushkin11 hours ago
                  It supports OAuth, IIRC. But I suppose the internal chatbot itself would require auth, and pass that down to the tools it calls.
                  • insin10 hours ago
                    The chatbot app initiates an OAuth flow, user SSOs, chatbot app receives tokens to its callback URL, then tool calls can access whatever the user can access.<p>If you use the official MCP SDK, it has interfaces you implement for auth, so all you need to do is kick off the OAuth flow with a URL it figures out and hands you, storing the resulting tokens and producing them when requested. It also handles using refresh tokens, so there&#x27;s just a bit of light friendly owl finishing on top.<p>Source: I just implemented this for our (F100) internal provider and model agnostic chat app. People can&#x27;t seem to see past the coding agents they&#x27;re running on their own machines when MCP comes up.
            • woeirua12 hours ago
              MCP really only makes sense for chatbots that don’t want to have per session runtime environments. In that context, MCP makes perfect sense. It’s just an adapter between an LLM and an API. If you have access to an execution engine, then yes CLI + skills is superior.
              • functional_dev3 hours ago
Actually, local MCP just spawns a subprocess and talks via stdin&#x2F;stdout... same as a CLI tool. The extra layer only exists in the remote case.<p>This might help if you&#x27;re interested: <a href="https:&#x2F;&#x2F;vectree.io&#x2F;c&#x2F;implementation-details-of-stdio-and-sse-transport-streams-in-cross-process-communication" rel="nofollow">https:&#x2F;&#x2F;vectree.io&#x2F;c&#x2F;implementation-details-of-stdio-and-sse...</a>
              • 9dev10 hours ago
                Only is doing a lot of work here. There are tons of use cases aside from local coding assistants, e.g., non-code related domain specific agentic systems; these don’t even necessarily have to be chatbots.
                • friendzis9 hours ago
OP&#x27;s point is about per-session sandboxes, not them necessarily being &quot;chatbots&quot;. But if you don&#x27;t bury the agent in a fresh sandbox for every session, you have bigger problems to worry about than MCP vs CLI anyway.
          • friendzis10 hours ago
            &gt; and you probably don&#x27;t want to run a CLI in a sandbox somewhere for every session<p>You absolutely DO want to run everything related to LLMs in a sandbox, that&#x27;s basic hygiene
            • williamdclt8 hours ago
              You&#x27;re missing their point, they&#x27;re saying that you&#x27;d need a sandbox -&gt; it&#x27;d be a pain -&gt; you don&#x27;t want to run a CLI _at all_
        • ghywertelling8 hours ago
          [dead]
      • hansonkd14 hours ago
Idk, just have a standard internet-request tool that skills can describe endpoints to. You could even mock `curl` for the same CLI feel.
        • woeirua13 hours ago
          Now you’ve replicated MCP but with extra steps and it’s harder to debug.
          • hansonkd4 hours ago
It&#x27;s actually simpler, since the skill can be 100% an MD file.
      • yawnxyz14 hours ago
        skills can have code bundled with them, including MCP code
        • woeirua13 hours ago
          The agent still doesn’t have an execution environment. It can’t execute the code!
          • yawnxyz10 hours ago
            well that&#x27;s harness territory! give it the right harness&#x2F;environment!!
    • lll-o-lll10 hours ago
Cool cool. Except.<p>What about auth? Authn and authz. Should the agent always be you? If not, does every API support keys? If so, no fears about context-poisoned agents leaking those keys?<p>One thing an MCP (server) gives you is a middleware layer to control agent access. Whether you need that is use-case dependent.
      • mstipetic10 hours ago
        Also resources - which are by far the coolest part of MCP. Prompts? Elicitation? Resource templates? If you think of MCP as only a replacement for tool calls I can see the argument but it&#x27;s much more than that.
      • friendzis10 hours ago
        &gt; If not, every API supports keys?<p>How would MCP help you if the API <i>does not</i> support keys?<p>But that&#x27;s not the point. The agent calls CLI tools, which reads secrets from somewhere where the agent cannot even access. How can agent leak the keys it does not have access to?<p>You ARE running your agents in containers, right?
        • lll-o-lll9 hours ago
          &gt; How would MCP help you if the API does not support keys?<p>Kerberos, OAuth, Basic Auth (username&#x2F;password), PKI. MCP can be a wrapper (like any middleware).<p>&gt; But that&#x27;s not the point. The agent calls CLI tools, which reads secrets from somewhere where the agent cannot even access. How can agent leak the keys it does not have access to?<p>If the cli can access the secrets, the agent can just reverse it and get the secret itself.<p>&gt; You ARE running your agents in containers, right?<p>Do you inject your keys into the container?
          • Marha015 hours ago
            &gt; If the cli can access the secrets, the agent can just reverse it and get the secret itself.<p>What do you mean by this? How &quot;reverse it&quot;? The CLI tool can access the secure storage, but that does not mean there is any CLI interface in the tool for the LLM to call and get the secret printed into the console.
            • _flux5 hours ago
In principle it could use e.g. `gdb` and step through until it gets the secret. Or it can know ahead of time where the app stores the credentials.<p>We could use suid binaries (e.g. sudo) to prevent that, but currently I don&#x27;t think we can. Most anyone would agree that using a separate process, for which the agent environment provides a connection, is a better solution.
    • rimliu9 hours ago
      what you want and what works may be very different things.
  • alierfan15 hours ago
    This isn&#x27;t a zero-sum game or a choice of one over the other. They solve different layers of the developer experience: MCP provides a standardized, portable interface for external data&#x2F;tools (the infrastructure), while Skills offer project-specific, high-level behavioral context (the orchestration). A robust workflow uses MCP to ensure tool reliability and Skills to define when and how to deploy those tools.
    • Aperocky13 hours ago
MCP is just CLI wrapped in boxes.<p>CLI is the same API in a more concise format. At minimum, the same amount of context overhead exists for MCP, but most of the time more, because the boxes have size.<p>CLI can be secure; the AWS CLI is doing just fine. You can also play simple tricks to hide secrets in a daemon or run them remotely, and all of them are still smaller than an MCP.
      • BeetleB1 hour ago
        I&#x27;ve always wondered: Doesn&#x27;t the fact that the MCP input&#x2F;output is more structured lead to higher reliability? With MCP you declare the types for input (string, int, list, etc) and output.<p>As part of our product, we have an MCP server. Since many of our MCP tools are expensive, for our tests we simply give the LLM all the tool descriptions (but in text form, not structured) and ask it which tool it would call for a given query and assert on the response.<p>The tests are flaky. In practice, I&#x27;ve always seen the LLM make the right tool call with the proper formatting of args, etc. In the tests (same LLM model), it occasionally makes mistakes on the argument types and it has to try again before it gets it right.<p>My assumption was that the structure MCP provides was the reason there was a discrepancy.
        • Aperocky48 minutes ago
This may be one of the areas where MCP is ok-ish, though at a huge cost to context.
          • BeetleB37 minutes ago
            As I and others have pointed out: The context problem with MCP is mostly solved.<p>See <a href="https:&#x2F;&#x2F;news.ycombinator.com&#x2F;item?id=47719249">https:&#x2F;&#x2F;news.ycombinator.com&#x2F;item?id=47719249</a> for an example I gave.
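For reference on the structured side of this discrepancy: MCP tools declare their argument types as JSON Schema under `inputSchema`. A hypothetical declaration, plus a toy checker for illustration (the tool name, fields, and checker are all made up; a real harness uses a full JSON Schema validator), might look like:

```python
# Hypothetical MCP-style tool declaration (tool name and fields are made up).
# The inputSchema is plain JSON Schema, so a harness can reject badly typed
# arguments before the call ever reaches the server.
TOOL = {
    "name": "get_revenue",
    "description": "Total revenue between two dates.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "start_date": {"type": "string"},
            "end_date": {"type": "string"},
            "limit": {"type": "integer"},
        },
        "required": ["start_date", "end_date"],
    },
}

def check_args(tool: dict, args: dict) -> list[str]:
    """Tiny subset of JSON Schema checking, for illustration only."""
    schema = tool["inputSchema"]
    py_types = {"string": str, "integer": int, "object": dict}
    errors = [f"missing required arg: {k}"
              for k in schema.get("required", []) if k not in args]
    for key, value in args.items():
        expected = schema["properties"].get(key, {}).get("type")
        if expected and not isinstance(value, py_types[expected]):
            errors.append(f"{key}: expected {expected}")
    return errors

print(check_args(TOOL, {"start_date": "2024-01-01", "limit": "ten"}))
# -> ['missing required arg: end_date', 'limit: expected integer']
```

When the same tools are described to the model as free text instead, nothing enforces those types, which is one plausible explanation for the flakiness described above.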
    • zhdc111 hours ago
Completely agree. I don&#x27;t see why people view this as an either-or decision.<p>Also worth mentioning that some paid MCP providers offer actual added value. Sure, I can use curl or a self-hosted crawler for web searches, but is it really worth the pain?
    • chris_ivester9 hours ago
      This is exactly right, and I&#x27;d add one more layer that the thread is mostly missing: this combination matters most when the agent itself is hosted in the cloud rather than running locally. Skills + MCP is the architecture for cloud-hosted agents - Skills give the agent its context and workflow, MCP tools give it reach into external services without the agent needing to manage credentials or runtime dependencies.
    • dvcrn8 hours ago
      Hi, author here! I fully agree with your comment here and that’s exactly my point in the post: Different tools that work great for different tasks. If anything, the post is a take against treating Skills + CLI as a zero-sum replacement for MCP, and calling MCP dead&#x2F;outdated<p>Especially portability is just not possible with Skills+CLI (yet). I can use the same MCP servers through remote MCP on my phone, web, iPad, in ChatGPT, Perplexity, Claude, Mistral and so on, which I can’t do with Skills.
    • soco10 hours ago
Also, the skills can be ignored or thwarted if the LLM feels like it, while a policy at the MCP server level stays there.
  • _pdp_8 hours ago
    Scanning through the comments here I am almost certain the majority of people in this thread run coding agents on-device. Skills that access already available resources is then more convenient and you can easily make the argument that it is more agronomic.<p>That being said, majority of users on this planet don&#x27;t use AI agents like that. They go to ChatGPT or equivalent. MCP in this case is the obvious choice because it provides remote access and it has better authentication story.<p>In order to make any argument about pro&#x2F;con of MCP vs Skills you first need to find out who is the user.
    • thecupisblue8 hours ago
I am not 100% sure I follow your train of thought.<p>Isn&#x27;t an API what they want in that case?<p>An &quot;MCP for a local app&quot; is just an API that exposes the internal workings of the app. An &quot;MCP for mixpanel&quot; is just an API that exposes the Mixpanel API behind auth. There is nothing special about them for any type of user. It&#x27;s just that MCPs were &quot;made popular&quot;.<p>For the same type of user, I have built better and smoother solutions that included 0 MCP servers, just tools and pure APIs. Define a standard tool DX and your LLM can write these tools, no need to run a server anywhere.<p>That is also what the author seems to be mistaken about - you don&#x27;t need a CLI. A CLI is used because the DX is nice and easily composable with all the preexisting bash tooling that is ingrained into every LLM&#x27;s dataset. You don&#x27;t need a .env file if you&#x27;re using an API with a skill. A skill can include a script, or mentions of tools, and you are the one who controls these.<p>All in all, the whole &quot;MCP vs Skill&quot; debate online is mostly based on fundamental misunderstandings of LLMs and how they work, how harnesses work, and how APIs in general work, with a lot of it being fueled by people who have no relevant coding experience and are just youtube&#x2F;twitter &quot;content creators&quot;.<p>Some arguments against MCPs, no matter who the user is:<p>- MCP is just a noisy, hacky wrapper around an API or IPC (well, an API behind IPC).<p>- MCPs are too noisy for LLMs to be useful long-term, as they require a server.<p>- You don&#x27;t need an MCP, you need an easily accessible API with simple DX that the machine can use with as little context and decision making as required.<p>- Skills are better than MCP because they basically encode the API docs&#x2F;context in an LLM-friendly manner. No need to run servers, just push text to the system prompt.
      • _pdp_8 hours ago
You are mostly right, except you&#x27;re forgetting that not all SaaS companies want their users to shoot themselves in the foot by exposing the entire API surface and all of its quirks and risks to AI agents.<p>Furthermore, in many cases some APIs, for better or worse, are not even sufficient. For example, the Notion MCP has full-text search capabilities. Their API allows searching by title only. I don&#x27;t know why, but I am sure there are reasons.<p>MCP looks redundant until you start working with real users that don&#x27;t know a thing about AI agents, programming and security.
        • thecupisblue7 hours ago
          Honestly it&#x27;s on them, not on the users.<p>In today&#x27;s day and age, it&#x27;s absurdly easy to create a proxy API for your API that only exposes a subset of operations. And not like other &quot;easy&quot; things which depend on them having done &quot;the right thing&quot; before, like OpenAPI specs, auth scoping etc. This is so easy, even corporations consider it easy, and everything there is a PITA.<p>This is simple to make, to document and since it&#x27;s a proxy you&#x27;re also able to include all bunch of LLM friendly shenanigans and overly verbal errors with suggestions to fix.<p>Shit, I should obviously make a SaaS for this, huh?
          • 0x696C69615 hours ago
            If this is not &#x2F;s then you need to read the MCP spec.
    • Oras8 hours ago
      &gt; majority of users on this planet don&#x27;t use AI agents like that<p>Source?
      • Jaxkr8 hours ago
        Common sense. Most users are not running Claude Code or an on-device coding agent.<p>They&#x27;re using ChatGPT, Gemini, or Claude on the web.
    • vasco8 hours ago
      More agronomic means shittier, eh? I guess you meant ergonomic but funny typo
      • _pdp_8 hours ago
        Yep, and yes my bad. I typed the comment quickly without using AI.
  • localhost300023 minutes ago
How I think about this:<p>If you&#x27;re using an agent in a shell environment with unfettered internet access and code execution: CLI + Skills.<p>If you&#x27;re using a hosted agent on a website or in an app without code execution and limited&#x2F;no internet access: MCP.<p>We want both patterns. Folks who are agro about MCP do ~all of their work in the former, so it seems pointless. Most people interact with agents in the latter.
  • WhyNotHugo5 hours ago
    I like skills because they rely on the same tools which humans rely upon. A well-written skill can be read and used by a human too.<p>A skill is just a description for how to use an existing CLI tool. You don&#x27;t need to write new code for the LLM to interact with some system. You just tell the LLM to use the same tool humans do. And if you find the CLI is lacking in some way, you can improve it and direct human usage benefits from that improvement too.<p>On the other hand, an MCP requires implementing a new API for a service, an API exclusive to LLMs, and keeping parallel documentation for that. Every hour of effort put into it is an hour that&#x27;s taken away from improving the human-facing API and documentation.<p>The way skills are lazy-loaded when needed also keeps context clean when they&#x27;re not used. To be fair, MCPs <i>could</i> be lazy-loaded the same way, that&#x27;s just an implementation detail.
  • noisy_boy2 hours ago
I feel like MCPs are an encapsulation of multiple steps, where the input to the first step is sufficient to drive the flow. Why would I spend tokens for the LLM to do reasoning at each of the steps when I can just provide the input + an MCP call backed by a fixed program that can deal with the overall flow deterministically? If I have to do the same series of steps every time, a script beats the LLM doing each step individually in terms of cost and time. If the flow involved some sort of fuzzy analysis or decision making in multiple places, I would probably let the LLM carry out the flow or break it into a combination of MCP calls orchestrated by the LLM.<p>In my case, my MCP is set up with the endpoints being a very thin LLM-facing layer, with the meat of the action being done by helper methods. I also have CLI scripts that import&#x2F;use the same helpers, so the core logic is centralized and the only difference is that thin layer, which could be the LLM endpoint or the CLI&#x27;s argparse. If I need another type of interface, that can also call the same helpers.
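The shared-helper layout described here can be sketched roughly like this (all names are hypothetical, and the actual MCP tool registration is framework-specific, so it is shown as a plain function):

```python
# Sketch of the thin-interface / shared-helper pattern (names hypothetical).
# The core flow lives in one helper; the MCP endpoint and the CLI are both
# thin adapters around it, so the logic stays centralized.
import argparse

def run_report_flow(customer_id: str) -> dict:
    """Core logic: a fixed, deterministic multi-step flow."""
    # ... fetch data, transform, validate ...
    return {"customer": customer_id, "status": "ok"}

def mcp_run_report(customer_id: str) -> dict:
    """Thin LLM-facing layer; a real server would register this as a tool."""
    return run_report_flow(customer_id)

def main(argv=None):
    """Thin CLI layer reusing the same helper via argparse."""
    parser = argparse.ArgumentParser(description="Run the report flow")
    parser.add_argument("customer_id")
    args = parser.parse_args(argv)
    print(run_report_flow(args.customer_id))

if __name__ == "__main__":
    main(["acme-42"])  # demo invocation with a made-up customer id
```

Whichever interface the caller uses, only the adapter differs; the deterministic flow itself never changes.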
  • neosat1 hour ago
    The juxtaposition of MCP <i>vs</i> Skills in the article is very strange. These are not competing ways to achieve something. Rather skills is often a way to enable an optimization on top of MCPs.<p>A simplified but clarifying way to think about it is that MCP exposes all the things that <i>can</i> be done, and Skills encode a workflow&#x2F;expertise&#x2F;perspective on how something <i>should</i> be done given all the capabilities.<p>So I&#x27;m not sure why the article portrays one to be conflicting with the other (e.g. &quot;the narrative that “MCP is dead” and “Skills are the new standard” has been hammered into my brain. Everywhere I look, someone is celebrating the death of the Model Context Protocol in favor of dropping a SKILL.md into their repository.&quot;).<p>You can just not choose to use a skill if it&#x27;s not useful. But if it&#x27;s useful a skill can add to what an MCP alone can do.
  • grensley14 hours ago
    The &quot;only skills&quot; people are usually non-technical and the &quot;only CLI&quot; people are often solo builders.<p>MCP makes a lot of sense for enterprise IMO. Defines auth and interfaces in a way that&#x27;s a natural extension of APIs.
    • bikelang14 hours ago
      I think many of us have been burned by the absolutely awful and unstable JIRA MCP and found that skills using `acli` actually work and view the rest of the MCP space thru that lens. Lots of early - and current! - MCP implementations were bad. So it’s an uphill battle to rebuild reputation.
      • 0x696C69615 hours ago
        If Atlassian put out a horrible CLI tool, would your conclusion be that &quot;CLIs are bad&quot;?
        • bikelang2 hours ago
          The Atlassian CLI is pretty bad too! But at least the robot can consistently use it. And I can use it to help the robot figure out Atlassian’s garbage data structures. There’s not much I can do to debug their awful MCP.
      • walthamstow9 hours ago
        `acli` doesn&#x27;t cover Confluence and I found it limited compared to the MCP by sooperset on GitHub.
      • hnlmorg10 hours ago
Can you share more about acli?<p>Literally my biggest use case for MCP is Jira and Confluence
        • dugmartin8 hours ago
          It is available here:<p><a href="https:&#x2F;&#x2F;developer.atlassian.com&#x2F;cloud&#x2F;acli&#x2F;guides&#x2F;introduction&#x2F;" rel="nofollow">https:&#x2F;&#x2F;developer.atlassian.com&#x2F;cloud&#x2F;acli&#x2F;guides&#x2F;introducti...</a><p>It has a pretty discoverable cli syntax (at least for Claude). I use it in my custom skills to pull Jira story info when creating and reviewing specs.
    • bicx14 hours ago
      I built an internal company MCP that uses Google Workspace auth and injects a mix of guidance (disguised as tools) on how we would like certain tasks to be accomplished via Claude as well as API-like capabilities for querying internal data and safely deploying small apps internally.<p>I’d really love to get away from the SSE MCP endpoints we use, as the Claude desktop app can get really finicky about disconnects. I thought about distributing some CLIs with Skills instead. But, MCP can be easily updated with new tools and instructions, and it’s easy to explain how to add to Claude for non-technical people. I can’t imagine trying to make sure everyone in my company had the latest skill and CLI on their machine.
    • CuriouslyC5 hours ago
CLIs are technically better for a number of reasons.<p>If an enterprise already has internal tooling with authn&#x2F;z, there&#x27;s no reason to overlay on top of that.<p>MCP&#x27;s main value is as a structured description of an agent-usable subset of an API surface with community traction, so you can expect it to exist and to be more relevant than the OpenAPI docs.
    • zhdc111 hours ago
      Or just rapidly spinning up something.<p>Codex -&gt; LiteLLM -&gt; VLLM<p><pre><code> |____&gt; MCP </code></pre> Takes a couple of minutes to setup.
    • jillesvangurp14 hours ago
      I&#x27;ve started thinking of these systems as legacy systems. We have them. They are important and there&#x27;s a lot of data in them. But they aren&#x27;t optimal any more.<p>How we access them and where data lives is essentially an optimization problem. And AI changes what is optimal. Having data live in some walled garden with APIs designed to keep people out (most SAAS systems) is arguably sub optimal at this point. Sorting out these plumbing issues is actually a big obstacle for people to do productive things via agentic tools with these systems.<p>But a good way to deal with this is to apply some system thinking and figure out if you still need these systems at all. I&#x27;ve started replacing a lot of these things with simple coder friendly solutions. Not because I&#x27;m going to code against these things but because AI tools are very good at doing that on my behalf. If you are going to access data, it&#x27;s nicer if that data is stored locally in a way that makes it easy to access that data. MCP for some SAAS thing is nice. A locally running SQL database with the data is nicer. And a lot faster to access. Processing data close to where it is stored is optimal.<p>As for MCP. I think it&#x27;s not that important. Most agentic coding tools switch effortlessly between protocols and languages. In the end MCP is just another RPC protocol. Not a particularly good or optimal one even. If you had an API or cli already, it&#x27;s a bit redundant to add MCP. Auth is indeed a key challenge. And largely not solved yet. I don&#x27;t think MCP adds a whole lot of new elements for that.
  • lifeisstillgood10 hours ago
I agree for a slightly different reason - human stupidity.<p>Despite many decades of proof that automation simplifies and reveals the illogical in organisations, digitisation has mostly stopped below the “CXO” level - and so there are no APIs or CLIs available to anyone - but MCP is cutting through.<p>Just consider:<p>Throughout companies large and small, Agile is what coders do; real project managers still use deadlines and upfront design of what will be in the deadline - so any attempt to convert the whole company to react to the reality of the road is blocked.<p>Reports flow upwards - but through the reporting chain. So those PowerPoints are … massaged to tell the correct story, and the more levels they are massaged through, the less they resemble reality. Everyone knows this, but managing the transition means potentially losing control …<p>There are plenty of digitisation projects going on - but do they enable full automation, or are they another case of an existing political arena building its own political choices into software - “our area in a database to be accessed via a UI by our people” - almost never “our area to be used by others via API and totally replacing our people”.<p>(I think I need to be more persuasive.)
    • nimonian7 hours ago
      I&#x27;m with you on this (I think). Digitising my org is much easier if I can assume my colleagues&#x27; agents will be acting on their behalf. Even if I can&#x27;t convince most humans to cooperate with solutions, I can usually trust their agents to do so. MCP hides the wiring somewhat, which I enjoy.
  • hasyimibhar3 hours ago
We use MCP at work. In my team of about 6 people, everyone has Claude access, but about half of us are non-engineers. I built an MCP over our backend and Clickhouse, and set up a Claude Project with instructions (I&#x27;m assuming this counts as a skill?). The instructions are mostly for enriching the analytics data that we have, e.g. hinting Claude to prefer certain datasets for certain questions.<p>This allows the non-engineers (and also engineers) to use Claude Desktop for day-to-day operations (e.g. ban user X for fraud) and analytics (e.g. how much revenue did we make in the past 7 days? Any fraud patterns?). The MCP helps to add audit, authorization, and approval layers (certain ops actions like banning a user will require approval).
  • nextaccountic10 hours ago
&gt; Context Bloat: Using a skill often requires loading the entire SKILL.md into the LLM’s context window, rather than just exposing the single tool signature it needs. It’s like forcing someone to read the entire car’s owner’s manual when all they want to do is call car.turn_on().<p>MCP has severe context bloat just by starting a thread. If harnesses were smart enough to summarize, at install time, the tools provided by an MCP server (rather than dumping the whole thing into context), it would be better. But a worse problem is that the output of MCP goes straight into the context of the agent, rather than being piped somewhere else.<p>A solution is to have the agent run a cli tool to access mcp services. That way the agent can filter the output with jq, store it in a file for analysis later, etc.
    • dvcrn8 hours ago
      &gt; MCP has severe context bloat just by starting a thread<p>Hi, author here. The “MCP has severe context bloat” problem has already been solved with tool discovery. Modern harnesses don’t load every single tool + their descriptions into the context on load, but use tool search to discover the tools lazily when they’re needed. You can further limit this by telling the LLM exactly which tool to load, the rest will stay unloaded &#x2F; invisible<p>&gt; But a worse problem is that the output of MCP goes straight into the context of the agent, rather than being piped somewhere else<p>This is semi-solved as agents and harnesses get smarter. Claude Code for example does discovery in subagents. So it spawns a sub-agent with a cheaper model that explores your codebase &#x2F; environment (also through MCP) and provides a summary to the parent process. So the parent won’t get hit with the raw output log
    • gum_wobble10 hours ago
      &gt; A solution is to have the agent run a cli tool to access mcp services.<p>lol and why do you need mcp for that, why cant that be a classic http request then?
      • senordevnyc3 hours ago
        I use this pattern with mcp-cli, and I do that instead of a curl request for two reasons: 1) not leaking my creds into the agent session, and 2) so I can allowlist &#x2F; denylist specific tools on an MCP server, which I can&#x27;t do as easily if I give the agent an API token and curl.
    • mathis-l10 hours ago
      At least when working with local MCP servers I solved this problem by wrapping the mcp tools inside an in-memory cache&#x2F;store. Each tool output gets stored under a unique id and the id is returned with the tool output. The agent can then invoke other tools by passing the id instead of generating all the input. Adding attribute access made this pretty powerful (e.g. pass content under tool_return_xyz.some.data to tool A as parameter b). This saves token costs and is a lot faster. Granted, it only works for passing values between tools but I could imagine an additional tool to pipe stuff into the storage layer would solve this.
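A minimal sketch of such an output store, assuming nothing about any particular MCP framework (the `tool_return_…` naming and attribute-path syntax are illustrative):

```python
# Sketch of an in-memory tool-output store: every result is cached under an
# id, and later tool calls may pass "<id>.<attribute.path>" instead of
# routing the whole payload back through the model's context.
import uuid

_store: dict[str, object] = {}

def remember(result: object) -> str:
    """Store a tool output and return its reference id."""
    ref = f"tool_return_{uuid.uuid4().hex[:8]}"
    _store[ref] = result
    return ref

def resolve(ref: str) -> object:
    """Resolve 'tool_return_xyz.some.data' into the stored value."""
    ref_id, _, path = ref.partition(".")  # ids contain no dots
    value = _store[ref_id]
    for key in path.split(".") if path else []:
        value = value[key]
    return value

# Usage: a first tool returns a big payload; the agent passes only the id.
ref = remember({"some": {"data": [1, 2, 3]}, "huge_log": "..."})
print(resolve(f"{ref}.some.data"))  # -> [1, 2, 3]
```

The agent never re-emits `huge_log`; only the short reference travels through its context.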
  • password43215 hours ago
    Surprised to see no mention in the article or discussion yet about using MCPs in &#x27;code mode&#x27;, where an API is generated client-side relying on MCP primarily as an interface standard. I&#x27;m still learning but I&#x27;ve read this reduces the amount of context required to use the MCP.<p>It seems like a lot of the discussion is arguing in favor of API usage without realizing that MCP basically standardizes a universal API, thus enabling code mode.
  • losvedir6 hours ago
    For my use I prefer just a raw CLI. As long as it&#x27;s built following conventions (e.g. using cobra for a Go app) then the agent will just natively know how to use it, by which I mean how to progressively learn what it needs by reading the `help` output. In that case you don&#x27;t need a skill or anything. Just say &quot;I want this information, use the xyz app&quot;. It will then try `xyz --help` or `xyz help` or a variant, just like a human would, see the subcommands, do `xyz help subcommand` and eventually find what it needs to do the job. Good tools provide an OAuth flow like `xyz login`, which will open a browser window where you can determine which resources you want to give the CLI (and thereby the agent) access to.<p>This only works for people using agents themselves on computers they control, rather than, e.g., the Claude web app, but is a good chunk of my usage.<p>I think people are either over or under thinking the auth piece, though. The agent should have access to <i>their</i> own token. Both CLIs and MCPs and even raw API requests work this way. I don&#x27;t think MCPs provide any further security. You should assume the agent can access anything in its environment and do everything up to what the credential permits. You don&#x27;t want to give <i>your</i> more powerful credential to the MCP server and hope that the MCP server somehow restricts the agent to doing less (it can probably find the credential and make out-of-band calls if it wants). The only way I think it could work like that is how... is it Sprite does it?... where you give use a fake token and have an off-machine proxy that it goes through where it MitMs the request and injects the real credential.
    • 0x696C69615 hours ago
      You run the MCP server outside of the agent sandbox so it doesn&#x27;t have access to the credentials.
      • lukewarm7075 hours ago
        yes and also you can firewall the container so that it can only contact the mcp&#x2F;proxy.<p>this way it doesn&#x27;t download a trojan or leak your data to someone
  • CharlieDigital5 hours ago
    One thing that I have found is that the platforms are surprisingly poor at consistently implementing MCP, which is actually a pretty simple protocol.<p>Take Codex, for example, it does not support the MCP prompts spec[0][1] which is quite powerful because it solves a lot of friction with deploying and synchronizing SKILL.md files. It also allows customization of virtual SKILL.md files since it allows compositing the markdown on the server.<p>It baffles me why such a simple protocol and powerful capability is not supported by Codex. If anyone from OpenAI is reading this, would love to understand the reasoning for the poor support for this relatively simple protocol.<p>[0] <a href="https:&#x2F;&#x2F;github.com&#x2F;openai&#x2F;codex&#x2F;issues&#x2F;5059" rel="nofollow">https:&#x2F;&#x2F;github.com&#x2F;openai&#x2F;codex&#x2F;issues&#x2F;5059</a><p>[1] <a href="https:&#x2F;&#x2F;modelcontextprotocol.io&#x2F;specification&#x2F;2025-06-18&#x2F;server&#x2F;prompts" rel="nofollow">https:&#x2F;&#x2F;modelcontextprotocol.io&#x2F;specification&#x2F;2025-06-18&#x2F;ser...</a>
  • robotobos15 hours ago
    Despite thinking this is AI-generated, I agree but everything has a caveat.<p>Skills are good for instilling non-repeatable, yet intuitive or institutional knowledge.<p>MCP’s are great for custom, repeatable tasks. After 5-10 runs of watching my LLM write the same exact script, I just asked it to hardcode the solution and make it a tool. The result is runs are way faster and repeatable.
    • dgb238 hours ago
      I would go further than this. Call the script yourself (or via some other mechanism&#x2F;program) and then give the results to the LLM.<p>The majority of processes don&#x27;t need nearly as many decision making points as an agent could deal with and look somewhat like this:<p>1. gather raw information =&gt; script<p>2. turn it into structured data =&gt; script<p>3. produce an actionable plan =&gt; script&#x2F;user&#x2F;agent (depends)<p>4. validate the plan =&gt; user<p>5. narrow down the implementation workflow and the set of tools needed =&gt; user&#x2F;agent<p>6. follow workflow iteratively =&gt; user&#x2F;agent<p>Doesn&#x27;t need to be this exact shape, but the lesson I learned is to quasi front load and structure as much as possible with scripts and data. That can be done with agent assistance as well, for example by watching it do the task, or a similar one, in freeform at first.
      • robotobos1 hour ago
        There&#x27;s definitely some optimization that can occur, like an orchestrator or Ralph.
    • ashraymalhotra15 hours ago
      You could hardcode the script as a file within a skill too right? Skills can contain code, not just markdown files.
      • robotobos1 hour ago
Have not tried, but interesting. I guess my concern would be that the Skill still takes up context space, whereas MCP is just using CPU.
    • dvcrn8 hours ago
      &gt; Despite thinking this is AI-generated, I agree but everything has a caveat.<p>Definitely not AI generated. I wrote this during a non-internet flight. :)
      • robotobos1 hour ago
        Haha sorry for the callout! Saw the M-dash and auto-assumed. Nice write up and thanks for sharing :)
    • et-al15 hours ago
      &gt; <i>Skills are good for instilling non-repeatable, yet intuitive or institutional knowledge.</i><p>Maybe I&#x27;m misinterpreting you, but can you explain this more? I&#x27;ve been using skills for repeatable tasks. Why an MCP instead?
      • robotobos1 hour ago
Saying &quot;non-repeatable&quot; was probably wrong. &quot;Unique&quot; might be better. Things LLMs aren&#x27;t naturally able to do or infer.
      • robotobos14 hours ago
        If the model can figure it out with tokens, but my institutional knowledge MCP tool can do it with a few CPU cycles, it’s faster and deterministic and repeatable.
    • sjdv19829 hours ago
      It is all about API contracts, right?<p>After the first run, you have a script and an API: the agent discovery mechanism is a detail. If the script is small enough, and the task custom enough, you could simply add the script to the context and say &quot;use this, adapt if needed&quot;.<p>Or am I misunderstanding you?
    • BenFrantzDale15 hours ago
      &gt; Skills are good for instilling non-repeatable, yet intuitive or institutional knowledge.<p>What about just putting that sort of thing in human-targeted documentation? Why call it a “skill” and hide it somewhere a human is less likely to look?<p>(Skills are nice for providing &#x2F;shortcuts.)
  • socketcluster3 hours ago
    I prefer skills with simple curl commands. It&#x27;s easy. You just need to create a server with HTTP endpoints and Claude (or other LLM) can call them with the curl commands you provide in your skills files. Claude is really good with curl and it&#x27;s a well known HTTP client so what Claude is doing is more transparent to the user.<p>Also, with skills, you can organize your files in a hierarchy with the parent page providing the most general overview and each child page providing a detailed explanation of each endpoint or component with all possible parameters and errors. I also made a separate page where I list all the common issues for troubleshooting. It works very well.<p>I created some skills for my no-code platform so that Claude could access and make changes to the control panel via HTTP. My control panel was already designed to update in real-time so it&#x27;s cool to watch it update as Claude creates the schema and adds dummy data in the background.<p>I spent a huge amount of effort on refining my HTTP API to make it as LLM-friendly as possible with flexible access control.<p>You can see how I built my skills marketplace from the docs page if anyone is interested: <a href="https:&#x2F;&#x2F;saasufy.com&#x2F;" rel="nofollow">https:&#x2F;&#x2F;saasufy.com&#x2F;</a>
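As a concrete illustration of this pattern, the body of such a skill file might look like the fragment below; the endpoint, collection name, and troubleshooting-file reference are invented for the example, not Saasufy&#x27;s actual API:

```markdown
# Users collection (hypothetical skill fragment)

List records (the token comes from the environment, never from this file):

    curl -s "https://api.example.com/collections/users" \
      -H "Authorization: Bearer $API_TOKEN"

Create a record:

    curl -s -X POST "https://api.example.com/collections/users" \
      -H "Authorization: Bearer $API_TOKEN" \
      -H "Content-Type: application/json" \
      -d '{"name": "Alice"}'

On a 401, re-check the token; for other errors see troubleshooting.md.
```

Because the skill is just text plus curl, the user can run the same commands by hand to see exactly what the model is doing.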
  • cphoover4 hours ago
    I think language grammars are an interesting way to define a ruleset too. Forget REST APIs or MCP servers for a second... Define a domain-specific language, and let the language model generate a valid instruction within the confines of that grammar.<p>Then pass the program along: your server or application can parse the instructions and work from the generated AST to do all sorts of interesting things, within the confines of your language features.<p>It&#x27;s verifiable, since output is constrained to the defined grammar and you supply the parser.<p>It is implicitly sandboxed by the powers you give to (or rather exclude from) your runtime via an interpreter&#x2F;compiler.<p>I&#x27;ve tried this before with a grammar I defined for searching documents, and found it quite good at creating valid, often complex, search instructions.
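    A toy version of this approach, with an invented search grammar (not the commenter's actual DSL): the model's output is only accepted if it parses, and the application works from the AST rather than the raw text.

```python
# Hypothetical search DSL:  title:"foo" AND (tag:bar OR tag:baz)
# Anything the LLM emits that doesn't parse is rejected before it
# ever touches the application.
import re

TOKEN = re.compile(r'\s*(?:(\()|(\))|(AND|OR)|(\w+):("(?:[^"]*)"|\w+))')

def tokenize(src):
    tokens, pos = [], 0
    while pos < len(src):
        m = TOKEN.match(src, pos)
        if not m:
            raise SyntaxError(f"bad input at {pos}: {src[pos:]!r}")
        if m.group(1): tokens.append(("LPAR", "("))
        elif m.group(2): tokens.append(("RPAR", ")"))
        elif m.group(3): tokens.append(("OP", m.group(3)))
        else: tokens.append(("TERM", (m.group(4), m.group(5).strip('"'))))
        pos = m.end()
    return tokens

def parse(tokens):
    """expr := atom (OP atom)* ; atom := TERM | '(' expr ')'"""
    def atom(i):
        kind, val = tokens[i]
        if kind == "TERM":
            return ("term", *val), i + 1
        if kind == "LPAR":
            node, i = expr(i + 1)
            if tokens[i][0] != "RPAR":
                raise SyntaxError("expected ')'")
            return node, i + 1
        raise SyntaxError(f"unexpected {kind}")
    def expr(i):
        node, i = atom(i)
        while i < len(tokens) and tokens[i][0] == "OP":
            op = tokens[i][1]
            rhs, i = atom(i + 1)
            node = (op.lower(), node, rhs)  # fold into binary AST nodes
        return node, i
    node, i = expr(0)
    if i != len(tokens):
        raise SyntaxError("trailing input")
    return node

ast = parse(tokenize('title:"foo" AND (tag:bar OR tag:baz)'))
```

    The interpreter you then run over `ast` defines the entire capability surface, which is where the implicit sandboxing comes from.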
  • usrbinbash10 hours ago
    &gt; The core philosophy of MCP is simple: it’s an API abstraction. The LLM doesn’t need to understand the how; it just needs to know the what.<p>Wrong. It needs to &quot;understand&quot; both these things. The only difference is where and how the strings explaining them are generated.
    • dgb239 hours ago
      That&#x27;s an important point.<p>Whether it&#x27;s tools, MCP or skills: they are fundamentally all just prompts. Even if the LLM is trained to recognize those and produce the right shape of tokens that validate most of the time.<p>But I wouldn&#x27;t use the word &quot;understand&quot; here, because that builds the wrong intuition. I think a more useful term would be &quot;get guided by&quot; or &quot;get nudged by&quot;. Even &quot;recognize&quot; is slightly misleading, because it implies too much.
  • 0xbadcafebee3 hours ago
    I have vibe-coded 4 different software projects recently, on multiple platforms. I added search, RAG, ticketing, notifications, voice, and more features to them, in 2 minutes. All I had to do was implement MCP client, and suddenly all that other complex functionality &quot;just worked&quot;, both locally and remotely.<p>Skills would have required me to 1) add all the skill files to all those projects (and maintain all those files), and 2) install software tools (some of these tools don&#x27;t have CLIs) to be usable by the skills. Not to mention: the skills aren&#x27;t deterministic! You have to iterate on a skill file for a while to get the LLM to reliably use it the way you want.
  • hereme8882 hours ago
    I see the real argument is against poorly-designed MCP servers and cases where a skill&#x2F;script would be a better fit.<p>If all you need is &quot;teach the model how to use an existing tool&quot;, then use a skill, or even scripts, which are great for bulk work or teaching workflows.<p>MCPs are good at giving agents a stable, app-owned interface to a system w&#x2F;o making the agents rediscover the integration every session. There&#x27;s no way a skill&#x2F;script would be able to handle the stuff I do via my local MCPs for managing certain apps and databases.
  • the_arun1 hour ago
    Don&#x27;t know if Skills &amp; MCP are comparable. One is static &amp; the other is dynamic. It is like comparing static content vs dynamic APIs. Probably we need both.
  • bharat10102 hours ago
    The MCP vs skills debate feels like it&#x27;s still very early days — I suspect we&#x27;ll look back in a year and laugh at how much we debated this once the patterns become more obvious through real-world use.
  • Aperocky14 hours ago
    Occam&#x27;s Razor spares none.<p>Everything will go to the simplest or the most convenient, often both, despite the resistance of the complexity lovers.<p>Sorry MCP, you are not as simple as a CLI&#x2F;skill combination, and no, you are not more secure just because you are buried under 3 levels of spaghetti. There is no reason for you to exist, just like Copilot. I don&#x27;t just wish, but know, you&#x27;ll go into obscurity like IE6.
    • j16sdiz13 hours ago
      Thanks for the 3x context usage because it needs to follow the installation steps. And extra credit for the auth-token leaks, because the token is sent in every call as context.
      • Aperocky4 hours ago
        Everything you said here just demonstrates that you don&#x27;t really understand the differences between MCP and CLI.<p>MCP is just a wrapper on top of an API layer that does RPC to a worker&#x2F;daemon. That API layer itself can be the CLI. You get no more context usage, and no extra security impact, because fundamentally the model is the same, just without the fluff.<p>You are probably thinking of a CLI as &quot;oh, I must pass everything and it is stateless&quot;, but only some need to be like that.
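        A minimal sketch of that "CLI with RPC" shape (the command name and flags are made up): the CLI is just a thin translator from argv into a JSON-RPC 2.0 envelope, the same wire format MCP itself speaks, which a real tool would then write to a local daemon's socket.

```python
import json

def make_request(method, params, req_id=1):
    """Build a JSON-RPC 2.0 request envelope (the format MCP also uses)."""
    return {"jsonrpc": "2.0", "id": req_id, "method": method, "params": params}

def cli(argv):
    # e.g. `mytool query --db prod "select 1"` becomes one RPC call;
    # a real tool would send this over the daemon's Unix socket and
    # print the response, keeping state in the daemon.
    method, *rest = argv
    return json.dumps(make_request(method, {"args": rest}))

wire = cli(["query", "--db", "prod", "select 1"])
```

        The daemon holds the connection pool, secrets, and state; the CLI is stateless per invocation but the system is not.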
    • lukewarm7078 hours ago
      i use mcp for security. you can have an airgapped agent that can still call online tools. for example, web search.<p>however it can&#x27;t get infected because there is no internet access.<p>the worst you can do is put your secrets in the web search box
      • Aperocky4 hours ago
        You can have that with CLI.<p>MCP is just a wrapper on top, there are no inherent differences other than complexity on top.<p>How do you think MCP work under the hood?
    • olalonde10 hours ago
      That&#x27;s &quot;Worse is better&quot; rather than &quot;Occam&#x27;s razor&quot;.
    • econ13 hours ago
      A webpage with a form should be good enough.
  • alexhans9 hours ago
    This frames MCP vs Skills as an either&#x2F;or, but they operate at different layers. MCP exposes capabilities and Skills may shape how capabilities are used.<p>Both are useful to different people (and role families) in different ways, and if you don&#x27;t feel certain pain points, you may not care about some of the value they provide.<p>Agent skills are useful because they&#x27;re standardized prompt sharing, but more than that, because they have progressive disclosure, so you don&#x27;t bloat your context the way you would with an inefficiently designed MCP. Their UX is also well aligned: &quot;&#x2F;SkillBuilder&quot; skills are provided from the start and give developers or non-traditional builders a good path to turn conversations into semi- or full automation. I use this mental model to focus on the iteration pattern and incremental building [1].<p>[1] <a href="https:&#x2F;&#x2F;alexhans.github.io&#x2F;posts&#x2F;series&#x2F;evals&#x2F;building-agent-skills-incrementally.html" rel="nofollow">https:&#x2F;&#x2F;alexhans.github.io&#x2F;posts&#x2F;series&#x2F;evals&#x2F;building-agent...</a>
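    The progressive disclosure mentioned above comes from the skill file layout: in Anthropic's skill format, only the YAML frontmatter sits in context by default, and the body is read when the skill triggers. A sketch (the skill name and contents here are invented):

```markdown
---
name: aws-pricing
description: Estimate AWS costs for our standard stack. Use when the
  user asks about infrastructure pricing.
---

# AWS pricing estimates

This body, and any scripts or reference files alongside it, are only
loaded once the agent decides the skill is relevant to the task.
```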
  • imron13 hours ago
    My biggest gripe with skills is that even clear and explicit instructions are regularly ignored - even when the skill is brief (&lt; 100 lines).<p>I’ll often see the agent saying it’s about to do something so I’ll stop it and ask “what does the xxx skill say about doing that?’ And it’ll go away and think and then say “oh, the skill says I should never do that”
  • jsw974 hours ago
    From the article: &quot;Sandboxing: Remote MCPs are naturally sandboxed. They expose a controlled interface rather than giving the LLM raw execution power in your local environment.&quot;<p>I think this is underappreciated. CLI access gives agents a ton of freedom and might be more effective in many applications. But if you require really fine granularity on permissions -- e.g., do lookups in this db and nothing else -- MCP is a natural fit.
  • fancyraccoon6 hours ago
    Really interesting post. The &quot;connectors vs manuals&quot; framing stuck with me because I think it points at something beyond the UX argument. A Skill that papers over an API loses the signal the friction was carrying. Working with a raw interface tells you something about the design.<p>The same thing plays out at the language layer. The pain of C++ multiple inheritance drove people toward better abstractions. If LLM&#x27;s absorb that friction before it reaches anyone, the signal that produces the next Go never gets felt by the people who could act on it.<p>Wrote about where that leads: <a href="https:&#x2F;&#x2F;blog.covet.digital&#x2F;a&#x2F;the_last_language_you_can_read.html" rel="nofollow">https:&#x2F;&#x2F;blog.covet.digital&#x2F;a&#x2F;the_last_language_you_can_read....</a>
  • ghm219915 hours ago
    For indie developers like myself, I often use ChatGPT desktop and Claude desktop for arbitrary tasks, though my main workhorse is a customized coding harness with CC daemons on my NAS. With the apps, I missed having access to my NAS server where my dev environment is. So I wrote a file system MCP and hosted it with a reverse proxy on my TrueNAS with Auth0. I wanted access to it from all platforms: ChatGPT mobile and desktop. Same for CC.<p>For ChatGPT desktop and Claude desktop my experience with MCPs connected to my home NAS is pretty poor. It (as in the app) often times out fetching data (even though there is no latency for serving the request in the logs), and often the existing connection gets invalidated between 2 chat turns and ChatGPT just moves on answering without the file in hand.<p>I am not using it for writing code; it&#x27;s mostly read-only access to the FS. Has anyone surmounted these problems for this access pattern and written about how to build MCPs to be reliable?
  • lewisjoe13 hours ago
    <p><pre><code> &gt; ChatGPT can’t run CLIs. Neither can Perplexity or the standard web version of Claude. Unless you are using a full-blown compute environment (like Perplexity Computer, Claude Cowork, Claude Code, or Codex), any skill that relies on a CLI is dead on arrival. </code></pre> Incorrect observation. Claude web does support skills upload. I guess Claude runs the code_interpreter tool and a filesystem in the background to run user-uploaded skills. ChatGPT business plans allow uploading custom skills on the web too.<p>I can see Skills becoming a standard soon. But the concern still holds. When you publish an MCP you free the user from installing anything. But with skills, what happens if the skill-running environment doesn&#x27;t have access to the CLI binary, or if it isn&#x27;t in PATH?
    • simonw3 hours ago
      Yeah, regular web chat Claude and ChatGPT both have full container access (even on the free version, at least for ChatGPT) which can run CLI tools.<p>Both of them can even <i>install</i> CLI tools from npm and PyPI - they&#x27;re limited in terms of what network services they can contact aside from those allow-listed ones though, so CLI tools in those environments won&#x27;t be able to access the public web.<p>... unless you find the option buried deep in Claude for enabling additional hosts for the default container environment to talk to. That&#x27;s a gnarly lethal trifecta exfiltration risk so I recommend against it, but the option is there!<p>More notes on ChatGPT&#x27;s ability to install tools:<p>- <a href="https:&#x2F;&#x2F;simonwillison.net&#x2F;2026&#x2F;Jan&#x2F;26&#x2F;chatgpt-containers&#x2F;" rel="nofollow">https:&#x2F;&#x2F;simonwillison.net&#x2F;2026&#x2F;Jan&#x2F;26&#x2F;chatgpt-containers&#x2F;</a>
  • vamsikrishna212 hours ago
    I do agree on this, but I think this is just for now. As models get better at reasoning, why can&#x27;t skills.md take the place of MCP altogether?
  • ookblah2 hours ago
    mcp is really easy for non-techies to understand. if i own the system and can install cli tools, cli + skills beat it every time and i can tweak, etc. if you&#x27;re asking someone else to do that there&#x27;s real friction. i&#x27;ll sometimes use mcp if i just want to get up and running and am not watching context as much, then if they offer a cli i&#x27;ll just move to skill + that, or write my own wrapper off the api.
  • s-xyz10 hours ago
    I never understood why there is a discussion about it, one or the other… both serve a different purpose and are complementary.
  • leonidasv15 hours ago
    This is the same as saying &quot;I still prefer hammer over screwdriver&quot;.
  • chris_money2028 hours ago
    I think the worst thing is when someone takes a clearly defined list of steps to do something and writes it as a skill rather than just having AI write it as a script. It’s like people have forgotten what scripting is.
  • qrbcards7 hours ago
    The comparison to app stores is interesting but I think MCP registries solve a different problem. App stores are for humans browsing. MCP registries are for agents discovering tools at runtime based on the task at hand. The user never browses — they describe what they need and the agent finds the tool.<p>That is a meaningful distribution shift. Products no longer need to be marketed to end users if an agent can find and invoke them directly. Skills require the developer to install them ahead of time, which means someone already decided this tool was relevant.
  • woeirua14 hours ago
    Anthropic says that Skills and MCPs are complementary, and frankly the pure Skills zealots tend to miss that in enterprise environments you’ll have chatbots or the like that don’t have access to a full CLI. It doesn’t matter if your skills tell the agent exactly what to do if they can’t execute the commands. Also, MCP is better for restricted environments because you know exactly what it can or cannot do. That’s why MCP will exist for some time still. They solve distinct problem sets.
    • nostrebored13 hours ago
      &gt; Also, MCP is better for restricted environments because you know exactly what it can or cannot do.<p>The continuous exploits of MCP despite limited adoption really makes this seem wrong.
  • Aperocky4 hours ago
    The most common mistake that I see here is people thinking only MCP can be bound to a server and store secrets and be called remotely<p>No, a CLI with RPC can do exactly that, just smaller. It goes lower in the exact same stack without the fluff.
  • Xenoamorphous11 hours ago
    I use both and don&#x27;t feel they&#x27;re mutually exclusive.<p>E.g. if I have some ElasticSearch cluster, I use a skill to describe the data, and if I ask the LLM to write code that queries ElasticSearch but to test it first it can use a combination of skill + MCP to actually run a query.<p>I think this model works nicely.
  • kohlerm7 hours ago
    Yeah, well. MCPs are better for use cases where remote access is required, but for development, what you need in the majority of cases is to manipulate local files. Skills are just the more natural solution here. You can argue whether Skills should come with more type information (MCPs are slightly better here), but otherwise it seems pretty clear to me that if you do not need remote access then MCPs are not really needed.
  • tomaytotomato10 hours ago
    As others have said I have found CLI tools much better<p>This is how I am structuring stuff in Claude Code<p>- Ansible setup github cli, git, atlassian cli, aws-cli, terraform cli tooling<p>- Claude hooks for checking these cli tools are authenticated and configured<p>- Claude skills to use the CLI tooling
  • iamsaitam4 hours ago
    Don&#x27;t they both solve different problems? This tribalism makes no sense.
  • michaelashley2911 hours ago
    100%. MCPs truly give the agent tools and allow it to make better-informed decisions, provided you have configured the right MCP tools. Skills are good for knowledge and general guidelines. They give context to the agent, and I have seen some skills that are excessively long and could eat into the agent&#x27;s context window. This tool <a href="https:&#x2F;&#x2F;protomcp.io&#x2F;" rel="nofollow">https:&#x2F;&#x2F;protomcp.io&#x2F;</a> helps a lot with testing MCP servers before integrating them into the agent workflow. You can even see the agent call different tools in real time and view the trace.
  • rakamotog9 hours ago
    There is one area where MCP typically has challenges - not a technical challenge but a practical one.<p>Imagine you are creating an asset which requires multiple API calls and your UI is designed to go through a 10-12 step setup process for that asset. In practice, even if we give the LLM one tool to one-shot it, or even if we break it down into 10-12 tools, the chances of hallucination are much higher.<p>Contrast this with &quot;skills&quot; and a CLI.
  • rd427 hours ago
    I think the key problem is that usage of MCP servers is not &#x27;baked&#x27; into the LLM training - but APIs and CLIs are already a part of it. So to use your MCP server, the LLM has to spend additional intelligence that could have gone toward the actual work instead.
  • choam24269 hours ago
    This tracks with my experience.<p>I started out building an MCP server for an internal wiki, but ended up replacing it with a simple CLI + skill because the wiki had no access control and the simpler setup was good enough in practice.<p>I think that&#x27;s the important boundary, though: once access control, auth, or per-user permissions enter the picture, I&#x27;d much rather have MCP as the interface than rely on local tooling conventions.
  • utilize18083 hours ago
    I think skills are just a marketing ploy. There is nothing preventing an MCP from serving skills.
  • briznad6 hours ago
    Skills are static; MCP servers are dynamic. Skills codify info and workflows, help decrease redundant instructions, and increase consistent outcomes. MCP servers allow access to changing resources across systems.<p>You may dislike MCP, and there are certainly valid arguments to be made there, but that doesn&#x27;t mean you can replace it with skills. If you could replace a given MCP server with a skill, it would only indicate that someone misunderstood the assignment and chose the wrong tool in the first place. It wouldn&#x27;t indicate the superiority of one thing over the other.<p>This whole article, and its current rank on HN (#5), are making me feel like I took crazy pills this morning. A colleague suggests this Skills vs MCP discourse is big on Twitter, so maybe I lack the necessary background to appreciate this, but aren&#x27;t these different tools, solving for different things, in different ways? Is this parody? Am I falling into a bot engagement trap by even responding to this? The article certainly reads like LinkedIn drivel, with vague, emphatic opinions about nothing.
    • Falimonda5 hours ago
      Skills are compensation for out-of-distribution task assignment (and very valuable training data for model providers)<p>MCP are tools - might as well have just called it API for AI, but that ship has sailed.<p>It&#x27;s 100% apples and oranges!
    • raincole5 hours ago
      &gt; I took crazy pills this morning<p>You should feel so. Every time a thread about MCP on HN appears, half of the commenters obviously don&#x27;t even know what MCP actually is and how it&#x27;s used. Just right below someone suggests one should use &quot;an API and a text file&quot; instead of MCP (like, what do they think MCP is?).<p>On Twitter the ratio is even worse.
  • bhewes3 hours ago
    Nice, someone who actually works with these systems. Thank you.
  • a9602063 hours ago
    I think so. I only use Claude Code with MCP and let it connect.
  • alienbaby7 hours ago
    Different techniques are appropriate in different situations; I would decide on what&#x27;s appropriate given the goals you have. Which is nearly always the answer to &#x27;X is a better way than Y&#x27; arguments.
  • anshulbhide8 hours ago
    &gt;&gt;&gt;The core philosophy of MCP is simple: it’s an API abstraction.<p>That&#x27;s exactly the problem. As agents become better and can read API documentation themselves, WHY do you need an API abstraction?
    • senadir8 hours ago
      Because not everything runs from a terminal.
  • mantyx8 hours ago
    Having developed mantyx.io, I believe that rest apis are still champion. MCP is nothing more than rest wrappers most of the time and skills are cli wrappers which in turn are rest wrappers.
  • ok_dad13 hours ago
    People in the comments are still confused about “agentic development” vs. “agent development”. One uses the CLI best, while the other cannot use a CLI very well.<p>The first is using agents locally to develop.<p>The second is developing an agent. Not necessarily for coding, mind you. Not even just for text sometimes.<p>They are different cases; MCP is great for the latter.
  • qalmakka11 hours ago
    CLI is massively superior to MCP in my experience. First, because I also understand what&#x27;s going on and can do it myself if necessary. Second because it&#x27;s so much cheaper in terms of tokens it&#x27;s not even funny
  • fedeb958 hours ago
    I think both will stick around because they solve two different problems. 1) what are you able to do (skills) 2) which tools you have to do it (mcp)
  • medbar13 hours ago
    I still use vanilla Claude Code without MCP or skills, am I in the minority? Not trying to be a luddite.
    • tim-projects11 hours ago
      Me too just use AGENTS.md and it seems to work. I don&#x27;t understand what problem MCP is trying to solve and skills just sounds like something you can do in AGENTS.md<p>What am I missing out on?
      • throwpoaster6 hours ago
        Repeatability (see my response to op).
    • blitzar11 hours ago
      I would guess the top 10% of actual performers do the same - the people who talk about harnesses and chain multiple systems together etc will be mid table somewhere
    • throwpoaster6 hours ago
      Skills and MCP are useful for when you need to repeat specific processes, perpetually. Without them the task description and reasoning falls out of the context window, or is compressed, and the process fails.<p>An agent will eventually forget, or hallucinate, guardrails and requirements. Yes to AGENTS.md, but when you&#x27;re actively managing the whole context window in a long-running task you don&#x27;t want to just keep jamming stuff in there and hope for the best. Skills help budget tokens and stabilize around specific outcomes.<p>If your use case is not agentic, as you build a skill corpus you can begin having the model reason at higher and higher levels about the outcomes you&#x27;re aiming at.<p>Eg: I&#x27;m super lazy now and ask Claude to launch the project instead of just running the command myself. This is probably best done as a skill.
      • tim-projects2 hours ago
        AGENTS.md doesn&#x27;t fall out of the context window so you just put whatever commands you use in there. Every time it gets too big you rewrite it.<p>Never had an issue doing this
  • slhck10 hours ago
    Huh, I think the author might be deliberately ignoring how MCP works?<p>- &quot;CLIs need to be published, managed, and installed&quot; -- same for MCP servers which you have to define in your config, and they frequently use some kind of &quot;npx mcp-whatever&quot; call.<p>- &quot;Where do you put the API tokens required to authenticate?&quot; -- where does an MCP server put them? In your home folder? Some .env file? The keychain? Same like CLI tools.<p>- &quot;Some tools support installing skills via npx skills, but that only works in Codex and Claude Code, not Claude Cowork or standard Claude&quot; -- sure, but you also can&#x27;t universally define MCP servers for all those tools. You have to go ahead and edit the config anyway.<p>- &quot;Using a skill often requires loading the entire SKILL.md into the LLM’s context window, rather than just exposing the single tool signature it needs&quot; -- yeah, but it&#x27;s on-demand rather than exposing ALL MCP servers&#x27; tool signatures. Have you ever tried to use playwright MCP?<p>I just don&#x27;t buy the &quot;without any setup&quot; argument.
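    For comparison, the MCP-side setup being described is itself just a config edit. In Claude Code, for example, a project-scoped `.mcp.json` looks roughly like this (the server name and env var are illustrative; other harnesses use similar but incompatible config files):

```json
{
  "mcpServers": {
    "playwright": {
      "command": "npx",
      "args": ["@playwright/mcp@latest"],
      "env": { "EXAMPLE_TOKEN": "${EXAMPLE_TOKEN}" }
    }
  }
}
```

    So the "without any setup" claim cuts both ways: both approaches bottom out in editing a config file and deciding where a token lives.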
  • turlockmike14 hours ago
    Or use both. Remote MCPs are secure, CLI allows for programmatic execution. Use bash to run remote MCPs.<p>I built this to solve this exact problem. <a href="https:&#x2F;&#x2F;github.com&#x2F;turlockmike&#x2F;murl" rel="nofollow">https:&#x2F;&#x2F;github.com&#x2F;turlockmike&#x2F;murl</a>
    • nostrebored13 hours ago
      What about remote MCPs lend themselves to security? For instance, do you think that it is more secure than a traditional endpoint?
      • turlockmike13 hours ago
        MCPs are basically just JSON-RPC. The benefit is that if you have applications that require an API key, you can build a server to control access (especially for enterprise). It&#x27;s the same as REST APIs, except that by following a specific convention we can take advantage of generic tools (like the one I built), and it means you don&#x27;t need to rely on poor documentation to connect or train a model to use your very specific CLI.
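        Concretely, per the MCP specification, a tool invocation is an ordinary JSON-RPC 2.0 request with a conventional method name; only the tool name and arguments below are invented for illustration.

```python
import json

# An MCP tool call is plain JSON-RPC 2.0; the method names
# ("tools/list", "tools/call", ...) are the MCP conventions that
# let generic clients drive any conforming server.
call = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "get_price",                      # hypothetical tool
        "arguments": {"service": "s3", "region": "us-east-1"},
    },
}
wire = json.dumps(call)
```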
        • nostrebored13 hours ago
          But if you have customer facing APIs then all of these problems were already solved in an enterprise context. You can force an oauth flow from skills if you want.<p>I don’t think that CLIs are the path forward either, but you certainly don’t have to teach a model how to use them. We’ve made internal CLIs that adhere to no best practices and expose limited docs. Models since 4o have used them with no issue.<p>The amount of terminal bench data is just much higher and more predictable in rl environments. Getting a non thinking model to use an MCP server, even hosted products, is an exercise in frustration compared to exposing a cli.<p>A lot of our work is over voice, and I’ve found zero MCPs that I haven’t immediately wanted to wrap in a tool. I’ve actually had zero MCPs perform at all (most recently last week with a dwh MCP and opus 4.6, where even the easiest queries did not work at all).
  • mememememememo8 hours ago
    Like saying I prefer websites over CLI, or bicycles over canoes. Chisels over planes. Depends what you are trying to achieve.
  • hungryhobbit42 minutes ago
    Am I the only one who doesn&#x27;t trust remote servers?
  • bachback9 hours ago
    The best agent framework in my opinion is Pi. Pi avoids MCP, and that&#x27;s a good thing. Why assume that the planet will migrate from HTTP to MCP? No, instead let&#x27;s assume we have client code we can call. We already have a rich ecosystem of HTTP services and packages. And if we assumed a rewrite for agents, we probably wouldn&#x27;t come up with MCP but something more powerful.
  • nodomain11 hours ago
    The whole article serves just to promote his SaaS.
  • pjmalandrino11 hours ago
    Not the same tools, different purposes, in my opinion
  • baq11 hours ago
    Remote MCP solve the delivery and update issues just like saas and browsers did for human users. Not much more to it really
  • heckintime13 hours ago
    AI tools for non-technical users that work in browsers and mobile apps will be super powerful. I think MCPs are currently the best way to reach this audience.
  • latentsea13 hours ago
    Different tools for different jobs man... I prefer the right tool for the job, and both skills and MCP seem necessary. Do you also prefer forks over spoons?
    • blitzar11 hours ago
      &gt; Do you also prefer forks over spoons?<p>On the 8th day god created the spork.
  • pjmlp7 hours ago
    Completely in sync with the author: MCP and A2A for the win.
  • vonneumannstan2 hours ago
    They seem like fundamentally different things.
  • nout14 hours ago
    Use both. These do different things.
  • bijowo167611 hours ago
    MCP pollutes the context. If you don&#x27;t care about wasting context tokens on all the MCP tools, go ahead and use MCP, but you should know that a CLI tool + skill can perfectly replace it, with less token overhead and better matching thanks to the skill&#x27;s front matter
    • anaisbetts8 hours ago
      Sure but on the flip side, skills not being in context means that for many harnesses, the model simply never finds them. Whether MCP or Skills are &quot;better&quot; depends extremely heavily on the context management functionality of your harness because if you use a relatively naive harness (i.e. one that implements MCP and Skills in a straightforward way), MCPs will generally be more effective, especially if your model is local-only (i.e. dumb), but at the cost of context.
    • miroljub11 hours ago
      That really depends on how your harness implements MCP client. There are implementations that don&#x27;t pollute context any more than CLIs, but if one uses only CC, he would never know.
  • TonyAlicea107 hours ago
    I prefer peanut butter over jelly.
  • jauntywundrkind14 hours ago
    I&#x27;ve remained leaning a bit towards MCP until lately. Both have pretty easy ways to call the other (plenty of CLI API callers, and tools like mcp-cli for the reverse <a href="https:&#x2F;&#x2F;github.com&#x2F;philschmid&#x2F;mcp-cli" rel="nofollow">https:&#x2F;&#x2F;github.com&#x2F;philschmid&#x2F;mcp-cli</a>). Skills have really made progressive discovery of CLI tools much better, and MCP design has adapted likewise. I&#x27;ve lightly preferred MCP for its formalism, for it feeling more concrete as a thing.<p>But what really changed my mind is seeing how much more casual scripting the LLMs do these days. They&#x27;ll build rad unix pipes, or short python or node scripts. With CLI tools, it all composes: every trick it learns can plug directly into every other capability.<p>Whereas with MCP, the LLM has to act as the pipe. Tool calls don&#x27;t compose! It can read something like this tmux skill and then just adapt it in all sorts of crazy ways! It can sort of do that with tool calls, but much less so. <a href="https:&#x2F;&#x2F;github.com&#x2F;nickgnd&#x2F;tmux-mcp" rel="nofollow">https:&#x2F;&#x2F;github.com&#x2F;nickgnd&#x2F;tmux-mcp</a><p>I&#x27;d love to see a capnproto capnweb or some such, with third-party handoff (apologies Kenton for once again raising 3ph), where a tool call could return a result and we could forward the result to a different LLM, without even waiting for the result to come back. If the LLM could compose tool calls, it would start to have some parity with the composability of the CLI + skill. But it doesn&#x27;t. And as of very recently I&#x27;ve decided that is too strong a selling point to be ignored. I also just like how the CLI remains the universal system: if these are as isomorphic as I keep telling myself, what does the new kid on the block really bring? How much better is a new incarnation if their capabilities are so near?
We should keep building cli tools, good cli tools, so that man and machine benefit.<p>That said I still leave the beads mcp server around. And I turn on the neovim MCP when I want to talk to neovim. Ah well. I should try harder to switch.
  • raincole7 hours ago
    Seriously, the only drawback of MCP is its name. If it were named &quot;API discovery protocol&quot; (which is what it is) none of these debates would have existed.<p>API vs MCP sounds like a real debate, but it really isn&#x27;t. It&#x27;s &quot;API vs API discovery protocol.&quot; See how asinine it sounds if we call things for what they are.
  • avinashselvam14 hours ago
    skills and mcp help with entirely different things. sure, most products add a skill on using their mcp so that the model&#x27;s tool calling works well.
  • interpol_p4 hours ago
    We had a contention between MCP &#x2F; Skills for our product and ended up offering both. We built a CLI tool that could interface with the MCP server [1]. It seems redundant but our app is a coding app on iOS (Codea), and the issue with offering a plain MCP server meant that the agentic coding harness found it harder to do its job.<p>With the CLI the agent could check out the project, work on it locally with its standard file editing &#x2F; patching &#x2F; reading tools, then push the work back to device. Run and debug on device, edit locally, push.<p>With MCP the agent had to query the MCP server for every read and write and was no longer operating in its normal coding loop. It still works, though, and as a user you can choose to bypass the CLI and connect directly via MCP.<p>The MCP server was valuable as it gave us a consistent and deterministic language to speak. The CLI tool + Skill was valuable for agentic coding because it allowed the coding work to happen with the standard editing tools used by agents.<p>The CLI also gave us device discovery. So the agent can simply discover nearby devices running Codea and get to work, instead of a user having to add a specific device via its IP address to their agent.<p>[1] <a href="https:&#x2F;&#x2F;codea.io&#x2F;cli" rel="nofollow">https:&#x2F;&#x2F;codea.io&#x2F;cli</a>
  • the_axiom5 hours ago
    this is very good and correct
  • coolThingsFirst6 hours ago
Can someone more enlightened in this area explain how this is used?<p>Is MCP for in-house LLMs, or can it work with ChatGPT as well? As far as I know it&#x27;s a server with small self-contained task scripts. But I don&#x27;t get how the coordination works or how it&#x27;s used.
  • seyz10 hours ago
    MCP versus Skills -&gt; wrong debate. MCP versus CLI -&gt; real debate.
  • throwpoaster7 hours ago
    They’re different things. You can have skills using MCP.
  • tpoacher13 hours ago
    &gt; Skills are great for pure knowledge and teaching an LLM how to use an existing tool. But for giving an LLM actual access to services, the Model Context Protocol (MCP) is the far superior, more pragmatic architectural choice.<p>There&#x27;s your answer. If you want to use local tools, use Skills. If you want to use services, use MCP. Or, you know, whatever works best for your scenario.
  • nathias7 hours ago
MCP is too thirsty
  • contextbloat10 hours ago
    &gt; Using a skill often requires loading the entire SKILL.md into the LLM’s context window, rather than just exposing the single tool signature it needs.<p>Isn&#x27;t this, like, the exact thing MCP is the worst at? You need to load the entire MCP into the context even if you&#x27;re not using the MCP&#x27;s relevant functions. Which is why some people put them on subagents, which is like, equivalent to putting the MCP behind a CLI function, at which point, why not just have the CLI function and selectively load it when yo- OH WAIT, THERE&#x27;S A NAME FOR THAT!
  • EugeneOZ9 hours ago
    &gt; Skills are great for pure knowledge and teaching an LLM how to use an existing tool. But for giving an LLM actual access to services, the Model Context Protocol (MCP) is the far superior<p>That&#x27;s it. For some things you need MCP, for some things you need SKILLs - these things coexist.
  • simianwords10 hours ago
    Yesterday I accidentally stumbled on a place where I could really appreciate MCP&#x27;s.<p>I wanted to connect my Claude account to my Notion account. Apparently all you need to do is just submit the notion MCP and log in. That&#x27;s it! And I was able to interact with my Notion data from my Claude account!<p>Imagine how hard this would be with skills? It is literally impossible because with skills, you may need to install some local CLI which Claude honestly should not allow.<p>If not CLI, you need to interact with their API which again can&#x27;t happen because you can&#x27;t authenticate easily.<p>MCP&#x27;s fill this narrow gap in my opinion - where you don&#x27;t own the runtime and you want to connect to other tools like plugins.
  • simianwords11 hours ago
SKILLS.md or AGENTS.md are good concepts, but they miss two crucial things that would make them much more usable. I predict that this will happen.<p>Each SKILLS.md will come with two hooks:<p>1. one for installing the skill itself - maybe install the CLI or do some initial work to get it working<p>2. one for dependencies - each skill may depend on other skills, and we need to install those first<p>Expressing these two hooks in a formal way in skills would let me completely replace MCPs.<p>My concrete prediction is that this will happen soon.<p>Wrote more about it here: <a href="https:&#x2F;&#x2F;simianwords.bearblog.dev&#x2F;what-agent-skills-misses-now&#x2F;" rel="nofollow">https:&#x2F;&#x2F;simianwords.bearblog.dev&#x2F;what-agent-skills-misses-no...</a>
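To make the idea concrete, here is a hypothetical sketch of what such SKILL.md frontmatter could look like. The `install` and `requires` fields are invented for illustration; no current agent honors them:

```yaml
---
name: aws-pricing
description: Estimate AWS costs using the bundled price sheet
# hypothetical hook 1: run once before first use to set up the skill
install: scripts/install.sh
# hypothetical hook 2: other skills that must be installed first
requires:
  - aws-auth
  - jq-basics
---
```

An agent that understood these two fields could resolve and install the dependency chain before loading the skill body, exactly the gap MCP currently fills with its server lifecycle.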
  • senordevnyc14 hours ago
    I love the idea of MCP, but it needs a progressive disclosure mechanism. A large MCP from a provider with hundreds or even thousands of tools can eat up a huge amount of your context window. Additionally, MCPs come in a bunch of different flavors in terms of transport and auth mechanisms, and not all harnesses support all those options well.<p>I’ve gone the other way, and used MCP-CLI to define all my MCP servers and wrap them in a CLI command for agent use. This lets me easily use them both locally and in cloud agents, without worrying about the harness support for MCP or how much context window will be eaten up. I have a minimal skill for how to use MCP-CLI, with progressive disclosure in the skill for each of the tools exposed by MCP-CLI. Works great.<p>All that said, I do think MCP will probably be the standard going forward, it just has too much momentum. Just need to solve progressive disclosure (like skills have!) and standardize some of the auth and transport layer stuff.
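The wrapping idea can be sketched in a few lines. This is not MCP-CLI itself, just a minimal illustration of the frames a CLI wrapper would send to a stdio MCP server over JSON-RPC 2.0 (per the MCP spec, tool discovery is `tools/list` and invocation is `tools/call`); the tool name and arguments below are placeholders:

```python
import itertools
import json

# Monotonic JSON-RPC request ids.
_ids = itertools.count(1)

def rpc(method: str, params: dict = None) -> str:
    """Build one JSON-RPC 2.0 request line, as MCP's stdio transport expects."""
    msg = {"jsonrpc": "2.0", "id": next(_ids), "method": method}
    if params is not None:
        msg["params"] = params
    return json.dumps(msg)

def list_tools_request() -> str:
    # Ask the server which tools exist. A CLI wrapper can print only names and
    # one-line descriptions, so the agent's context grows only when it drills in.
    return rpc("tools/list")

def call_tool_request(name: str, arguments: dict) -> str:
    # Forward a single tool invocation to the server.
    return rpc("tools/call", {"name": name, "arguments": arguments})
```

The wrapper would write these lines to the spawned server process&#x27;s stdin (after the usual `initialize` handshake) and read the JSON responses back, printing only what the agent asked for - which is the progressive disclosure MCP itself lacks.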
    • didibus14 hours ago
      I thought Claude Code and others do progressive disclosure for MCP now as well.<p>The article claims so:<p>&gt; Smart Discovery: Modern apps (ChatGPT, Claude, etc.) have tool search built-in. They only look for and load tools when they are actually needed, saving precious context window.
      • senordevnyc5 hours ago
        The article made a number of claims I know to be false, so I wouldn’t take it as gospel.
  • charcircuit15 hours ago
    This author does not realize that skills can call APIs. The idea that you have to build dedicated CLI apps is not true at all and invalidates the entire article.
    • woeirua14 hours ago
      No, the point was that you don’t have access to a CLI in every environment.
      • charcircuit13 hours ago
        You could have access to a web browser or web request tool instead.
        • woeirua13 hours ago
          There is no world in which an enterprise is not OK with an agent having access to a CLI but is OK with possibly getting prompt injected from a random web search.
    • j16sdiz13 hours ago
He did. That&#x27;s where the &quot;you aren’t forcing the user to manage raw tokens and secrets in plain text&quot; bit comes in.
    • CGamesPlay15 hours ago
      Can you clarify what exactly you mean? Skills are markdown files, so they definitely can&#x27;t call APIs or CLIs. Are you saying that a skill can tell the agent to use curl to call web APIs? Or something different?
      • hypercube3314 hours ago
Technically they can, at least the way I&#x27;m using (or abusing) them. I&#x27;m on Windows, so my skills have a generic PowerShell script bolted on to handle special API use, which makes it easier for the agent to pull up the data noted in the skill. Does it lack full API details? Absolutely. I also have a learning skill: when the agent has to stop and think &#x2F; fail &#x2F; figure something new out, it grows a new skill or updates an existing one.<p>Skills, to me, suck when they are shared with a team - I haven&#x27;t yet found the secret sauce to keep these organic skills synced between everyone
      • mhalle14 hours ago
        <a href="https:&#x2F;&#x2F;agentskills.io&#x2F;specification" rel="nofollow">https:&#x2F;&#x2F;agentskills.io&#x2F;specification</a><p>* references&#x2F; Contains additional documentation that agents can read when needed<p>* scripts&#x2F; Contains executable code that agents can run.<p>* assets&#x2F; Contains static resources
      • nostrebored13 hours ago
        Skills can bundle scripts. Skills can express how to use curl. Skills can integrate with your fips keys if you want them to.
      • latentsea13 hours ago
        They almost certainly mean skills can tell the agent to use the api, and it can succeed at doing that.
    • leonidasv15 hours ago
      And call MCPs as well
  • polyterative9 hours ago
    why not both
  • lukewarm7078 hours ago
    i feel like giving an agent shell access and internet is batshit crazy.<p>that&#x27;s just me i guess.
  • lyime14 hours ago
    auth