102 points by goargp 72 days ago | 15 comments
jascha_eng 72 days ago
Is anyone actually using MCPs productively? I feel like everyone and their mother is releasing specialized ones, but I have yet to find them really useful.

The stuff that's cool works through purpose-built tools, like what Cursor and Claude Code do to edit files, run commands, etc. Bring-your-own-function hasn't really been useful to me.

That being said, I do believe that giving current, relevant documentation in the context does help with results. I've just not yet seen anyone do that very successfully in an automated fashion.

martypitt 72 days ago
I'm also fascinated by this.

People are scurrying to release MCP servers, and there's been a flurry of them put out, but it feels like we've skipped straight to the advanced tooling, on top of a stack that might not have found its footing (yet).

I understand MCP, the problem it solves, and why it seems a good idea ... but outside of toy applications, are they proving to be useful?

noodletheworld 72 days ago
> I understand MCP, the problem it solves, and why it seems a good idea

I don't.

Can you ELI5 why it's a good idea for me?

I see what people are doing (1), and I see what the intent is (2), but no one seems to have an explanation of why it helps.

The original MCP documentation has some vague hand-waving (3) about having prompts that would help with... something. But I don't see it in practice.

?

Is this why people are asking "does this actually work?"

...because it's obviously not any better than the 'tool use' API that everyone was already using. It can't be. It doesn't do anything differently.

I would bet that it actually makes agents less capable when you overload their context with unrelated tools that aren't relevant. Right?

How can it possibly help?

It sounds a lot like it rests on the assumption that adding tools is a zero-cost, zero-impact, capability-only addition, but that can't be true.

(1) - https://github.com/punkpeye/awesome-mcp-servers

(2) - https://modelcontextprotocol.io/quickstart/server#what%E2%80...

(3) - https://modelcontextprotocol.io/docs/concepts/prompts

pocketarc 72 days ago
> ...because it's obviously not any better than the 'tool use' API that everyone was already using. It can't be. It doesn't do anything differently.

Instead of you writing tool use code (which requires you to know how to write code, and to put in the effort), you spin up a server that provides that tool use code for the LLM.

As an example, you could hook up an "email" MCP that connects to your email provider and provides a "get_unread" tool and a "send_email" tool, and then go nuts having a chat with your LLM saying "hey let's work through our unread emails, bring up the first and then come up with a draft response for it, for me to review before sending". Hook it up with the "linear" MCP, and tell the LLM to also raise tickets with all the relevant details in your project management tool.

Of course you could write those tools yourself, and control all of the prompting, etc, but the point of MCP is so you don't have to write anything.

That's the ELI5, I think. "pre-built tools for the tool use API".
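
To make that concrete, here's a rough sketch of what such a server can look like in Python, assuming the official MCP SDK's FastMCP helper. The mail hosts, credentials, and tool bodies are placeholders; only the get_unread/send_email surface matches the example above.

```
# A rough sketch of an "email" MCP server in Python (assumes `pip install mcp`).
# The mail hosts and credentials are placeholders; only the tool surface
# (get_unread / send_email) matches the example above.
import imaplib
import smtplib
from email.message import EmailMessage

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("email")  # the name the client (e.g. Claude Desktop) sees

@mcp.tool()
def get_unread(limit: int = 5) -> str:
    """Return the subject lines of the most recent unread emails."""
    imap = imaplib.IMAP4_SSL("imap.example.com")       # placeholder host
    imap.login("me@example.com", "app-password")       # placeholder credentials
    imap.select("INBOX")
    _, data = imap.search(None, "UNSEEN")
    subjects = []
    for msg_id in data[0].split()[-limit:]:
        _, msg_data = imap.fetch(msg_id, "(BODY[HEADER.FIELDS (SUBJECT)])")
        subjects.append(msg_data[0][1].decode(errors="replace").strip())
    imap.logout()
    return "\n".join(subjects) or "No unread mail."

@mcp.tool()
def send_email(to: str, subject: str, body: str) -> str:
    """Send an email once the user has approved the draft."""
    msg = EmailMessage()
    msg["From"], msg["To"], msg["Subject"] = "me@example.com", to, subject
    msg.set_content(body)
    with smtplib.SMTP_SSL("smtp.example.com") as smtp:  # placeholder host
        smtp.login("me@example.com", "app-password")
        smtp.send_message(msg)
    return f"Sent to {to}"

if __name__ == "__main__":
    mcp.run()  # serves MCP over stdio; point the client's config at this script
```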

fragmede 72 days ago
> ...because it's obviously not any better than the 'tool use' API that everyone was already using

I don't think that's true, but even if it isn't any better, there's such a thing as being too early to market: a very similar product launched later can succeed when the original didn't. There are some differences; it's like how FTP is different from HTTP, but also it isn't.

mogili 71 days ago
Yes. I'm using an MCP that gives Claude Desktop file-editing and terminal tools so it works like Claude Code, but on my Pro subscription instead of the expensive pay-per-use API.
mritchie712 72 days ago
We built an MCP for our own API at https://definite.app. Here's how I use it:

We often help customers set up their "data model". We use Cube (an open source semantic layer) for this and need to write many files, in a mix of SQL and YAML, that define what tables and columns they'll see in the UI.

This can be pretty tedious, but Claude/Cursor is very good at it. You feed in the schema (via an MCP) and can provide some questions the user wants to answer. Here's how it might look for one question:

1. The user wants to answer "Which channels from hubspot generate the highest LTV customers? Which channels have the lowest churn?"

2. The agent will search the schema for relevant raw data and start editing the data model. In this example, we might need to add a join between hubspot and stripe.

3. Once it has the raw data and an idea of how they relate, we'll create metrics and dimensions that answer the question.

We'll add these abilities to our built-in agent once we perfect a few things, but I see MCPs as a good testing ground for things you ultimately want to build into your own agent.

cube2222 72 days ago
I'm frequently constructing context based on up-to-date docs using curl + html2markdown[0] and custom css selectors, which is extremely tedious. MCP servers for docs would be very useful for me.

That said, I don't really expect the AI itself to come up with docs to read (maybe some day). I want it predominantly so I can manually reference it in my prompt (in e.g. the Zed assistant panel) like `/npmdocs packagename packageversion`.

But even for AI "self-driven" use-cases, I primarily see the value in read-only MCP servers that provide more context, just in an "as-needed" way, instead of me putting it there explicitly.

[0]: https://github.com/JohannesKaufmann/html-to-markdown
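
For reference, here's the same fetch-select-convert loop as a small Python sketch, with requests, BeautifulSoup, and markdownify standing in for curl + html-to-markdown (the URL and selector are placeholders):

```
# Rough Python equivalent of the curl + html2markdown + CSS-selector loop
# (requests + BeautifulSoup + markdownify standing in for the Go tool).
import requests
from bs4 import BeautifulSoup
from markdownify import markdownify as md

def docs_to_markdown(url: str, selector: str) -> str:
    """Fetch a docs page, keep only the selected content node, return Markdown."""
    html = requests.get(url, timeout=30).text
    node = BeautifulSoup(html, "html.parser").select_one(selector)
    if node is None:
        raise ValueError(f"selector {selector!r} matched nothing on {url}")
    return md(str(node), heading_style="ATX")

if __name__ == "__main__":
    # Placeholder URL and selector; swap in the docs page and CSS selector you need.
    print(docs_to_markdown("https://example.com/docs/page", "main"))
```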

johnisgood 72 days ago
What exactly is Apidog though?

I feed LLMs documentation in case of obscure languages and it more or less works (with Claude).

cube2222 72 days ago
I have no idea; I was talking about a theoretical generic npm docs MCP server, which I've just realized this is not.
johnisgood 72 days ago
I see. Thanks for html-to-markdown, though. I used to just copy-paste either the whole HTML or select all the text on a website and feed that into LLMs, but it's such a waste of context and doesn't work as well. Claude has a specific format, and simonw has a project that converts source code (?) into the format Claude "likes". I think it supports more than just Claude (?).
tanepiper 72 days ago
I'm still struggling at the moment to see where they fit into the distributed nature of working with LLMs and other data sources.

For example, depending on the model I use in LMStudio, some can use tools, but LMStudio doesn't yet seem to support MCP. Cursor, on the other hand, does, but then it's either servers or stdio, and the setup isn't entirely clear.

Also, it seems that giving an LLM, even a local one, random access via an intermediate server is ripe for abuse unless you know what it's doing?

consumer451 72 days ago
Yes, I use Supabase MCP all day long. Checking schema against the DB instead of migration files is great. However, things like this are game changing:

> Logged-in as user project-admin@domain.com, I cannot see the Create Project button on the Project list screen. Use MCP to see if I have the correct permissions. If I do, verify that we are checking for the correct codes in the UI.

Making Cursor/Windsurf data-aware is amazing for data related debugging.

Leynos 72 days ago
I wrote MCP servers for git, pytest, and ruff, and those have been fantastic.
jascha_eng 72 days ago
Do you use them in Claude Desktop or Cursor? Or something else? I played around with the file-system MCP a bit, and it was nice sometimes, but then Claude started trying to edit files when I simply wanted it to spit out a chunk of code so I could wire it up myself... so I disabled that again.

It also tried to read files when I was putting the context in myself (because I knew the task was too specific for a few read-file ops to find the context it needed). That's just... not very practical.

Leynos 70 days ago
I use these in Roo Code in VSCode with Claude Sonnet 3.7.
tinodb 65 days ago
Can you share them or link to them?
samuelbalogh 72 days ago
yes, super helpful - sequential thinking, DB access, browser automation, etc
jascha_eng 72 days ago
What exactly is your setup and use-case? Are you using Claude Desktop/Cursor? What kind of prompts does it allow you to do?

I tried a bit with the filesystem-mcp server, but Claude ended up using it when I didn't really want it to, and even tried to write files instead of simply responding with code fragments that I wanted to wire up myself.

I feel like once your project has any meaningful size, no MCP server is going to be able to actually find the relevant context by itself, so you're better off providing it yourself (at least via the @ syntax in Cursor).

smokel 72 days ago
MCP (Model Context Protocol) is an open protocol that standardizes how applications provide context to LLMs.

https://modelcontextprotocol.io/

LeafItAlone 72 days ago
I usually think these types of comments don’t add anything and are often duplicative of what’s in the project documentation. But this project fully assumes you know what MCP is. Would be nice if they added a link to it in their readme.
koakuma-chan 72 days ago
!! Not to be confused with Minecraft Coder Pack !!
taspeotis 72 days ago
>£€ Not to be confused with Media Codec Pack €£<
koakuma-chan 72 days ago
This only works if your API is built on their "Apidog" commercial API building platform. I wonder how this compares to just scraping documentation from a regular documentation website.
goargp 72 days ago
You can directly read Swagger or OpenAPI specification files without Apidog projects, like this:

npx apidog-mcp-server --oas https://petstore.swagger.io/v2/swagger.json

or

npx apidog-mcp-server --oas ~/data/petstore/swagger.json

koakuma-chan 72 days ago
Oh so it's like this? Sorry, I was confused because your npmjs page explicitly says that the tool is for Apidog projects.
andybak 72 days ago
In which case the post title is misleading and should be edited. I'm flagging this.
goargp 72 days ago
Hey, just to clarify the situation: you can directly read Swagger or OpenAPI Specification (OAS) files.
andybak 72 days ago
OK. Unflagged.

So this is about web APIs? I dislike that the broad term "API" has started to be used as shorthand for "HTTP API" (and probably REST-ish HTTP APIs specifically). It wastes people's time trying to figure out if something is of interest to them. Not everyone in tech is building web apps.

goargp 72 days ago
Fair point. I think the title would be more accurate if "API" were changed to "REST API". Since I cannot edit the title as of now, I hope a mod can change it to be more accurate.

For more context, this tool can generate an MCP Server from an OpenAPI/Swagger file that describes a REST API. This does not include SOAP APIs, GraphQL, or other APIs that use the HTTP protocol.

imtringued 72 days ago
Nobody on this planet builds REST APIs. It's not worth explaining what REST actually means. It is a waste of time.

HTTP API is the correct phrase.

andybak 71 days ago
That's why I tend to say "REST-ish" because there is a difference between those and things like GraphQL or RPC style APIs which can also use HTTP.
koakuma-chan 71 days ago
HTTP APIs are APIs that rely on HTTP semantics (so not GraphQL). And the only difference between what you call "REST-ish" and "RPC-style" is "POST /messages" vs "POST /createMessage" anyway.
epaga 72 days ago
Still looks flagged to me...would be a bummer for someone's Show HN to get flagged down due to a misunderstanding.
nextts 72 days ago
Don't flag yet. OP replied
khvirabyan 72 days ago
Very cool. Almost all of the API platforms offering API abstractions can now offer one MCP server to connect to all of their existing API integrations. It's very interesting to see how things will progress from here; one example is Zapier releasing their own MCP server (https://zapier.com/mcp). How adoption will grow over time is also interesting to watch: vendors such as GitHub may provide their own official MCP, or the community will kick in and support the integration, and documentation generators might as well generate the MCP server. Great work!
hariharasudhan 72 days ago
I want to run it locally, not from your servers via a token. After all, the docs are already open.
musha68k 72 days ago
What have you actually built and deployed using this "vibe coding" methodology?

So far, I've mostly seen proof-of-concept applications that might qualify as decent game jam entries - contexts where experimental implementation is expected ("jank allowed").

I'm curious about production applications: has anyone developed robust, maintainable agent systems using this approach that didn't require significant refactoring or even rewrites?

What's been your experience with its viability in real-world environments beyond prototyping?

Mtinie 72 days ago
What software development world do you live in where significant refactoring and rewrites aren’t commonplace? Every project I’ve ever had insider knowledge about started as a prototype and then was extensively reworked until it met the needs of the team…until the next set of requirements dropped and the cycle repeats.

If agent systems allow teams to spin up and test prototypes faster than they previously could do without the agents, isn’t that a useful and valuable step in the right direction?

Addendum: After rereading my comment I realize it may come off as argumentative when it was intended to offer a perspective of not requiring more from the current state of the tech than it offers. “Early days” and all that :)

cess11 72 days ago
Is this faster than the CLI project management tools we already have and things like git checkout -b? How?
Mtinie 72 days ago
This particular MCP server? No idea. I suspect it is just a different way, rather than a better way.

For tasks like that I personally use pre- and post-commit scripts to run all of my test validation, coverage, Ruff, etc. and output the results as “Current Code Quality” summary reports that are shared with my agents for context.

musha68k 72 days ago
Prototypes unfortunately often stick. IMO, refactoring someone else's code is less efficient than refactoring code I wrote myself. I'm pondering net productivity.

Have you personally deployed anything based on agent results (predominantly)? I'm just trying to gauge if the current state ("early days") is actually worth investing time and money in from a professional perspective.

In my experience, hype often precedes actual usefulness by a significant margin.

TL;DR are we there yet?

Mtinie 72 days ago
I have an API project in development where the majority of the code has been written through Claude Code. It offers an integration of the Congress.gov and GovInfo.gov APIs, plus analysis tools for their respective (somewhat, though not always, overlapping) public information.

I put significant effort into constructing the product and project requirements beforehand and have built a number of simple tools as I ran into roadblocks or inefficiencies, so I can attest to there being a large amount of hype. I can also comfortably state that without the use of LLMs, I would not be this far along.

Having a background in software development (UX side) and a deep interest in technology gives me some insight into which questions to ask and when to "throw a flag" and ask the LLM to explain why it's doing something. I don't believe the hype around one-shot enterprise applications, but the current state of AI programming is usable if people take the care they hopefully would with their own output.

musha68k 72 days ago
Cool, thanks for sharing!
simonw 72 days ago
Can you clarify what you mean by "agent-based" development, "vibe coding" and "maintainable agent systems" here?

(All three of those are terms with very vague definitions, I'm interested in hearing which of those definitions are starting to take root.)

wanderingbort 72 days ago
I like the original construction of “Vibe Coding” [0] that I will attempt to make concise:

Vibe Coding is a RECREATIONAL activity where a user and an AI collaborate on the creation of some artifact AND the user accepts ALL feedback and suggestions from the AI.

[0]: https://x.com/karpathy/status/1886192184808149383

simonw 72 days ago
Yeah, I like that definition too: https://simonwillison.net/2025/Mar/19/vibe-coding/
franky47 72 days ago
I saw the domain and thought that NPM itself built an MCP to let agents read package docs and type definitions to stop hallucinating APIs that don't exist. Sadly, no.

We have .d.ts for machines (tsc) and JSDoc & README.md for humans; can we get these LLMs to actually stick to those sources of truth, without having to do the work a third time (like llms.txt / cursor rules)?

Matsta 72 days ago
Looks cool. The only similar one I've seen so far is https://github.com/cyberagiinc/DevDocs

But every time I've tried to run DevDocs, I've had issues: either the scraper or the MCP server fails to run.

nindalf 72 days ago
Is this useful? It seems really specific to their paid tool.

What would be more helpful is an MCP that exposed devdocs.io or similar: cache selected language/framework documentation locally, then expose an MCP server that searches the local files with some combination of vector/BM25 search, and expose that as a tool to the LLM.
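
Roughly something like this sketch, assuming the `mcp` and `rank_bm25` Python packages (the local cache directory is made up):

```
# Sketch of that idea: index locally cached docs with BM25 and expose a single
# search tool over MCP. Assumes the `mcp` and `rank_bm25` Python packages;
# the cache directory layout is made up.
from pathlib import Path

from mcp.server.fastmcp import FastMCP
from rank_bm25 import BM25Okapi

DOCS_DIR = Path("~/.cache/docs-md").expanduser()   # hypothetical local docs cache

paths = sorted(DOCS_DIR.glob("**/*.md"))
corpus = [p.read_text(errors="replace") for p in paths]
bm25 = BM25Okapi([text.lower().split() for text in corpus])

mcp = FastMCP("local-docs")

@mcp.tool()
def search_docs(query: str, top_k: int = 3) -> str:
    """Return the most relevant cached documentation pages for a query."""
    hits = bm25.get_top_n(query.lower().split(), corpus, n=top_k)
    return "\n\n---\n\n".join(hits)

if __name__ == "__main__":
    mcp.run()  # stdio server; register it in the editor's MCP config
```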

shiraayal 72 days ago
There is this repo that converts FastAPI apps into MCP servers without needing them to be part of Apidog: https://github.com/tadata-org/fastapi_mcp
z3t4 72 days ago
Have you obfuscated/minified the code? Why!?
tb1989 72 days ago
I think this article explains it all:

MCP Isn’t the USB-C of AI — It’s Just a USB-Claude Dongle

https://dev.to/internationale/mcp-is-not-ai-usb-c-its-usb-cl...

Nonetheless, I think your work is very good and it looks like a very useful dongle

zipy124 72 days ago
I'm not even sure comparing it to USB is helpful, since USB is just a transport protocol, not a communication protocol. Just because two devices have USB ports doesn't mean they can be connected together; otherwise we wouldn't need device drivers. So in this instance it's more of a unified device driver than a port.
tb1989 72 days ago
I totally agree. As someone with an EE background, I find this metaphor makes me a little physically uncomfortable. Considering that the developers and users of MCP are almost all programmers who use the CLI, I really don't understand why they don't just tell programmers the truth.
Leynos 72 days ago
That article makes very little sense.

The protocol itself sits on top of JSON-RPC, and the specifications are there for anyone to implement. There's nothing specific to Claude about it.

There are various MCP client and server implementations available that are also unrelated to Anthropic.
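
For illustration, a toy Python client can talk to any stdio MCP server (here the apidog one from this thread) with nothing but JSON-RPC. The method names and handshake below follow the spec as I understand it, and the protocolVersion string may need updating for newer revisions:

```
# Toy MCP client: plain JSON-RPC 2.0 over a subprocess's stdio, no Anthropic code.
# Method names and the initialize handshake follow the MCP spec as I understand it;
# the protocolVersion string may need updating for newer spec revisions.
import json
import subprocess

proc = subprocess.Popen(
    ["npx", "apidog-mcp-server", "--oas", "https://petstore.swagger.io/v2/swagger.json"],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True,
)

def send(msg: dict) -> None:
    proc.stdin.write(json.dumps(msg) + "\n")   # stdio transport: one JSON message per line
    proc.stdin.flush()

send({"jsonrpc": "2.0", "id": 1, "method": "initialize",
      "params": {"protocolVersion": "2024-11-05", "capabilities": {},
                 "clientInfo": {"name": "toy-client", "version": "0.0.1"}}})
print(proc.stdout.readline())                  # server's initialize result

send({"jsonrpc": "2.0", "method": "notifications/initialized"})
send({"jsonrpc": "2.0", "id": 2, "method": "tools/list"})
print(proc.stdout.readline())                  # the same tool list any MCP client would see
```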

tb1989 72 days ago
- The official example strongly promotes the Anthropic API, which is on GitHub. This is clear evidence.

- There is no clear explanation of the coupling between the system prompt and the tool call. Even mentioning the open-source Gemma or DeepSeek would be much better.

The official attitude makes it difficult to trust this project.

The point you made is exactly the cunning part. Anyone can copy it, but without official support it is simply impossible: this is pure community exploitation.

Leynos 72 days ago
If you want an LLM to use a tool, you just need to implement a parser in your LLM client that extracts the tool call from the LLM's response, then give the LLM a syntax it can use to make the tool call.

For example, in Roo Code:

```
TOOL USE

You have access to a set of tools that are executed upon the user's approval. You can use one tool per message, and will receive the result of that tool use in the user's response. You use tools step-by-step to accomplish a given task, with each tool use informed by the result of the previous tool use.

# Tool Use Formatting

Tool use is formatted using XML-style tags. The tool name is enclosed in opening and closing tags, and each parameter is similarly enclosed within its own set of tags. Here's the structure:

<tool_name>
<parameter1_name>value1</parameter1_name>
<parameter2_name>value2</parameter2_name>
...
</tool_name>

For example:

<read_file>
<path>src/main.js</path>
</read_file>

Always adhere to this format for the tool use to ensure proper parsing and execution.
```
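
And the client-side parsing can be as small as a sketch like this (tag names match the Roo Code format above; a real client is far more defensive):

```
# Minimal sketch of that client-side parsing: pull the first XML-style tool call
# out of an LLM response. Tag names match the Roo Code format quoted above.
import re

TOOL_CALL = re.compile(r"<(?P<tool>[a-z_]+)>(?P<body>.*?)</(?P=tool)>", re.DOTALL)
PARAM = re.compile(r"<(?P<name>[a-z0-9_]+)>(?P<value>.*?)</(?P=name)>", re.DOTALL)

def parse_tool_call(response: str):
    """Return (tool_name, params) for the first tool call, or None if there is none."""
    match = TOOL_CALL.search(response)
    if not match:
        return None
    params = {m["name"]: m["value"].strip() for m in PARAM.finditer(match["body"])}
    return match["tool"], params

llm_output = "I'll check that file first.\n<read_file>\n<path>src/main.js</path>\n</read_file>"
print(parse_tool_call(llm_output))   # ('read_file', {'path': 'src/main.js'})
```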

zaleria 72 days ago
documentation is for humans.

let the LLMs read code.

LeafItAlone 72 days ago
Going into AI-assisted development, I thought this would be the winning strategy. I’ve since learned it’s not.

LLMs have a limited context window. They can't hold a full moderately sized project in context, let alone the code of the referenced libraries. Some tools have documentation that exceeds the context window.

So I’ve found that adding documentation and adding that to the context helps significantly. I even often let the AI tools generate my documentation where possible (and then manually edit). For me and the tools I use, this has helped significantly.

phito 72 days ago
Code does not contain all the information of a feature.
fosk 72 days ago
If code does not have all the information, what processes the features?
phito 72 days ago
There's often a lot of "business" human knowledge required to use a feature properly. That's usually what ends up in documentation.
vbezhenar 72 days ago
Sometimes documentation is the preferred source of truth, especially when multiple implementations exist. For example, when you're writing code against the POSIX API, you don't want to tie your code to glibc's specific implementation; portable code is the whole point of POSIX.
lagrange77 72 days ago
Actually, programming languages are also for humans.
krystofee 72 days ago
Think about it: what do you read when integrating APIs, the code or the documentation?
smokel 72 days ago
Code typically describes only the what and how. The why is usually left in people’s heads and, if you are lucky, in documentation.
manojlds 72 days ago
Isn't code for humans as well then?
anon1094 72 days ago
01111001 01100101 01110011
krystofee 72 days ago
When you present this to an LLM, it will be confused, the same as a human reading it.
px43 72 days ago
ChatGPT 4.5:

Ooh, a binary puzzle—fun! Let’s decode it:

01111001 01100101 01110011

In binary, each byte (set of 8 digits) represents an ASCII character. Let's break it down:

01111001 = y

01100101 = e

01110011 = s

So, it spells out "yes"!

Got any more mysteries to solve?

novalis78 72 days ago
Claude wasn’t confused at all by that