A founder’s overview of Model Context Protocol (MCP) and how we can leverage it for our AI businesses
My personal challenge for 2025 is to make a 100% AI automated business. MCP (Model Context Protocol) is slowly working its way into my challenge toolkit, letting me connect my IDE/AI chats to my databases, filesystems and other tooling.
But is it really worth founders learning how to use MCP? How big a part will it play in tomorrow’s AI architecture?

I think as part of AI literacy it’s important that you at least skim this post; aspects of MCP (or things like it) will likely play a part in our collective AI business future (and you can leverage it today).
Founders MCP Playbook
What is MCP?
Model Context Protocol is basically a standard way to plug an AI model into the tools and data it needs, organising what you feed it so it “gets” what you want more clearly.
> …a new standard for connecting AI assistants to the systems where data lives, including content repositories, business tools, and development environments.
>
> – Anthropic
Think of it like writing a to-do list for a super-smart employee who needs clear instructions: you break down the context, guidelines, goals, and other details into neat sections instead of throwing everything into one giant blob of text.

There are two obvious benefits straight away here:
- By following a structured approach and giving more context, AI models are much more likely to give you accurate, appropriate responses.
- It forces organisation – I’ve found that by forcing context like this I can much more easily interact with the system as a whole, and also explain it more easily to others.
Specifically for founders it’s key to be aware of MCP because it can provide a kind of ‘portable context’ kit that you and your team can use when interacting with AI (when well documented).
That’s on top of the benefit that every AI call your team makes is better contextualised, and so more effective by default (again, when set up smartly).
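To make this concrete: in practice, using MCP mostly means running small servers that each expose a tool or data source, then pointing your AI client at them in a config file. Here’s a minimal sketch of Claude’s claude_desktop_config.json using the official filesystem server (the directory path is a placeholder; point it at whatever you actually want the model to see):

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "/path/to/your/project"
      ]
    }
  }
}
```

One entry per server; the client then discovers whatever tools each server exposes.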
Why MCP Matters to Founders
With the AI world in such chaotic flux it’s sometimes hard to see where things will be in a year’s time. We already had to juggle runways, find product-market fit, and chase sustainable growth – but now AI has come along and shaken everything up.
Amidst this evolutionary transition we must ground ourselves in what we do well while allowing ourselves space to explore the future. Because in the end, that’s what we should be building.

MCP is the latest flash in the pan, yes, but it’s a first stab at a piece of the AI architecture which will be with us in years to come. It starts to answer the question: how do we give the AI model more contextual awareness?
For now, it means:
- Efficiency gains: Faster access to greater context, allowing us and our teams to more rapidly develop, create, test, and automate.
- Market Edge: Right now, adopting as much AI as you can afford is going to give you an edge over competitors (especially large incumbents).
- Structure for stability: In early-stage business we are often scrappy. It’s a great thing, but it can be hard to scale. By formalising context we make inroads into a more scale-ready future, while simultaneously getting the other benefits mentioned.
- Some increased risk: While everyone preaches the pioneering benefits of MCP, it’s key to remember that a badly configured setup can hand AI (and others) broad access to your systems. I recommend an MCP vetting process, and off-machine backups 😅.
Concrete Examples of MCP in Workflows
Here are a few concrete examples of how a startup might use MCP in its workflows. I tilt these examples towards software startups, but you can also use MCPs in AI chats and in automation workflows.
MCP & Github – Code Assistant
You might already be using Cursor or Windsurf and know how it’s often quite a useful AI pair programmer (until you have to tell it something 5 times).

I use Github as my repo storage, and until MCP came along I was the bottleneck between Cursor and my overall code projects. I direct Cursor to help me as I see fit, once I’m already working on an issue or spitballing new features.
But with model context protocol I can now let Cursor see related issues, recent commits, and (in theory) get it to operate on my code base in a larger, more holistic way.
The Github MCP enables AI to do basically anything you can do in Github:
- Create repositories
- Update files
- Read back commits
- etc.
I’m still exploring this tentatively – I’m not ready to let Cursor go full metal on my repos.
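If you want to try it in Cursor, the config is a small JSON entry. Here’s a rough sketch assuming the reference @modelcontextprotocol/server-github package, with the token passed in via the documented environment variable (the token value is a placeholder – use a fine-grained, minimal-scope token):

```json
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "<your-fine-grained-token>"
      }
    }
  }
}
```

Check the server’s README for current install instructions; the shape of the entry is the same as any other MCP server.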
MCP & MySQL – DB Assistant
Another quick code example – if you’re like me you’re still using MySQL or one of its siblings, and often a new project starts with a hand-drawn scrap of paper mapping out the data architecture.
Since Cursor I’ve been taking these maps, creating the DB creation SQL, and then copy-pasting it into a comment at the top of a ‘dal.php’ (database access layer) file. Then I poke Cursor and ask for CRUD functions for the database.
This works surprisingly well.
But even that isn’t fast enough for the future.

With MCP & MySQL (with an MCP like dbhub) I can literally just skip the copy-paste, or some people might even like to write their object classes and get Cursor to deal with the rest (via the MCP).
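For the curious, here’s roughly what the Cursor-side entry for dbhub could look like. Treat this as a sketch under assumptions: I’m taking the @bytebase/dbhub package name and its DSN-style connection string from its README, so double-check the exact flags there, and point it at a dev database with a limited user rather than root:

```json
{
  "mcpServers": {
    "dbhub": {
      "command": "npx",
      "args": [
        "-y",
        "@bytebase/dbhub",
        "--dsn",
        "mysql://dev_user:dev_password@localhost:3306/my_app_dev"
      ]
    }
  }
}
```

Once it’s wired in, Cursor can read the live schema directly instead of relying on the SQL comment I used to paste into dal.php.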
MCP & Notion / Google Drive – General Document Store
Here’s the example I’ve been most philosophically interested in since I started this crazy AI business challenge: how to give AI actors crystal-clear vision of my personal business ethics and approaches as a founder. How do I encapsulate my thinking into a robust context which won’t eventually lead to AI misbehaviour?
… I’m still working on the answer to this if you have any ideas 😅.
But in the meantime MCP has created an even easier way for us to connect our ‘document store’ to our work in progress workflows.
As a founder you can literally:
- Write every commander’s-intent style business value, approach, outlook, target etc.
- Write todo lists for AI staff, teams, and swarms
- Cite Github issue links
- Spitball a new feature
- … all into a Notion page or a Google doc
- Connect that to your IDE / AI chat / AI automations via MCP
- Have a better informed hierarchy of AI tools
In this, MCP is going to help my challenge no end.
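The plumbing for this is simple: one client can mount several servers at once, so your document store sits right alongside your code tooling. Here’s a sketch of a claude_desktop_config.json exposing both the reference Google Drive and Github servers (package names taken from the modelcontextprotocol servers repo; the token value is a placeholder):

```json
{
  "mcpServers": {
    "gdrive": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-gdrive"]
    },
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "<read-only-token>"
      }
    }
  }
}
```

With both mounted, the same chat can pull the commander’s-intent doc from Drive and the open issues from Github in one conversation.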
How to Use MCP (My Process)
Right now there are two main places I recommend using MCP. Whether you are a software founder or not, using at least one of these gives you a good head start on the competition.
MCP servers might seem overwhelming if you’re not used to local servers, node/docker etc. but I promise you that if you stick with the tutorials you’ll quickly ‘get it’.
MCP and your AI Chats
Right now I suspect you’re chatting to ChatGPT, Claude, or one of the other GPTs. I find myself leaving a permanent tab open to o1 (ChatGPT) / Claude and dipping in and out of it throughout the day.
But to get better context (and so more practical results, and less need to repeat yourself), you can use the Claude desktop app and connect MCP servers.
Personally I use Claude for more creative stuff (like I’m testing it for writing some copy as part of my AI Directory Maker), so for now I’ve only connected my ‘document store’ (Google Drive), and am exploring the best way to use it.
The guide on the official GDrive MCP server is good, but it lacks a few steps. Here’s what I did to get Claude to see my doc store:
- Clone the root repo from here
- Follow steps 1-7 on this guide
- (Optionally) create a Google account which only has access to 1 directory in your Google Drive
- Go to Audience -> Test users from the Google API console, and add yourself/your restricted user
- Run npm install, then npm run prepare from within the /src/gdrive directory
- Run node ./dist auth and auth through Google
- Add the JSON from the guide above to your claude_desktop_config.json file and run Claude!
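For orientation, the JSON you end up adding is just another mcpServers entry. Since the steps above build the server from source, mine points at the local build (the path is a placeholder for wherever you cloned the repo; the exact snippet is in the official guide):

```json
{
  "mcpServers": {
    "gdrive": {
      "command": "node",
      "args": ["/path/to/servers/src/gdrive/dist"]
    }
  }
}
```

Restart Claude after editing the config so it picks up the new server.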

MCP and your IDE: How to use MCPs in Cursor
The first place a lot of us will use MCP is in our IDE. Currently I’m using Cursor, and am only using the Github MCP server.
Installing this was even easier than getting Google Drive into Claude. The instructions on the repo are straightforward, and it’s great that with Github tokens you can be super granular.

Make sure you’re on Cursor v0.47+ though, and also that you enable Agent mode in the chat.

Then you can get it to interact with your repo fully! I’m still thinking about how to work this into my own workflows; let me know in the comments if you have any smart use cases.

My Favourite MCPs
I’m going to keep exploring the booming collection of MCP servers available, probably tilted towards IDE use, but also in the context of these automation workflows that are becoming the backbone of the AI businesses I’m making for this challenge.
So far I’m loving these MCP servers (I’ll come back and update this list!):
- Github: In Cursor this lets me be super lazy and read/manage issues, but it’s got a lot more value I’m working through
- Google Drive: As mentioned above, I’m dabbling with how to store the ‘commander’s-intent’ of my AI businesses in one place. Later I expect to be able to have AI agents evolve their own prompt backbones!
- DBHub: This is letting Cursor see the actual database behind the scenes. So far it’s speeding up my initial app setup, but again, I suspect I’ll find more ways to utilise it
- Slack: I messed about with Cursor slacking me various updates – I use Slack every day as my go-to messenger. Looks promising
- Spotify: There’s already an MCP server for Spotify, but I could not get it working – I put this here as a placeholder, because I can see some awesome synergies between work & music that need to be leveraged!
Have you found any cool model context protocol servers which you’re using? Let me know in the comments.
MCP Risks & Pitfalls
Now it’s all well and good empowering your AI assistant, but this post wouldn’t be complete unless I made you aware of the potential risks of using MCP servers (in your IDE or anywhere).
Permissions Scopes
It goes without saying, but be super careful when you’re authorising these MCP servers. It may be tempting to give them full access, especially when your head is deep in solving some problem or other.
Giving them too many permissions or too large a scope can cause all sorts of issues, not to mention opening up serious vulnerabilities.
- Always scope-down permissions. Give the least possible to achieve your goal.
- Always store your auth tokens somewhere safe, add them to .gitignore, and do not share them accidentally in any way.
Best practice is to check on your MCP servers often, or at least at commencement & delivery/deployment. Get in the habit of cleaning up after yourself, and culling old tokens.
… you wouldn’t want to accidentally expose keys which let bad actors delete files, wipe databases, steal your crypto wallets etc.
… and you don’t want any AI to do so either.
Confused AI
Provided you’ve given just the right permissions scope you can go ahead and leverage your new connected AI. But as you go you may notice that all the new powers confuse your processes more than they benefit them.
Be a context master, and specify/restrict what your AI can see to just the right context to get the job done.
What’s Next After MCP?
For now you could say that MCP is another ledge on the mountain we’re climbing called pragmatic AI implementation. It’s great to see protocols like this being established amongst what can seem like the wild west of LLMs.

Personally I think it’s going to take me a few weeks to fully see the place MCP servers play in my overall AI business. It’s totally common sense that we want and need a way to wire various external services into our workflows, but I see MCP as an early-stage solution.
MCP and the AI Business Challenge
As to MCP servers and this AI business challenge, I think the bigger conversation needs to be about context. I’ll leave you with a question which I’m also asking myself:

What context do our AI agents need in order to do good business for us?
I’d love to hear your experiences and thoughts on MCP in the comments.
–
Next week: An update on AI Directory Maker and a post about Blending AI tools, subscribe below to get the update.