Which AI Agents are Valuable in a 100% Automated Company in June 2025?
Last week I posted about how all of this automating I’m doing for this challenge doesn’t feel like AI. The Reddit comments after that post agreed with me almost unanimously:


AI Agents are currently just scripted chains.
This raises the question: where do we best leverage them? Can AI “Agents” act as managers?
AI Agents are Chained Prompts
Quick Update on AI Directory Experiment
Last week I got the semblance of Agents 1 and 2 (Prospector & Builder) stood up and working.
That’s 50% of the way to having an automated web Directory business:

This week I’m going to spend a lot of time refining the first two flows to get the output up to scratch.
The progress so far is both exciting and demoralising.
For example, I love the combo of Business API + Make.com + my Human Queue. It’s so sweet to enter only a keyword, have AIs gather keywords, generate brand ideas, and even make basic logos, then pick the winner with one click:

But it really doesn’t feel like AI. It feels like a very slick automation which contains lots of AI calls. When I reply to my AI manager agent prototypes I can feel their hollowness, not to mention their cringe-worthy levels of sycophancy.

You can read more about this first experiment here (full rundown on this first proto-swarm soon).
New here? Welcome! This is the journey of building a 100% automated AI business in 2025. You’re jumping in after we’ve already kicked things off, so you might want to catch up first.
Check out these key posts to get the full story—and don’t forget to subscribe for updates and exclusive perks:
How AI Agents Currently Work
At their core, AI agents are sophisticated orchestrations of prompts: structured instructions guiding LLMs to perform tasks. While these agents can seem more complex than that, they are mostly executing well-designed prompt architectures.
The Anatomy of an AI Agent
AI Agents currently seem to operate in one of these modes:
- Simple Prompt Execution: One or two straightforward levels of nested instruction, e.g. System Prompt + User Prompt
- Chained Prompts: A sequence where the output of one prompt becomes the input for the next. For instance, extracting key points from a text, then using those points to generate a summary (a kind of automatic chain-of-thought prompting)
- Conditional Prompting: Dynamic branching versions of the above, where the next prompt depends on the previous output. For example, if sentiment analysis detects negativity, the agent might initiate a prompt to draft a response addressing concerns
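To make the distinction concrete, here’s a minimal Python sketch of the chained and conditional patterns. `call_llm` is a hypothetical stand-in for whatever chat-completion API you use; none of these names come from a real framework.

```python
# Minimal sketch of the prompt patterns above. `call_llm` is a hypothetical
# stand-in for whatever chat-completion API you use: it takes a prompt
# string and returns the model's text reply.

def call_llm(prompt: str) -> str:
    raise NotImplementedError("wire this to your LLM provider")

def chained_prompts(text: str) -> str:
    # Chained: the output of one prompt becomes the input to the next.
    points = call_llm(f"Extract the key points from this text:\n{text}")
    return call_llm(f"Write a short summary from these key points:\n{points}")

def conditional_prompting(text: str) -> str:
    # Conditional: which prompt runs next depends on the previous output.
    sentiment = call_llm(f"Label this text POSITIVE or NEGATIVE:\n{text}")
    if "NEGATIVE" in sentiment.upper():
        return call_llm(f"Draft a reply addressing the concerns in:\n{text}")
    return call_llm(f"Draft a brief thank-you reply to:\n{text}")
```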

This is likely what’s driving nearly all of the ‘AI Agent’ tools you may have used – to a more or less decent outcome. They’re pretty much just prompts.
Where I’m using AI Agents Currently
The two best examples I’ve seen of close-to-expectation “AI Agents” are in Relevance AI & Make. These two AI Agent setups seem most robust for workflow automation, and I could get the agents to do some work with a depth I’ve not seen in other tools (when they worked – I’m looking at you, Relevance).
But as per the Reddit chorus, more often than not, these agents have to stay flat to the floor, or risk falling over.
- Give them one tool (e.g. an API to call) and a fairly shallow task (query keywords, return the low-hanging fruit) – they’ll do okay
- Give them several tools – you’re much less likely to get consistent output
- Give them handfuls of tools and a complex task – even if you’re methodical with your prompts, you’ll likely get frustratingly close-to-great answers, then absolute clangers, in no rational sequence
…I digress.
I’m currently using AI agents on Make to do keyword research and assist with domain prospecting:

- Keyword research = 1 tool (Google Keyword API) + 1 task (find me the 20 lowest competition keywords as close to the provided keyword as possible)
- Domain Prospecting = 0 tools + 1 task (invent some madcap-but-audience-appropriate domain name ideas)
… both of which I suppose I could recreate by chaining a few prompts together outside of Make.
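For illustration, here’s a rough sketch of what that recreation might look like, reusing the hypothetical `call_llm` stub from the earlier sketch. `fetch_keyword_stats` is likewise a made-up wrapper around a keyword-data API, not a real client.

```python
# Sketch of recreating both Make agents as plain prompt chains,
# reusing the hypothetical `call_llm` stub from the earlier sketch.

def fetch_keyword_stats(seed: str) -> list[dict]:
    # Hypothetical wrapper around a keyword-data API (e.g. Google's).
    raise NotImplementedError("wire this to your keyword-data source")

def keyword_research(seed: str, n: int = 20) -> str:
    # 1 tool + 1 task: fetch candidate stats, let the LLM pick the winners.
    stats = fetch_keyword_stats(seed)
    return call_llm(
        f"From these keyword stats, pick the {n} lowest-competition "
        f"keywords closest in meaning to '{seed}':\n{stats}"
    )

def domain_prospecting(audience: str) -> str:
    # 0 tools + 1 task: pure generation, no tool calls at all.
    return call_llm(
        "Invent ten madcap-but-audience-appropriate domain name ideas "
        f"for this audience: {audience}"
    )
```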
Where I’m not using AI Agents
I’m not using AI Agents as managers anymore.
See the following ridiculous Make scenario workflow:

This basic workflow outperforms a prompted agent, because wherever you can use explicit conditional statements, it’s still so much more reliable than LLM-based agents (which, reflecting historic human nature, are overall inconsistent).
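The same principle outside Make, sketched in Python: the “manager” becomes a deterministic router of plain conditionals, and LLM calls happen only inside the steps it dispatches to. The field and step names here are illustrative, not from my actual scenario.

```python
# The "manager" as a deterministic router: explicit conditionals decide
# the next step, so routing never hallucinates. LLM calls live only
# inside the steps this dispatches to. Names are illustrative.

def route_prospect(prospect: dict) -> str:
    if not prospect.get("keywords"):
        return "run_keyword_research"
    if not prospect.get("domain_ideas"):
        return "run_domain_prospecting"
    if not prospect.get("human_approved"):
        return "send_to_human_queue"  # the 1-click pick happens here
    return "send_to_builder"
```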
I’m not using AI Agents with 3+ tools anymore
I’ve dabbled with fuller agents, but they just don’t seem ready. They time out and do nothing. They blow API budgets in milliseconds. They work randomly, which gives you false hope. If you have any success with multi-tool agents, please do comment below and point me in a good direction!

When will AI Agents be Smart Enough to be Managers?
As artificial intelligence transitions from AI hype, to AI augmentation, to realistic AI Agents, I hope to be able to leverage them reliably as manager agents.
I suspect there has been progress on this already this year, and I will re-hunt for it when I hit the need to build a ‘Product manager’ for this directory business experiment.
Certainly as we get more and more capable models, I’d hope that true AI Agents become an easy go-to (alongside some form of biz API and human queue). If they don’t, this challenge is going to be bloody difficult to achieve!

What’s your take? Are any AI agent frameworks smart enough to bother with yet? How are you using “AI Agents”? Let me know in the comments.