Cross-Platform Memory for AI Workflows
This has been a week of experiments. Yesterday I played about with ways to accept human input into automation workflows, and today I’ve hustled up a quick and brutal experimental swarm memory.
ChatGPT’s Memory and Context Window
LLMs and workflow tools already have memory. At its simplest, ChatGPT keeps a truncated chat history so it has some awareness of the ongoing conversation. That awareness is bounded by the model’s context window.
Workflow tools (like Make or n8n) have a ‘per flow’ memory which lets later workflow nodes reference the input/output data of earlier steps.
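For example, in n8n a later node can pull a value out of an earlier node’s output with an expression along the lines of `{{ $('Fetch Keywords').item.json.keywords }}` (the node name here is made up for illustration).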

These basic types of memory are invaluable, and without them our interactions with AI would be even more frustrating than they already are.
…but scaling that up to a more persistent swarm, one operating across platforms and on different schedules, means we’re going to need something more robust.
I’ve talked about using Google Drive as a store for ‘business parameters’: things like goals, audience profiles, and style and ethics guidelines. But that’s a step further than I’m going today.

First Step Swarm Memory Experiment
To get started, we’re only talking about a persistent context document that can be retrieved and updated by the multiple agents who pick up and work on a ‘project’.
Certainly this could be a single Google Doc.
But that wouldn’t give me:
- Version control -> Optimisation opportunities
- In-API event tracking without extra calls
- The peace of mind that comes from not handing over Google permissions
For now I’ve written a simple system which uses API endpoints to create, read, and update ‘project files’.
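To make that concrete, here’s a minimal sketch of the idea, assuming a FastAPI service with an in-memory store; the endpoint and field names are illustrative, not my actual implementation:

```python
# Minimal sketch of the project-file endpoints. The in-memory dict stands in
# for the real DB behind the business API; names are illustrative.
from uuid import uuid4

from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()
projects: dict[str, list[dict]] = {}  # project_id -> list of versions


class ProjectFile(BaseModel):
    markdown: str  # the key project data (keywords, jargon, etc.)
    agent: str     # which agent or workflow stage wrote this version


@app.post("/projects")
def create_project(body: ProjectFile):
    project_id = str(uuid4())  # unique identifier carried across workflows
    projects[project_id] = [body.dict()]
    return {"project_id": project_id}


@app.get("/projects/{project_id}")
def read_project(project_id: str):
    if project_id not in projects:
        raise HTTPException(status_code=404, detail="unknown project")
    return projects[project_id][-1]  # latest version of the context document


@app.put("/projects/{project_id}")
def update_project(project_id: str, body: ProjectFile):
    if project_id not in projects:
        raise HTTPException(status_code=404, detail="unknown project")
    projects[project_id].append(body.dict())  # append, don't overwrite
    return {"version": len(projects[project_id])}
```

Appending each update as a new version, rather than overwriting, is what makes the version control and change logging described below possible.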

Anatomy of an Automation Project File
These simple project files are stored in the DB behind my business API. They’re not complex (there’s a rough sketch after this list):
- A unique per-project identifier carried across workflows
- Markdown of the key data used throughout the project (e.g. keywords to target, common jargon)
- A version-controlled change log, recorded per agent and workflow stage
- The ability to lock sections down to particular agents, API keys, etc.
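Concretely, one stored project file might look something like this (field names are illustrative, not my exact schema):

```python
# Illustrative shape of one stored project file (not the exact schema).
project_file = {
    "project_id": "proj-7f3c2e",  # carried across Make/n8n executions
    "markdown": "## Keywords\n- swarm memory\n\n## Jargon\n- 'project file'",
    "changelog": [  # version-controlled, logged per agent and workflow stage
        {"version": 1, "agent": "researcher", "stage": "keyword-gathering"},
        {"version": 2, "agent": "writer", "stage": "drafting"},
    ],
    "locks": {  # sections locked down to particular agents or API keys
        "Keywords": ["researcher"],
    },
}
```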
Let’s say this gives my swarm a collective pigeon-level memory, with the added bonus of being usable across workflow executions on different platforms. Like four pigeons teamed up.
What’s next for Profit Swarm AI Collective Memory?
Yes, in theory you could just pass JSON or plain-text output along from one agent to the next, but what if something goes wrong? What if you hit a context window limit?
Workflow automation tools are getting better at exposing debugging output, but it still bugs me to have my data stored in a per-run container inside someone else’s still-evolving system.
So here we are.

I’m going to keep experimenting with different approaches here; ‘project file’ memory logs may one day offer my swarm the collective memory of a bucket of parrots.