I keep thinking about my weekend experiment with Microsoft Copilot and my manuscript. The experience itself was enlightening.
One thing I’ve realized: it was the first time I systematically used AI without restrictions.
Work AI vs Personal AI
My work day involves regular AI use for any number of things.
As a technical writer by trade in a docs-as-code environment, my regular tools include:
- Terminal
- Git
- VSCode
- GitHub
There is a mix of internal AI tools and approved commercially available tools. I built some Claude Code Skills that automate routine tasks and fill gaps I’ve discovered during the course of my workday.
I built some other Claude Code Skills that review files against the various style guides we have to follow, along with templates and other rules and requirements.
Style guides. Templates. Linting rules. Product rules. Corporate guidelines.
My manuscript doesn’t have any of that.
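To make the contrast concrete, here is a minimal sketch of the kind of check a style-review Skill might run against a docs file. The rule names, patterns, and messages are invented for illustration; they are not any actual corporate style guide, and a real Skill would carry far more rules.

```python
# Hypothetical sketch of a style-review check. Rules are illustrative only.
import re

RULES = [
    # (rule name, pattern to flag, message for the writer)
    ("no-placeholder", re.compile(r"\bTBD\b"), "Replace TBD with real content"),
    ("heading-case", re.compile(r"^#+ [a-z]"), "Headings should start capitalized"),
    ("double-space", re.compile(r"  +"), "Use a single space between words"),
]

def review(text: str):
    """Return a list of (line_number, rule_name, message) violations."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for rule, pattern, message in RULES:
            if pattern.search(line):
                findings.append((lineno, rule, message))
    return findings

sample = "# my doc\nStatus: TBD\nAll  good here."
for lineno, rule, message in review(sample):
    print(f"line {lineno}: {rule}: {message}")
```

A manuscript simply has no equivalent of that RULES table, which is the point: there is nothing deterministic to lint against.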
Using AI without restriction
My manuscript doesn’t have a template. It doesn’t follow a specific style guide like AP Style or the Chicago Manual of Style.
It borrows from them. Its structure borrows from that of a symphony, with elements of playwriting and screenwriting mixed in alongside code.
Software development methodologies, like Extreme Programming from the agile world, show up in the manuscript. There’s a playfulness in parts, too.
To use AI for my manuscript, I had to figure out how to use AI without the restrictions my brain has become accustomed to through AI use at work.
In a way, it was kind of freeing. And a bit “deer in the headlights.” I had to find my way. I knew what I was after, but I was unsure if Copilot, or any AI tool, would be able to deliver.
Using AI within restrictions like style guides, templates, and other requirements also made me wonder whether other parts of the process, the parts that do come with such requirements, could benefit from the same approach. That got me thinking about testing some agentic ideas, or some Skills.
Running wild thinking about potential Agents and Skills
I’ve used Copilot now, and I already use Gemini and Claude Code at work. I’ve used ChatGPT for silly things, but hesitate to use it for much else.
Since I’ve been an Evernote user since the dawn of time, I tried Evernote’s AI and was overwhelmingly disappointed. Notion’s AI, on the other hand, is amazing. I gave Notion a selection of Evernote Notebooks, and through a series of prompts, it gave me a bird’s eye view of all the data I collected as I went about rewiring my brain, along with connections between Notebooks and Notes, and summaries of collections.
Some of the gaps in my manuscript identified by Copilot can be filled by information Notion’s AI pulled from a few thousand Notes scattered across a few Notebooks. It’s pretty amazing.
Now I wonder: Can I create a Claude Code Skill to pull it all together?
If I put all the info into VSCode, install the Claude Code plugin, can I build a Skill that will help me find what I need from Notion to address the gaps identified by Copilot so I can draft and revise my manuscript?
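One piece such a Skill would need is a way to pull candidate Notes out of Notion. A rough sketch, assuming Notion’s public search endpoint (the integration token and the gap phrase here are placeholders, and a real Skill would still need to fetch page content separately):

```python
# Hypothetical sketch: building a Notion search request for notes that might
# fill a manuscript gap. Endpoint and headers follow Notion's public API;
# the token and query below are placeholder values.
import json

NOTION_SEARCH_URL = "https://api.notion.com/v1/search"

def build_search_request(query: str, token: str):
    """Return the URL, headers, and JSON body for a Notion search call."""
    headers = {
        "Authorization": f"Bearer {token}",
        "Notion-Version": "2022-06-28",  # a published API version string
        "Content-Type": "application/json",
    }
    body = {
        "query": query,
        "page_size": 10,
        "sort": {"direction": "descending", "timestamp": "last_edited_time"},
    }
    return NOTION_SEARCH_URL, headers, json.dumps(body)

url, headers, body = build_search_request("rewiring my brain", "placeholder-token")
print(url)
print(json.loads(body)["query"])
```

The Skill itself would sit on top of something like this: Copilot names a gap, the Skill turns that gap into search queries, and the matching Notes come back as drafting material.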
Would an agent be beneficial?
Can I go further, and build an agent, or a Skill, for querying potential editors, agents, and publishers?
Can I transform my public creative writing site into an interactive adventure for my manuscript?
Using AI without the restrictions I’ve become accustomed to at work has opened up a sea of possibilities. All that exposure to AI tools through work has provided a foundation on which I can build, cultivate, and continue to pursue my curiosity.