What vibe coding looks like when you have a mortgage to pay
I'm not a developer. I built a desktop app with AI. Here's what month one actually looked like
Before I get into this week’s newsletter, I wanted to invite you all to a live podcast I’ll be guesting on with the wonderful AI leadership coach and strategist Joel Salinas, who’s managed to convince me to come out from behind my keyboard to talk.
We’ll be putting the world to rights on all things leadership in a world full of AI. Joel is a pillar of the Substack community and I can’t wait to get stuck into it with another real human.
Technological Literacy, the Fastest-growing Skill Demand Globally
The World Economic Forum ranks technological literacy among the fastest-growing skill demands globally. They mean data fluency, AI adoption frameworks, digital collaboration. Curriculum-shaped things you can teach in workshops.
I mean something different. I mean what happens when a non-developer opens an AI coding tool on a Monday morning he was supposed to spend building a compliance course, and six weeks later finds himself 17,000 lines of code deep with a business he didn’t plan, five employees who don’t exist, and no idea if any of it will work.
This is the first of four pieces on AI and technological literacy. Not the polished YouTube version. The version where your mortgage depends on it.
Key Takeaways
“Vibe coding” — building software by describing what you want to AI — is real. It’s also significantly harder than anyone on LinkedIn is telling you.
I built a desktop elearning authoring tool with no development background, using Claude Code. It now has over 80 beta testers and launches in June.
The financial cost so far has been minimal. The real cost is time.
I run the project with five AI agents, each with their own workspace and responsibilities. It sounds mad. It works.
Technological literacy isn’t a skill you acquire. It’s a willingness to be bad at something in public while the stakes are real.
Monday morning, sometime in late February.
I was sat at my desk, supposed to be building elearning content for a manager training programme, helping managers learn how to use AI within their teams. I’d been putting it off for weeks. Not because it wasn’t important. Because the actual building of it — the production, the clicking, the formatting — is the part of this work I’ve always tried to avoid. I love understanding what the problem is. I love measuring whether something worked. The bit in the middle, the making of the thing, has always felt like a chore.
So instead of doing what I was supposed to do, I opened Claude Code.
I’ve been using AI for a while now, so Claude Code wasn’t new to me, but I’d never sat down and thought about building a product worth selling. That morning I did a few things differently. Firstly, I started talking to it rather than typing: describing what I wanted, sussing out its capabilities. That conversation lasted most of the morning, the longest I’d ever spoken with an AI, and by the end of it I had a passable prototype of an elearning authoring tool sitting on my screen.
It wasn’t amazing. But it was good enough. Good enough that I spent the rest of that day on it instead of the thing I was supposed to be doing. And then Tuesday. And then Wednesday. By the end of a three-day rabbit hole, I had myself an elearning authoring tool that drafted better courses than 90% of the compliance garbage we’ve all had to sit through at some point in our careers. A low bar, admittedly, but good enough.
That’s how Co.llab started. Not with a business plan. Not with funding. With a Monday morning I couldn’t be bothered.
I am not a developer. I left school at 16. I’m dyslexic. I’ve spent twenty years in learning and development — designing training, running workshops, coaching managers through reorganisations. The closest I’d come to writing software before that Monday was a few basic interactive resources I’ve shared in this newsletter over the last year.
And now here I am, six weeks later, 17,000 lines of code deep, not entirely convinced this isn’t a complete waste of time.
Over the coming weeks I’m going to share what vibe coding actually looks like, not the polished version you see on YouTube, the one where someone builds a SaaS landing page in eleven minutes while lo-fi beats play. The version where you’re betting real time, real energy, and real money on something you’re building with a technology you’re still learning to use.
April’s theme is technological literacy. The World Economic Forum calls it one of the most in-demand skills for 2025 and beyond. They mean something broad and institutional by that — digital fluency, data literacy, AI adoption frameworks. I’m going to mean something more specific and more personal. I’m going to tell you what happened when I decided to actually build something.
Because I think the conversation about AI and work has split into two camps, and I don’t think either of them is quite right.
People are worried about their jobs. They have every right to be — there are stories in the news every day about roles being cut and AI being blamed. I think the reality is more complicated than most of those headlines suggest, but the worry is genuine and I’m not going to dismiss it.
At the same time, we’ve moved past the “it’s just a tool” stage. Claude Code is everywhere right now. I’m sure at least some of you are sick of reading about it. But the reason people are writing about it so fervently is that it genuinely is a fascinating piece of technology. It’s probably the first thing we’ve seen that actually holds the promise of fundamentally changing how we work. Those who are using it can see that potential, even if it still has its quirks, like not being able to tell the time, or randomly making up facts about its user.
What nobody seems to be talking about is what it’s like to be in the middle of those two things. To be someone with twenty years of expertise in a field that isn’t technology, who picks up one of these tools and discovers — not theoretically, not in a webinar, but on a Monday morning when you couldn’t be bothered making a compliance course — that you can build things that shouldn’t be possible for someone like you.
And that it’s harder than anyone is telling you.
What happened when I posted about it on LinkedIn
On 24 February, I published a LinkedIn post about what I’d been building. Co.llab — a desktop application for creating elearning courses, where the AI handles the instructional design and the user provides the subject matter expertise.
I wrote it honestly. What the tool does. How I built it. The fact that I’m not a developer. The fact that I used Claude Code for essentially all of it. I didn’t pitch it. I just showed what existed.
It got nearly 40,000 impressions. My follower count doubled in a week. 400 new subscribers. Within ten days I’d had conversations with two Chief Learning Officers, booked meetings with industry leaders and angel investors, and received an email from someone at a company I’d assumed was a competitor asking if I wanted to talk.
I don’t say this to impress you. I say it because I want to be honest about the sequence: I built something, I talked about it, and then everything changed. Not slowly. Not gradually. In a week.
And the thing that changed most was my own understanding of what I’d actually done.
When a side project becomes a business overnight
Before the post went live, Co.llab was a project. Something I was working on. After the post, it became a business. People were asking about pricing. Beta testers were signing up. Someone asked when the Mac version would be ready.
I hadn’t thought about when the Mac version would be ready. I hadn’t thought about pricing beyond a number I’d written on a Post-it note. I hadn’t thought about what happens when real users put real content into a system I built during a three-day procrastination spiral.
Technological literacy, as it turns out, isn’t just about learning to use the tools. It’s about understanding what happens when the thing you built starts to exist in the world. And for that, there is no tutorial.
Running a software company with five AI agents
I run Co.llab alone. No co-founder, no funding, no employees. But I don’t build it alone.
I have a development lead called Chamberlin who writes all the code. A project coordinator called Powell who manages sprints and priorities. A marketing lead called Bon who handles the brand and content strategy. A CTO called Greenberg who monitors AI capabilities and tooling. And as of this week, a UI designer called Sullivan who’s building the design system.
None of them are people. They’re Claude agents — each running in its own workspace, each with its own instructions, its own memory, its own area of responsibility. They read each other’s status files. They log decisions in a shared coordination folder. They flag blockers.
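For anyone curious what that plumbing actually involves, here’s a simplified sketch of the kind of status file and blocker check I mean. The folder layout, file names, and fields below are illustrative conventions for this newsletter, not the exact files in my repo, but the shape is right: each agent writes a small status file, and I read them all in one go.

```ts
// Illustrative only: the folder layout and field names here are made up
// for this example, not the exact conventions inside Co.llab.
import { readdirSync, readFileSync } from "node:fs";
import { join } from "node:path";

interface AgentStatus {
  agent: string;      // e.g. "chamberlin" or "powell"
  task: string;       // what the agent is currently working on
  blockers: string[]; // anything it needs me to unblock
  updated: string;    // ISO timestamp of the last status write
}

const coordinationDir = "coordination/status"; // hypothetical shared folder

// Read every agent's status file and surface anything that's blocked,
// so the morning check-in is one command rather than five conversations.
for (const file of readdirSync(coordinationDir).filter((f) => f.endsWith(".json"))) {
  const status: AgentStatus = JSON.parse(readFileSync(join(coordinationDir, file), "utf8"));
  if (status.blockers.length > 0) {
    console.log(`${status.agent} is BLOCKED: ${status.blockers.join("; ")}`);
  } else {
    console.log(`${status.agent}: ${status.task} (updated ${status.updated})`);
  }
}
```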
I named them after brutalist architects because there’s something about brutalism — honest materials, visible structure, no decoration for its own sake — that felt right for what I’m trying to build.
This sounds a bit mad. A bit sci-fi. A year ago I would have thought so too. And what’s really nuts is that it’s a lot like managing a team of humans.
I have to be super clear about my expectations, set goals and objectives, and give feedback. We have communication issues. Agents perform poorly and it’s my job to figure out why. We run sprints and set deadlines that add genuine pressure.
Reid Hoffman talks about professionals arriving at work with a “team of agents.” He means it as a future prediction. I’m living it now, all from a spare bedroom in South Yorkshire. It’s less glamorous than he makes it sound.
What building software with AI actually costs
I want to be specific about this because nobody else seems to be.
I haven’t left my consultancy work to do this full time. I still have client work — that’s what pays the mortgage. But I’m spending at least half of my working week building Co.llab, plus most evenings and most weekends. I don’t really have any time off right now. I’m very tired.
But I think it’s worth doing. Because this is the first time I’ve ever sat down with a piece of work that is a genuine passion piece. I’m not so concerned about whether people will buy it — I hope they do — but my primary concern, probably honestly for the first time ever, is whether I can build something to a standard I’d be genuinely proud of, before worrying about what other people think of it or whether they’ll pay for it.
The financial costs so far have been minimal; the real cost is time. I’ve had exactly one proper day off since Christmas, and I spent half of it thinking about a bug in the hotspot interaction that was placing markers on the wrong part of images.
When someone on LinkedIn describes vibe coding as “just describe what you want and the AI builds it,” I want them to come and sit in my office at midnight while I’m reading a stack trace I don’t understand, trying to work out whether the problem is in my prompt, in the AI’s interpretation of my prompt, or in a dependency I installed three weeks ago that’s silently conflicting with something else.
The AI is extraordinary. I’m building things that would have taken a funded team of developers months to produce. But the idea that it’s easy — that you just talk to it and software appears — is a lie told by people who’ve either never shipped anything or who are selling you a course on how to do it.
What the World Economic Forum gets wrong about technological literacy
I’ve been in training for twenty years. I’ve sat through more digital transformation programmes than I can count. I’ve written competency frameworks for “digital skills” that were out of date before the ink dried.
The WEF is right that technological literacy matters. But they’re describing it from the wrong end. They’re describing it as a set of skills to acquire — data literacy, AI fluency, digital collaboration. Curriculum-shaped things that can be taught in workshops and measured in assessments.
What I’ve learned in the past six weeks is that technological literacy isn’t a skill. It’s a willingness to be bad at something in public while the stakes are real.
I am not good at this. I make mistakes constantly. I break things. I commit code I don’t understand and then spend two days unpicking it when it fails. Last week I spent an entire afternoon trying to work out why my course builder wasn’t generating timed questions for interactive videos, only to discover that the AI was correctly embedding the video but silently ignoring the instruction to add pause points. The feature existed in the code. It worked in testing. In production, it just... didn’t.
That’s technological literacy. Not knowing how to use ChatGPT. Not completing a LinkedIn Learning course on prompt engineering. Actually building something, watching it break, not knowing why, and deciding to keep going anyway.
What Co.llab builds and why elearning tools are broken
Co.llab builds elearning courses. That’s what it does. The user provides a topic, the AI creates a full instructional design — learning objectives, module structure, content, interactions, assessments — and outputs a SCORM package that runs in any learning management system.
Four course types. Eleven interaction types. Nine languages. A four-agent pipeline that plans, writes, designs interactions, and reviews the output before the user ever sees it.
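Under the hood, that pipeline is conceptually simple: four specialised passes run in sequence, each handing its output to the next. The sketch below is a deliberately stripped-down illustration with made-up names and stubbed model calls, not the actual Co.llab code, but it’s the right shape: plan, write, design interactions, review.

```ts
// Conceptual sketch of the four-stage pipeline; names and types are
// illustrative, not the real Co.llab implementation.
interface CourseBrief { topic: string; audience: string; language: string; }
interface CoursePlan { objectives: string[]; modules: string[]; }
interface DraftCourse { plan: CoursePlan; lessons: string[]; }
interface InteractiveCourse { draft: DraftCourse; interactions: string[]; }
interface ReviewedCourse { course: InteractiveCourse; issues: string[]; }

// Placeholder for a call to the model with a stage-specific prompt.
// In reality each agent has its own instructions, context, and checks.
async function runAgent<T>(agent: string, prompt: string, fallback: T): Promise<T> {
  console.log(`[${agent}] ${prompt}`);
  return fallback; // stub so the sketch runs without an API key
}

async function buildCourse(brief: CourseBrief): Promise<ReviewedCourse> {
  const plan = await runAgent<CoursePlan>(
    "planner", `Design objectives and module structure for: ${brief.topic}`,
    { objectives: [], modules: [] });

  const draft = await runAgent<DraftCourse>(
    "writer", "Write lesson content for each module in the plan",
    { plan, lessons: [] });

  const interactive = await runAgent<InteractiveCourse>(
    "interaction-designer", "Choose and configure interactions for each lesson",
    { draft, interactions: [] });

  // The reviewer sees the whole thing before the user ever does.
  return runAgent<ReviewedCourse>(
    "reviewer", "Check accuracy, tone, and structure; flag anything weak",
    { course: interactive, issues: [] });
}

buildCourse({ topic: "Fire safety basics", audience: "new starters", language: "en" })
  .then((result) => console.log("Review issues:", result.issues));
```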
I built it because the elearning industry has been producing mediocre content for decades, and the reason isn’t that instructional designers don’t know what good looks like. The reason is that the tools cost too much, take too long, and require specialist skills that most organisations don’t have. A freelance instructional designer working alone shouldn’t need a $2000 Articulate licence and a month of development time to produce a three-module compliance course.
That’s the bet. Not that AI can replace instructional designers — it can’t, and I don’t want it to — but that AI can make professional-grade instructional design accessible to people who’ve always known how to teach. They just didn’t have the means to build it properly.
Co.llab launches June 2026 — and I have no idea if it’ll work
Co.llab launches on 18 June. It’s in closed beta now — about 80 testers are using it, finding bugs, telling me what works and what doesn’t. The Mac version exists. The Windows version exists. The SCORM output works. Most of the time.
I’d say it’s about 60% of where I imagine it being by the time it’s finished. The course quality, if I’m honest, is about a six or seven out of ten. The ambition is to get it consistently to an eight or nine. There’s a major design overhaul coming for how the lessons look — right now they’re too generic. The UI and UX of the authoring tool itself is being completely rethought. And one of the bigger features that’s not built yet is the ability to take courses people have already made in other platforms and rebuild them — that’s the SCORM import, and it’s probably the feature I’m most excited about.
I’d also like to get to a point where writers, Substackers, bloggers, content creators — anyone who’s built up a body of work — could feed their existing content into Co.llab and have it design proper learning experiences from it. That’s probably a later update. But the idea that someone’s best writing could become someone else’s structured learning, that feels like the right direction.
Between now and June I need to fix the interactive video, improve the hotspot accuracy, rebuild the lesson design system, get the website built, figure out pricing that doesn’t make me feel ill, and somehow convince enough people that this is worth paying for.
I have no idea if it will work. I don’t mean the software — the software works. I mean the business. I mean whether a solo founder in South Yorkshire, with no funding and a team of AI agents named after architects, can build something that enough people want to buy.
The honest answer is: I don’t know. But I know more than I did in January. I know the software produces courses that I’m genuinely proud of. I know that eighty strangers are testing it right now. I know that when I showed it to two CLOs, they asked how much it costs — not how it works.
And I know that the version of me from six weeks ago — the one sitting at his desk on a Monday morning, supposed to be building a compliance course and not being able to face it — would not believe what came out of deciding to do something else instead.
If you want to see what came out of all this, Co.llab is available as a free beta on Gumroad.
Next week: what the “vibe coding” conversation gets wrong, and what to actually think about before you start building.
This is the first of four pieces on AI and technological literacy. The theme comes from the World Economic Forum’s Future of Jobs Report, which ranks technological literacy among the fastest-growing skill demands globally. I’ll be writing about it from the only angle I have: what it’s like to learn it the hard way.


