
AI in Practice: Speaking at Exeter Science Park

Last week I was back at Exeter Science Park, this time presenting on what AI-augmented development actually looks like in practice — moving beyond the demos and into the reality of using AI tools in a production software company.

Why I Wanted to Do This Talk

A year ago, almost to the month, I stood in a similar room at the Science Park and talked about how Ad-MOTO uses AI to process advertising data. That was about AI as a product feature — something we build for clients. This time, I wanted to talk about something different: how AI is changing how we build software itself.

In the past 12 months, the tools available to developers have transformed. We’ve gone from AI being a novelty — generating the odd function, autocompleting some boilerplate — to it being a genuine force multiplier in our daily workflow. But the reality is more nuanced than either the hype merchants or the sceptics would have you believe.

What We Actually Use

At Exe Squared, every developer now uses AI-assisted tools as part of their standard workflow. We use Claude for code review, architecture discussions, and complex problem-solving. We use Cursor and GitHub Copilot for in-editor assistance. We’ve integrated AI into our documentation workflow and our testing process.

The key insight? AI doesn’t replace developer judgement — it amplifies it. A junior developer with AI assistance can be dramatically more productive, but only if they have enough understanding to evaluate what the AI produces. An experienced developer with AI assistance can tackle problems that would previously have taken days in hours. The multiplier effect is real, but it requires a foundation of genuine skill.

The Pitfalls

I was honest about the failures too. We had an early experiment where we let AI generate an entire module with minimal human oversight. It worked perfectly in testing and failed spectacularly in production — because the AI had made reasonable-looking assumptions about data formats that didn’t match reality. The code was clean, well-structured, and completely wrong.
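To make that failure mode concrete, here's a minimal, entirely hypothetical sketch — not our actual module, just the shape of the bug. The names, formats, and feed behaviour are illustrative assumptions: the generated code quietly assumed one input format because that was all the test fixtures contained.

```python
from datetime import datetime

def parse_event_date(raw: str) -> datetime:
    # The generated version assumed ISO 8601 (YYYY-MM-DD) — the only
    # format that appeared in the test fixtures. Clean, readable, wrong.
    return datetime.strptime(raw, "%Y-%m-%d")

# In production the (hypothetical) upstream feed sent UK-style dates:
# parse_event_date("03/07/2025") raises ValueError.

def parse_event_date_reviewed(raw: str) -> datetime:
    # Reviewed version: try the formats the feed is actually known to
    # use, and fail loudly with context rather than guessing.
    for fmt in ("%Y-%m-%d", "%d/%m/%Y"):
        try:
            return datetime.strptime(raw, fmt)
        except ValueError:
            continue
    raise ValueError(f"Unrecognised date format: {raw!r}")
```

The fix isn't clever — it's the kind of defensive check a reviewer who knows the data source adds in thirty seconds, and exactly the kind of assumption an AI working only from test fixtures has no way to question.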

That taught us that AI-generated code needs the same rigorous review as human-written code. Perhaps more, because AI code tends to look confident and competent even when it’s subtly incorrect. We now treat AI as a very capable junior developer: it can do impressive work, but everything needs review by someone who understands the domain.

The South West Tech Scene

What I love about presenting at these events is the calibre of the audience. The questions were sharp and practical: How do you handle IP concerns with AI-generated code? What’s the actual productivity gain you’ve measured? How do you prevent your team becoming dependent on tools that might change pricing or availability?

The South West tech community doesn’t get enough credit. There are companies out here building serious technology — not just startups chasing funding, but established businesses solving real problems. Events like the Science Park AI series bring that community together, and I’m grateful to be part of it.

If you missed the talk and want to discuss any of these topics, get in touch. We’re always happy to share what we’ve learned — the successes and the mistakes.
