
Helping SaaS teams ship AI features users actually adopt

Adoption is where most B2B AI features break down

  • Your AI feature is live but adoption is flat

  • Users try it once and don't come back

  • It's hard to tell whether the gap is technical or product

  • Demos pass, but real workflows expose what's broken

Book Free Intro Call

Portrait of Alfred Persson, AI product engineer

About me

Hi, I'm Alfred. I'm a freelance AI product engineer and I work with B2B SaaS teams to build AI features that actually get used.

If you've shipped an AI feature and adoption isn't where you hoped, that's where I come in. Whether the gap is technical or product-side is rarely obvious. Usually it's both, and the fix lives somewhere in between.

I studied interaction technology and design at Umeå University, a programme that combines software engineering with UX and usability. After that I moved into backend engineering, where I spent five years building blockchain applications. That meant code I couldn't patch after deployment and real users moving real money, the kind of software that has to stay reliable under unpredictable conditions. That mix of understanding what users need and knowing how to build reliable systems is what I bring to AI work now.

The difference between an AI feature that gets adopted and one that gets ignored usually comes down to how it handles uncertainty and whether it fits how users actually think. Those are the decisions I focus on.

Why work with me?

UX + Engineering Background · I studied interaction design alongside software engineering. I build features that fit how users actually think, not just how the system works. That means fewer "it works but nobody uses it" situations.

5 Years in Production Systems · I spent five years building blockchain applications. I know what it takes to ship software that has to stay reliable under real conditions, not just pass a demo.

Pipeline to Integration Spec · I build the retrieval pipeline, LLM orchestration, and API layer. I also spec how each response should behave in the UI, especially confidence states and when the feature should hand off to a human.

Designed Around Failure · Accuracy metrics don't matter if users don't trust the output. I focus on how the feature handles failure and communicates uncertainty, so users get a useful next step instead of a confidently wrong answer.

How I work

AI Feature Audit · A structured review of an existing AI feature, looking at both the system and how users experience it, with a prioritized fix list. You get a findings report with a maturity score, prioritized fixes, and an advancement roadmap, plus a walkthrough session with your team. Usually 2 weeks.

Feature Build · Architecture, retrieval, generation, API, and a clear integration spec that defines exactly how the feature should behave in the UI. Typically 4–8 weeks.

Integration & Iteration · I embed into your team part-time (usually 2–3 days a week) to build, ship, and refine AI features alongside your existing engineers. Good for teams that have ongoing AI work but not enough in-house experience to move fast.

Observability & Improvement · Setting up evaluation frameworks so you can measure whether your AI feature is actually helping users. Includes metrics, tracing, and a feedback loop you can use after I'm gone.

Frequently asked questions

What kind of products do you work on?

B2B SaaS products with user-facing AI features. Things like support bots, AI-assisted workflows, intelligent search, document processing. The common thread is that there's an AI component that end users interact with directly, and it needs to work well enough that people actually trust and use it.

What's your background?

Short version: interaction design and software engineering at Umeå University, then five years building blockchain applications. What I work from is a mix of production engineering and user-side thinking about whether anyone will actually use the result.

Do you work as a contractor or on fixed-scope projects?

Both. Part-time embedded work makes sense if you have ongoing AI work and want me iterating alongside your engineers. A fixed-scope engagement makes sense if you want a specific feature built, shipped, and handed off. We can figure out what fits during the intro call.

What does a typical engagement look like?

Depends on the scope. A feature audit takes 2 weeks. A new AI feature build is typically 4–8 weeks from architecture to deployment. Ongoing integration work is usually 2–3 days per week. We start with a free intro call to scope the work and make sure it's a good fit.

What's your tech stack?

Python, FastAPI, OpenAI, Claude, LangChain/pydantic-ai, ChromaDB, Pinecone, pgvector, sentence-transformers, PostgreSQL, and Docker.

Let's grab a virtual coffee

Want to see if we're a good fit? Let's have a chat. Book a free 30-minute intro call and we can talk through what you're working on.

Book Free Intro Call