Helping SaaS teams ship AI features users actually adopt
Most AI features fail at adoption, not accuracy
- Your AI feature is live but users aren't really engaging with it
- Users ignore it, work around it, or just don't trust the output
- You're not sure if the problem is technical, UX, or both
- You need someone who understands both the pipeline and the user experience
- You want AI that handles failure well, not just the happy path

About me
Hi, I'm Alfred. I'm a freelance AI engineer and I work with B2B SaaS teams to build AI features that actually get used.
Most of my clients come to me because they've shipped an AI feature that isn't getting the adoption they expected. Users ignore it, work around it, or don't trust it. Usually it's not obvious whether that's a technical problem or a product problem. In my experience it tends to be both, and the fix lives somewhere in between.
I studied interaction technology and design at Umeå University, a programme that combines software engineering with UX and usability. After that I moved into backend and distributed systems, where I spent five years building blockchain infrastructure that had to stay reliable under unpredictable conditions. That mix of understanding what users need and knowing how to build reliable systems is what I bring to AI work now.
What I've found is that the difference between an AI feature that gets adopted and one that gets ignored comes down to a few things: how it handles uncertainty, how it communicates its limits, and whether it fits how users actually think. Those are the decisions I focus on.
Why work with me?
- UX + Engineering Background · I studied interaction design alongside software engineering. I don't just build features that work technically. I build features that fit how users actually think. That means fewer "it works but nobody uses it" situations.
- 5 Years in Production Systems · I spent five years building distributed systems and blockchain infrastructure. I know what it takes to ship software that has to stay reliable under real conditions, not just pass a demo.
- Pipeline to Integration Spec · I build the retrieval pipeline, LLM orchestration, and API layer. I also spec how each response type should behave in the UI, including confidence states, escalation flows, and error handling.
- Focused on Adoption, Not Just Accuracy · Accuracy metrics don't matter if users don't trust the output. I care about how the feature handles failure, how it communicates uncertainty, and whether users have a clear path forward when the AI can't help.
How I work
AI Feature Audit · A structured review of an existing AI feature. What's technically broken, what's behaviorally broken, and what to fix first. You get a prioritized list of changes, not a 40-page report. Usually 1–2 weeks.
Feature Build · Architecture, retrieval, generation, API, and a clear integration spec that defines exactly how the feature should behave in the UI. Typically 4–8 weeks.
Integration & Iteration · I embed into your team part-time (usually 2–3 days a week) to build, ship, and refine AI features alongside your existing engineers. Good for teams that have ongoing AI work but not enough in-house experience to move fast.
Observability & Improvement · Setting up evaluation frameworks so you can measure whether your AI feature is actually working for users, not just technically running. Includes metrics, tracing, and a feedback loop you can use after I'm gone.
Frequently asked questions
What kind of products do you work on?
B2B SaaS products with user-facing AI features. Things like support bots, AI-assisted workflows, intelligent search, document processing. The common thread is that there's an AI component that end users interact with directly, and it needs to work well enough that people actually trust and use it.
What's your background?
I studied interaction technology and design at Umeå University, a programme that combines software engineering with UX and usability. Then I spent five years building distributed systems and blockchain infrastructure. Now I'm applying that combination of UX thinking and production engineering to AI features.
Do you work as a contractor or on fixed-scope projects?
Both. Some teams bring me in part-time to build and iterate alongside their existing engineers. Others prefer a fixed-scope engagement: build this feature, ship it, hand it off. We can figure out what makes sense during the intro call.
What does a typical engagement look like?
Depends on the scope. A feature audit takes 1–2 weeks. A new AI feature build is typically 4–8 weeks from architecture to deployment. Ongoing integration work is usually 2–3 days per week. We start with a free intro call to scope the work and make sure it's a good fit.
What's your tech stack?
Python, FastAPI, OpenAI, Claude, LangChain/pydantic-ai, ChromaDB, Pinecone, pgvector, sentence-transformers, PostgreSQL, and Docker.
How do you communicate progress?
Weekly async updates with a clear summary of what shipped, what's next, and any decisions that need your input. I'm on Slack or whatever channel your team uses for day-to-day questions. No surprise invoices or scope creep. If something changes, we talk about it first.
Let's grab a virtual coffee
Want to see if we're a good fit? Let's have a chat. Book a free 30-minute intro call and we can talk through what you're working on.