Rebuilding Moselwal as an AI company — what Y Combinator says and what I'm doing with it
In the Y Combinator Startup School session “How To Build A Company With AI From The Ground Up,” Diana Hu laid out a position that silenced me on first viewing and got me writing on the second. Notes on agentic engineering, private AI on our own infrastructure, and why this ends up meaning more humans, not fewer.
The sentence that stays with me
Diana Hu says, in essence: An AI company isn't a company that uses AI as a tool. It's a company whose operating system is AI. The difference sounds like a matter of word choice. It isn't.
You pick up tools when you need them. An operating system carries every transaction. When the operating system is AI, then sales, support, operations, engineering, and reporting no longer run on humans who occasionally consult AI, but on a layer that constantly mediates between data and humans, draws conclusions, and proposes actions. Humans decide, curate, and correct. But they're no longer the middleware.
I'm taking this position as a working hypothesis. And I'm starting to rebuild Moselwal in exactly that direction.
Where we already are: agentic engineering, not autonomous
In development this has been running for some time. Not as a hype claim, but as operational reality: I work with AI agents in nearly all engineering steps. Code reviews with AI-supported context analysis, refactoring suggestions, test generation, documentation drafts, CVE research, SBOM evaluation, even parts of pipeline configuration and IaC reviews.
What I deliberately don't do: autonomous agents without human anchoring. What many people understand as “agentic AI” — the agent does it, you look at the result at the end — is too open a loop for serious engineering work. The errors that slip through autonomous setups often only surface in code audits three weeks later. And then they cost more than the savings potential.
What's emerging as a consensus in software development right now goes by the term agentic engineering: the agent is tool and sparring partner, the engineer remains the engineer. Every non-trivial change runs through human control, every line of code is traceably discussed, and the pipeline raises an alarm when something slips past the standard. The result: significantly higher output at comparable or better quality, with an engineer who at every moment knows what's in the code.
That isn't autonomous. It isn't AI working for me. It's AI working with me — deliberately, in the loop, with a human last word.
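To make the "human last word" concrete, here is a minimal sketch of the kind of merge gate I mean. All names (AgentProposal, may_merge, the fields) are invented for illustration; in practice this logic lives in the CI pipeline, not in a standalone script:

```python
# Sketch of a human-in-the-loop gate for agent-proposed changes.
# Hypothetical names; the real check sits in the pipeline configuration.

from dataclasses import dataclass, field

@dataclass
class AgentProposal:
    """A change proposed by an engineering agent."""
    diff: str
    trivial: bool = False              # e.g. formatting-only, no logic change
    human_approved: bool = False       # explicit engineer sign-off
    review_notes: list[str] = field(default_factory=list)

def may_merge(p: AgentProposal) -> bool:
    """Every non-trivial change needs an explicit, documented human review."""
    if p.trivial:
        return True
    # "Traceably discussed" means: approved AND at least one review note exists.
    return p.human_approved and len(p.review_notes) > 0
```

The point of the gate is not sophistication but defaults: nothing an agent produces reaches the main branch without a named human having looked at it and left a trace.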
Private AI on our own infrastructure
The second point that isn't negotiable: this AI layer doesn't run for us through external APIs into which our data and our customers' data simply drain. It runs on our own infrastructure, with models we control.
Why? Because an AI company that sells customer-owned architectures to its clients while sending those clients' business secrets through a third-party API is telling an inconsistent story. If I explain to a mid-market company why their stack should run on their servers and with their keys, then my own engineering layer has to do that too. Otherwise it's marketing, not discipline.
Concretely that means: open-weight models on our own GPU nodes in our own infrastructure, with a routing layer that picks the appropriate model per task. Embeddings for retrieval-augmented generation run locally, vector stores too. Code and customer data leave the Moselwal infrastructure only when the customer contract explicitly allows — and that's the exception, not the rule.
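To make "routing layer" concrete, a hedged sketch of the shape I mean. The model names and internal endpoints below are invented placeholders, not our actual deployment:

```python
# Sketch of a per-task router over self-hosted, open-weight models.
# Model names and endpoint URLs are hypothetical examples.

from dataclasses import dataclass

@dataclass(frozen=True)
class ModelTarget:
    name: str       # open-weight model served on our own GPU nodes
    endpoint: str   # internal URL; requests never leave our infrastructure

# Task type -> model, chosen by cost/quality trade-off per task.
ROUTES = {
    "code_review": ModelTarget("big-coder-70b", "http://gpu-a.internal/v1"),
    "embedding":   ModelTarget("embed-small",   "http://gpu-b.internal/v1"),
    "summarize":   ModelTarget("mid-8b",        "http://gpu-b.internal/v1"),
}
DEFAULT = ModelTarget("mid-8b", "http://gpu-b.internal/v1")

def route(task_type: str) -> ModelTarget:
    """Pick the appropriate local model for a task; fall back to a default."""
    return ROUTES.get(task_type, DEFAULT)
```

In practice the decision would also weigh context length and current GPU load, but the shape stays the same: the routing decision, like the data, stays inside our own infrastructure.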
The performance isn't at frontier level on every benchmark. But in daily engineering work it's good enough, it's clean from a data perspective, and it scales with our hardware instead of with someone else's price list. When a customer comes to us with the requirement “please no customer data in US cloud APIs,” we can not only recommend that setup but live it ourselves.
This is the more honest anchor around the whole AI-company thesis: control over our own data, models, and keys is the precondition for “AI as operating system” to become a business foundation at all — and not a dependency that tips over with the next pricing model change.
Where I want to move Moselwal as a whole
Engineering is only the first sector that carries this discipline. Diana Hu's point about the “queryable organization” hits here: if sales, operations, reporting, and support aren't accessible as structured data, no AI layer can orchestrate them. What's on the plan for us, in no particular order:
Sales and lead management as a structured pipeline with data that agents can read, enrich, and propose against; no scattered email threads, no Excel tables with individual notes. The first steps are in place: a central CRM with a clear stage structure, lead-scoring inputs from multiple sources, automatic profile enrichment via API.
Operations as an auditable transaction layer. Which task belongs to which customer, which hours of effort, which SLA status, which open point — all in a system agents can query. Without this layer, AI is just a pretty chat overlay on shadow IT.
Reporting and dashboards become generative, in the sense of Diana Hu's “one-shot internal dashboards.” Instead of pre-built reports, the layer builds the dashboard that's needed right now from the data that's there. That requires data structure and a permissions model as preconditions, neither of which is a trivial task for an agency that historically built much of its work on Excel knowledge.
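A toy illustration of the one-shot idea: instead of a pre-built report, a small spec (which the AI layer would generate on demand from a question like “hours per customer this week”) is interpreted against structured data. Field names are invented examples, and the real version needs the permissions model mentioned above:

```python
# Toy "one-shot dashboard": an ad-hoc spec drives the aggregation.
# Field names ("customer", "hours") are hypothetical examples.

from collections import defaultdict

def one_shot_dashboard(rows: list[dict], spec: dict) -> dict:
    """Group `rows` by spec['group_by'] and sum spec['metric']."""
    out: dict = defaultdict(float)
    for row in rows:
        out[row[spec["group_by"]]] += row[spec["metric"]]
    return dict(out)

tasks = [
    {"customer": "A", "hours": 3.5},
    {"customer": "B", "hours": 2.0},
    {"customer": "A", "hours": 1.5},
]
# The spec is what the generative layer would produce, not a saved report.
view = one_shot_dashboard(tasks, {"group_by": "customer", "metric": "hours"})
```

The precondition is visible even in the toy: this only works because the tasks are structured records, not free-form notes.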
Marketing and content with agent support in research, structuring, and first drafts. The writing voice stays with me and the team — no AI-generated content without human final editing. But the research step is often ten times faster with agents than without.
All of this is medium-term. I'm giving myself until the end of 2027 for the initial build-out of this layer — realistic, because the engineering part is already running, but the operations part still needs a few quarters of discipline.
Why this means more humans in the end, not fewer
The point that the discussion about AI companies often misses: a cleanly built AI layer is no replacement for human contact. It's the precondition for human contact to be possible again at all, without speed or outcome suffering for it.
What I picture for Moselwal is what could be called, slightly old-fashioned, “uncommon service”: customers who reach me or the team directly when needed. Replies that are personal and don't fall out of a macro template. Meetings with real preparation effort, not with “let me just open the CRM, who were you again?” Problems that don't get lost in a ticket funnel but land with someone who knows the context.
Today, for many mid-market companies, that's an effort argument: personal, precise communication takes time, time costs money, so it gets delegated, automated, or saved away. With the AI layer, this calculation tips. When the agentic layer brings the context together, handles the preparation, takes over the research in our own data, and produces the routine reporting on its own — then human time remains for exactly the moments where it makes the difference. Greeting, listening, thinking, hard decisions, unusual situations.
I don't intend to build Moselwal into an agency where customers talk to bots. I intend to build an agency where customers talk to humans who are relieved by a good AI layer — and precisely for that reason have more time to listen well. To me, that's the actual point of the whole exercise.
What I won't go along with
I find the hype currently being built around “AI agents” and “autonomous workflows” operationally unusable. Anyone who lets agents into production without control will bear the consequences. Anyone who deploys customer-service bots without an escalation path will lose the customers they wanted to keep. Anyone who embeds engineering agents into the stack without code review will build up technical debt at their own pace.
The more honest term for what works today is agentic engineering: AI as co-pilot with human anchoring, not as autonomous bot. This anchoring costs discipline and a bit of skepticism towards your own results. But it's the only configuration in which I still understand at the end of the day what I built.
What Diana Hu gave me back
Clarity that the question isn't “Are we using AI?” We're using it, every day, in many places. The question is: are we building the data layer and the process anchors needed so that AI doesn't just help in isolated spots but can sit as a coherent layer across the entire company?
I think this is the next discipline level for a mid-market agency. Anyone who builds this seriously over the next two to three years will probably be structurally different from those who wait until tooling “settles down.” And I don't intend to wait.
Source: Y Combinator Startup School — How To Build A Company With AI From The Ground Up (Diana Hu).
If the topic interests you
If you're standing in front of a similar decision yourself or just want the discussion — write to me, we'll talk it through in peace. No pitch, no sales funnel, an honest conversation.