
Software can do anything — but should it be allowed to do anything?

I write code almost every day. An honest reflection on where technical feasibility stops and responsibility begins.

Software can do anything. That isn't a marketing promise, it is a fairly sober statement of fact. Anyone with the time, the means and the motivation can model almost anything imaginable in code. The interesting question, the one that has occupied me for years, is therefore not what software can do. It is what software should be allowed to do.

I write code almost every day. New features. New tools. New integrations. And I notice more and more often that the most important decisions I make are not the ones at implementation time, but the ones about what to leave out.

The moment something tips

There is a moment in software development that is rarely documented. A function is finished. It works. You could now extract the last few percent: store one more data point, add one more piece of tracking, take one more small shortcut to make the user experience smoother. In that moment something tips. The software remains technically correct, but it becomes ethically different. I believe that this is precisely the moment in which most of the questionable features we see online are born. Not out of bad intent, but out of convenience.

Tracking and analytics integrations are a good example. It is technically trivial to capture every click, every scroll depth and every pause. It is also usually not necessary. I notice in myself, and in teams I work with, how quickly 'let's collect it, you never know' becomes a habit nobody questions until a data subject access request lands.

Why 'allowed' is a different question from 'capable'

At first, 'allowed' sounds like a legal word. But it isn't only that. At its core lies the question: what are the consequences of this function if it works the way I'm building it now? For the individual user. For users as a collective. For the teams who have to maintain it. And for people who aren't in the room, but who may end up having to live with it.

I notice that my practice has shifted in recent years. My default used to be: 'Let's build it — let's see what happens.' My default today is: 'We're not building it — unless we can clearly say why it has to exist.' That is a very unspectacular shift in perspective, but it has real effect in practice. It removes data from databases where it doesn't belong. It removes logging that nobody reads but that can later be used against you. It prevents features that only exist because the competition has them too.
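That 'default deny' stance can be made concrete in code. A minimal sketch, assuming a hypothetical analytics event shaped as a plain dictionary: instead of storing whatever arrives, an explicit allowlist forces every stored field to justify its existence. The field names here are invented for illustration, not taken from any real project.

```python
# Hypothetical analytics event minimization: only fields on the explicit
# allowlist survive. Everything else is dropped by default.

ALLOWED_FIELDS = {"event", "page", "timestamp"}  # each entry needs a stated reason to exist


def minimize(event: dict) -> dict:
    """Keep only the fields we can clearly say why they have to exist."""
    return {k: v for k, v in event.items() if k in ALLOWED_FIELDS}


raw = {
    "event": "click",
    "page": "/pricing",
    "timestamp": "2024-01-01T12:00:00Z",
    "ip_address": "203.0.113.7",   # tempting to keep 'just in case'
    "user_agent": "Mozilla/5.0",   # nobody reads it, but it can work against you
}

print(minimize(raw))
# → {'event': 'click', 'page': '/pricing', 'timestamp': '2024-01-01T12:00:00Z'}
```

The point of the design is that adding a new field means editing the allowlist, which is exactly the moment to ask why the field has to exist.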

Three guiding questions that help me

I use three guiding questions in projects whenever a decision tips. First: would I myself feel comfortable if this function were applied to me without being asked? Second: could I explain this function in one sentence to someone with little technical background, and would they still feel comfortable once they understood it? Third: if this feature becomes a scandal in a year's time, how would I explain it then?

These questions are no substitute for data protection law, security review or team discussion. They are a kind of personal compass, and I have noticed that they settle about 80 per cent of the borderline cases on their own. The remaining 20 per cent belong in a conversation with other people anyway.

Responsibility lies with those who build

I often hear the argument: 'We're just developers, the business makes the decision.' I understand the impulse, but I think it's wrong. The moment you write code is a moment of decision. The shape a function takes in the end — which fields, which limits, which default values — is decided in the editor, not in the corporate steering committee. That is not a comfortable truth, but it is an honest one.

This applies especially at a time when AI systems are taking over much of what used to be human judgement. If I build an agent that makes decisions, then I bear the responsibility for the rule set within which it does so. The agent itself does not bear it, however often we tell ourselves otherwise.

My conclusion

Software is not allowed to do everything just because it can. That sounds almost like a Bible verse, but for me it is the most honest summary of what 23 years of software development have taught me. The really good products I have worked with over the years are not the ones that could do the most. They are the ones that stopped at the right places. The ones that said: this is where we stop, because otherwise it tips. We're not storing this, because we don't have to. We'd rather ask the user, because it concerns them.

That, I believe, is the real craft. Not to build more than is necessary. Not to deliver less than is expected. And to renegotiate the difference every day.