Entrusting decisions to a machine: where we accept algorithms and where we still seek the human

16.03.2026 | Analysis

From a hospital diagnosis to a loan or a court sentence, algorithms already take part in decisions that change lives. In some areas people are willing to accept the machine's "cold" accuracy; in others they insist on a human face, empathy and responsibility.

Photo by Ehécatl Cabrera, Wikimedia Commons (CC BY-SA 4.0)

A few years ago, "algorithm" sounded like something distant – a word from the world of programmers, social networks and advertising systems. Today, algorithms decide whether we get a loan, how much we pay for insurance, which diagnosis to check first in the hospital, which candidates make it to a job interview, and even what pre-trial measure a defendant receives. The paradox is that the more their decisions affect our lives, the more sharply the question arises: "Whom do I trust more – the human or the machine?"

"The algorithm as a second opinion" in medicine

In hospitals, algorithms are increasingly taking on the role of an assistant – artificial intelligence systems that read imaging studies, suggest probable diagnoses or flag which patients are at higher risk of complications. Doctors describe these tools as "clinical navigation": a machine that reviews thousands of cases in seconds and offers hypotheses that would otherwise take hours to form. For rare diseases or complex combinations of symptoms, such a system can save time, and sometimes lives.

However, few patients are willing to simply believe a screen full of statistics. For trust, it is not enough to know that the algorithm is accurate – it matters who is behind it, who uses it and who will take responsibility. That is why people most readily accept the machine as a second opinion, not as the final authority: "I don't mind the doctor using an algorithm to compare and check. But I want a person to explain it to me, to make the decision and to stand behind it with their name."

Loans and finance: between "impartial" mathematics and hidden biases

In banks and fintech companies, algorithms have long been the invisible "judge" behind credit decisions. They evaluate income, payment history, even our spending patterns, and within seconds say "yes" or "no". For customers this is often convenient – no uncomfortable questions, no employee's subjective mood, and the decision is quick. In many cases people even accept the machine as fairer, because they believe that "numbers don't lie".

But it is precisely here that the other side emerges: the algorithm works on data, and data often carries old inequalities. If entire groups of people have historically received fewer loans or worse terms, the machine can "learn" this pattern and repeat it in the guise of an "objective assessment". For someone whose application is refused without explanation, the feeling is strange: "I was not rejected by a person whom I could ask why. I was rejected by a system whose reasoning I cannot access."

When the courts and the police enter the game

The most sensitive reactions arise when algorithms enter fields such as justice and policing. In some countries, systems are already in use that assess the risk of a person committing a new crime and assist in decisions on pre-trial measures, suspended sentences or supervision. In theory, this should make the process more consistent – instead of a subjective "I like/I don't like" judgment, the judge receives a risk score calculated by the same scheme for everyone.

In practice, many people feel uneasy at the thought that "some algorithm" took part in deciding whether someone should be detained. Here, concepts such as justice and dignity come to the fore: "When it comes to a sentence, I don't want to be just an entry in a table. I want someone to hear me, to see the context, to judge beyond the numbers." Even if the algorithm is statistically more consistent, the absence of a human face makes trust fragile.

Where we are more comfortable trusting a machine

The interesting thing is that in lower-stakes areas, people often have no problem leaving the decision to an algorithm. Recommendations on streaming platforms, targeted ads, suggested routes in navigation – there, a mistake costs at most some time or disappointment, not health or freedom. In such cases the machine is often perceived as better than the person: "The app knows; it tracks the traffic. I won't do better than it."

The situation is similar with certain forecasts – weather, transport, trend analysis. Here the algorithm has an advantage: it processes huge volumes of data that no person has the capacity to take in. Mistakes are accepted more easily: "It's a forecast, not a prophecy." The lower the stakes for our personal lives, the more readily we trust a machine's decision.

What it takes to trust an algorithm in the "serious" areas

When it comes to medicine, justice, loans or work, trust in algorithms rests on a few simple but demanding conditions. The first is accuracy – knowing that the system demonstrably fails less often than a person, or at least no more often. The second is fairness – being sure it does not discriminate on hidden grounds and does not reproduce old prejudices in the guise of an "objective result". The third is explainability.

Many people say: "I am ready to accept decisions in which an algorithm took part, if someone can explain to me in human terms how it reached them." We don't need to understand every formula, but we do need meaningful answers: which factors are taken into account, which weigh the most, and what we could change to get a different result. Without such an explanation, the machine feels like a sealed black box, and it is hard to trust black boxes.

Why we want "a human in the loop"

Even in situations where people acknowledge that algorithms are more accurate, many insist on having a human "in the loop" – someone to watch, assess and take responsibility. The reason is not purely emotional. A human can see context that never enters the data: personal history, sudden circumstances, nuances of behavior. Beyond that, we have an intuitive need to know whom to turn to when something goes wrong.

In this sense, hybrid models are the most acceptable to most people: an algorithm that does the heavy lifting – sorting, scoring, proposing – and a human who makes the final decision and explains it. One thought comes up again and again in conversations: "Algorithms have no conscience. They just do what they are trained to do. That's why we need a person who knows when to say: here the machine is wrong."

What trust in algorithms says about us

The way we accept or reject algorithms actually says less about technology and more about society. Where institutions are weak, people find it harder to believe that an algorithm will be fair, because they fear it will be "tuned" to someone's benefit. Where the health or justice system already suffers from mistrust, an additional "black box" often sounds like one more layer of opacity.

On the other hand, in areas where we are used to chaos, corruption or arbitrary decisions, the machine is sometimes seen as a chance for more order: "I prefer an algorithm if that means everyone will be judged by the same rules." Trust in algorithms goes hand in hand with trust in the people and institutions that create, control and correct them.

The future: not "machines instead of people", but "machines with people"

In the end, the question is not whether algorithms will make decisions for us – in many areas this is already happening. The real questions are how, and under what rules: where are we willing to trust them fully, where do we want a human to have the last word, and what mechanisms for oversight and appeal will we have when the machine's choice turns out to be unfair?

Perhaps the most mature answer lies not in the extremes of "humans only" or "algorithms only", but in an honest combination of the two. Machines can bring speed, consistency and the analysis of huge data sets; people bring context, compassion and responsibility. Trust in algorithms will grow stronger the better we manage to keep these two roles in place – instead of expecting the machine to become human or the human to work like a machine.