LLMs and Humans
- On April 6, 2026
- AI
Just a few years ago, computers lived in their own contained environment, decoupled from human-to-human interaction. They could not and did not interfere with human-to-human discussion on any medium: papers, social media, and so on. Computers largely mediated human interactions indirectly rather than participating in them autonomously. This changed when Large Language Models (LLMs) entered our lives. Computers can now take an active part in exchanges that were previously between humans only. In other words, computers are no longer spectators; they have become (key?) players in what were once human-to-human interactions.
Alan Turing, the “father” of computer science, invented the “Turing test” back in 1950. Originally called the imitation game, it tests a machine’s ability to exhibit intelligent behavior equivalent to a human’s. A machine passes if a human evaluator, holding a text conversation, cannot reliably tell whether the counterpart is the machine or another human. There is little doubt that today’s LLMs can pass the Turing test in many common scenarios. Is this good or bad? Actually both, as elaborated below.
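The pass/fail criterion above can be sketched as a toy simulation. This is only an illustrative model, not Turing’s formulation: the `tell_strength` parameter and the random-guessing evaluator are assumptions made for the sketch.

```python
import random

def run_imitation_game(tell_strength, rounds=10000, seed=0):
    """Toy imitation game. `tell_strength` (0..1, illustrative assumption)
    is the probability that a machine response carries a detectable "tell";
    the evaluator guesses "machine" when it sees one, otherwise guesses at
    random. Returns the evaluator's accuracy: near 0.5 means the evaluator
    cannot reliably distinguish the machine, i.e. the machine passes."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(rounds):
        hidden = rng.choice(["human", "machine"])
        saw_tell = hidden == "machine" and rng.random() < tell_strength
        guess = "machine" if saw_tell else rng.choice(["human", "machine"])
        correct += (guess == hidden)
    return correct / rounds

print(run_imitation_game(0.0))  # no tells: accuracy near chance (0.5), machine passes
print(run_imitation_game(0.9))  # obvious tells: accuracy well above chance, machine fails
```

The sketch makes the point of the test concrete: what matters is not what the machine “is,” only whether its behavior leaves the evaluator any statistical edge over guessing.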

The good side is that LLMs now help many people around the world do their jobs better and faster, for example:
- Students can now access material much faster and have the potential to learn more efficiently.
- Industries such as call centers use LLMs to provide faster and more accurate first-level support to customers. These bots can offer service 24/7. They do not replace professional call-center agents, but they can surely reduce the agents’ load by resolving simple issues.
- LLMs also lower language barriers, improve accessibility for people with disabilities, and democratize access to expertise.
The bad side contains potential risks that our society has not encountered before.
- Impersonation. LLM agents that do not reveal their non-human identity, so you believe the other party is human. These impersonating agents might try to influence your political, social, economic, or religious views, and today they can be spotted mainly on social media. In the wrong hands they can break the fabric of society: once they take an active part in human-to-human interaction without revealing their true identity, they become a game changer in how our society develops.
- Another downside of LLMs was revealed by researchers who found that LLMs may not be as safe as you would want them to be. More details can be found in the following references: Core Values and Manager Bench.
This shift calls not only for technical safeguards, but also for social norms and policies that preserve trust in human communication.
