Creating a solid character is difficult. In storytelling, creators need to consider tone of voice, character growth, visual design and much more. In interfaces, the challenge becomes even more complex. Today there is a wide range of interactive characters, including avatars, robots and chatbots.
These character interfaces are increasingly driven by generative AI, leading to unexpected user interactions and complex user experiences. Importantly, users grow an emotional connection to character interfaces. Characters act as a bridge in interfaces, fostering a sense of understanding and empathy. Even though these entities are not real, users form parasocial relationships with them. These relationships can be helpful, but also difficult and even harmful.
In a time when chatbots are used more and more frequently, we rightly ask ourselves what this means for our culture, our experiences and our education. What is an opportunity, and what should we be concerned about?
Relationships and Community
Over the past years, I have written about the complexity of human-chatbot relationships, asking for instance:
- How do we relate to digital companions, and what problems do they solve?
- How do chatbots add to transmedia stories and characters?
Chatbots provide all types of output, from images and information to ideas and stories. But that is not their most important characteristic, I would say. They provide the idea of a relationship, and that’s what makes them fundamentally different from Google, Wikipedia or other interfaces. These parasocial relationships are attractive in our current day and age. More of our communication happens digitally, and chatbots blend into that landscape, just like online celebrities, digital friends and avatars. We do not regard them that differently from other human beings.
Some users consider chatbots their friends, and I do not want to downplay those feelings. If there’s one thing that I realized by studying cosplay, gaming and chatbots, it’s that characters make us feel. These bots provide inspiration, companionship and community in a time when we are often isolated. For young people, bots can be helpful because they are clear, polite and social. This is also their danger, since critical thinking is easily outsourced.
This trend has been going on for a while. However, the attraction for users was different a few years ago, when chatbots were less common. They were a gimmick, a marketing trend, an experiment. Users were surprised by these conversations, enjoyed them and felt they deepened character interaction. Chatbots were more limited, so they were not used to outsource an entire process, essay or therapy session.
In recent years, chatbots have quickly surfaced as a common technology in education, work and marketing. They are now part of our daily life, and we can no longer imagine certain tasks without them. The rapid advancement of ChatGPT and its competitors, like Copilot, led to this massive adoption. And yes, there’s even fan art of these tools.

Loneliness, Isolation and Hallucination
However, we should be careful. While users may develop genuine feelings towards bots, bots are not real, and relationships with them are not the same as relationships with humans. For instance, chatbots reply at lightning speed, and when we communicate with them a lot, it reinforces the idea that communication should be on-demand. It adds to the idea that we cannot miss out on a message or mail, should respond promptly and should always be available to each other. Just as with Netflix, we want everything instantly, on demand. But that’s neither possible nor desirable, for many reasons.
Another problem is bias. Generative AI is far from a neutral technology. It’s based on past patterns and historical data, and tends to amplify our existing biases (e.g. sexism, racism). These biases in generative AI have been well documented for years in AI studies. Think of books like Weapons of Math Destruction and Algorithms of Oppression, which sounded the alarm years ago about predictive text and automated rankings on Google, Facebook and other tools.
Chatbots have many other questionable elements as well, and they intensify existing problems in our society. Think of:
- Isolation, worsened by chatbot use and artificial intimacy
- An overly positive tone of voice, reinforcing bad behavior
- Chatbots intensifying fake news and false information, as widely reported by, for instance, the BBC
- Ethical conflicts, such as creating chatbots of dead people or celebrities, like this recent Agatha Christie chatbot giving lectures
- New communities forming around AI fantasies and hallucinations, such as the cults and AI gurus reported on by Rolling Stone
Behind these “chatbot problems” lie wider “cultural problems”, like how we treat our digital data, how we deal with death and mourning in our culture, or the increasing loneliness of young people.
These harmful effects of chatbots are often downplayed by tech companies, who frame these entities as loyal assistants, helpers, colleagues and even coaches. The name Copilot alone implies that there’s someone at your side helping you fly. Those discourses are especially toxic, as The Guardian recently reported, since they reinforce the perceived reliability and likeability of the AI rather than critical distance.
Recent takes on ChatGPT and other companion bots also stress the sustainability impact. Each prompt takes quite some energy, roughly the equivalent of using our laptop for 5 minutes (a laptop typically draws a few tens of watts, so that amounts to a few watt-hours per prompt). The energy that goes into the servers maintaining this technology is huge, as reported by Plan Be Eco. Though the industry is definitely working on this, it is a concern to bear in mind.
Different Users, Different Responses
Despite these characteristics, I would not immediately draw the conclusion that chatbots are completely evil. We need to improve them, make them more sustainable and set far clearer parameters for their design. Once upon a time, chatbots were procedural and scripted rather than driven by generative AI. We can design these entities in different ways.
As with other technologies, it is important to ask who uses chatbots and for what purpose. As reported in Nature:
‘An emerging line of thought is that an AI companion can be beneficial or harmful, and that this might depend on the person using the tool and how they use it, as well as the characteristics of the software itself.’
Coming from science and technology studies (STS), I could not agree more. We need a social constructivist perspective, meaning we look at the role of chatbots in our culture, society and personal use. Studies show that technology use is not deterministic, not a mere effect, but deeply related to our identity and habits. It depends on the way we use a technology as well as on what we learn around it. The experiences of seasoned users and pioneers are vastly different from those of young users, who may not have a high level of media literacy yet. That’s why I’m most worried about vulnerable groups, such as youth, who may lack the context and perspective to judge the information and relationships provided by chatbots.
However, a big overarching problem looming in the background is data capitalism. We are quite easygoing about providing our data to private companies. They use it without our knowledge as training data for AI; they use it to optimize their services and ads; they build their empires around our free labor. As Nick Couldry writes, the internet has developed into a private space, and it’s in our best interest to reclaim it.
If we really want to change this space, we need to design the internet for the better. That’s where ideas like value-centered design, platform socialism or a public internet come in. It’s not enough to say we should radically alter or ban these tools; we need to reclaim our space, our data and our relationships within these spaces. Tech platforms take our labor, but our data is not their content, their training data or their property.
