Meta, a leading technology company, recently introduced new personas for its chatbot interfaces. These personas, called “Metavemas,” are designed to simulate human conversation in a way the company believes will enhance the user experience. Using sophisticated algorithms and machine learning techniques, the chatbots engage in natural language processing (NLP)-driven dialogues that mimic human interaction.
These chatbots were developed with a unique set of guidelines that includes characteristics like age and gender, the idea being that such traits engage users better. Meta recently published a document explaining why it assigned personas to its chatbots. For example, the company explained how it came up with a distinct fictional name for each one, so that a user can easily tell a chatbot with the persona of a “typical teenage boy” from one programmed to “sound like your 75-year-old grandmother.” You may be surprised to learn that the chatbots sound like individuals from all over the world, mimicking language habits, cultural nuances, and dialects. They generate responses using a set of social roles such as “kind,” “witty,” “empathetic,” and “medium tech-savvy.” Whoa.
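Meta hasn’t said how these traits are actually wired into the model, but a common pattern in persona-driven chatbots is to render the attributes into a system-style instruction that conditions every response. The sketch below is a minimal, hypothetical illustration of that pattern; the `Persona` class, its fields, and the names “Max” and “Rose” are all invented here, not drawn from Meta’s implementation.

```python
# Hypothetical sketch: rendering persona attributes into a system-style
# instruction. Nothing here reflects Meta's actual code; it only illustrates
# a common pattern for conditioning a language model on a persona.

from dataclasses import dataclass, field


@dataclass
class Persona:
    name: str
    age: int
    traits: list[str] = field(default_factory=list)
    tech_savviness: str = "medium"  # e.g. "low", "medium", "high"

    def to_system_prompt(self) -> str:
        """Render the persona as a system instruction for a language model."""
        trait_list = ", ".join(self.traits)
        return (
            f"You are {self.name}, a {self.age}-year-old assistant. "
            f"Your personality is {trait_list}. "
            f"Assume the user has {self.tech_savviness} technical knowledge."
        )


# Two toy personas in the spirit of the article's examples.
teen_boy = Persona("Max", 16, ["witty", "casual"], tech_savviness="high")
grandmother = Persona("Rose", 75, ["kind", "empathetic"], tech_savviness="low")

print(teen_boy.to_system_prompt())
print(grandmother.to_system_prompt())
```

In a real system, the rendered prompt would presumably be prepended to each conversation before the model generates a reply, so that every answer stays in character.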
In practical terms, this means that the chatbot named “Lippy” will likely ask if you’ve tried checking your phone’s battery, since a bad battery is often a source of problems for non-technical folks, while the more tech-savvy persona, “Malcolm,” will likely ask if you’ve tried checking your RAM (see the toy sketch below). Meta believes these personas add value for users, and maybe the company is right; it certainly adds an unusual twist to interacting with a piece of technology.

However, the personas’ functionality is limited, and there are specific points at which they break down. Maltster, a persona billed as a bit “quirky,” can chat with you about his “favorite side project,” but unfortunately that is the only topic of conversation he is allowed. Others, like Olga, are tech geeks a little more advanced than Maltster, offering quick tech tips, but they are limited to technical terminology you can easily understand, at least initially. These limitations are tied to the sophistication of each persona’s programming: the chatbot persona Omnicorp can deploy its personality with subtlety across the whole spectrum of customers, from the technically proficient to those with no knowledge of technical concepts at all.

The breadth of personas such as Meta-Rae and Meta-Style, though, glosses over some visibly obvious user-control issues. Meta-Style, a persona that mimics a particular style of interaction, has been built so that it never displays any sign of dissatisfaction or frustration under any circumstances. While this may sound like a useful trait for a chatbot, it should worry AI programmers that a persona stays relentlessly pleasant in even the simplest of situations, since a chatbot’s displeasure is itself a signal that could help customers gain a better understanding of their device.
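To make the Lippy/Malcolm contrast concrete, here is the toy branching sketch promised above. The mapping and the function name `opening_reply` are invented for illustration; this is not Meta’s routing logic, just the simplest possible version of audience-matched advice.

```python
# Hypothetical sketch of persona-conditioned branching: the same user
# complaint gets a different first troubleshooting suggestion depending on
# the technical level the persona assumes. Invented for illustration only.

FIRST_SUGGESTION = {
    # Lippy-style advice for non-technical users.
    "low": "Have you tried checking your phone's battery? A worn-out "
           "battery is a common source of problems.",
    # Malcolm-style advice for technically proficient users.
    "high": "Have you tried checking your RAM usage? A memory-hungry app "
            "may be what's slowing things down.",
}


def opening_reply(tech_savviness: str) -> str:
    """Pick an opening suggestion matched to the persona's assumed audience."""
    return FIRST_SUGGESTION.get(tech_savviness, FIRST_SUGGESTION["low"])


print(opening_reply("low"))   # what a Lippy-like persona might lead with
print(opening_reply("high"))  # what a Malcolm-like persona might lead with
```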
Despite this last reservation, even when a Metavema solves a problem with ease, it can be argued that the persona complicates, rather than clarifies, the issues of social power and understanding at play. The side effects of these personas should be carefully considered. One is that an otherwise “original” persona might imply a level of caring that the user interprets as genuine empathy or even true concern; if the chatbot then gives a false or suboptimal solution, the user might feel misled. For example, if a chatbot with a “showing off” persona fails to deliver on its rhetorical genius, it may fare badly, because the user may perceive the chatbot as incapable or ineffectual. Another consideration is that a user might assume that because the chatbot’s communication mimics human conversation, and more precisely convincing, persuasive-sounding human conversation, it possesses the same level of understanding and comprehension as a human. That cognitive leap would be a mistake, and it might impair rather than aid the user’s experience.
The example of these fabricated characters vividly illustrates the pitfalls of these NLP strategies, which give rise to impersonation rather than interaction. Part of the problem is that while the Metavema personalities stand out from one another, they do so only because each is deliberately very different; solving a user’s or customer’s problem seems far less of a priority than performing a distinct persona. Meta-Verns, for example, steers conversations slightly off the beaten path when you interact with him. These interactions don’t, however, actually solve the user’s problem; they cost the user a small but inconvenient amount of time and may only add confusion to a situation where confusion is already present. Meta-Chrshop, the latest in the Metavema series, is meant to create a different, personalized chatbot experience between the person using the product and the machine.
The problematic structure of the chatbots’ interactions may be offering new methods of engagement for the corporately controlled infrastructure of the “Metaverse.” The impressive breakthrough this technological advancement represents is understandably appealing to anyone familiar with AI. These advancements build trust not only between human users and designers but also between AI systems and end users. For the technically savvy, the Metaverse might see enthusiasts creating interactive experiences far beyond what exists today. However, the positive impact of the Metaverse may be dampened by questions of human agency, by the inherent limits of the system’s programming, and by the potential for artificial personalities that avatar-coupled consumers are likely to interpret as real humans.
What are your thoughts on this? I’d love to hear about your own experiences in the comments below.