Marketers are increasingly experimenting with AI-driven personas, creating multiple "twins" of a single AI engine that present as different races or genders. These virtual personas appear as chatbots, digital avatars, or "virtual influencers," tailored in appearance, voice, or backstory to resonate with specific demographics. While a persona that looks or sounds familiar may foster trust and identification, it also blurs the lines of authenticity and can veer into manipulation or stereotype reinforcement. This report explores the psychological effects of such AI twins on audiences, the ethical issues of representation and authenticity they raise, real-world marketing cases, and broader social implications, particularly for marginalized communities.
People tend to respond more positively to personas they identify with or find relatable. Studies in social psychology indicate that homophily – perceiving someone as "like you" in attributes such as ethnicity or gender – can increase trust and persuasive power. Recent experiments confirm this pattern with AI personas: a 2024 study found that highly human-like virtual influencers who shared the audience's racial background elicited significantly more trust than foreign-appearing or less lifelike avatars. An AI spokesperson designed to look demographically similar to its audience can tap into identification and familiarity, thereby boosting credibility.
However, these effects are complicated by social biases. Research on AI voices shows that gender congruity and stereotypes shape trust: users tend to trust an AI voice more when its gender aligns with expected social roles and matches the user's own gender, whereas mismatches can erode trust. This suggests audiences bring pre-existing biases that influence how an AI's presented identity is received. Marketers who alter an AI's persona may unintentionally reinforce those biases by selecting voices or avatars that "fit" stereotypes to gain user confidence.
Audience biases can lead to disparate responses. An AI chatbot perceived as belonging to a marginalized race or gender might face the same prejudices a human would. For example, a Black-sounding virtual assistant might be taken less seriously or subjected to abusive language by biased users – a phenomenon researchers have analogized to "digital microaggressions" in human-AI interaction. Conversely, a persona coded as a dominant group might be afforded more default trust. These dynamics mean that simply swapping an AI's outward identity can change user interactions in ways reflecting societal bias.
Despite knowing an AI persona is not human, many users form parasocial bonds – one-sided relationships of trust, affection, or loyalty similar to those with celebrities or fictional characters. Virtual influencers are prime examples: millions follow computer-generated personalities like Lil Miquela or Lu do Magalu, engaging with them as if they were real creators. Studies show that strong parasocial relationships enhance trust and positive attitudes toward an influencer, whether human or virtual.
Fans often discuss these AIs' "lives," leave supportive comments, and take advice from them. Even when followers know an influencer like "Shudu" is a CGI fabrication, they still praise "her" beauty and style rather than crediting the human artist behind her. This illustrates how effective the illusion of persona can be – audiences relate to the AI character as an autonomous social actor, producing real emotional investment. Marketers leverage these parasocial ties: a relatable AI twin can cultivate user loyalty similar to a personable brand ambassador. The flip side is that this trust arises from a relationship ultimately based on fiction, raising questions of transparency and manipulation.
A core ethical issue is the authenticity of AI personas that mimic human identities. By design, a race- or gender-altered AI twin performs an identity it doesn't truly inhabit – it is a marketing construct. This can be deceptive if not disclosed. Regulators have noted ambiguity around virtual influencers and disclosure: current advertising rules require human influencers to tag sponsored posts, but enforcement becomes murky when the "influencer" isn't human at all. Ethicists argue audiences deserve to know whether they're interacting with a real person or algorithm, especially as AI agents become more realistic.
Authenticity is also at stake in AI endorsements. A virtual persona praising a product cannot actually have tried it or been affected by it. This makes their endorsements inherently inauthentic – essentially scripted marketing messages. Some see this as manipulation: the AI twin wears an identity costume to create personal connection, then delivers advertising under that friendly guise. The ethical line is crossed if consumers take actions based on false impressions of the AI's sincerity or personal experience. Some brands try to be upfront by explicitly labeling personas as virtual and limiting claims requiring human experience, but the potential for manipulation remains a key concern.
Designing an AI's race or gender persona can easily slip into stereotyping through deliberate choices or unconscious creator biases. If developers decide their Latina AI character should be sassy and passionate, or their Black male AI persona should speak in AAVE and wear certain fashions, they may be baking in caricatures of those identities.
A notorious example is FN Meka, a virtual rapper character given dark skin, face tattoos, and "street" style by non-Black creators. FN Meka's algorithmically generated lyrics even included the N-word. Critics lambasted the project as "an amalgamation of gross stereotypes, appropriative mannerisms complete with slurs," and it was quickly canceled amid accusations that it amounted to a modern-day minstrel act. Activists pointed out that white creators controlling a Black avatar that uses racial slurs and tropes is disturbingly akin to blackface minstrelsy.
Even well-intentioned representation can misfire if done superficially. Gender stereotypes are equally concerning: many virtual assistants default to young female voices and friendly, submissive personas – a design choice UNESCO warned "reinforces gender biases" by positioning women as obliging helpers. An ethically sound approach would avoid using race or gender as mere "skins" or gimmicks, instead giving AI personas respectful, well-rounded portrayals. Unless members of the represented communities are involved in the design, even subversive efforts can ring hollow or miss cultural nuances.
The practice of deploying diverse-looking AI personas raises a thorny question: Does it truly increase representation, or is it appropriating images of identity without empowering real people from those groups? While having more female or minority-appearing "faces" in marketing looks like progress, if those faces are manufactured by companies not owned or run by community members, it can become performative diversity.
Cultural critic Lauren Michele Jackson observes regarding CGI model Shudu Gram (a dark-skinned virtual model created by a white man) that this token presence may "produce increased 'representation' but furthers the reduction of Black personhood," ultimately "insulting the very notion of representation." The image of a Black woman is put forward, yet no actual Black women benefit behind the scenes – their personhood is essentially simulated and controlled by others.
In Shudu's case, her creator took inspiration from Black women's beauty and African cultural symbols, garnering praise on hashtags like #BlackIsBeautiful. But once it was revealed that a white photographer was behind Shudu, many felt this was cultural appropriation or "racial plagiarism": the creator was capitalizing on the visual look of Black women without hiring or compensating any real Black models. Similar critiques have been leveled at other virtual influencers; Lil Miquela's racially ambiguous design blending Black and Latina features has been described as "appropriating mixed-race features" and blurring Instagram's "Black/Brown women" influencer category.
Unlike human brand ambassadors, AI personas have no agency or rights – which is why companies like them (they never stray off-message or demand fair pay). But there are still human stakeholders: the people whose data, likeness, or labor make the AI possible. Real people often serve as models for motion-capture or provide voices for avatars, yet they remain invisible. In virtual influencer production, "stand-in models and mocap performers are often invisible and poorly compensated."
Shudu's lifelike poses were achieved by superimposing her 3D image onto photographs of real models of color. Those women's bodily labor and expressions were essential to Shudu's creation, but audiences see only Shudu, not the models, and their contribution is largely erased. This echoes historical patterns where white creators captured performances of Black artists and mapped them onto fictional characters without acknowledgment.
Even AI character voices can be exploited. In FN Meka's case, the AI rapper's vocals were provided by a Black rapper who was never properly paid or credited – he revealed that the creators "ghosted" him after using his voice. This raises serious consent concerns: was he fully informed about how his voice would be used in an arguably problematic caricature? From an ethical standpoint, using someone's voice, image, or cultural likeness in an AI without permission is disrespectful at best, and at worst perpetuates exploitation – especially when marginalized identities are commodified by those in power.
Perhaps the most famous virtual influencer, Lil Miquela is a CGI Instagram model portrayed as a 19-year-old Brazilian-American (mixed-race) woman. Since 2016 she has amassed over 2 million followers and partnered with fashion brands like Prada and Calvin Klein. Miquela's persona embraces street fashion and social causes; she has even "interviewed" celebrities and released music.
Audiences, especially Gen Z, responded with huge engagement, treating Miquela as a genuine personality. However, her racial ambiguity and creation by a startup run by two male tech founders have drawn academic critique. Scholars argue her design performs racial identity in a calculated way – adopting Black/Brown aesthetics without lived experience. Public controversies have been relatively mild, though one noteworthy incident involved a Calvin Klein ad where Miquela kissed a female supermodel, prompting backlash accusing the brand of queer-baiting with a virtual persona.
Shudu was launched on Instagram in 2017, presenting as a stunning dark-skinned South African model. She gained fame when Fenty Beauty reposted an image of Shudu "wearing" its lipstick. Initially celebrated as a symbol of Black beauty in high fashion, Shudu became controversial when it emerged her creator is a white British man (Cameron-James Wilson).
The reveal sparked anger and debate. Many Black women felt a real Black model should have gotten those opportunities. Think pieces labeled Shudu a form of "digital blackface," arguing that a white-owned avatar profiting off Black appearance is problematic. Wilson defended Shudu as art inspired by Black supermodels, but critics noted the discomforting echo of "white audiences indulging fascination with blackness without interacting with actual Black people." The Shudu case is now a staple in discussions of representation in AI – a cautionary tale about diversity optics versus real inclusion.
FN Meka was an AI character with the appearance of a Black male cyborg rapper – green braids, face tattoos – created by two non-Black entrepreneurs. His TikTok videos showing an exaggerated hip-hop lifestyle garnered over 10 million followers. Capitol Records signed FN Meka in 2022 as the "first AR artist" on a major label.
The signing provoked immediate backlash. Industry activists decried FN Meka as "a direct insult to the Black community... a collection of gross stereotypes." The avatar used the N-word in songs and posted an image of itself being beaten by virtual police – perceived as tasteless co-opting of Black trauma. Within days, Capitol dropped FN Meka and issued an apology. Observers unanimously likened the project to digital minstrelsy, with one critic noting: "Instead of donning black makeup, white owners can now create their own Black artists from scratch."
In a more positive example, Brazilian retail giant Magalu created "Lu" as a virtual avatar in 2003, initially just to demonstrate products online. Over time, Lu evolved into a full-fledged influencer and friendly face of the brand. She appears as a Brazilian woman with medium brown skin and dark hair, designed to represent Magalu's diverse customer base.
Lu posts content reviewing gadgets, sharing fashion tips, and advocating social causes on social media, where she has amassed over 15 million followers. She is enormously popular in Brazil, with many consumers seeing her as an approachable, tech-savvy friend; for years, many didn't realize she wasn't real. Magalu's marketers note that "in Brazil, Lu is not a sales gimmick, but an influencer in the true sense." While Lu hasn't faced appropriation critiques, some analysts question whether her corporate-owned "life" blurs the line between authentic engagement and marketing ploy.
Across Asia, virtual influencers tailored to local cultures have risen. Rozy in South Korea and Imma in Japan were explicitly designed to reflect contemporary youth ideals in their countries. These characters have generally been met with curiosity and enthusiasm, especially among younger, tech-savvy consumers. Rozy secured advertising deals reportedly worth millions, and initially many viewers didn't realize she was CGI.
Opinion writers highlight that virtual influencers never court scandal and can be perfectly tuned to brand values – advantageous for companies. Yet they note that no matter how "humane" these personas act, they "are incapable of forming authentic relationships." The concern is that replacing human voices with AI ones might deprive industries of genuine diversity and creative spontaneity.
Beyond individual brands, race- and gender-altered AI personas carry broad social implications. If most AI assistants are given submissive female personas, society's implicit bias associating women with assistant roles is perpetuated. If virtual characters of certain races are consistently depicted with stereotypical features, they could normalize those stereotypes. Conversely, there's opportunity to consciously design AI personas that counter stereotypes – but this requires social awareness and input from represented groups.
The impact on marginalized communities is a particular concern. Some see AI "diversity" experiments as double-edged: they could increase the visibility of minority identities in media, but visibility without agency is hollow. Black, Indigenous, and other marginalized creators have fought for representation precisely to tell their own stories and benefit from the resulting opportunities; virtual stand-ins created by others short-circuit that progress. In the worst cases, companies might choose controllable avatars over hiring real minority talent, displacing actual minority labor.
The phenomenon raises questions of cultural ownership and identity. Identities that have historically been weaponized or commodified could become corporate intellectual property as AI personas. This echoes past cultural appropriation but in high-tech form. Some argue if members of marginalized groups create their own AI personas, it could be empowering – allowing access to industries from which they've been excluded. However, concern remains that mainstream appetite might favor polished, controversy-free AI versions over real, lived experiences.
Another implication is the potential for performative social messaging via AI personas. Virtual influencers take public stances: Miquela posts about Black Lives Matter, and Lu advocates for LGBTQ+ rights. Critics ask: when a corporation's AI figure takes a stance, is it sincere advocacy or calculated brand strategy? Since the AI has no autonomy, any "stance" is ultimately the brand's. This could trivialize genuine voices on important issues, amounting to a co-optation of activism.
The use of race- or gender-altered AI personas in marketing sits at the intersection of innovation, psychology, and ethics. These tailored personas can capture attention, engender trust through identification, and cultivate rich parasocial relationships. Yet those effects rely on identity cues that carry deep social significance and history. When identity is treated as plug-and-play for AI, it opens doors to ethical pitfalls – from reinforcing stereotypes to cultural appropriation to deceptive consumer engagement.
Moving forward, authenticity, transparency, and inclusivity should be guiding principles. Companies using AI personas should involve diverse voices in crafting those personas and be forthright with audiences about their nature. Audiences may need to adjust their trust calibration – enjoying AI twins' creativity and utility while remaining aware of the strings behind the avatar.
The trend forces us to confront what we value in human representation. Is the goal simply to appear diverse, or to empower genuine diversity? The psychological pull and marketing potential are undeniable – these personas can make us feel seen, comforted, or persuaded. The ethical mandate is ensuring they don't simultaneously render invisible the real people whose identities inspire them. As the line between virtual and real blurs, society must insist that respect for truth and cultural integrity remains clearly in focus, even if the face on screen is digitally drawn.
Q: What exactly are AI "twins" in marketing?
A: AI "twins" are multiple versions of the same AI system that are given different racial, gender, or cultural identities through their appearance, voice, or backstory. For example, a company might create several chatbots using the same underlying AI technology, but present one as a young Black woman, another as an older white man, etc., to appeal to different target audiences.
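To make that pattern concrete, here is a minimal, purely illustrative Python sketch of the architecture described above: one shared engine wrapped in interchangeable persona configurations that change only the surface identity. Every name, field, and value in it is a hypothetical assumption for illustration, not drawn from any real product.

```python
from dataclasses import dataclass

@dataclass
class PersonaConfig:
    """Surface identity layered over one shared model (all fields hypothetical)."""
    name: str
    avatar_url: str
    voice_id: str
    backstory: str
    disclosure: str = "I'm a virtual persona, not a real person."

def build_system_prompt(persona: PersonaConfig) -> str:
    # The same underlying engine receives a different identity instruction
    # depending on which "twin" is shown to which audience segment.
    return (
        f"You are {persona.name}. Backstory: {persona.backstory} "
        f"If asked whether you are real, say: {persona.disclosure}"
    )

# Two "twins" of one engine, differing only in presented identity.
twin_a = PersonaConfig(
    name="Maya",
    avatar_url="https://example.com/maya.png",
    voice_id="voice-female-01",
    backstory="A 24-year-old fashion enthusiast from Atlanta.",
)
twin_b = PersonaConfig(
    name="Gregor",
    avatar_url="https://example.com/gregor.png",
    voice_id="voice-male-02",
    backstory="A retired engineer from Munich who reviews gadgets.",
)

for twin in (twin_a, twin_b):
    print(build_system_prompt(twin))
```

The sketch also shows where a disclosure string could live, since transparent labeling is one of the ethical safeguards this report discusses.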
Q: Why do companies create these different personas?
A: Research shows people tend to trust and engage more with AI personas they perceive as similar to themselves. Companies hope that by matching an AI's apparent identity to their target audience's demographics, they can increase trust, relatability, and ultimately sales or engagement.

Q: Are virtual influencers like Lil Miquela actually AI?
A: It depends on the specific case. Some "virtual influencers" are primarily CGI characters whose content is created by human teams, while others incorporate AI for generating responses or content. The term "AI personas" in marketing can include both fully AI-driven characters and human-created virtual characters that use AI elements.

Q: What's the main ethical problem with these AI personas?
A: The core issues are authenticity and cultural appropriation. When a company creates an AI that appears to be from a certain racial or gender group without involving people from that community, it can perpetuate stereotypes, take opportunities away from real people, and essentially commodify identities. There's also the question of transparency – users should know they're interacting with an AI, not a real person.

Q: How can I tell if I'm interacting with an AI persona versus a real person?
A: Look for disclosures – ethical companies should clearly label AI personas as virtual or artificial. Be suspicious of personas that seem too perfect, never make mistakes, or post content 24/7. Check if the "person's" backstory seems vague or if they never reference specific personal experiences that can be verified.

Q: Are there any positive examples of AI personas?
A: Yes, Lu do Magalu from Brazil is often cited as a positive example because she was designed to represent the local community authentically, has been transparent about being virtual, and focuses on helpful content rather than exploiting stereotypes. The key is involving diverse voices in creation and maintaining respect for the cultures represented.

Q: Could AI personas actually help reduce bias and increase representation?
A: Potentially, but only if done thoughtfully. AI personas could normalize seeing people of different backgrounds in various roles (like AI doctors or teachers of different races). However, if not done carefully, they could reinforce stereotypes or create "fake" diversity that doesn't actually benefit real people from those communities.

Q: What should I do if I encounter an AI persona that seems problematic?
A: You can report it to the platform where you found it, leave feedback explaining your concerns, or choose not to engage with the content. Supporting real creators from diverse backgrounds is also a positive alternative.

Q: Will regulation address these issues?
A: Some regulations are beginning to emerge requiring disclosure of AI-generated content, but the field is evolving rapidly. Industry groups and ethics organizations are also developing guidelines, but comprehensive regulation is still catching up to the technology.

Q: How will AI personas affect real influencers and content creators?
A: This is a major concern, especially for creators from marginalized communities who have historically had fewer opportunities. AI personas could potentially take work away from real people, which is why many advocate for ensuring these technologies supplement rather than replace human creators, and that diverse communities benefit from the technology rather than being exploited by it.