Anastasia Braitsik is a global leader in SEO and data analytics who has spent years dissecting the intersection of digital marketing and consumer psychology. As the wellness industry pivots toward synthetic representation, her expertise offers a vital lens into how virtual personas are not just changing how we sell, but how we define truth in a digital-first economy. She provides a nuanced perspective on the shifting boundaries between technological innovation and consumer protection, helping us understand the mechanisms behind the screens.
This conversation delves into the drastic cost reductions provided by AI-generated avatars and the profound ethical questions surrounding fabricated cultural identities used to sell health products. We navigate the "trust gap" identified in recent psychological studies, the logistical nightmares of enforcing state-specific laws on global entrepreneurs, and the dangerous legal gray zones created when synthetic influencers promote products that result in public health crises.
AI-generated figures like “Melanskia” are reaching hundreds of thousands of followers while marketing wellness products. How do these synthetic personas change the cost structure for startups compared to hiring human talent, and what are the specific ethical risks when an influencer’s cultural identity is entirely fabricated?
The shift toward synthetic influencers is fundamentally an economic play driven by the sheer efficiency of automation. For an entrepreneur like Josemaria Silvestrini, who operates from Shanghai, utilizing AI allows him to manage over three dozen independent creators and their avatars simultaneously without the logistical overhead of human talent. When you look at an account like Melanskia, which has amassed over 300,000 followers, the cost savings are staggering because you eliminate the need for high-end photography, travel, and the unpredictable nature of human contracts. This “A.I.-ified” business model allows startups to experiment with different “vibes” and aesthetics instantly, seeing what resonates with a target audience without spending thousands of dollars on a single human photoshoot. However, the ethical risks are profound because these personas often misappropriate cultural identities—such as the Amish lifestyle or Buddhist traditions—to manufacture a false sense of purity or ancient wisdom. By fabricating a persona like a Buddhist monk living in Tibet to sell fiber supplements to 125,000 followers, brands are essentially weaponizing cultural archetypes to bypass the consumer’s natural skepticism, which feels like a new, more deceptive frontier of marketing.
Consumers often prioritize a sense of authenticity when buying supplements like “Modern Antidote.” Since studies suggest people struggle to identify AI-generated faces, how does this “trust gap” impact purchasing behavior, and what specific psychological tactics make these virtual personalities so effective at driving sales?
The efficacy of these virtual personalities lies in their ability to perfectly curate an image that mirrors the consumer’s aspirations. When a product like Modern Antidote is sold for $50, the price point suggests a premium quality that consumers want to believe in, and the AI allows the creator to design a face that looks inherently trustworthy. According to a study from the British Journal of Psychology in early 2026, people significantly overestimate their own ability to distinguish a human face from an AI-generated one, which creates a massive vulnerability for fraud. This “trust gap” means that consumers are making health decisions based on a perceived connection with a persona that literally does not exist, yet feels more “real” than a traditional advertisement. We see tactics where these avatars are placed in serene, natural settings or given soothing English accents, which triggers a sensory response of calm and reliability in the viewer. These subtle psychological cues are designed to bypass the analytical brain and go straight to the emotional core, making the supplement feel like a necessary lifestyle upgrade rather than an untested chemical product.
New York is implementing legislation requiring the disclosure of “synthetic performers” in advertisements by June 2026. What practical challenges do regulators face when trying to enforce these laws across international borders, and what step-by-step verification methods should platforms adopt to identify AI creators?
Enforcement is a logistical labyrinth because the digital economy does not respect geographic boundaries. While New York Governor Kathy Hochul has signed the nation's first law requiring disclosure, a 28-year-old entrepreneur running a business from China may feel entirely insulated from these local regulations. The challenge is that once an ad for a "synthetic performer" is live, it can reach anyone, anywhere, regardless of where the creator is physically located. To combat this, platforms must move beyond simple self-reporting and adopt more rigorous verification methods. First, mandatory watermarking or metadata tagging should identify AI-generated content at the file level. Second, social media platforms should implement "Proof of Humanity" protocols for accounts that cross a certain follower threshold, such as Melanskia's 300,000, requiring a video-link verification or government ID for the actual operator. Finally, there needs to be a standardized "Synthetic Disclosure" badge that is hard-coded into the interface of the advertisement, ensuring that even if a brand tries to hide the nature of their influencer, the platform's own detection algorithms flag it for the consumer.
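As a rough illustration of how a platform might wire those three checks together, here is a minimal Python sketch. All names here are hypothetical: the `Account` fields, the 100,000-follower threshold, and the action strings are illustrative assumptions, not any real platform's API or policy.

```python
from dataclasses import dataclass

# Hypothetical account record; field names are illustrative, not a real platform API.
@dataclass
class Account:
    handle: str
    followers: int
    has_ai_watermark: bool   # provenance metadata (e.g. C2PA-style) found in uploads
    humanity_verified: bool  # operator passed a video-link or government-ID check

PROOF_OF_HUMANITY_THRESHOLD = 100_000  # assumed platform policy, not a legal standard

def verification_actions(acct: Account) -> list[str]:
    """Return the enforcement steps a platform might take, following the
    three-part scheme above: watermark check, proof of humanity, disclosure badge."""
    actions = []
    if acct.has_ai_watermark:
        # Hard-code the disclosure badge whenever provenance metadata flags AI content.
        actions.append("apply_synthetic_disclosure_badge")
    if acct.followers >= PROOF_OF_HUMANITY_THRESHOLD and not acct.humanity_verified:
        actions.append("require_proof_of_humanity")
    if acct.has_ai_watermark and not acct.humanity_verified:
        # Synthetic content run by an unverified operator: pull ad eligibility.
        actions.append("suspend_ad_eligibility")
    return actions

melanskia = Account("Melanskia", 300_000, has_ai_watermark=True, humanity_verified=False)
print(verification_actions(melanskia))
# → ['apply_synthetic_disclosure_badge', 'require_proof_of_humanity', 'suspend_ad_eligibility']
```

The design point is that none of the checks rely on self-reporting: the badge follows automatically from file-level metadata, and the humanity check is triggered by reach, not by the operator's cooperation.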
Brands have used AI avatars to promote supplements that were later recalled due to safety issues like salmonella contamination. When an untested product is marketed by a non-existent person, how does this complicate legal liability for the company, and what metrics should be used to measure the impact on consumer safety?
The use of AI avatars creates a dangerous “liability shield” that separates the brand from the physical consequences of their products. Take the case of Ambrosia Brands and their Rosabella line, which used a variety of TikTok AI avatars to promote moringa supplements that were eventually recalled after a salmonella outbreak. In a traditional scenario, a human influencer might be held accountable for making false health claims, but you cannot sue a digital file, which leaves the consumer feeling abandoned when things go wrong. This complication makes it difficult for regulators to pin down exactly who is responsible for the “voice” of the brand, especially when dozens of synthetic creators are used interchangeably. To measure the impact on safety, we should look at the “deception-to-harm ratio,” which tracks how many people purchased a product specifically based on a fabricated identity before a safety incident occurred. Furthermore, we must evaluate the “redress accessibility,” or the ability for a harmed consumer to reach a real person for compensation, which becomes nearly impossible when the face of the brand is an untouchable, non-existent entity.
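One illustrative reading of the "deception-to-harm ratio" described above is the fraction of pre-incident purchases attributable to the fabricated persona. The formula and the example figures below are assumptions for illustration, not a standard metric or real recall data; the inputs would have to come from attribution tracking such as referral links or promo codes.

```python
def deception_to_harm_ratio(synthetic_attributed_purchases: int,
                            total_purchases_before_incident: int) -> float:
    """Fraction of purchases made before a safety incident that were driven by
    a fabricated identity. An illustrative formalization, not a standard metric."""
    if total_purchases_before_incident == 0:
        return 0.0
    return synthetic_attributed_purchases / total_purchases_before_incident

# Hypothetical figures: 12,000 of 15,000 units sold before a recall
# were traced to avatar-led promotions.
print(deception_to_harm_ratio(12_000, 15_000))  # → 0.8
```

A ratio near 1.0 would indicate that nearly everyone exposed to the harm was recruited by the synthetic persona, which strengthens the case that the deception itself, not just the product defect, caused the damage.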
What is your forecast for the use of AI influencers in the health and wellness industry?
I predict that by the end of the decade, the health and wellness industry will see a total bifurcation: on one side, we will have premium brands that use “Human-Only” certifications to prove their authenticity, and on the other, a massive flood of AI-driven budget brands that use thousands of hyper-niche avatars to target every possible demographic. As the technology behind deepfakes becomes indistinguishable from reality, the $50 supplement market will likely become saturated with these synthetic personalities, making it harder for the average consumer to know what is real. We will see a shift toward “identity-as-a-service,” where companies lease out the likenesses of highly effective virtual monks or Amish-style figures specifically because they have a high conversion rate. Ultimately, I believe the “trust gap” will lead to a major consumer backlash, eventually forcing federal-level regulations that treat AI influencers with the same strict scrutiny as pharmaceutical advertisements. The novelty of the “A.I.-ified” business model will eventually give way to a desperate need for human accountability as more people realize that the “perfect vibe” cannot replace a safe, tested product.
