Medvi, a fast-scaling telehealth company, has emerged as a headline case study in this context.
Reportedly operating with only two employees, the company exemplifies the disruptive potential of AI-driven, new-generation business models. The website was reportedly built using ChatGPT, Claude, and Grok; MidJourney and Runway produced the ad creatives; and ElevenLabs supplied the customer-support voice. A network of AI agents then connected everything into an automated system.
Medvi’s rapid rise has attracted widespread attention, with renowned media outlets such as The New York Times citing it as a success story. Yet the case also illustrates a dual reality of AI: while it enables fast scaling and business model innovation, its limitations can create hidden risks that become apparent only post-acquisition, often undermining trust, performance assumptions, and expected synergies.
About Medvi: The pricing gap
Market statistics underline the rapid scale-up of the GLP-1 weight-loss drug market, which generated approximately $401 million in revenue in 2025 and is projected to reach around $1.8 billion in 2026.
Medvi has built itself around a clear pricing gap in this sector that enabled rapid growth and strong early demand. The company positioned itself at the intersection of AI-driven patient services and remote care delivery, aiming to enhance the efficiency of medical consultations through automated, human-like digital interactions. However, recent discussions have shifted focus from the company's tech potential to emerging concerns about the credibility of its service model, the transparency of its AI capabilities, and whether its reported outcomes are verifiable and rooted in real clinical evidence.
Among the issues discussed in connection with Medvi are reports indicating the use of artificially constructed engagement signals.
These include claims of fake user profiles created to simulate platform activity, AI-generated imagery intended to enhance perceived credibility, and curated digital interactions designed to present a more mature user base. Investigations and independent analyses further suggest that Medvi and its affiliates used AI to generate more than 800 fictitious doctor personas and social media profiles. Reported examples include ‘Wade Frazer, MD’, whose medical credential was removed after journalists raised questions, as well as profiles displaying AI watermarks. Additional cases cited in media coverage reference fabricated or questionable identities such as ‘Professor Albust Dongledore’.
While these claims remain part of ongoing scrutiny, they have nonetheless shaped external perception of the company’s credibility.
The role of digital signals
This business situation is analogous to the Uncanny Valley effect. When a digital healthcare platform looks almost human on the surface, small signs of inconsistency in behaviour, identity, or interactions can start to show. As a result, users may feel uneasy.
In Medvi’s case, the tension does not come from human-like visuals alone, but from a broader set of ‘almost real’ digital signals. This includes user profiles that appear real but are difficult to verify, interactions that look like normal user activity but seem coordinated at a system level, and an overall platform experience that sits between a real healthcare service and a constructed digital environment.
When viewed through a digital due diligence (DDD) lens, different digital signals show a mixed picture of the digital brand performance.
Community-driven platforms such as Reddit show hundreds of posts with distinctly mixed sentiment, largely reflecting real-world user outcomes. On RealReviews, the company holds a 3.5 rating based on 24 reviews, including several that describe it as a scam. The Better Business Bureau (BBB) reports an F rating with over 400 complaints, primarily related to billing issues and refund disputes. On ConsumerAffairs, the company has a 3.4 rating across approximately 1,800 reviews, with feedback mainly concerning customer service, billing issues, refunds, and delivery punctuality.
At the same time, reports suggest instances of chatbot hallucinations, which may contribute to inconsistencies in overall sentiment. While platforms such as Trustpilot show a relatively strong average rating of around 4.4 stars across more than 12,000 reviews, there is also negative feedback. Some users report unauthorised charges, difficulties cancelling subscriptions, and fulfilment issues, including incorrect prescriptions or, in some cases, products not being delivered at all.
These types of digital signals are often not visible in traditional financial analysis. However, they can be detected earlier through digital due diligence. In some cases, this happens even before they enter broader public discourse or appear in mainstream media narratives, including early positive coverage in outlets such as The New York Times.
By systematically analysing a company’s digital footprint, user sentiment, and external signals, investors can spot inconsistencies and reputational risks at a much earlier stage, prior to capital deployment. In this way, digital due diligence provides an early-warning layer and can help avoid situations where weaknesses are only discovered after acquisition, when remediation becomes more complex and costly.
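As a rough illustration of how such scattered platform signals can be condensed into one early-warning view, the review figures cited above can be combined into a volume-weighted composite score with a disagreement flag. The weighting scheme, the 0-to-5 mapping of BBB's F rating, and the spread threshold below are illustrative assumptions, not an established DDD methodology:

```python
import math

# Review signals cited in the text: (platform, rating out of 5, review count).
# Mapping BBB's F rating to 1.0/5 is an illustrative assumption.
signals = [
    ("Trustpilot", 4.4, 12000),
    ("ConsumerAffairs", 3.4, 1800),
    ("RealReviews", 3.5, 24),
    ("BBB", 1.0, 400),  # F rating, 400+ complaints
]

def composite_score(signals):
    """Volume-weighted average rating, using log10 weights so one very
    large platform cannot completely drown out the smaller ones."""
    weights = [math.log10(count + 1) for _, _, count in signals]
    total = sum(weights)
    return sum(rating * w for (_, rating, _), w in zip(signals, weights)) / total

def disagreement_flag(signals, spread_threshold=2.0):
    """Flag when platforms disagree sharply about the same company --
    a spread this wide is itself a signal worth investigating."""
    ratings = [rating for _, rating, _ in signals]
    return max(ratings) - min(ratings) >= spread_threshold

print(f"Composite rating: {composite_score(signals):.2f} / 5")
print(f"Cross-platform disagreement flag: {disagreement_flag(signals)}")
```

On these inputs the composite lands well below Trustpilot's headline 4.4, and the disagreement flag trips, mirroring the mixed picture described above: a single strong rating can mask weaker signals elsewhere unless the platforms are read together.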