Google Likeness Beta: The Reality of Android XR Avatars and Compatibility
- Aisha Washington

- Dec 13, 2025
- 7 min read

The race for digital identity in extended reality has shifted gears. While Apple focused on headset-based scanning for its Personas, Google is taking a distributed approach with the launch of Google Likeness. This feature, currently in beta, aims to solve the "lonely headset" problem by projecting a photorealistic version of the user into standard video calls.
Understanding Google Likeness requires looking past the visual novelty and examining the practical application. Google isn't just trying to make a 3D model; it is trying to bridge the gap between a user wearing a headset and colleagues sitting at standard laptops using Zoom or Google Meet. The implementation relies heavily on existing mobile hardware, specifically high-end Android phones, to handle the heavy lifting of biometric scanning.
Setting Up Google Likeness: Practical Steps and Requirements

Before diving into the architectural differences between Android XR and its competitors, we need to look at how this actually works for the end user. This is not a theoretical future update; the Google Likeness beta is establishing the workflow right now.
The setup process splits the hardware roles. You don't scan your face with the headset; you scan it with a phone. This sidesteps several ergonomic headaches associated with headset-based capture, but it introduces strict hardware gates.
Supported Devices for the Likeness Beta App
You cannot currently generate an Android XR avatar with an iPhone or an older Android device. The scanning process relies on specific camera capabilities and processing power found in recent flagships.
You will need one of the following to perform the initial scan:
Google Pixel: Pixel 8 or newer models.
Samsung Galaxy: S23 series or newer.
Samsung Foldables: Z Fold5 or newer.
If you own a headset running Android XR but use an unsupported phone, you currently cannot generate a high-fidelity likeness. This hardware lock suggests Google is leaning on depth-sensing or image-processing pipelines specific to these newer chipsets.
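To make the gate concrete, here is a minimal Kotlin sketch of a device allow-list check. The Device type, the marketing-name matching, and the list itself are illustrative assumptions drawn from the models above, not how the Likeness beta app actually verifies hardware.

```kotlin
// Hypothetical allow-list check mirroring the supported-device list above.
// Real devices report internal model codes, and Google's actual gating is
// unknown, so treat this purely as an illustration of the hardware gate.

data class Device(val manufacturer: String, val marketingName: String)

// Illustrative entries only, based on the list above.
private val supportedScanDevices = listOf(
    "Pixel 8", "Pixel 9",              // Pixel 8 or newer
    "Galaxy S23", "Galaxy S24",        // Galaxy S23 series or newer
    "Galaxy Z Fold5", "Galaxy Z Fold6" // Z Fold5 or newer
)

fun canRunLikenessScan(device: Device): Boolean =
    supportedScanDevices.any { device.marketingName.startsWith(it) }

fun main() {
    val phone = Device(manufacturer = "Google", marketingName = "Pixel 8 Pro")
    println("Likeness scan supported: ${canRunLikenessScan(phone)}")
}
```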
The Scanning Workflow
The Google Likeness creation process is noticeably less awkward than the competition's. Apple Vision Pro users are familiar with the fatigue of holding a heavy headset at arm's length to capture their face. Google removes the headset from the equation entirely during setup.
Users download the Likeness beta app from the Play Store. The phone is held in front of the face, similar to setting up Face ID or recording a selfie video. The app guides the user to turn their head at various angles to capture depth and texture maps. Once the scan is complete on the mobile device, the captured data is packaged and synced to the Android XR headset.
This separation of "capture device" and "playback device" is critical. It implies that Google Likeness is designed as an ecosystem play, rewarding users who are already deep inside the Android hardware garden.
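A minimal sketch of that split, assuming hypothetical types (FaceScan, HeadsetAvatarStore) that do not correspond to Google's actual data formats: the phone produces a one-time scan artifact, and the headset only stores and replays it.

```kotlin
// Hypothetical sketch of the capture/playback split: the phone produces a
// one-time scan artifact and the headset only receives it. These types do not
// reflect Google's actual Likeness data formats.

data class FaceScan(
    val depthMap: ByteArray,   // geometry from the guided head turns
    val textureMap: ByteArray, // skin texture captured by the phone camera
    val capturedAtMillis: Long = System.currentTimeMillis()
)

// Phone side: a Likeness-style app would fuse many frames into one scan.
fun captureScanOnPhone(): FaceScan =
    FaceScan(depthMap = ByteArray(0), textureMap = ByteArray(0)) // stand-in data

// Headset side: stores the synced scan; live animation comes later from sensors.
class HeadsetAvatarStore {
    private var scan: FaceScan? = null
    fun receiveSync(newScan: FaceScan) { scan = newScan }
    fun hasLikeness(): Boolean = scan != null
}

fun main() {
    val store = HeadsetAvatarStore()
    store.receiveSync(captureScanOnPhone()) // one-time sync from phone to headset
    println("Headset ready to animate likeness: ${store.hasLikeness()}")
}
```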
Google Likeness vs. The Competition: A Design Philosophy Shift

The arrival of Google Likeness invites inevitable comparisons to Apple’s Persona. While the end goal—a convincing digital twin—is the same, the engineering philosophy differs radically.
Ergonomics of the Scan
We touched on the physical ease of using a phone versus a headset for scanning. This matters for accessibility. Holding a 600-gram headset steadily at a specific angle is difficult for many users. Holding a phone is second nature. By offloading the capture to the Likeness beta app, Google ensures better lighting and more stable angles, which should theoretically result in higher-fidelity textures for the Android XR avatars.
The 2D vs. 3D Divide
The most significant divergence is in how the avatar is presented. Currently, Google Likeness functions primarily as a photorealistic virtual webcam. When you enter a call using the headset, the system generates a 2D video feed of your avatar.
This contrasts with the spatial ambitions of some competitors who try to place a volumetric 3D head into a virtual room. Google's approach is pragmatic: because the Google Likeness avatar is treated as a 2D video stream, it is instantly compatible with every major communication platform:
Zoom
Google Meet
Messenger
Discord
The receiving party does not need a headset or any specialized software; to them, you simply look like you are on a webcam. This backward compatibility is a strategic move to make Android XR avatars usable in enterprise environments immediately, rather than waiting for a "spatial meeting" standard to emerge.
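One way to picture the virtual-webcam design is a frame loop in which the headset renders the animated avatar into ordinary 2D frames and pushes them to whatever camera interface the conferencing app reads. The sketch below is hypothetical (VideoFrame, VirtualCamera, and the renderer stub are invented for illustration), not Google's implementation.

```kotlin
// Hypothetical frame loop illustrating the "virtual webcam" idea: the avatar
// renderer emits plain 2D frames that any conferencing app could consume.

data class VideoFrame(val width: Int, val height: Int, val rgba: ByteArray)

// Stands in for the headset-side renderer that rasterizes the animated avatar.
fun renderAvatarFrame(timestampMillis: Long): VideoFrame =
    VideoFrame(width = 1280, height = 720, rgba = ByteArray(1280 * 720 * 4))

// Stands in for whatever camera interface the conferencing app reads from.
interface VirtualCamera {
    fun pushFrame(frame: VideoFrame)
}

fun streamAvatar(camera: VirtualCamera, frameCount: Int, fps: Int = 30) {
    val frameIntervalMs = 1000L / fps
    repeat(frameCount) { i ->
        camera.pushFrame(renderAvatarFrame(timestampMillis = i * frameIntervalMs))
    }
}

fun main() {
    val fakeCamera = object : VirtualCamera {
        override fun pushFrame(frame: VideoFrame) =
            println("Sent ${frame.width}x${frame.height} frame to the call")
    }
    streamAvatar(fakeCamera, frameCount = 3)
}
```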
Google Likeness Technical Analysis: Sensors and Driving the Image
Generating the static mesh is only half the battle. Bringing the Google Likeness avatar to life requires real-time data from the headset. The static scan provides the skin, but the headset sensors provide the muscle movement.
Driving the Animation
The realism of the avatar depends entirely on the headset's internal sensor array. The system uses:
Internal cameras for eye tracking (gaze direction, blinking, squinting).
Downward-facing cameras or sensors for mouth and jaw tracking.
This real-time telemetry is mapped onto the mesh created by the Likeness beta app. The result is a synthetic video feed that mimics your current facial expression. Reports indicate the lighting and texture work is "impressive," managing to avoid the worst parts of the uncanny valley, though skepticism always remains until users see it in uncontrolled lighting environments.
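To illustrate how per-frame telemetry could drive a static mesh, here is a hedged sketch that maps eye and jaw readings onto named blendshape weights; the FaceTelemetry fields and blendshape labels are assumptions, not Android XR's actual face-tracking API.

```kotlin
// Hypothetical mapping from headset sensor telemetry to blendshape weights.
// Field names and blendshape labels are illustrative; Android XR's real
// face-tracking interfaces may differ substantially.

data class FaceTelemetry(
    val gazeYaw: Float,      // radians, from internal eye-tracking cameras
    val gazePitch: Float,
    val leftEyeOpen: Float,  // 0.0 = closed, 1.0 = fully open
    val rightEyeOpen: Float,
    val jawOpen: Float       // 0.0..1.0, from downward-facing sensors
)

// The scanned mesh is animated by weights in [0, 1] keyed by blendshape name.
fun toBlendshapes(t: FaceTelemetry): Map<String, Float> = mapOf(
    "eyeBlinkLeft" to (1f - t.leftEyeOpen).coerceIn(0f, 1f),
    "eyeBlinkRight" to (1f - t.rightEyeOpen).coerceIn(0f, 1f),
    "jawOpen" to t.jawOpen.coerceIn(0f, 1f),
    "eyeLookRight" to (t.gazeYaw / 0.6f).coerceIn(0f, 1f),
    "eyeLookLeft" to (-t.gazeYaw / 0.6f).coerceIn(0f, 1f)
)

fun main() {
    val frame = FaceTelemetry(
        gazeYaw = 0.2f, gazePitch = -0.1f,
        leftEyeOpen = 0.9f, rightEyeOpen = 0.85f, jawOpen = 0.4f
    )
    println(toBlendshapes(frame)) // weights applied to the scanned mesh each frame
}
```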
The Hardware Gap for Smartglasses
This sensor requirement creates a clear dividing line in the Android XR hardware roadmap. Full-featured headsets (like the upcoming Samsung XR device) will likely have the necessary eye and face tracking hardware to drive Google Likeness.
However, lighter form factors, such as AR smartglasses (like the XREAL Ultra series), generally omit internal face-tracking cameras to save weight and battery. This means Google Likeness will likely remain exclusive to "Pro" tier headsets: if the device cannot see your eyes or mouth, it cannot drive the avatar. We may see a tiered system where lighter glasses fall back to static, Memoji-style avatars or audio-driven avatars, while full headsets get the photorealistic treatment.
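A tiered rollout along those lines might reduce to a capability check like the speculative sketch below, where a device that cannot track eyes and mouth falls back to an audio-driven or static avatar.

```kotlin
// Speculative capability gate: photorealistic likeness only where the device
// can actually see the eyes and mouth, simpler fallbacks everywhere else.

data class XrDeviceCapabilities(
    val hasEyeTracking: Boolean,
    val hasMouthTracking: Boolean,
    val hasMicrophone: Boolean
)

enum class AvatarTier { PHOTOREAL_LIKENESS, AUDIO_DRIVEN, STATIC_IMAGE }

fun selectAvatarTier(caps: XrDeviceCapabilities): AvatarTier = when {
    caps.hasEyeTracking && caps.hasMouthTracking -> AvatarTier.PHOTOREAL_LIKENESS
    caps.hasMicrophone -> AvatarTier.AUDIO_DRIVEN   // lip-sync from audio only
    else -> AvatarTier.STATIC_IMAGE
}

fun main() {
    val fullHeadset = XrDeviceCapabilities(hasEyeTracking = true, hasMouthTracking = true, hasMicrophone = true)
    val lightGlasses = XrDeviceCapabilities(hasEyeTracking = false, hasMouthTracking = false, hasMicrophone = true)
    println("Headset: ${selectAvatarTier(fullHeadset)}")   // PHOTOREAL_LIKENESS
    println("Glasses: ${selectAvatarTier(lightGlasses)}")  // AUDIO_DRIVEN
}
```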
Limitations of the Current Google Likeness Beta

While the "virtual webcam" approach is smart for compatibility, it comes with limitations that users need to understand.
The Lack of Spatial Presence
Because Google Likeness is currently outputting a 2D feed, it does not support true "spatial meetings." If you are in a virtual room with other headset users, you are effectively a floating flat screen, not a 3D head sitting at the table.
This limits immersion for fully virtual collaboration. You cannot lean in to whisper, and colleagues cannot watch your head turn in 3D space as you look around a shared object; you are a flat video window. For Google Likeness to truly compete as a metaverse identity, Google will eventually need to enable volumetric transmission of these avatars, which would likely require significantly more bandwidth and processing power than the current 2D implementation.
The "Uncanny Valley" Risk
Any photorealistic virtual webcam faces the uncanny valley problem. If the eye tracking lags by even 50 milliseconds, or if the mouth movement desynchronizes from the audio, the illusion breaks. It shifts from "impressive" to "disturbing" very quickly.
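As a crude illustration of how small that timing budget is, the hypothetical check below flags animation frames whose facial telemetry lags the audio clock by more than 50 milliseconds; the frame structure and threshold handling are invented for illustration.

```kotlin
// Hypothetical latency guard: if the facial telemetry driving a frame is too
// stale relative to the audio clock, flag it, since audio/face desync is what
// breaks the illusion. The 50 ms budget mirrors the figure discussed above.

const val MAX_FACE_LATENCY_MS = 50L

data class AnimationFrame(val audioTimestampMs: Long, val telemetryTimestampMs: Long)

fun isWithinBudget(frame: AnimationFrame): Boolean =
    (frame.audioTimestampMs - frame.telemetryTimestampMs) <= MAX_FACE_LATENCY_MS

fun main() {
    val inSync = AnimationFrame(audioTimestampMs = 1_000, telemetryTimestampMs = 980)
    val lagging = AnimationFrame(audioTimestampMs = 1_000, telemetryTimestampMs = 930)
    println("In sync: ${isWithinBudget(inSync)}")   // true, telemetry 20 ms behind
    println("Lagging: ${isWithinBudget(lagging)}")  // false, telemetry 70 ms behind
}
```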
The reliance on phone-based scanning also introduces variables. If a user scans their face in poor lighting on a Pixel 8, the resulting texture map might look muddy or washed out in the headset. Apple controls the scanning environment by using the headset's own high-end LiDAR and cameras; Google is trusting the user's phone-handling skills and lighting conditions.
Future Implications for the Android XR Ecosystem

The rollout of Google Likeness signals that Android XR is moving from a development concept to a consumer-facing product stack. The integration with standard communication apps like Meet and Zoom is the key takeaway here.
Google is positioning the XR headset not just as a gaming console or a spatial computer, but as a premium accessory for the remote worker. If you can attend a board meeting from your living room while wearing a headset, but appear on screen as if you are in a well-lit studio wearing professional attire (assuming the avatar allows clothing customization), the utility of the device skyrockets.
We can expect the Likeness beta app to evolve. Future updates will likely include more granular control over lighting, background blurring (already standard in video calls), and potentially the ability to "dress" the avatar digitally. For now, the focus is on getting the face right.
Google Likeness is a beta product, but it is a functional one. It prioritizes utility over sci-fi ambition. By making the avatar a standard video feed, Google ensures that early adopters of Android XR headsets aren't talking to themselves—they are talking to everyone else, on the devices everyone else already owns.
Frequently Asked Questions about Google Likeness
Q: Can I use an iPhone to set up my Google Likeness avatar?
No. The scanning application is currently exclusive to specific Android devices. You need a Pixel 8 or newer, or a Samsung Galaxy S23/Z Fold5 or newer to perform the initial face scan.
Q: Does Google Likeness work in 3D for spatial meetings?
Currently, no. The beta generates a 2D video feed that acts like a standard webcam. It is designed for compatibility with 2D video conferencing apps like Zoom and Google Meet, rather than 3D holographic meetings.
Q: Do I need to scan my face every time I use the headset?
No. You only need to use the Likeness beta app once to generate the mesh and texture map. After the initial sync, the headset uses its internal sensors to animate that saved model in real-time.
Q: Will Google Likeness work on AR smartglasses?
It is unlikely for most current smartglasses. The feature requires internal cameras to track eye and mouth movements. Most lightweight smartglasses lack these sensors, making them incompatible with the animation requirements of the avatar.
Q: Is the avatar data stored on the cloud or the device?
Google generally processes biometric data on-device for security, syncing directly between the phone and headset. However, users should check the specific privacy permissions in the beta app, as beta products sometimes collect diagnostic data differently than final releases.