Top 5 Smart Glasses Unveiled at CES 2025

CES 2025 was a showcase of technological marvels, and smart glasses took center stage this year, redefining the boundaries of augmented reality (AR) and wearable tech. 

Here are the top 5 smart glasses that left the biggest impression at the event:

1. Halliday Smart Glasses

Halliday introduced smart glasses featuring an ‘invisible’ display and an AI assistant.

Weighing just 35 grams, these glasses offer real-time translations, teleprompter functionalities, and proactive AI assistance. They are designed to anticipate user needs, providing relevant information seamlessly.

Pre-orders are expected to begin at the end of CES, with shipping anticipated before March 2025. The price is estimated to be between $399 and $499.

2. Xreal One Pro

Xreal unveiled the One Pro AR smart glasses, featuring a new camera module.

These glasses aim to enhance the augmented reality experience with improved display technology and functionality.

3. Rokid Glasses

Rokid showcased their latest smart glasses, which include a heads-up display and built-in AI assistant.

These glasses are designed for all-day productivity, offering features like live translation and teleprompter capabilities.

4. Even Realities G1

Even Realities introduced the G1 smart glasses, featuring a discreet design with a hidden green monochrome heads-up display. 

These glasses focus on providing essential information without obstructing the user’s view, catering to professionals seeking seamless integration of technology into their daily routines.

5. Loomos.AI Glasses

Loomos.AI presented smart glasses that integrate ChatGPT for AI assistance. These glasses offer functionalities such as 4K photo and 1080p video capture, aiming to provide a comprehensive wearable tech experience.

 

These new launches at CES 2025 highlight the industry’s commitment to advancing smart glasses technology, focusing on integrating AI, enhancing user experience, and making devices more accessible and practical for everyday use.

Conclusion

Smart glasses have taken a significant leap forward in 2025, combining cutting-edge technology with practical applications. 

Whether you’re a professional, gamer, or tech enthusiast, this year’s CES offerings provide something for everyone. As these devices hit the market, the future of AR and wearable tech looks brighter than ever.

How to Enhance Images Using FluxSpace?

Using FluxSpace involves working with its framework to perform text-guided image edits using rectified flow transformers. Here’s a general guide on how to get started with FluxSpace:

1. Explore the Official Resources

Visit the FluxSpace website to access the framework’s documentation, code repository, and example projects. This is crucial for understanding the installation requirements and setup process.

2. Set Up Your Environment

    • System Requirements: Ensure you have a compatible system with Python installed, along with essential libraries like PyTorch.
    • Clone the Repository: Clone the FluxSpace GitHub repository to your local machine:

        git clone https://github.com/FluxSpace/fluxspace.git
        cd fluxspace

    • Install Dependencies: Use pip or conda to install the required libraries listed in the requirements.txt file:

        pip install -r requirements.txt


3. Prepare Your Inputs

    • Image Input: Ensure you have the image you want to edit, saved in a compatible format (e.g., JPG or PNG).
    • Text Input: Write a descriptive text prompt indicating the changes you wish to apply. For example:
      • “Change the car to a truck.”
      • “Make the background snowy.”
      • “Convert the person’s hair color to blonde.”
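
The input checks above can be sketched in a few lines of Python. The format whitelist and the `prepare_inputs` helper are illustrative conveniences, not part of the FluxSpace API:

```python
from pathlib import Path

SUPPORTED_FORMATS = {".jpg", ".jpeg", ".png"}  # illustrative whitelist

def prepare_inputs(image_path, prompt):
    """Validate an image/prompt pair before handing it to the editor."""
    path = Path(image_path)
    if path.suffix.lower() not in SUPPORTED_FORMATS:
        raise ValueError(f"Unsupported image format: {path.suffix}")
    prompt = prompt.strip()
    if not prompt:
        raise ValueError("Text prompt must not be empty")
    return path, prompt

path, prompt = prepare_inputs("photo.png", "Make the background snowy. ")
print(path, "->", prompt)
```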

4. Run the FluxSpace Framework

    • Pretrained Models: Load the pretrained rectified flow model provided by the FluxSpace repository. These models are optimized for different domains such as objects, human faces, or scenes.
    • Editing Command: Use the provided scripts to apply your edits. For example:

        python edit_image.py --input_image path/to/image.jpg --text_prompt "Convert car to truck" --output_image path/to/output.jpg


5. Fine-Tune Your Results

FluxSpace allows for coarse and fine-grained adjustments:

    • Coarse Edits: Apply global changes using the pooled representation approach.
    • Fine-Grained Edits: Use specific attributes guided by the attention outputs for more targeted modifications.
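
As a rough illustration, a wrapper script could expose that coarse/fine choice as command-line options. The `--edit_mode` and `--strength` flags below are hypothetical, standing in for whatever parameters the actual FluxSpace scripts expose:

```python
def build_edit_command(input_image, prompt, output_image, mode="coarse", strength=1.0):
    """Assemble an edit_image.py invocation. The --edit_mode and
    --strength flags are hypothetical, illustrating coarse vs. fine control."""
    if mode not in ("coarse", "fine"):
        raise ValueError("mode must be 'coarse' or 'fine'")
    return [
        "python", "edit_image.py",
        "--input_image", input_image,
        "--text_prompt", prompt,
        "--output_image", output_image,
        "--edit_mode", mode,          # hypothetical flag
        "--strength", str(strength),  # hypothetical flag
    ]

cmd = build_edit_command("car.jpg", "Convert car to truck", "truck.jpg", mode="fine", strength=0.6)
print(" ".join(cmd))
```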

6. Evaluate and Iterate

    • Review the output images generated by FluxSpace and refine your text prompts or parameters as needed to achieve the desired results.
    • Experiment with different domains and scenarios to explore the full potential of the framework.

7. Explore Advanced Features

    • Integration: Integrate FluxSpace into larger workflows for tasks like automated content generation or visual asset enhancement.
    • Customization: Customize the framework to align with specific use cases or domains by adjusting its attention mechanisms or using your datasets for domain-specific improvements.

8. Stay Updated

Keep an eye on the FluxSpace repository and documentation for new releases, pretrained models, and updated examples.

How to Use Runway's New Expand Feature?

The Expand Video feature allows you to seamlessly transform your video’s aspect ratio from portrait to landscape or vice versa, making it suitable for different platforms and audiences. Here’s a detailed guide to help you through the process:

1. Generate an Initial Video

  • Open your Runway Gen-3 Turbo workspace.
  • Create or upload the initial video you want to expand.
  • Ensure the video is ready for aspect ratio transformation and aligns with your creative goals.

2. Download the Initial Video

  • Once your initial video is generated or uploaded, download it to your local system.
  • This ensures you have a backup of the original content in case you need to make further adjustments.

3. Open Gen-3 Turbo in Runway

  • Log in to your Runway account and navigate to the Gen-3 Turbo workspace.
  • This feature is part of Runway’s Generative AI tools, designed for high-speed and high-quality video editing.

4. Select the “Expand Video” Feature

  • From the Gen-3 Turbo toolbar, choose the Expand Video option.
  • This feature allows you to modify the video’s aspect ratio without losing important details or compromising its quality.

5. Choose Portrait or Landscape Aspect Ratio

    • Decide on the desired aspect ratio for your video:
      • Portrait (Vertical): Ideal for platforms like Instagram Stories, TikTok, or YouTube Shorts.
      • Landscape (Horizontal): Best for platforms like YouTube, Facebook, or widescreen presentations.
    • Select the appropriate option in the Expand Video settings.

6. Generate the Expanded Video

    • Click Generate to process the video.
    • The AI will intelligently expand the video’s content to fit the chosen aspect ratio, filling the new areas seamlessly to match the original style and context.
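
Runway handles the expansion inside the app, but the underlying geometry is easy to sketch. The helper below is plain arithmetic rather than anything from Runway's API; it computes how large the expanded canvas must be so the original frame fits the new aspect ratio without cropping or scaling:

```python
def expansion_size(width, height, target_ratio):
    """Return the (new_width, new_height) canvas that fits the original
    frame inside the target aspect ratio without cropping or scaling."""
    current = width / height
    if target_ratio > current:          # need extra width (e.g. portrait -> landscape)
        return round(height * target_ratio), height
    return width, round(width / target_ratio)  # need extra height

# A 1080x1920 portrait clip expanded to 16:9 landscape:
w, h = expansion_size(1080, 1920, 16 / 9)
print(w, h)  # everything outside the original 1080-wide strip is AI-synthesized
```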
				
Tips for Using Expand Video Effectively
      1. Frame Composition: Ensure your subject is centered in the original video to make the expansion process more natural.

      2. Text Prompts: Add text prompts during the process to guide the AI if you want specific details or effects in the expanded areas.

      3. Preview Before Finalizing: Always preview the generated video to ensure the expanded areas look consistent with your creative vision.

      4. Use a CDN: If your video needs high availability across different platforms, consider uploading the final product to a Content Delivery Network (CDN).

10 Use Cases in VR For Military Training

For any country, military training is one of the most important ways to keep soldiers ready for any situation.

As the CEO of Twin Reality, I like to talk about VR industrial training, which includes military training.

Coming from a background where I saw soldiers regularly travel for training in various parts of the country, I can say that VR has strong potential use cases in military training.

Let's start with some of the most important use cases:

1. Combat Simulations

    • Use Case: Soldiers undergo realistic combat scenarios in a controlled environment.
    • Purpose: Prepares soldiers for high-stress situations, such as urban warfare or ambushes, without the risks of real combat.
    • Benefit: Enhances decision-making and situational awareness under pressure.

Virtual Reality (VR) provides a safe and controlled environment for soldiers to experience realistic combat scenarios. These simulations include ambushes, urban warfare, and open-field battles, allowing soldiers to develop decision-making skills and situational awareness. 

By immersing trainees in high-stress situations, VR helps them learn how to respond effectively under pressure without the physical risks associated with live training exercises.

Example: Soldiers might engage in a VR simulation of a hostage rescue mission in a dense urban environment, facing dynamic challenges such as navigating complex layouts and identifying threats.

2. Tactical Mission Planning

    • Use Case: Teams rehearse missions using VR to visualize terrains, plan routes, and execute strategies.
    • Purpose: Allows for detailed preparation of operations in unfamiliar or hostile environments.
    • Benefit: Reduces risks by simulating various outcomes and contingencies.

VR enables military units to plan and rehearse missions with unparalleled detail. Using virtual replicas of real-world terrains, teams can visualize operational strategies, identify potential obstacles, and rehearse coordinated maneuvers.

This allows soldiers to adapt quickly to changing scenarios during actual missions.

Example: A team preparing for a jungle rescue operation can use VR to practice navigating dense foliage and executing stealth tactics.

3. Flight Training

    • Use Case: Pilots use VR flight simulators to learn aircraft controls, maneuvers, and emergency protocols.
    • Purpose: Provides cost-effective training without the need for expensive equipment or risking lives.
    • Benefit: Accelerates skill acquisition while reducing resource usage.

Pilots use VR flight simulators to practice handling aircraft, mastering controls, and responding to emergencies. These simulators replicate real-world conditions, including weather variations, mechanical failures, and combat scenarios. VR flight training is cost-effective and eliminates the risk of accidents during training.

Example: A fighter pilot practices evasive maneuvers in a VR cockpit while dealing with simulated enemy attacks and system malfunctions.

4. Vehicle and Tank Operations

    • Use Case: Drivers and operators train on VR simulators for armored vehicles, tanks, and submarines.
    • Purpose: Familiarizes operators with controls and real-time vehicle dynamics.
    • Benefit: Lowers maintenance costs and risk of equipment damage during training.

VR is used to train operators of armored vehicles, tanks, and submarines. Trainees familiarize themselves with vehicle controls, weapon systems, and operational procedures in realistic virtual environments. This minimizes wear and tear on actual equipment and reduces training costs.

Example: Tank operators can practice firing weapons, navigating rough terrain, and coordinating with other vehicles in a VR battlefield scenario.

5. Medical Training in Combat Zones

    • Use Case: Combat medics practice treating injuries in VR-simulated battlefield environments.
    • Purpose: Enhances preparedness for handling wounds, triage, and evacuations under combat conditions.
    • Benefit: Saves lives by improving readiness for real-life medical emergencies.

Combat medics train in VR environments that simulate battlefield injuries and emergencies. These scenarios range from treating gunshot wounds to performing triage under fire. VR enhances medics’ ability to provide life-saving care in high-stress situations.

Example: A VR scenario places medics in a simulated battlefield where they must stabilize multiple casualties while under fire.

6. Parachute Jump Training

    • Use Case: Soldiers simulate parachute jumps and landing techniques using VR.
    • Purpose: Reduces risks associated with live jump training by allowing trainees to practice in a safe environment.
    • Benefit: Builds confidence and refines technique before real-world practice.

VR simulates parachute jumps, allowing trainees to practice exit techniques, freefall maneuvers, and landings. This reduces the risks associated with real-life training jumps and helps trainees build confidence before actual deployment.

Example: Soldiers experience a simulated jump from an aircraft, complete with wind resistance and variable landing terrains.

7. Improvised Explosive Device (IED) Training

    • Use Case: Soldiers identify, defuse, and handle IED threats in VR environments.
    • Purpose: Prepares personnel for high-risk bomb disposal scenarios.
    • Benefit: Enhances safety and efficiency in bomb disposal missions.

IED detection and disposal is one of the most dangerous tasks in military operations. VR allows soldiers to practice identifying, approaching, and defusing explosive devices in a safe virtual environment.

Example: A soldier trains in a VR scenario where they must locate and disarm an IED hidden along a convoy route.

8. Naval Operations Training

    • Use Case: Sailors practice ship navigation, maintenance, and combat scenarios using VR.
    • Purpose: Familiarizes trainees with ship operations and maritime warfare strategies.
    • Benefit: Reduces dependency on live vessel training, cutting costs and risks.

VR enhances naval training by simulating ship navigation, maintenance, and combat scenarios. Trainees can practice responding to emergencies such as fires or flooding, as well as combat scenarios like enemy ship engagements.

Example: A naval crew practices navigating through a storm while coordinating damage control efforts and engaging in simulated combat.

9. PTSD and Stress Management

    • Use Case: VR is used for therapy to help soldiers manage PTSD symptoms by exposing them to controlled, therapeutic simulations of combat scenarios.
    • Purpose: Aids mental health recovery through exposure therapy and relaxation techniques.
    • Benefit: Supports long-term well-being and mental resilience.

VR is increasingly used in therapy for soldiers dealing with Post-Traumatic Stress Disorder (PTSD). Controlled VR environments expose veterans to combat scenarios in a therapeutic context, helping them process and overcome traumatic experiences.

Example: A soldier revisits a simulated version of a past combat event in a safe setting, guided by a therapist to reduce anxiety and build coping mechanisms.

10. Cybersecurity and IT Defense Training

    • Use Case: Simulates cyber-attacks on military networks to train personnel in defensive and counter-attack strategies.
    • Purpose: Prepares teams for handling real-world cyber threats in a safe virtual space.
    • Benefit: Builds robust IT defense skills critical for modern warfare.

With the rise of cyber warfare, VR is used to train military personnel in defending against cyber threats. Simulations include scenarios like hacking attempts, system breaches, and countermeasures.

Example: A VR training exercise simulates a cyber-attack on a military base, where trainees must identify vulnerabilities, stop the attack, and secure the network.

Conclusion

These use cases demonstrate how VR is revolutionizing military training by offering immersive, safe, and cost-effective solutions. It not only enhances skill development but also ensures better preparedness for real-world challenges.

Architecture of Virtual Reality System

When I first experienced Virtual Reality (VR), I remember feeling like I had stepped into the future. 

It wasn’t just about the game I was playing but more about the way everything came together—hardware, software, and all those unseen pieces working in harmony to transport me into another world. 

So, when people ask me about the architecture of a Virtual Reality system, I always think of it as layers—each doing its part to make that experience possible. Let me break it down for you.

1. The Hardware Layer: Your Gateway to Another World

The first thing that hits you in a VR setup is the hardware, right? We’re talking about the headset, controllers, and all those sensors that track every move.

a) VR Head-Mounted Display (HMD):

    • Provides stereoscopic 3D rendering of the virtual environment.
    • Examples: Oculus Rift, HTC Vive, PlayStation VR.
    • Features include head tracking, motion sensors, and display for each eye.

b) Input Devices:

    • Handheld Controllers: Devices like joysticks, VR controllers (e.g., Oculus Touch, Vive Controllers) allow users to interact with objects in the virtual world.
    • Gloves or Haptic Devices: Provide tactile feedback (haptic feedback) to simulate the sense of touch.
    • Body Tracking Sensors: Full-body sensors or suits that track the user’s physical movements and map them to their avatar in the virtual world.

c) Tracking Systems:

    • External Sensors (Positional Tracking): Cameras or external base stations to track the user’s movement in the physical space (e.g., HTC Vive Lighthouse sensors).
    • Inside-out Tracking: Sensors built into the headset that track the environment without external cameras.
    • Eye Tracking: Some VR systems include eye-tracking technology for gaze-based interaction.

d) Computational Power:

    • PC or Console: High-performance hardware is required to render VR experiences in real-time, with powerful GPUs (e.g., NVIDIA, AMD) and CPUs to process the VR environment.
    • Mobile VR: Lower-end VR experiences can run on mobile devices, using smartphone hardware for rendering (e.g., Google Cardboard, Oculus Go).

2. The Software Layer: Where the Magic Happens

Now, hardware is just one part. All that amazing stuff is useless without the right software to back it up. At the heart of it all is the rendering engine—this is where the VR magic really comes to life.

a) Rendering Engine:

    • The core of the VR system, which renders the virtual world in real-time.
    • Examples: Unity 3D, Unreal Engine.
    • Handles lighting, shadows, textures, and physics calculations to create a realistic 3D environment.
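
To make the "display for each eye" idea concrete, here is a minimal sketch of stereo camera placement. The IPD value and the axis-aligned offset are simplifications of my own; a real engine offsets along the head's current orientation:

```python
# Minimal sketch of stereo rendering: the engine renders the scene twice
# per frame, once per eye, with the camera shifted by half the
# interpupillary distance (IPD). Values are illustrative.
IPD = 0.064  # metres, a commonly cited average

def eye_positions(head_pos, ipd=IPD):
    """Return (left_eye, right_eye) camera positions for a head at head_pos,
    offsetting along the x axis (a real engine uses the head's orientation)."""
    x, y, z = head_pos
    return (x - ipd / 2, y, z), (x + ipd / 2, y, z)

left, right = eye_positions((0.0, 1.7, 0.0))
# Each frame, the renderer draws the scene once from `left` and once from
# `right`, presenting each image to the matching display panel.
print(left, right)
```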

b) VR SDKs (Software Development Kits):

    • Provides libraries and APIs that interface with VR hardware, enabling developers to build VR experiences.
    • Examples: Oculus SDK, SteamVR SDK, OpenVR.

c) Graphics API:

    • Low-level software that interfaces with the GPU to handle rendering tasks.
    • Examples: OpenGL, DirectX, Vulkan.

d) Virtual Environment and Asset Management:

    • 3D Models: The objects and environments in the virtual world.
    • Textures and Materials: Surface details of virtual objects (e.g., smooth, rough, shiny).
    • Audio: 3D spatial sound to enhance immersion.

e) Physics Engine:

    • Simulates real-world physics like gravity, collisions, and object interactions.
    • Examples: NVIDIA PhysX, Havok Physics.
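
As a toy illustration of what a physics engine does on each tick, here is a semi-implicit Euler step with gravity and a floor collision. The 90 Hz tick rate is a typical VR figure, not tied to any particular engine:

```python
# A toy physics step in the spirit of a VR physics engine:
# semi-implicit Euler integration of gravity plus a floor collision.
GRAVITY = -9.81  # m/s^2

def step(pos_y, vel_y, dt=1/90):  # 90 Hz, a common VR physics tick
    """Advance a falling object one tick; clamp at the floor (y = 0)."""
    vel_y += GRAVITY * dt
    pos_y += vel_y * dt
    if pos_y < 0.0:        # collision with the floor
        pos_y, vel_y = 0.0, 0.0
    return pos_y, vel_y

# Drop an object from 1 m and simulate one second (90 ticks):
y, v = 1.0, 0.0
for _ in range(90):
    y, v = step(y, v)
print(y, v)  # the object has hit the floor and come to rest
```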

f) User Interface (UI) and Interaction Management:

    • Provides mechanisms for users to interact with virtual objects.
    • UI elements like buttons, menus, and other interactive objects are placed within the VR environment.
    • Gesture Recognition: System recognizes user gestures and translates them into actions within the VR world (e.g., grabbing or moving an object).

3. Interaction Layer: Bridging Reality and Virtual Reality

When I first started tinkering with VR systems, I quickly realized the most important thing wasn’t just what you saw but how you interacted with it. This is where the interaction layer kicks in. 

Imagine this: You reach out with your hand to grab something in the virtual world, and the system has to translate that motion into something the computer understands. 

a) Input Processing:

    • Hand, body, and controller movements are tracked and mapped into virtual space.
    • Gesture and motion recognition algorithms interpret physical actions.

This happens through complex input processing systems that take the signals from your controllers (or gloves) and match them up to virtual movements.

Some systems even use gesture recognition to understand hand motions. If you’ve ever waved at someone in VR or given a thumbs-up to your virtual buddy, gesture recognition was making that possible. And of course, the feedback you get—whether it’s a subtle vibration in your controller or 3D audio that changes depending on where you turn—makes the virtual world feel more real.
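
A minimal sketch of that pipeline: a tracked position in room space is mapped into the virtual world, and an analog trigger reading is interpreted as a grab gesture. The transform values and the threshold below are assumptions for illustration, not any headset's actual API:

```python
# Sketch of input processing: a tracked controller position in room
# coordinates is mapped into the virtual world by a play-area origin
# offset and scale (values here are illustrative).

def room_to_world(room_pos, world_origin=(10.0, 0.0, 5.0), scale=1.0):
    """Map an (x, y, z) room-space position into world space."""
    return tuple(o + scale * p for o, p in zip(world_origin, room_pos))

def detect_grab(trigger_value, threshold=0.8):
    """Interpret an analog trigger reading (0.0-1.0) as a 'grab' gesture."""
    return trigger_value >= threshold

hand = room_to_world((0.2, 1.1, -0.3))
print(hand, detect_grab(0.95))
```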

b) Feedback Systems:

    • Haptic Feedback: Devices like VR gloves or controllers provide tactile responses when the user interacts with virtual objects.
    • Audio Feedback: 3D sound that responds to user interaction and environmental cues.
    • Visual Feedback: Changes in the virtual environment based on user actions, such as object movements or menu selections.

I always joke that VR is like playing in a movie where you’re the actor, and the world reacts to everything you do.

4. Application Layer: The Real Fun Stuff

So, once you’ve got the hardware and software working together, what’s next? It’s all about the applications.

This is what you, as the user, are there for—the VR games, the training simulations, the virtual museums.

For me, the cool part is that VR can be anything, from a simple space shooter game to a complex training simulator for surgeons. Imagine being able to practice heart surgery in VR before ever touching a real patient. That’s not sci-fi anymore—it’s happening.

a) Virtual Reality Applications:

    • Games: VR games offer fully immersive experiences.
    • Training Simulations: Used in fields like healthcare, engineering, or aviation to provide training without real-world consequences.
    • Education: VR classrooms or virtual tours of historical sites.
    • Architectural Visualization: Allows architects and clients to walk through virtual buildings.
    • Health and Therapy: VR systems designed to reduce anxiety, provide physical rehabilitation, or offer mental health treatment.

b) Content Management System:

    • System for managing VR content updates, levels, assets, and versions.

5. Networking Layer: Connecting Virtual Worlds

If you’ve ever played multiplayer VR games, you know how fun (and chaotic) it can be to interact with other real people in the same virtual world. 

This is made possible by the networking layer. Multiplayer engines ensure that everyone’s actions are synced in real-time, whether you’re battling aliens together or just hanging out in a virtual lounge. 

Without it, you’d see people’s avatars lagging behind their real-world movements, and that’s a total immersion killer.

a) Multiplayer Engine:

    • Handles real-time communication between multiple users in the same VR environment.
    • Synchronizes user actions, avatar positions, and interactions.
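
A common technique here is snapshot interpolation: clients render remote avatars slightly in the past, blending between the two most recent position updates instead of teleporting avatars on every packet. A minimal sketch:

```python
# Sketch of how a multiplayer engine keeps remote avatars smooth:
# interpolate between the two most recent (timestamp, position) snapshots.

def interpolate(snap_a, snap_b, render_time):
    """Linear interpolation between two (timestamp, position) snapshots,
    clamped so render_time outside the window snaps to the nearest one."""
    t_a, pos_a = snap_a
    t_b, pos_b = snap_b
    if t_b == t_a:
        return pos_b
    alpha = min(max((render_time - t_a) / (t_b - t_a), 0.0), 1.0)
    return tuple(a + alpha * (b - a) for a, b in zip(pos_a, pos_b))

# Two snapshots 100 ms apart; render halfway between them:
older = (0.0, (0.0, 0.0, 0.0))
newer = (0.1, (1.0, 0.0, 2.0))
print(interpolate(older, newer, 0.05))  # midway: (0.5, 0.0, 1.0)
```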

b) Cloud Services:

    • Cloud-based storage and computation (e.g., offloading complex rendering or physics calculations).
    • Asset streaming for dynamic content updates.

Final Thoughts: Putting It All Together

When you take a step back, the architecture of a Virtual Reality system isn’t just about the tech. It’s about how all these layers—hardware, software, interaction, and networking—work together to create something that feels seamless. 

I think the real beauty of VR is that, when everything works perfectly, you forget it’s a system at all. You’re just… there, wherever “there” is.

And trust me, once you’re lost in a virtual world, it’s hard to come back to reality without feeling like something magical just happened.