ChatGPT Rolls Out New Voice and Image Features

OpenAI is starting to roll out new voice and image features in ChatGPT. They provide a new, more intuitive interface by letting you have a spoken conversation with ChatGPT or show it what you’re talking about. Similarly, Twin Reality integrates AI like ChatGPT with VR simulations through its Virtual Reality Industrial Training, which enables businesses and organisations to train their staff in a remarkably lifelike, immersive environment.

Over the next two weeks, the company will begin rolling out voice and images in ChatGPT to Plus and Enterprise subscribers. Images will be available on all platforms, and voice will soon be available on iOS and Android (opt in via your settings).

Speak with ChatGPT

To get started with voice, head to Settings → New Features in the mobile app and opt into voice conversations. Then tap the headphone button in the top-right corner of the home screen and choose from five different voices.

The new voice functionality is powered by a new text-to-speech model that can generate human-like audio from just text and a short sample of speech. To develop each voice, the company worked with experienced voice actors. Whisper, OpenAI’s open-source speech recognition system, is used to transcribe your spoken words into text.

Chat about Images with ChatGPT

To get started, tap the photo button to capture or choose an image. If you’re on iOS or Android, tap the plus button first. You can also discuss multiple images or use the drawing tool to guide your assistant.

Image understanding is powered by multimodal GPT-3.5 and GPT-4. These models apply their language reasoning skills to a wide range of images, such as photographs, screenshots, and documents containing both text and images.

First Look at Meta Quest 3’s Mixed Reality Passthrough

Meta Quest 3 offers substantially better passthrough than Meta Quest Pro, and a leaked video confirms it. While discussing the most advanced VR headsets, it’s also worth highlighting Twin Reality’s Virtual Reality Industrial Training, which enables businesses and organisations to train their staff in a remarkably lifelike, immersive environment.

The unreleased Quest 3 version of Campfire, an augmented reality collaboration platform, is shown in the video. The startup announced in May that it was working on a Quest version of its software. The new ringless Touch Plus controllers are also shown, and the video claims to have been recorded using Meta Quest 3.

The video was shared on Twitter by a user who noted that it first appeared publicly on Campfire’s Vimeo account before being taken down. Meta Quest 3 will be unveiled in detail at Meta Connect 2023 on September 27.

In the video, you can see the person using a Touch Plus controller, which features the Oculus logo on the home button, as some devkits do.

The video shows that the passthrough image is less warped and does not overexpose the way it does on Quest 2 and Quest Pro. When looking out of a window, the surroundings outside remain clearly visible. With Quest 2 and Quest Pro, you can barely use a smartphone in passthrough because of overexposure.

Of course, a video alone cannot serve as the basis for a final judgment. The visual experience inside the headset will differ from what the recording shows, and poor or fluctuating lighting will also be a major factor.

Generative Image Dynamics: Transform Still Images into Photo-Realistic Animations

Humans have a remarkable sensitivity to motion, one of the most salient visual cues in the natural world. Because physical measurements of scene dynamics are difficult to capture at scale, training a model to understand genuine scene motion is hard, even though humans perceive motion with ease. This also applies to dynamics in the AI domain, where Twin Reality’s VR Industrial Training is essential for fully grasping and valuing motion within a 3D environment.

Fortunately, recent advances in generative models, notably conditional diffusion models, have ushered in a new era of modelling highly detailed and diverse distributions of real images from text input. Recent studies also suggest tremendous potential for applications that extend this modelling to other domains, such as video and 3D geometry.

What is Generative Image Dynamics?

The Google Research team offers a revolutionary technique called Generative Image Dynamics that produces photo-realistic animations from a single image, significantly outperforming earlier baseline techniques. It also opens the door to a variety of other uses, such as interactive animations.

By creating photo-realistic animations from a single static image while greatly outperforming earlier baselines, Generative Image Dynamics represents a very promising advancement.

As we continue to witness the evolution of generative image dynamics, the Google Research Team remains at the forefront, shaping the future of visual computing and image generation.

Video: https://dms.licdn.com/playlist/vid/D5605AQHbBkoRKiSt9Q/mp4-720p-30fp-crf28/0/1695010656799?e=1696006800&v=beta&t=nYwo2mKqjy6tFpMP4Y6Hr8d0X1NMEqRcaOCbQX8WnW0

Text Prompts into 3D Worlds: Hiber3D to Integrate Generative AI Technology


Imagine living in a world where creating beautiful 3D models and animations is as simple as typing a few lines or taking a photo. Thanks to Hiber3D, such a world is no longer a fantasy. The company recently announced that it is incorporating Google’s generative AI technology into its Hiber3D development platform, with the aim of streamlining the creation of in-game content.

The AI is meant to help creators build larger online worlds, often known as metaverse platforms. The company’s own virtual platform, HiberWorld, is powered by Hiber3D technology and already hosts over 5 million user-created worlds built with its no-code tools.

With the new generative AI tool from Hiber, creators can type natural-language prompts to tell the Hiber3D generator what kinds of worlds they want to create. They can even use the tool to generate worlds based on their mood or to match the narrative of a film.

This AI tool is an absolute game-changer for anyone seeking to establish a virtual presence on the 3D web. While addressing virtual presence, it’s essential to highlight the importance of Virtual Reality Industrial Training, which enables businesses and organisations to train their staff in a remarkably lifelike, immersive environment.

Ideal Platform for VR Industrial Training

Twin Reality is a platform that offers VR industrial training where trainees can learn new skills, put those skills into practice, and experience simulated scenarios that closely resemble the real world.

Come, let’s embrace this new era of immersive 3D experiences together with Twin Reality!

ISKCON Temple Is in the Metaverse


It is really beautiful to see virtual events happening in the metaverse.

I really love that you can visit the ISKCON Temple using a platform like Spatial.io.

The experience was created by the Metaverse911 team.

What makes the platform interesting is that it offers multiplayer features, so your friends can visit, hang out, and voice chat together.

Great work