Apple Vision Pro will automatically turn spoken words into text with the new Live Captions feature
Apple just dropped the news about a bunch of fresh accessibility features heading to iPhones and iPads. But the company didn't hit the brakes there – improvements are also on the horizon for visionOS, the operating system powering its AR/VR headset, the Vision Pro, which launched earlier this year (check out our Apple Vision Pro review for more details).
Apple Vision Pro gets Live Captions and more features
Image Credit: Apple
Vision Pro users can look forward to a new accessibility feature called Live Captions. Tailored for users who are deaf or hard of hearing, Live Captions will do exactly as the name suggests – transcribe spoken words into written form in real time, ensuring nothing gets missed.
Take FaceTime on visionOS, for instance. With Live Captions in the mix, more users can seamlessly partake in the experience of connecting and collaborating through their Persona. Moreover, in its press release, the Cupertino tech giant says:
Apple Vision Pro will add the capability to move captions using the window bar during Apple Immersive Video, as well as support for additional Made for iPhone hearing devices and cochlear hearing processors.
On top of that, Apple will roll out updates for vision accessibility, incorporating features like Reduce Transparency, Smart Invert, and Dim Flashing Lights. As the names suggest, these features will benefit users with low vision or those who prefer to avoid bright lights and frequent flashing.
These features are joining the dozens of accessibility features already packed into Apple Vision Pro, including:
- VoiceOver
- Zoom
- Color Filters
All of these features work together to grant users who are blind or have low vision access to spatial computing. Additionally, features like Guided Access can offer support to users with cognitive disabilities. And thanks to accessibility features like Switch Control, Sound Actions, and Dwell Control, users with physical disabilities can take charge of Vision Pro using a combination of their eyes, hands, or voice.
It's encouraging to witness major tech companies prioritizing accessibility. Google, too, is stepping up its efforts to make Android more accessible to all users. Just announced, its Project Gameface is making its way to mobile, introducing hands-free control to Android devices.