Another intriguing Apple job posting shows glimpses of what the company is working on
The perfect candidate must not only have extensive research experience in fields such as multi-sensory and sensorimotor integration, decision making, depth perception, and neural coding and decoding, but also be “fluent” in machine learning and in programming languages like C and Python. And those are just two of the eight “Key Qualifications”!
Further hints as to what this person might be involved in come from another requirement: “Hardware and software experience with multimodal data acquisition, instrumentation and interfaces”. Multimodal data means data coming from multiple sources at the same time, which in the case of Augmented Reality glasses might mean a microphone, cameras, 3D mapping sensors and more.
Considering the expertise needed for the job, there are probably only a handful of suitable candidates. The job posting shows that Apple is working on a product that will offer an experience far different from, and more advanced than, anything it currently sells. It also suggests that the project is still in the early stages of development, so we shouldn’t expect any ground-breaking announcements from the company just yet.
Google’s experiment with a wearable display, called Google Glass, wasn’t very well received and was eventually cancelled (although Google later released an Enterprise Edition), but with the advancements in technology and the “Apple treatment”, a new product of that kind might be far more successful and widely adopted. One of the big obstacles Google faced was privacy concerns: the camera could potentially record things it shouldn’t, especially people who hadn’t consented. It’s possible that Apple is looking for ways to automatically blur out faces recorded by the device, using improved Face ID algorithms and machine learning.
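Apple hasn’t published anything about such a pipeline, but the general idea of privacy blurring is straightforward: a face detector yields bounding boxes for each camera frame, and the pixels inside each box are blurred before the frame is stored or displayed. Here is a minimal NumPy sketch of the second step; the function name and the assumption that boxes arrive from some upstream detector (as Face ID presumably could provide) are illustrative, not anything from the job posting.

```python
import numpy as np

def blur_face_region(frame, box, kernel=9):
    """Box-blur one rectangular region of a video frame in place.

    frame  -- H x W x 3 uint8 array (one camera frame)
    box    -- (x, y, w, h) bounding box from a hypothetical upstream
              face detector (Apple's actual pipeline is not public)
    kernel -- side length of the square averaging window (odd number)
    """
    x, y, w, h = box
    region = frame[y:y + h, x:x + w].astype(float)
    pad = kernel // 2
    # Pad edges so the averaging window stays inside the array.
    padded = np.pad(region, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    blurred = np.zeros_like(region)
    # Sum every shifted copy of the region, then divide: a plain box blur.
    for dy in range(kernel):
        for dx in range(kernel):
            blurred += padded[dy:dy + h, dx:dx + w]
    frame[y:y + h, x:x + w] = (blurred / kernel ** 2).astype(np.uint8)
    return frame
```

A real implementation would run on the GPU or a dedicated image signal processor rather than in Python, but the principle is the same: only the detected regions are degraded, so the rest of the scene stays usable for AR.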
If there’s a company that can make a product popular in a heartbeat, it’s Apple, so whatever it’s cooking up in Cupertino is poised to make an impact once it’s announced.