Another intriguing Apple job posting shows glimpses of what the company is working on
Apple’s Research and Development department works on multiple projects at any given time, some less secretive than others, like the rumored Apple car and the AR glasses. Since Apple doesn’t officially confirm any of these projects, information about them is gathered by looking at patents, company and talent acquisitions, and so on. While an earlier Apple job posting was clearly geared toward AR development, this more recent one leaves more to the imagination. The company is looking to hire a Senior Systems Neuroscientist who is “passionate about the study of the brain and its application to building transformative neurotechnology”. While that in itself sounds ambitious, the key qualifications set the bar even higher.
The ideal candidate must not only have extensive research experience in the fields of multi-sensory and sensorimotor integration, decision making, depth perception, and neural coding and decoding, but also be “fluent” in machine learning and in programming languages like C, Python and others. And those are just two of the eight “Key Qualifications”!
Further hints as to what this person might be involved in come from another requirement: “Hardware and software experience with multimodal data acquisition, instrumentation and interfaces”. Multimodal data means data coming from multiple sources at the same time, which in the case of Augmented Reality glasses might mean microphone, camera, 3D mapping sensors and others.
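To make the idea of multimodal data concrete, here is a minimal, hypothetical sketch in Python: three imaginary sensor streams (microphone level, camera frames, depth readings) are fused into single records by matching each camera frame with the nearest-in-time sample from the other streams. The sensor names and values are illustrative assumptions, not anything from the job posting.

```python
from dataclasses import dataclass

@dataclass
class Sample:
    t: float      # timestamp in seconds
    value: float  # sensor reading (units depend on the sensor)

def nearest(stream, t):
    """Return the sample in `stream` whose timestamp is closest to t."""
    return min(stream, key=lambda s: abs(s.t - t))

def fuse(mic, cam, depth):
    """For each camera frame, attach the nearest mic and depth samples."""
    return [
        {"t": c.t, "camera": c.value,
         "mic": nearest(mic, c.t).value,
         "depth": nearest(depth, c.t).value}
        for c in cam
    ]

# Toy streams with slightly offset timestamps, as real sensors would have
mic = [Sample(0.00, 0.2), Sample(0.05, 0.4), Sample(0.10, 0.3)]
cam = [Sample(0.03, 1.0), Sample(0.09, 2.0)]
depth = [Sample(0.02, 1.5), Sample(0.08, 1.4)]

fused = fuse(mic, cam, depth)
```

Each fused record then carries one synchronized snapshot across all sensors, which is the kind of structure a downstream model would consume.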
Google’s experiment with a wearable display, called Google Glass, wasn’t very well received and eventually got cancelled (although Google later released an Enterprise edition), but with the advancements in technology and the “Apple treatment”, a new product of that kind might be a lot more successful and widely adopted. One of the big obstacles Google faced was the privacy concern that came from the camera potentially recording things it shouldn’t be, especially when it came to people. It’s possible that Apple is looking for ways to automatically blur out faces recorded by the device, using improved Face ID algorithms and machine learning.
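As a purely illustrative sketch of the blurring step, the snippet below assumes a detector (which is the hard, ML-driven part) has already produced a face bounding box, and simply replaces that region of a toy grayscale image with its average value. The image, box coordinates, and blur method are all assumptions for illustration.

```python
def blur_region(img, x, y, w, h):
    """Replace a rectangular region with its mean value (a coarse box blur)."""
    region = [img[r][c] for r in range(y, y + h) for c in range(x, x + w)]
    mean = sum(region) // len(region)
    for r in range(y, y + h):
        for c in range(x, x + w):
            img[r][c] = mean
    return img

# Toy 4x4 grayscale image; pixel value = 10 * row + column
image = [[10 * r + c for c in range(4)] for r in range(4)]
blur_region(image, 1, 1, 2, 2)  # blur a hypothetical 2x2 "face" region
```

A real implementation would use a proper detector and a Gaussian blur, but the privacy-preserving idea, detect then overwrite before anything is stored, is the same.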
If there’s a company that can make a product popular in a heartbeat, it’s Apple, so whatever it’s cooking up in Cupertino is poised to make an impact once it’s announced.
Considering the expertise required, there are probably only a handful of suitable candidates. The job posting suggests that Apple is working on a product that will offer an experience far more advanced than anything it currently sells. It also suggests the project is still in the early stages of development, so we shouldn’t expect any ground-breaking announcements from the company just yet.