First, Pelican’s solution is much thinner than the current generation of camera modules and would allow for even thinner devices. You might not have known this, but we have arrived at a point where the thickness of the camera module is actually a huge factor, often dictating the thickness of the device as a whole.
It’s not all about form, though. Pelican’s 16-lens camera array brings a couple of functional breakthroughs in mobile photography. Having 16 separate lenses means the camera captures 16 different points of view simultaneously, so the scene is recorded from multiple perspectives at once. Pelican demonstrated how you can actually select the focus of an image after you capture it, so you don’t have to worry whether you’ve got it right or whether a small but important detail is out of focus.
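To give a rough feel for how depth data enables after-the-fact focusing, here is a deliberately simplified sketch. Real light-field refocusing (including Pelican’s) re-projects and combines the sub-images from all 16 lenses; this toy version, which I’ve written only for illustration, just blurs each pixel of a 1-D “image” in proportion to how far its recorded depth sits from the chosen focal plane. All names and numbers are hypothetical.

```python
# Hypothetical sketch: depth-guided refocus on a 1-D row of pixels.
# Pixels whose depth is far from the chosen focal plane get a wider
# box blur; pixels near the focal plane stay sharp.

def refocus(pixels, depths, focus_depth, strength=1.0):
    """Blur each pixel proportionally to its distance from focus_depth."""
    out = []
    for i, (p, d) in enumerate(zip(pixels, depths)):
        radius = int(strength * abs(d - focus_depth))  # blur radius in pixels
        lo, hi = max(0, i - radius), min(len(pixels), i + radius + 1)
        window = pixels[lo:hi]
        out.append(sum(window) / len(window))          # simple box blur
    return out

pixels = [0.0, 1.0, 0.0, 1.0, 0.0]
depths = [1.0, 1.0, 1.0, 3.0, 3.0]  # meters; the last two pixels are farther away
print(refocus(pixels, depths, focus_depth=1.0))
```

Running this, the three near pixels pass through unchanged while the two far pixels come out smoothed; choosing `focus_depth=3.0` instead would sharpen the far pixels and blur the near ones.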
Additionally, the 16 lenses capture a full depth map, so a single image shot on Pelican’s camera contains information about the position of each object. You can even measure the distance between two objects in a photograph shot with Pelican’s camera. Imagine the possibilities for architects or, say, interior designers wondering whether that sofa would fit in a particular space.
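As a rough sketch of how such a measurement works in principle: given a per-pixel depth map and the camera’s intrinsics (focal length and principal point), each pixel can be back-projected to a 3D point, and the real-world distance between two points is then just ordinary Euclidean distance. The function names, intrinsics, and depth values below are my own illustrative assumptions, not Pelican’s API.

```python
import math

# Hypothetical sketch: real-world distance between two image points,
# assuming a per-pixel depth map (meters) and known camera intrinsics
# (focal length f and principal point cx, cy, all in pixels).

def backproject(u, v, depth, f, cx, cy):
    """Convert pixel (u, v) at the given depth into a 3D camera-space point."""
    x = (u - cx) * depth / f
    y = (v - cy) * depth / f
    return (x, y, depth)

def distance_between(p1, p2, depth_map, f=1000.0, cx=960.0, cy=540.0):
    """Euclidean distance in meters between two image points."""
    (u1, v1), (u2, v2) = p1, p2
    a = backproject(u1, v1, depth_map[(u1, v1)], f, cx, cy)
    b = backproject(u2, v2, depth_map[(u2, v2)], f, cx, cy)
    return math.dist(a, b)

# Toy depth map: both points 2 m from the camera, 500 px apart horizontally.
depth_map = {(700, 540): 2.0, (1200, 540): 2.0}
print(distance_between((700, 540), (1200, 540), depth_map))  # → 1.0 (meters)
```

With these made-up intrinsics, 500 pixels of separation at 2 m of depth works out to exactly one meter, which is the kind of answer a sofa-measuring interior designer would want.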
The full depth map, however, has another huge use. Because the image carries full 3D information, it can easily be turned into a printable 3D model of the object you photograph. Imagine being able to just snap a picture of something, run it through a 3D printer, and then have a small figurine of whatever you shot! That’s exactly the field where Pelican hopes to find shared excitement with device manufacturers.
Pelican’s camera is also capable of capturing HDR video, using the streams from all 16 lenses and combining them into a single stream with higher dynamic range.
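To illustrate the general idea (Pelican’s actual pipeline is proprietary and certainly more sophisticated), here is one simple way simultaneous streams at different effective exposures can be merged into a frame with higher dynamic range: normalize each pixel back to scene radiance, then take a weighted average that trusts well-exposed mid-tone pixels and ignores clipped ones. Everything below is an assumed toy model.

```python
# Illustrative sketch only: merging two frames of the same scene,
# captured at different effective exposures, into one higher-dynamic-
# range frame. Pixel values are floats in [0, 1]; 1.0 means clipped.

def merge_hdr(frames, exposures):
    """frames: lists of pixel values in [0, 1]; exposures: relative gains."""
    merged = []
    for i in range(len(frames[0])):
        num = den = 0.0
        for frame, exp in zip(frames, exposures):
            p = frame[i]
            w = 1.0 - abs(2.0 * p - 1.0)  # weight peaks at mid-tones, 0 at clip
            num += w * (p / exp)          # normalize back to scene radiance
            den += w
        merged.append(num / den if den else frames[0][i] / exposures[0])
    return merged

# The same scene seen at 1x and 4x exposure; the brightest pixel
# has clipped in the 4x frame, so only the 1x frame informs it.
dark   = [0.1, 0.2, 0.5]
bright = [0.4, 0.8, 1.0]
print(merge_hdr([dark, bright], [1.0, 4.0]))
```

Note how the clipped pixel in the bright frame gets zero weight, so the merged value falls back entirely on the darker, unclipped measurement; that graceful handling of blown-out highlights is exactly what the extra streams buy you.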