LG was putting wide-angle lenses on its flagship phones before it was cool, but by now there is barely a popular brand that hasn't jumped on that bandwagon, and not only for selfie shots but in the main rear camera kits as well.
These lenses are great for group or nature photos, but wide-angle shots also capture light through the curved periphery of the lens, which returns very visible distortions, especially towards the edges of the frame.
Thus, if your face happens to be at the periphery of a group shot, it could very well look like it was painted by Picasso. Google has been addressing that on its Pixel phones by employing machine-learning algorithms to correct your discombobulated face. From the Pixel 3 camera writeup:
How did Google achieve that? Well, it seems they've been researching the subject for a while, and now we have the first paper - "Distortion-Free Wide-Angle Portraits on Camera Phones" - co-authored by Google and MIT researchers that explains how they did it.
The quote you see above is just a tiny fraction of Google's Pixel camera feature explanations, mentioned in passing in the selfie section, but there's quite a lot of painstaking research behind its algorithms.
This level of attention to detail may explain why Google does with one paltry 12MP camera on the back of the Pixels what others need multi-lens kits to do, and they are still catching up to Google's results. In that line of thought, we can't wait to see what the Pixel 4 is capable of.