Apple has dramatically improved the way Siri speaks since its release: from the robotic voice we heard when the digital assistant was first introduced to the much more pleasant, human-like voice in iOS 11.
How did Apple do it? It took deep learning and a team of smart engineers to achieve that progress, and those engineers have just published a paper about the process over at Apple.com.
"Starting in iOS 10 and continuing with new features in iOS 11, we base Siri voices on deep learning. The resulting voices are more natural, smoother, and allow Siri’s personality to shine through," Apple said.
The paper will interest researchers in the field, but even if you are not that interested in the technology, you can still listen to the audio examples that demonstrate the progress. Hit the source link right below and scroll all the way to the bottom, to "Table 1".