Google explains the secrets behind the Pixel 2's fantastic camera stabilisation.
The Google Pixel 2 has one of the highest-rated cameras in the DxO benchmark tests, and its image stabilisation is among the most impressive features. Google has now explained how the camera achieves the sort of results you might normally expect from a three-axis gimbal.
Google explains the main factors behind camera shake. Not surprisingly, they include motion blur, rolling shutter distortion, and focus breathing. Google's solution combines optical and electronic stabilisation in what it calls Fused Video Stabilisation. Integrating the two methods is not new in dedicated camcorders, having been incorporated into many of Sony's models for a good few years now, but the benefits are clear to see.
Part of the reasoning behind combining the two methods is that optical stabilisation alone has a limited range of motion; adding an electronic layer on top expands the amount of correction that can be performed. However, Google's method takes things a step further using, yes, you've guessed it, machine learning.
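To picture why the electronic layer helps, here is a minimal sketch, assuming a simplified model in which each frame can be shifted by the opposite of the measured shake, with the correction clamped to whatever crop margin is available. The offsets and margin are hypothetical illustrative values, not Google's data or method.

```python
# Toy model of electronic stabilisation: shift each frame against the
# measured shake, clamped to the crop margin around the frame.
# All numbers here are made up for illustration.

def stabilise(offsets, margin):
    """Return per-frame pixel corrections, clamped to +/- margin."""
    return [max(-margin, min(margin, -o)) for o in offsets]

shake = [3, -8, 15, -2]             # measured horizontal shake per frame (px)
print(stabilise(shake, margin=10))  # -> [-3, 8, -10, 2]
```

The third frame shows the limitation: a 15-pixel jolt exceeds the 10-pixel margin, so it can only be partially corrected, which is why a wider combined correction range is valuable.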
If there is one buzzword for 2017, it is AI! Here Google uses it to predict the camera's motion by detecting patterns in its movements, something it says is not possible with optical stabilisation alone. And it doesn't stop there: the same motion prediction is used to compensate for motion blur that still appears within a stabilised frame, and to fool the eye by leaving in just the right amount of camera movement. It's clever stuff indeed, and well worth reading the full page Google has published about it.
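The idea of "leaving in just the right amount of camera movement" can be sketched with a generic low-pass filter: smooth the measured camera path so slow, intentional pans survive while high-frequency jitter is removed, then shift each frame onto the smoothed path. This exponential moving average is a stand-in for illustration only, not Google's machine-learned predictor.

```python
# Hypothetical sketch: smooth the camera path so deliberate motion
# (panning) is kept while jitter is filtered out.

def smooth_path(path, alpha=0.2):
    """Exponential moving average of camera positions."""
    smoothed = [path[0]]
    for p in path[1:]:
        smoothed.append(alpha * p + (1 - alpha) * smoothed[-1])
    return smoothed

raw = [0, 5, -4, 6, -3, 7]     # jittery measured camera positions
virtual = smooth_path(raw)     # the calmer "virtual camera" path
# Per-frame shift that moves each frame onto the smooth path:
corrections = [v - r for v, r in zip(virtual, raw)]
```

A larger `alpha` keeps more of the original motion; a smaller one locks the shot down harder, so tuning it is effectively choosing how much movement to leave in.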
Check out the video below to see just how good it is, then head on over to Google's page to see how it is all done.
Hat tip to Engadget for bringing this to our attention.