Last month’s launch of the Huawei P30 Pro, a much-anticipated photography powerhouse, is the latest indication of how far mobile photography has come and what great cameras many of us carry in our pockets today.
The new phone sports a hybrid zoom feature that has surprised many users, along with computational photography tricks that let them shoot in low light without the need for a tripod.
It’s a big leap from the early days of the Apple iPhone in 2007, when people started snapping holiday pictures with their phones. Today, that mobile gadget is threatening the camera industry, even as more people are taking more pictures.
The outlook has never looked bleaker. According to the Camera & Imaging Products Association of Japan, total shipments of digital cameras peaked at 121 million units in 2010 but have since slumped to about 19 million in 2018.
Yet the slump isn’t all that surprising. With no big improvements in the image quality of compact cameras, and to a certain extent DSLRs and mirrorless cameras, consumers can’t be blamed for turning to their smartphones.
In mobile photography, Apple, Samsung and Huawei have clearly invested heavily in improving their cameras. Just compare Huawei’s P20 Pro from last year to this year’s P30 Pro, for example.
Another factor is convenience. People using a regular camera often need multiple steps to share photos and videos on Facebook, Instagram, Twitter and YouTube: they have to transfer the images from the camera to a smartphone before they can post them.
With more capable smartphone cameras and cheaper mobile broadband, it is clear which device users will gravitate towards for convenience. For those on the move, there’s no need to sit behind a computer to edit videos and images before posting them.
To be fair, image quality from today’s smartphones still often struggles to match that of purpose-built cameras. A small sensor has physical limitations, so smart algorithms are needed to achieve results similar to what a much bigger sensor delivers.
To achieve the type of blurred background, or bokeh, you often see from a DSLR with an f1.8 lens, a smartphone camera must first extract the subject from the background before applying filters to blur everything behind it.
As you often see on Instagram, this doesn’t always work. The cut-out can look artificial, for example. The good thing is, even this is improving.
Some of today’s phones come with a Time-of-Flight (ToF) sensor that detects depth, which helps the phone decide how much blur to add to each part of an image. The result is still an estimation, but it’s getting better.
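To make the idea concrete, here is a minimal sketch of how such depth-based blurring could work, assuming a depth map from a ToF sensor is already available as a grayscale image. The filenames, threshold and OpenCV-based approach are purely illustrative, not how any particular phone actually implements its portrait mode.

```python
# Minimal sketch of depth-based background blur.
# Assumptions: "portrait.jpg" is the photo, "depth.png" is a depth map
# (e.g. from a ToF sensor); threshold and kernel sizes are arbitrary.
import cv2
import numpy as np

image = cv2.imread("portrait.jpg")
depth = cv2.imread("depth.png", cv2.IMREAD_GRAYSCALE)
depth = cv2.resize(depth, (image.shape[1], image.shape[0]))

# Treat everything closer than the threshold as the subject.
SUBJECT_THRESHOLD = 100  # depth units, chosen only for illustration
subject_mask = (depth < SUBJECT_THRESHOLD).astype(np.float32)

# Soften the mask edge so the cut-out does not look artificial.
subject_mask = cv2.GaussianBlur(subject_mask, (21, 21), 0)
subject_mask = subject_mask[..., np.newaxis]  # broadcast over colour channels

# Blur the whole frame, then keep sharp pixels where the subject is.
blurred = cv2.GaussianBlur(image, (51, 51), 0)
bokeh = (subject_mask * image + (1 - subject_mask) * blurred).astype(np.uint8)

cv2.imwrite("portrait_bokeh.jpg", bokeh)
```

Real phone pipelines go much further, combining depth with machine-learned segmentation and simulating lens-like blur, but the basic recipe of mask, blur and blend is the same.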
A purpose-built camera still has its place in the imaging world, of course, though increasingly in niche areas. Wildlife and sports photography, for example, is just much easier with a wide range of telephoto lenses and a camera body that offers better ergonomics and focusing.
A DSLR or mirrorless camera’s buttons and switches are designed to be used intuitively when capturing fast-moving subjects. You can’t do the same on a smartphone, where the soft buttons on the screen are simply not as responsive or quick.
Autofocus is another area where smartphones still have to catch up. Mirrorless cameras such as Sony’s Alpha 7 series can now track a subject’s eye for pinpoint focus accuracy, something that is so important for wedding and studio photographers.
The main reason consumers have embraced the smartphone as the go-to photography device is that image quality has become acceptable to most. The photos certainly look good enough on social media.
As a result, a DSLR or mirrorless camera now comes across as overkill to the average consumer. Not everyone is after precise colour accuracy, quality bokeh and pin-sharp focus, especially when convenience matters more than quality.
That is not to say camera manufacturers are resting on their laurels. Nikon and Canon have now joined Sony, Fujifilm, Panasonic and Olympus on the mirrorless bandwagon.
Consumers want a camera that is lighter yet still takes excellent photos, and that’s where camera manufacturers still have an edge, at least for now.
The bad news is that the 35mm full-frame sensor in many mirrorless cameras still needs lenses of a certain size to produce excellent images. That means a mirrorless camera can end up as bulky and heavy as a DSLR once you screw on a large lens.
To be sure, DSLR and mirrorless cameras will not disappear. They might slowly become a niche market, perhaps like film cameras now.
With a smaller base of customers to appeal to, expect camera makers’ revenues to diminish. In turn, as profits are squeezed, prices of cameras and lenses will rise, making it even more expensive to own a camera and further shrinking the pool of customers. A vicious cycle begins.
Perhaps this is why camera companies are consolidating to achieve better economies of scale. They can pool resources for R&D, for starters. Already, we have seen Ricoh acquire Pentax, as well as the strategic partnership between Leica, Panasonic and Sigma to launch the L-Mount system.
Leica has also partnered with Huawei to improve smartphone cameras. That’s a way for the German camera company to broaden its income streams and delay the dreaded price increases on its already pricey cameras.
The success of the Huawei phones is down to the technology that both companies have developed for the cameras. With Leica going into computational photography to complement its famed lenses, the future looks exciting there.
So the natural question is whether camera manufacturers will jump into advanced computational photography as well. They may have the lenses worked out, but can the image be improved further with a chip correcting imperfections such as brightness and contrast at the same time?
I posed this question to Leica engineers last year, asking if such systems would be put to use in Leica’s own cameras, but a straight answer was not forthcoming.
To be fair, there have been earlier forms of automated imaging on modern cameras. Scene modes are a good example, though they still require the user to choose the right lens.
If I were to shoot a portrait, I would have to use a lens with an appropriate focal length, say 50mm, and set the aperture to f2 or even f1.8 to create the blurred background typical of such shots. That is too much work for the average consumer, and it takes time and effort to learn the basics.
Artificial intelligence (AI) on smartphones today does away with all that. By selecting the right settings automatically, it consistently delivers good images even if you only want to point and shoot.
If the compact cameras of the past few years had had such functions, their sales wouldn’t have been decimated by smartphones so easily.
Smartphones today are also upping the ante. The P30 Pro has all the lenses most users would need: a wide-angle, an ultrawide-angle and a zoom, with no need to carry three separate lenses to swap in and out.
Perhaps that sums things up better than anything we’ve seen lately. Not only are smartphones replacing basic point-and-shoot cameras, they are beginning to make higher-end models seem too troublesome to lug around.
As a footnote, the correct spelling of the Japanese word is actually boke. You don’t spell the name of the martial art karateh, after all, and the rice wine isn’t sakeh.