3D Photos: Facebook Adding Depth to Your Feed
Facebook teased its 3D Photos feature in May, and it’s now available to all members. It is exactly what it sounds like: a photo with depth. At the time there was little information beyond the name and a short video, but the team behind it has since unveiled research explaining how it works, and I have personally seen results that are quite impressive.
3D Photos: What Are They?
The teaser showed that when you scroll past a 3D photo, touch it, or tilt your phone, the perspective shifts as if you were looking through a window into a miniature diorama. It works well with ordinary photographs of people and dogs, but also with landscapes and panoramic images.
How Did the Idea Originate?
Interestingly, 3D photos did not originate as a way to improve snapshots, but as a means to democratize virtual reality creation. VR content is almost entirely synthetic, and there is no way a regular Facebook user could build a 3D model and populate a virtual space with their own content. Panorama and 360-degree imagery on Facebook is an exception to this rule, since it is often wide enough to be explored effectively in VR. But that experience isn’t much different from seeing the picture printed on butcher paper floating nearby; it’s not exactly transformational. So Facebook decided to add depth to the pictures in its news feed to make them more appealing.
How Does It Work?
3D photo technology distinguishes the foreground of an image from its background. With that data and Facebook’s custom software, photos in your News Feed move and look more dimensional as you scroll past them, right on your smartphone. The effect is similar to peering into a magic window. Wearing a headset such as the Oculus Quest, Oculus Go, or HTC Vive, you will see 3D photos in a whole new way, responding dynamically to your movement and perspective. Skeptical as I am, I was very quickly convinced by the effect. It isn’t really a 3D model so much as a convincing illusion of depth, a little window into something else.
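The illusion rests on a simple geometric fact: as the viewpoint moves, nearby layers of the scene shift more than distant ones. Here is a minimal sketch of that idea; the function name, layer names, and numbers are all illustrative, not Facebook's actual renderer.

```python
# Toy sketch of the depth-parallax illusion: each layer of a scene
# shifts by an amount inversely proportional to its depth as the
# viewpoint moves, which is what makes the photo feel dimensional.

def parallax_shift(viewpoint_offset, depth, strength=1.0):
    """Horizontal shift of a layer at `depth` when the viewer
    moves by `viewpoint_offset` (nearer layers shift more)."""
    return strength * viewpoint_offset / depth

# A scene with a person in front of trees and distant hills
# (depths in arbitrary units, purely illustrative).
layers = {"person": 1.0, "trees": 4.0, "hills": 20.0}

# Tilting the phone moves the viewpoint by 2 units to the right:
shifts = {name: parallax_shift(2.0, d) for name, d in layers.items()}
# The foreground person moves the most; the hills barely at all.
```

Rendering each layer at its own shifted position as you scroll or tilt is what produces the magic-window effect described above.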
The Earlier Version
Through careful analysis of parallax (how objects at different distances shift differently as the camera moves) combined with the phone’s motion data, a scene could be reconstructed beautifully in 3D, complete with normal maps. But inferring depth from a single camera’s rapid-fire images is both CPU-intensive and, as a technique, rather dated. Meanwhile, many of the latest phones have two lenses, like a pair of tiny eyes. At launch, making 3D photos on a phone will require dual cameras, although plans exist to make the feature more accessible.
Both cameras fire at the same time to observe parallax differences, and since the device is in the same position for both shots, the depth data is much less noisy, which means fewer number-crunching steps. When the phone takes a pair of pictures, it calculates a “depth map” from them: an image that encodes the distance of everything in the frame.
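The principle behind turning a stereo pair into distances is classic triangulation: a point's apparent shift (disparity) between the two lenses shrinks as the point gets farther away. A minimal sketch, with made-up camera parameters since phone specifics vary:

```python
# Stereo depth from disparity, the principle behind dual-camera
# depth maps: depth = focal_length * baseline / disparity.
# Both constants below are hypothetical illustrative values.

FOCAL_PX = 1000.0   # focal length in pixels (hypothetical)
BASELINE_M = 0.012  # ~12 mm gap between the two lenses (hypothetical)

def depth_from_disparity(disparity_px):
    """Distance in meters of a point whose image shifts by
    `disparity_px` pixels between the two cameras."""
    if disparity_px <= 0:
        raise ValueError("no disparity: point at infinity or a mismatch")
    return FOCAL_PX * BASELINE_M / disparity_px

# A nearby subject shows a large disparity, a distant one a small one:
near = depth_from_disparity(24.0)
far = depth_from_disparity(1.2)
```

Running this comparison for every pixel pair yields the depth map; the tiny baseline between phone lenses is why accuracy falls off quickly with distance, as the shooting tips later in this article note.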
Smartphone Giants’ Stance
An increasing number of smartphone makers are including dual-lens portrait modes in their cameras. By taking stereo pictures and comparing the two images, they can build a rough depth map, in which lighter colors usually indicate closer proximity. It is amazing how much detail they can capture with two cameras less than an inch apart. Facebook layers the color image onto that depth gradient to create a 3D model, and through the power of artificial intelligence you can now enjoy 3D photos on your computer, mobile phone, or VR device.
Google, Samsung, Huawei, and Apple all have built-in mechanisms that estimate depth this way, though so far they have mainly used it to artificially blur backgrounds. The depth map generated by that method has no scale: light yellow does not mean 10 feet and dark red does not mean 100 feet. In an image of a person taken a few feet to the left, yellow may indicate 1 foot and red 10 feet. Because the scale differs for every picture, even if you take dozens or a hundred, you can’t tell how far a given object is from the camera, which makes stitching them together much more difficult.
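To see why this matters, consider the same physical point captured in two such relative depth maps. The numbers below are invented, but they show the core problem: equal pixel values in two maps don't mean equal distances.

```python
# Why unscaled depth maps are hard to stitch: each portrait-mode map
# is only *relative*. Two captures of the same scene can encode the
# same true depths with a different unknown scale and offset.

true_depths = [1.0, 3.0, 10.0]   # meters, hypothetical ground truth

# Two hypothetical captures of the same points:
map_a = [d * 0.1 for d in true_depths]          # arbitrary scale
map_b = [d * 0.55 + 2.0 for d in true_depths]   # different scale + offset

# The same physical point gets very different values in each map,
# so raw values can't be compared across pictures without first
# solving for each image's scale and offset.
same_point_a, same_point_b = map_a[1], map_b[1]
```

This per-image ambiguity is exactly the problem the next section says Facebook's engineers set out to solve.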
Where Facebook Jumped In!
Facebook’s engineers tackled that problem. Their system captures multiple views of the environment as you move the smartphone around; an image (technically two images and a depth map) is acquired every second and added to the collection. The software reviews the depth maps along with the small camera movements reported by the phone’s motion sensors, then warps each depth map so that it lines up correctly with its neighbors. There is some complicated mathematics behind this step, developed by the researchers, which I won’t attempt to explain here.
Instant 3D Photography
Creating a depth map this way is not only smooth but also fast: about one second per image. That’s why the researchers call the tool Instant 3D Photography. The images are stitched together just as you would stitch a panorama, a process the new and improved depth maps make quicker and easier. A 3D mesh (think of a papier-mache landscape) is then generated from them.
Next, the mesh is scrutinized for obvious edges, like a railing in the foreground occluding the landscape behind it, and “ripped” along those edges. This separates objects so that they sit at their different depths, and move as if they were at those depths, when the perspective changes.
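The "ripping" idea can be sketched on a single row of depth samples: cut wherever adjacent samples jump so far apart that they must belong to different objects. The threshold and numbers here are illustrative, not from the actual system.

```python
# Toy sketch of tearing a mesh at depth discontinuities: split a
# scanline of depth samples into connected surfaces, cutting where
# neighboring depths jump by more than an (illustrative) threshold.

def tear_scanline(depths, threshold=1.0):
    """Split a row of depth samples into segments, cutting where
    neighboring depths differ by more than `threshold`."""
    segments, current = [], [depths[0]]
    for prev, cur in zip(depths, depths[1:]):
        if abs(cur - prev) > threshold:
            segments.append(current)   # rip: start a new surface
            current = []
        current.append(cur)
    segments.append(current)
    return segments

# A railing at ~1 m in front of a landscape at ~9 m:
row = [1.0, 1.1, 1.0, 9.0, 9.2, 9.1]
pieces = tear_scanline(row)   # two separate surfaces
```

Once the railing and the landscape are separate pieces of mesh, each can shift by its own parallax amount instead of being smeared into one rubbery surface.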
Even with the diorama effect, you might have guessed that the foreground would resemble little more than a paper cutout: in a straight-on shot of a person, we never see their sides or back.
“Hallucinating” is The New Perfection
Convolutional neural networks are used in the final step to hallucinate the missing portions of the image, a bit like a content-aware fill, figuring out what to paint where based on nearby content. If there are wisps of hair in the image, it won’t interrupt them, and it will even naturalize the skin tones of the people in the picture a little. So when you shift the viewpoint slightly, it appears that you are looking “around” the object; the network convincingly recreates textures while estimating how the object may be shaped.
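The real system uses a trained neural network, but the underlying idea of filling hidden pixels from nearby content can be shown with a crude toy: paint each unknown pixel from its nearest known neighbors. Everything here is an illustration of the concept, not Facebook's method.

```python
# Crude stand-in for the "hallucination" step: fill pixels hidden
# behind the foreground (None) from the nearest visible neighbors,
# a bit like a one-dimensional content-aware fill.

def fill_gaps(row):
    """Replace None entries with the average of the nearest known
    values on each side (or the single known side at the edges)."""
    out = list(row)
    for i, v in enumerate(row):
        if v is not None:
            continue
        left = next((row[j] for j in range(i - 1, -1, -1)
                     if row[j] is not None), None)
        right = next((row[j] for j in range(i + 1, len(row))
                      if row[j] is not None), None)
        known = [k for k in (left, right) if k is not None]
        out[i] = sum(known) / len(known)
    return out

# Sky pixels hidden behind a person's head get plausible sky values:
row = [200, 205, None, None, 210]
filled = fill_gaps(row)
```

A neural network does far better than this averaging, recreating texture and plausible structure, but the goal is the same: invent believable content for the parts of the scene the camera never saw.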
The result can be viewed as a diorama-like 3D photo in the News Feed or in virtual reality, responding realistically to changes of perspective. Using the feature doesn’t require anyone to learn anything or download new plug-ins. Scrolling past these photos shifts the perspective a little, which alerts people to their presence and invites easy interaction. The rendered images do have some oddities: the reconstruction isn’t always accurate, for instance, and the hallucinated content varies depending on the viewpoint.
All You Need to Know for Some 3D Snaps
You can start taking depth maps as soon as you have a phone with two rear cameras, which describes basically any modern smartphone. A dual-lens smartphone is the best way to make 3D photos, but you can also create them with other devices, and a variety of widely available apps can produce them too; that’s a topic for another day.
Once you have taken a few shots with a dual-lens smartphone, you can post them to Facebook by tapping the three dots in the upper right corner of a new post and choosing 3D photos. This may take you to the Portraits folder on your device, and you can add a caption before posting. Depending on your phone’s model, select the depth-map option in the Android camera app. Background blur can simulate depth of field, and it can also conceal artifacts caused by the 3D processing; experiment with similar photos at varying amounts of blur to see which works best for your shot.
Some Cool Hacks and Tricks
The tips below are guidance, not rules, as this is all somewhat subjective. Here are some tricks I’ve collected while shooting exotic flowers and wild chickens over the summer. First, look for static objects such as trees, flowers, and buildings; 3D photographs are also a great way to showcase miniatures, models, and toys. You can shoot people and animals too, of course, but excessive movement can ruin the shot, so ask your subject to stay still for a moment.
Avoiding Wonky Results
Low light rarely works well. If the camera cannot see part of the scene, it has to guess the depth there and produces inaccurate results.
Depth is most accurate between 18 inches and 10 feet, though depending on the camera it may reach 30 feet.
Finding that Sweet Angle
If you’re shooting a geometric shape, look for the angle that shows off the 3D effect most effectively.
Those Fine Details
In some cases, fine details, like wisps of hair or bridge cables, don’t show up well because they lack the volume needed for the depth calculations.
When photographing landscapes, aim to include something near the ground so the shot captures a little depth (anything will do!).
Keeping It Still
Don’t move! Moving subjects and shaky hands produce inaccurate depth data, since the camera takes a moment to process each shot.
Play With the Distance and Background
Foreground and background elements should be distinct and strong. As a general rule, you want your photo split evenly across two or three layers of varying distance.
Keep It All Clear
Glass and water don’t always translate well: transparent objects confuse depth cameras, just as fine geometry does.
Where Do We Stand Today?
A thorough skeptic might argue that with so many apps and filters already available to make photos look 3D, this whole process seems like a hassle. But to date, none of them has matched the level of detail and craftsmanship you get from Facebook’s 3D Photos. In the race to render more natural photos and videos, this technology seems to be taking the lead. Competitors are working on products in the same niche, though, and they are expected to give Facebook’s 3D Photos tough competition, which is good news for consumers, who should see better services and products in this area before long.
At present, you can only create 3D photos with a dual-camera device, because that is the limit of the technology, but anyone can view them. Since 3D photography is still in its infancy, so is our collective knowledge of how to capture our memories with it. Don’t worry if you don’t get it right the first time; you’ll have to try again and again to find the right balance of light, composition, and subject, but you’ll get there eventually!