
Improved Halo Body technology allows customers to take scans in tighter spaces


Amazon Halo, a membership dedicated to helping customers improve their individual health and wellness, includes a feature called Body, which helps customers accurately estimate their body fat percentage at home via personalized 3D models of themselves. Now, customer feedback has prompted an important update to the Halo Body feature.

In order to capture a scan, customers originally had to stand at least four-and-a-half feet away from their smartphone cameras to make their whole body visible. For people in smaller living spaces, this presented a challenge.
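The distance requirement follows from simple pinhole-camera geometry: the subject's visible extent has to fit within the camera's vertical field of view, so seeing less of the body means standing closer. The sketch below is purely illustrative; the height and field-of-view values are assumptions for the sake of the arithmetic, not Halo specifications.

```python
import math

def min_distance_m(visible_extent_m: float, vertical_fov_deg: float) -> float:
    """Distance at which a given vertical extent just fills the camera's
    vertical field of view (pinhole-camera model, subject centered)."""
    return (visible_extent_m / 2) / math.tan(math.radians(vertical_fov_deg) / 2)

# Illustrative assumptions, not Halo specifications: a 1.75 m person and a
# ~65 degree vertical field of view for a phone front camera in portrait.
full_body = min_distance_m(1.75, 65.0)         # head to toe visible
knees_up = min_distance_m(1.75 * 0.75, 65.0)   # roughly head to knee

print(f"full body: {full_body:.2f} m (~{full_body * 3.28:.1f} ft)")
print(f"knees up:  {knees_up:.2f} m (~{knees_up * 3.28:.1f} ft)")
# -> roughly 4.5 ft versus 3.4 ft under these assumptions, consistent
#    with being able to stand one to two feet closer
```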

Acting on that feedback, the Halo team set out to solve the problem. Their goal: provide the same functionality, with the same level of clinical accuracy, while requiring only the minimum body visibility necessary to measure body composition. The effort resulted in improved computer vision algorithms that work even when the body is visible only from the knees up.

Initially, this might seem like a simple problem. After all, most body fat that pertains to long-term health outcomes is located near the core of the body, below the neck and above the knees, noted Prakash Ramu, an Amazon Halo senior manager of applied science. So, it makes sense for the scan to concentrate on that region of the body, rather than the body as a whole.


In reality, enabling this experience was a complex process that required many innovations. The chief issue the team contended with was that all of the machine learning models involved in the Body feature had been trained on head-to-toe photos.

“To make this work, we had to update almost every component or every module in our processing pipeline, from how images are captured on the app, and how guidance is provided to the customers, all the way up to how the images are analyzed in the cloud,” said Brandon Smith, a senior applied scientist in computer vision and machine learning on the team.

Smith also noted that there is much more ambiguity in images where the body is only partially visible. A person photographed from head to knee, for example, might be misinterpreted as having much shorter lower limbs. “This is the main thing that we had to do: to train the machine learning models to be able to handle partial visibility,” said Smith.

Generating realistic synthetic images to retrain the model

Typically, it would be necessary to collect and annotate a large number of new data points — in this case, photos of bodies from head to knee — to retrain these models. This would be a lengthy, expensive process.

“Fortunately, we don’t have to capture brand new training data to make this work,” said Smith. “We can synthetically produce cropped or occluded body images. We can make use of all our existing training data, but we can add lots of synthetic cropping and occlusion augmentations so that the different models learn how to deal with those conditions.”
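Amazon hasn't published its augmentation code, but the idea Smith describes can be sketched in a few lines: take an existing head-to-toe training image, crop away part of the lower body, and paint in random occluders. The function below is a hypothetical minimal version; the crop fraction and occluder parameters are illustrative assumptions.

```python
import random
import numpy as np

def augment_partial_visibility(image: np.ndarray, max_crop_frac: float = 0.35) -> np.ndarray:
    """Simulate limited body visibility by cropping the bottom of a full-body
    photo and pasting random occluder rectangles over it."""
    h, w, _ = image.shape

    # Crop a random fraction off the bottom, e.g. hiding everything below the knees.
    crop_px = int(h * random.uniform(0.0, max_crop_frac))
    out = image[: h - crop_px].copy()

    # Paste a few flat rectangles to mimic occluding objects (furniture, pets).
    for _ in range(random.randint(0, 3)):
        occ_h, occ_w = random.randint(10, h // 6), random.randint(10, w // 6)
        top = random.randint(0, out.shape[0] - occ_h)
        left = random.randint(0, out.shape[1] - occ_w)
        out[top : top + occ_h, left : left + occ_w] = random.randint(0, 255)
    return out
```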

Making realistic synthetic data is a challenge unto itself. To create synthetic cropped images from the original full-body images, scientists analyzed partially visible photos of Amazon employees who volunteered as Halo trial participants. From those photos, they modeled the distribution of camera orientations, subject depths, and body visibility; sampling from that distribution yielded realistic-looking, partially visible synthetic photos, which were then used to retrain the machine learning models.
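To make such crops realistic rather than arbitrary, the cropping parameters can be drawn from a distribution fitted to real capture conditions, as the team describes. Here is a minimal sketch under the assumption that each analyzed photo yields a (camera pitch, subject depth, visible body fraction) measurement; the numbers and the Gaussian model are hypothetical.

```python
import numpy as np

# Hypothetical measurements taken from real partially visible photos:
# camera pitch (degrees), subject depth (meters), visible fraction of body height.
observed = np.array([
    [ 2.0, 1.1, 0.78],
    [-1.5, 1.3, 0.82],
    [ 4.0, 1.0, 0.74],
    [ 0.5, 1.2, 0.80],
    # ... many more measurements in practice
])

# Fit a simple joint Gaussian to the observed capture conditions.
mean = observed.mean(axis=0)
cov = np.cov(observed, rowvar=False)
rng = np.random.default_rng(seed=0)

def sample_capture_conditions(n: int) -> np.ndarray:
    """Draw (pitch, depth, visible_fraction) tuples matching the observed
    statistics, to drive realistic synthetic cropping of full-body images."""
    return rng.multivariate_normal(mean, cov, size=n)
```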

For Antonio Criminisi, a senior manager of applied science on the team, that is an important breakthrough. “Being able to exploit synthetic data means that you can generate as much of it as you want,” he explained. “And the fact that synthetically augmented data helps the system work well on real data is awesome, because it means that you can get this into the hands of the customers very, very quickly.”

Another challenge that scientists had to overcome: how to generate a full-body 3D model from photos in which parts of the body are not visible. To solve this, they trained a deep learning model on pairs of images, one showing a fully visible body and the other showing a synthetically cropped version of the same body. From these pairs, the model learned to infer the shape and appearance of body parts that are not visible in the scan images.
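In outline, this is standard supervised image-to-image training: the cropped photo is the input and the original full-body photo is the target. The sketch below assumes a PyTorch setup with a data loader yielding (cropped, full-body) pairs; the model architecture and the L1 reconstruction loss are illustrative choices, not details from the article.

```python
import torch
import torch.nn as nn

def train_epoch(model: nn.Module, loader, optimizer) -> float:
    """One epoch of paired training: predict the full-body image from its
    synthetically cropped counterpart."""
    loss_fn = nn.L1Loss()  # illustrative pixel-wise reconstruction loss
    total = 0.0
    for cropped, full in loader:   # (cropped, full-body) image pairs
        optimizer.zero_grad()
        pred = model(cropped)      # infer the hidden body parts
        loss = loss_fn(pred, full) # supervise with the real full view
        loss.backward()
        optimizer.step()
        total += loss.item()
    return total / len(loader)
```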

A seamless transition for customers

The improved Halo Body feature, now available in the Halo app, lets customers stand roughly one to two feet closer to the smartphone camera.

For customers who have enough space in their homes to take full-body photos, the experience remains unchanged: the scan still captures head-to-toe images. For those with limited space, the app automatically allows a head-to-knee scan; the customer doesn't have to actively choose one option or the other.

Customers’ body scan images are encrypted in transit to and at rest in the cloud, and only the customer has access to these images. In addition, customers control those images — they can delete each one individually or all at once in the Halo app, or opt out of cloud storage at any time so that scan images stored in the cloud will be automatically deleted.

“By reducing the friction involved in taking a scan, we are helping the customer to get to their body composition information quickly, rather than having to figure out what is the best way to take a scan,” Ramu said. He hopes that, with this update, more Halo members will be able to use the information provided by Halo to guide a healthier lifestyle.


