
iNaturalist opens up a wealth of nature data — and computer vision challenges


On a hike in the woods, you spot a colorful little bird. You’re pretty sure it’s a finch — but what kind? The iNaturalist app was made for this kind of scenario: people all over the world use it to record and identify what they’re seeing outside. Increasingly, artificial intelligence enabled by Amazon Web Services (AWS) is playing a role in classifying those observations.

iNaturalist launched about 10 years ago, evolving from a master’s project by three students at the University of California, Berkeley. Since then, the app has attracted a community of 1.5 million scientists and nature lovers who post photos of everything from bumblebees to bears.

iNaturalist, which today is a joint initiative of the California Academy of Sciences and the National Geographic Society, once relied solely on its members to identify species. Now computers are helping out.

“iNaturalist’s goal is really just to connect people with nature,” said Grant Van Horn, a research engineer at the Cornell Lab of Ornithology. Being able to name that flower or insect you see “really ups the engagement level and makes for a completely different experience,” he added.

A unique computer vision challenge

Van Horn and Oisin Mac Aodha, now an assistant professor of machine learning at the University of Edinburgh, began working with iNaturalist five years ago to solve challenges related to the app’s data. Both were at the California Institute of Technology; Van Horn was working on his PhD, and Mac Aodha was a postdoctoral researcher. They were interested in how computer vision could help accelerate and validate the identifications that humans were making on the app.

The appeal of iNaturalist, Van Horn says, was that it presented a unique challenge to the computer vision community.

If you were to build a computer model to identify finches, for example, you might scrape some images from the internet and use those to train it.

But that dataset, likely full of high-quality photos with serenely perched birds, would look quite different from the vast diversity of mostly amateur photos on iNaturalist. There, a hiker may have just barely managed to capture a photo as a bird is flying away, or the bird might be hard to identify against the background.

That all assumes the bird is even standing still. Swallows and swifts, Van Horn noted, rarely perch — a good birder will recognize them in flight, but how do you train a computer to do the same thing?

This is just one in a seemingly endless list of computer vision challenges related to nature.

Many species look strikingly similar. They have more than one name: the scientific one (Danaus plexippus, for example) and the common one (monarch butterfly). They can have more than one form: females of one species might look different from their male counterparts, and eggs turn into larvae, which turn into mature insects.
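One practical consequence is that a training pipeline has to collapse all of these labels — scientific name, common name, life stage — onto a single class before learning anything. A minimal sketch of that idea, using an invented alias table and made-up taxon IDs (not iNaturalist's actual schema):

```python
# Sketch: many labels, one organism. All names and IDs here are
# illustrative stand-ins, not iNaturalist's real data model.
ALIASES = {
    "danaus plexippus": "taxon:48662",   # scientific name
    "monarch butterfly": "taxon:48662",  # common name
    "monarch (larva)": "taxon:48662",    # a life stage maps to the same taxon
}

def canonical_taxon(label: str) -> str:
    """Collapse synonyms, common names, and life-stage labels to one ID."""
    return ALIASES.get(label.strip().lower(), "unknown")

print(canonical_taxon("Danaus plexippus"))   # taxon:48662
print(canonical_taxon("monarch butterfly"))  # taxon:48662
```

Without this normalization step, a model would treat the butterfly and its larva as unrelated classes and split their training examples between them.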

An image provided by the researchers illustrates the difficulty involved in identifying species from images taken in the wild.

Courtesy of Grant Van Horn and Oisin Mac Aodha

These challenges exist across millions of plant and animal species in the world. Taken from that perspective, the more than 300,000 species catalogued on the AWS-hosted iNaturalist are a fraction of what might be possible as users continue to add data.

“You could imagine a future system that can reason about all these things at, effectively, an unprecedented level of ability,” Mac Aodha said, “because there’s no person that’s going to be able to tell you which of X million different things this one picture could be.”

New machine learning competitions

In 2017, Van Horn and Mac Aodha began hosting competitions with iNaturalist data at the annual Conference on Computer Vision and Pattern Recognition (CVPR). Part of the conference’s Workshop on Fine-Grained Visual Categorization, the competitions present a dataset and then rank entries on their accuracy in classifying it. The winning team is the one that generates the lowest error rate.

In the beginning, just the basic taxonomy of iNaturalist’s data posed a learning curve for Van Horn and Mac Aodha. “This was not obvious to us: there’s no one taxonomic authority in the world,” Van Horn said.

They spent considerable time early on learning to work with the taxonomy, clean up the data, and assemble a dataset comprising 859,000 images for the first competition. In the second year, they featured a dataset with more of a long-tailed distribution, meaning there were many species that had relatively few associated images. In 2019, the dataset was reduced to 268,243 images of highly similar categories captured in a wide variety of situations.
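The long-tailed shape of such a dataset is easy to simulate. The sketch below uses synthetic Zipf-like counts — not the real competition numbers — to show how sharply per-species image counts fall off:

```python
# Sketch of a long-tailed label distribution: a few species have thousands
# of photos, most have only a handful. Counts are synthetic (Zipf-like),
# purely for illustration.
n_species = 1000
counts = [int(5000 / rank) for rank in range(1, n_species + 1)]

rare_species = sum(1 for c in counts if c < 100)
print(f"{rare_species} of {n_species} species have fewer than 100 images")
```

A classifier trained naively on such data will favor the well-photographed head of the distribution, which is why long-tailed benchmarks are a distinct research challenge.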

After a break last year, the main iNat competition is back and bigger, with a training dataset of 2.7 million images representing 10,000 species. The image above is from an earlier iNat competition dataset.

Courtesy of Grant Van Horn and Oisin Mac Aodha

The iNat Challenge 2021, which began March 8, ends on May 28.

“It’s not like we’re trying to throw in categories just to make this thing sound big,” Van Horn said. “It is big. And it will just continue to get bigger as the years progress.”

This year’s larger dataset could encourage teams to explore a recent trend in the machine learning field toward unsupervised learning, where a computer model can learn from the data without labels, or predefined “answers,” by seeking patterns within the information.

“We have quite a lot of images for each of these 10,000 categories,” Mac Aodha said. “We’re hoping that this will open up some interesting avenues for people who are exploring the self-supervised question in the context of this naturalistic, real-world task.”
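As a rough illustration of the contrastive flavor of self-supervised learning, the sketch below computes an NT-Xent-style loss over toy embedding vectors — no real images or networks, just the objective: pull two views of the same image together, push views of different images apart. The vectors and batch are invented for illustration.

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def contrastive_loss(views_a, views_b, t=0.5):
    """NT-Xent-style loss: views_a[i] and views_b[i] are two augmentations
    of the same image; every other view in the batch is a negative."""
    n = len(views_a)
    total = 0.0
    for i in range(n):
        pos = math.exp(cosine(views_a[i], views_b[i]) / t)
        denom = sum(math.exp(cosine(views_a[i], views_b[j]) / t) for j in range(n))
        total += -math.log(pos / denom)
    return total / n

# Matched pairs should score a lower loss than mismatched ones.
aligned  = contrastive_loss([[1.0, 0.0], [0.0, 1.0]], [[1.0, 0.0], [0.0, 1.0]])
shuffled = contrastive_loss([[1.0, 0.0], [0.0, 1.0]], [[0.0, 1.0], [1.0, 0.0]])
print(aligned < shuffled)  # True
```

The key property is that no species labels appear anywhere in the objective — the supervision signal comes entirely from pairing two views of the same photo.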

Each competition entry must provide one predicted classification for every image in the dataset. An error rate of 5% on this year’s dataset would be “amazing,” Van Horn said, adding that one team had already achieved an 8.67% error rate by late March.
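Scoring an entry comes down to a simple top-1 error rate: the fraction of images whose single predicted species does not match the ground truth. A minimal sketch, with made-up bird labels:

```python
# Sketch: scoring a competition entry -- one predicted species per image,
# ranked by top-1 error rate. Labels below are illustrative stand-ins.
def error_rate(predictions, ground_truth):
    """Fraction of images classified incorrectly."""
    wrong = sum(p != t for p, t in zip(predictions, ground_truth))
    return wrong / len(ground_truth)

preds = ["house finch", "purple finch", "barn swallow", "chimney swift"]
truth = ["house finch", "house finch",  "barn swallow", "chimney swift"]
print(error_rate(preds, truth))  # 0.25 -- one of four images misclassified
```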

A move to Open Data

The ability to classify large groups of images opens up the potential to answer a wide range of scientific questions about habitat, behavior, and variations within a species. For example, iNaturalist users have documented alligator lizards’ jaw-clamping mating rituals in Los Angeles, where the amount of private property makes traditional wildlife studies impossible.

With this type of insight in mind, Mac Aodha and Van Horn have created a new dataset of natural world tasks (NeWT) that moves beyond the question of species classification and explores concepts related to behavior and attributes that are also exhibited in these photographs.

This work appears at the CVPR conference this year, and a follow-up competition is being planned that will challenge entrants to produce models that generalize to these alternative questions.

So far, winning entries in the CVPR competitions haven’t been deployed by iNaturalist itself, because there are performance tradeoffs between code that generates the fewest errors and code that is efficient enough to run on smartphones. But the competition datasets, Mac Aodha said, have found widespread use in the computer vision and machine learning literature, generating some 300 citations over the last few years.


The competitions are hosted on Kaggle, a machine learning and data science platform that draws a wide variety of entrants beyond the iNaturalist community. The 2019 competition drew 213 teams from around the world, and the winners were based in China.

In order for the competition to be fair, an entrant must be able to access and work with the thousands or millions of images in a dataset, no matter where they are in the world. The competitions, and now the iNaturalist app itself, are part of Open Data on AWS, which “makes accessing the data insanely easy and very convenient,” Van Horn said.
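For a sense of what that convenience means in practice: objects in a public Open Data bucket can be fetched over plain HTTPS with no AWS account at all. The sketch below builds such a URL; the bucket name, region, and object key are assumptions for illustration, not documented paths:

```python
# Sketch: building an unauthenticated HTTPS URL for an object in a public
# AWS Open Data bucket. Bucket, region, and key are illustrative assumptions.
BUCKET = "inaturalist-open-data"  # assumed bucket name
REGION = "us-east-1"              # assumed region

def public_url(key: str) -> str:
    """Return the anonymous-read, virtual-hosted-style URL for an object."""
    return f"https://{BUCKET}.s3.{REGION}.amazonaws.com/{key}"

print(public_url("photos/example/original.jpg"))  # hypothetical key
```

Because no credentials or signed requests are involved, a competitor anywhere in the world can pull the same bytes the same way — which is exactly the fairness property the competitions need.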

In 2020, iNaturalist received an Amazon Machine Learning Research Award, which provides unrestricted cash funds and AWS promotional credits to academics to advance the frontiers of machine learning. That helped cover costs for iNaturalist to continue storing data on AWS as it implemented machine learning classification. In March, the app moved to the Registry of Open Data on AWS, which ensures iNaturalist’s vast collection of observations — some 60 million — will remain freely accessible to anyone who wants access.

“iNaturalist is doing really important work to bring scientists and everyday citizens together to create a community and drive awareness on biodiversity and environmental sciences,” said An Luo, senior technical program manager leading the Amazon Research Awards program. “We are very excited that AWS is empowering them to serve more people as well as conduct advanced machine learning research using the AWS Open Data platform and AWS machine learning services such as Amazon SageMaker.”

Today, iNaturalist has gone from being entirely people-powered to regularly providing machine-generated identifications that are only just beginning to reveal new potential research paths.

“It’s important for us that this data lasts and is accessible for a long time, not just for the duration of the competitions,” Mac Aodha said. “Having a stable home for these datasets is a really valuable thing.”


