
KDD: Where linear methods still have their place


The Association for Computing Machinery’s annual conference on Knowledge Discovery and Data Mining (KDD) begins next week, marking 23 years since Christos Faloutsos, an Amazon Scholar and a professor of computer science at Carnegie Mellon, first attended it.

Faloutsos won the conference’s Innovation Award in 2010. This year, he’s a coauthor on three of the Amazon papers accepted to the conference.

Christos Faloutsos, an Amazon Scholar and professor of computer science at Carnegie Mellon University

Much has changed since Faloutsos’s first KDD, in 1997. “Back then the hot topic was association rules: people who buy bread also buy milk — or diapers and beer, a paradox,” Faloutsos says. “Later on, the hot topic was support vector machines. Then it was data science, big data, how to integrate machine learning with Hadoop — in 2005, 2010, Hadoop was the tool of choice. And of course, the last several years it’s been deep learning and neural networks. It has changed a lot, but the goal is still the same: how to find patterns in large amounts of data.”

In some subfields of computer science, the deep-learning revolution has meant that subject matter expertise is less important than it used to be: system designers can trust that neural networks themselves will learn which features of input data are relevant to a computational task. But Faloutsos says that in the field of knowledge discovery, that’s not always the case.

“With a lot of samples, this might be possible,” Faloutsos says. “If you have a billion dog photographs and a billion cat photographs, then eventually the deep-learning network will learn. With few samples, we still have to do very careful feature selection. Will customer ‘Smith’ buy shoes, or will patient ‘Johnson’ develop some disease? For such cases, we need to think about what features we should give: Should we just give the amount that Smith spent the past few times? Should we use the logarithm of the amount? Do we normalize the amount, be it zero-mean or standard deviation or something else? Similarly, for patient Johnson: What are the right features for the patient? Do we use body height, body weight, blood pressure? Feature extraction is the tough part.”
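To make those choices concrete, here is a minimal Python sketch (the spend amounts are invented) comparing three ways of encoding a customer’s purchase history as a feature:

```python
import numpy as np

# Hypothetical purchase amounts from a customer's last few orders.
amounts = np.array([12.0, 250.0, 40.0, 3200.0, 18.0])

# Option 1: the raw amounts, dominated by the occasional large purchase.
raw = amounts

# Option 2: a log transform, which compresses heavy-tailed spending.
log_amount = np.log1p(amounts)

# Option 3: z-score normalization (zero mean, unit standard deviation).
z_score = (amounts - amounts.mean()) / amounts.std()

print(raw, log_amount, z_score, sep="\n")
```

Which encoding works best depends on the downstream model and the data, which is exactly Faloutsos’s point: with few samples, these decisions cannot be delegated to the network.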

Indeed, in his own work, Faloutsos rarely uses neural networks. Much of his research focuses on traditional — frequently linear — knowledge discovery methods.

“The deep network will always do better because it includes linear methods as a special case,” Faloutsos says. That is, any linear method can be encoded in the parameters of a neural network; so if a neural network learns to exploit nonlinearities, it’s probably because they improve performance.
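One quick way to see the “special case” claim: a single dense layer with an identity activation is exactly a linear model, so ordinary least squares recovers the same weights such a layer would converge to under gradient training. The data below is synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))            # features
w_true = np.array([2.0, -1.0, 0.5])
y = X @ w_true + 0.1 * rng.normal(size=100)

# A one-layer "network" with identity activation computes y_hat = X @ w + b,
# which is plain linear regression. Solving it in closed form by least
# squares gives the weights such a layer would learn.
Xb = np.hstack([X, np.ones((100, 1))])   # append a bias column
w_hat, *_ = np.linalg.lstsq(Xb, y, rcond=None)
print(w_hat)  # approx [2.0, -1.0, 0.5, 0.0]
```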

Explainability and speed

Still, linear methods have two advantages that in some circumstances can make up for a possible drop in accuracy: explainability and speed. Those are also two attributes that are important to many knowledge discovery applications at Amazon.

“For some applications, explainability is mandatory,” Faloutsos says. “You cannot say, ‘I’ll do open-heart surgery because the neural network said so.’ You have to have a very good reason.”

One of Faloutsos’s research projects is to start with linear knowledge discovery systems and then slowly add in nonlinearities, which should make the nonlinearities easier to explain. “If ten of your deep-learning units are linear, and two are nonlinear, you can figure out what these nonlinear units do,” Faloutsos explains. “For example, if you’re doing forecasting for software product sales, you could have a discontinuity when a new version is released.”
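A minimal sketch of that idea, with invented weekly sales figures and a hypothetical release week: the model stays linear except for one interpretable step feature that switches on at the release.

```python
import numpy as np

# Hypothetical weekly sales: a linear trend with a jump at week 30,
# when a new software version ships (all figures are made up).
rng = np.random.default_rng(1)
weeks = np.arange(52)
release_week = 30
sales = 100 + 2.0 * weeks + 40.0 * (weeks >= release_week) + rng.normal(0, 5, 52)

# Model: intercept + linear trend + one "nonlinear unit", a step feature
# that turns on at the release. Its fitted coefficient is directly
# readable as the size of the release bump.
X = np.column_stack([np.ones(52), weeks, (weeks >= release_week).astype(float)])
coef, *_ = np.linalg.lstsq(X, sales, rcond=None)
print(coef)  # approx [100, 2.0, 40.0]: base level, weekly trend, release bump
```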


Given the number of daily transactions at the Amazon Store, and the number of products in the Amazon catalogue, efficient computation is also essential. One of Faloutsos’s projects at Amazon, for instance, has been fraud detection, which requires rapid analysis of huge numbers of transactions to detect anomalies.

“If you have a bipartite graph, people buying products, and you have 20 people buying the same 40 products, that’s suspicious,” Faloutsos says. “Yes, everybody on Amazon buys more or less the same products. But you never buy exactly the same 40 items as exactly 20 other people. And we have a lot of algorithms that spot that, and they were very successful.”
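The sketch below is a deliberately simplified version of that idea, run on synthetic data: it groups users by their exact basket of purchases and flags large groups sharing large baskets. (Real detectors look for dense, near-identical blocks in the bipartite graph rather than exact matches.)

```python
import random
from collections import defaultdict

random.seed(0)

# Synthetic bipartite graph: 1,000 ordinary users buy small random
# baskets, while a 20-user ring buys the exact same 40 products.
products = [f"p{i}" for i in range(500)]
ring_basket = frozenset(random.sample(products, 40))

purchases = {
    f"user{i}": frozenset(random.sample(products, random.randint(3, 15)))
    for i in range(1000)
}
for i in range(20):
    purchases[f"ring{i}"] = ring_basket

# Group users by identical baskets; many users sharing one large basket
# form a suspiciously dense block in the graph.
groups = defaultdict(list)
for user, basket in purchases.items():
    groups[basket].append(user)

for basket, users in groups.items():
    if len(users) >= 10 and len(basket) >= 20:
        print(f"suspicious: {len(users)} users share the same {len(basket)} products")
```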

“Linear methods are very easy to train,” Faloutsos explains. “They are super-optimized: for SVD [singular-value decomposition, a workhorse technique in linear knowledge discovery methods], there are dissertations upon dissertations about how to do fast SVD, dense SVD, sparse-matrix SVD, you name it. There are huge speed advantages because there are super algorithms for linear methods.”
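As one illustration of the sparse-matrix case, SciPy’s `svds` computes only the top-k singular triplets of a sparse matrix without ever materializing it densely (the matrix here is random and purely illustrative):

```python
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import svds

# A sparse user-by-product matrix: hypothetical sizes, 0.1% nonzeros.
A = sparse_random(10_000, 5_000, density=0.001, format="csr", random_state=42)

# Truncated sparse SVD: only the top k singular triplets are computed,
# which is what makes linear methods cheap at this scale.
k = 10
U, s, Vt = svds(A, k=k)
print(s[::-1])  # singular values, largest first (svds returns ascending order)
```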

What he hopes his students will appreciate, Faloutsos says, is that “we want to choose the right tool for the given application.”

“When a technique works in three or four different areas, it’s a great tool to have in your toolbox,” he says. “That’s my rule of thumb for a very good technique that I would recommend to my students. If I see a method working for text, for images, for voice, then it’s a good method. Neural networks apply in all these settings; that’s why they have had this huge success. But the same argument holds for SVD, for power-laws, and for fractals.”


