Amazon and Virginia Tech announce inaugural fellowship and faculty research award recipients

Amazon and Virginia Tech today announced the inaugural class of academic fellows and faculty research award recipients as part of the Amazon-Virginia Tech Initiative for Efficient and Robust Machine Learning.

“Our inaugural cohort of fellows and faculty-led projects showcases the breadth of machine learning research happening at Virginia Tech,” said Naren Ramakrishnan, the Thomas L. Phillips Professor of Engineering and director of the Amazon-Virginia Tech Initiative. “The areas represented include federated learning, meta-learning, leakage from machine learning models, and conversational interfaces.”

The initiative, launched in March of this year, is focused on research pertaining to efficient and robust machine learning. It provides an opportunity for doctoral students in the College of Engineering who are conducting AI and ML research to apply for Amazon fellowships and supports research efforts led by Virginia Tech faculty members.

“The talent and depth of scientific knowledge at Virginia Tech is reflected in the high-quality research proposals and PhD student fellowship applications we have received,” said Prem Natarajan, vice president of Alexa AI. “I am excited about the new insights and advances in robust machine learning that will result from the work of the faculty and students who are contributing to this initiative.”

“This research will not only contribute to new algorithmic advances, but also study issues pertaining to practical and safe deployment of machine learning,” Ramakrishnan said. “We are very excited that the partnership between Amazon and Virginia Tech has enabled these projects.”

The two fellows and the five faculty members behind four research awards will receive funding to conduct research at Virginia Tech across multiple disciplines. The recipients and their areas of research follow.

Academic fellows

Virginia Tech students Qing Guo, who is pursuing a PhD in statistics, and Yi Zeng, who is pursuing a PhD in computer science, have been named academic fellows.

Qing Guo is pursuing a PhD in statistics and studying under Xinwei Deng, a professor in the department of statistics. Guo, who interned as an applied scientist with Alexa AI earlier this year, is researching nonparametric mutual information estimation with contrastive learning techniques; optimal Bayesian experimental design for both static and sequential models; meta-learning based on information-theoretic generalization theory; and reasoning for conversational search and recommendation.
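
For context on the first of these topics, contrastive approaches typically estimate mutual information through a lower bound computed from paired versus mismatched samples. The sketch below is a generic InfoNCE-style estimator, not Guo's method; the inner-product critic and the toy data are assumptions made purely for illustration.

```python
import numpy as np

def infonce_mi_lower_bound(X, Y, critic):
    """Generic InfoNCE-style lower bound on the mutual information I(X; Y).

    X, Y   : paired samples with shapes (n, d_x) and (n, d_y).
    critic : function mapping (X, Y) to an (n, n) matrix of scores,
             where entry [i, j] scores the candidate pair (x_i, y_j).
    Returns log(n) minus the contrastive cross-entropy, a lower bound
    on I(X; Y) in nats.
    """
    n = len(X)
    scores = critic(X, Y)                                  # (n, n) pairwise scores
    scores = scores - scores.max(axis=1, keepdims=True)    # numerical stability
    log_softmax = scores - np.log(np.exp(scores).sum(axis=1, keepdims=True))
    return np.log(n) + np.diag(log_softmax).mean()         # diagonal = true pairs

# Toy usage: Y is a noisy copy of X, so the true mutual information is high.
# A plain inner-product critic (an illustrative choice, not a learned network)
# is enough to recover a large lower bound here.
rng = np.random.default_rng(0)
X = rng.normal(size=(512, 4))
Y = X + 0.1 * rng.normal(size=(512, 4))
print("MI lower bound (nats):", infonce_mi_lower_bound(X, Y, lambda a, b: a @ b.T))
```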

Yi Zeng is studying under Ruoxi Jia, assistant professor of electrical and computer engineering, while pursuing a PhD in computer science. Zeng’s research entails assessing the potential risks that arise as AI is increasingly used to support essential societal tasks, such as health care, business activities, financial services, and scientific research, and developing practical and effective countermeasures for the safe deployment of AI.

Faculty research award recipients

The Virginia Tech faculty research award recipients are Peng Gao, assistant professor of computer science; Ruoxi Jia, assistant professor of electrical and computer engineering; Yalin Sagduyu, research professor in the Intelligent Systems Division; Ismini Lourentzou, assistant professor of computer science; and Walid Saad, professor of electrical and computer engineering.

Peng Gao, assistant professor of computer science; and Ruoxi Jia, assistant professor of electrical and computer engineering, “Platform-Agnostic Privacy Leakage Monitoring for Machine Learning Models”

“Machine learning (ML) models can expose private information from their training data when confronted with privacy attacks. Despite the pressing need for defenses, existing approaches have mostly focused on increasing the robustness of ML models by modifying the model training or prediction processes; such modifications require the cooperation of the underlying AI platform and are therefore platform-dependent. Furthermore, how to continuously monitor and detect privacy leakage in real time remains an important, unexplored problem. In this project, we seek to enable real-time, platform-agnostic privacy leakage monitoring and detection for black-box ML models. We will first systematically assess the privacy risks that arise from providing black-box access to ML models. We will then propose new platform-agnostic privacy leakage detection methods that identify self-similar, low-utility model queries. We will finally propose a stream-based system architecture that enables real-time privacy leakage monitoring and detection.”
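
To make the “self-similar, low-utility model queries” signal concrete, the sketch below shows one hypothetical way a monitor could watch only the query stream sent to a black-box model and flag bursts of near-duplicate queries, a pattern typical of probing attacks. The class name, windowing scheme, and cosine-similarity threshold are all assumptions for illustration, not the detection method the project will develop.

```python
import numpy as np
from collections import deque

class QueryMonitor:
    """Hypothetical platform-agnostic monitor: it sees only the stream of
    queries sent to a black-box ML model and flags windows of queries that
    are unusually similar to one another, extracting little legitimate
    utility per query."""

    def __init__(self, window=50, sim_threshold=0.9):
        self.window = deque(maxlen=window)
        self.sim_threshold = sim_threshold

    def observe(self, query_vec):
        """Record one query (as a feature vector); return True when the
        current window looks like a self-similar, low-utility burst."""
        self.window.append(np.asarray(query_vec, dtype=float))
        if len(self.window) < self.window.maxlen:
            return False
        Q = np.stack(self.window)
        Q = Q / np.linalg.norm(Q, axis=1, keepdims=True)    # unit-normalize rows
        sims = Q @ Q.T                                      # pairwise cosine similarity
        n = len(Q)
        mean_sim = (sims.sum() - n) / (n * (n - 1))         # mean off-diagonal similarity
        return mean_sim > self.sim_threshold

# Toy usage: diverse organic traffic is not flagged; a burst of near-duplicate
# probes around a single input is.
rng = np.random.default_rng(1)
monitor = QueryMonitor(window=50, sim_threshold=0.9)
flagged_normal = any(monitor.observe(rng.normal(size=16)) for _ in range(100))
base = rng.normal(size=16)
flagged_probe = any(monitor.observe(base + 0.01 * rng.normal(size=16)) for _ in range(100))
print("diverse traffic flagged:", flagged_normal, "| probing burst flagged:", flagged_probe)
```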

Ruoxi Jia, assistant professor of electrical and computer engineering; and Yalin Sagduyu, research professor in the Intelligent Systems Division, “FEDGUARD: Safeguard Federated Learning Systems against Backdoor Attacks”

“Rapid developments in machine learning have compelled organizations and individuals to rely more and more on data to solve inference and decision problems. To ease the privacy concerns of data owners, researchers and practitioners have been advocating a new learning paradigm: federated learning. Under this framework, the central learner trains a model by communicating with distributed users while the training data stays stored locally with those users. While opening up a world of new opportunities for training machine learning models without compromising data privacy, federated learning faces significant challenges in maintaining security due to the unreliability of the distributed users. Successful completion of the project will provide key enabling technologies for secure federated learning and accelerate its adoption in security-sensitive applications such as digital assistant systems.”
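
The training loop described above, a central learner averaging updates from clients whose data stays local, is commonly instantiated as federated averaging. The following minimal NumPy sketch is a generic illustration of that loop, not FEDGUARD itself; the toy logistic-regression model, function names, and uniform client weighting are assumptions made for this example.

```python
import numpy as np

def local_update(global_weights, X, y, lr=0.1, epochs=5):
    """One client's local step: plain logistic-regression gradient descent
    on data that never leaves the client (toy model for illustration)."""
    w = global_weights.copy()
    for _ in range(epochs):
        preds = 1.0 / (1.0 + np.exp(-X @ w))     # sigmoid predictions
        grad = X.T @ (preds - y) / len(y)        # gradient of the log loss
        w -= lr * grad
    return w

def federated_averaging(global_weights, clients, rounds=20):
    """Central learner: broadcast the model, collect locally trained
    weights, and average them (uniform client weighting assumed)."""
    w = global_weights
    for _ in range(rounds):
        local_ws = [local_update(w, X, y) for X, y in clients]
        w = np.mean(local_ws, axis=0)            # server-side aggregation
    return w

# Toy usage: three clients hold private shards of the same synthetic task.
rng = np.random.default_rng(0)
true_w = np.array([1.5, -2.0, 0.5])
clients = []
for _ in range(3):
    X = rng.normal(size=(200, 3))
    y = (X @ true_w > 0).astype(float)
    clients.append((X, y))

w = federated_averaging(np.zeros(3), clients)
print("recovered weight direction:", np.round(w / np.linalg.norm(w), 2))
```

The aggregation step is also where the backdoor threat the project targets enters: a compromised client can return poisoned local weights, so the server needs a way to vet or robustly combine contributions.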

Ismini Lourentzou, assistant professor of computer science, “Toward Unified Multimodal Conversational Embodied Agents”

“The research community has shown increasing interest in designing intelligent agents that assist humans in accomplishing tasks. To do so, agents must be able to perceive the environment, recognize objects, understand natural language, and interactively ask and respond to questions. Despite recent progress on related vision-language tasks and benchmarks, most prior work has focused on building agents that follow instructions rather than endowing agents with the ability to ask questions to actively resolve ambiguities that arise naturally in real-world tasks. Moreover, current conversational embodied agents lack the understanding of social interactions that is necessary for human-agent collaboration. Finally, due to limited knowledge transfer across tasks, generalization to unobserved contexts and scenes remains a challenge. To address these shortcomings, the objective of this proposal is to design embodied agents that know when and what questions to ask in order to adaptively request assistance from humans, learn to perform multiple tasks simultaneously by effectively capturing the underlying skills and knowledge shared across embodied tasks, and adapt to uncertain human behaviors. The outcome will be a general-purpose embodied agent that can understand instructions, interact with humans and predict human beliefs, and reason to complete a broad range of tasks.”

Walid Saad, professor of electrical and computer engineering, “Green, Efficient, and Scalable Federated Learning over Resource-Constrained Devices and Systems”

“Federated learning (FL) is a promising approach for distributed inference over the Internet of Things (IoT). However, prior FL work is limited by the assumption that IoT devices and wireless systems (e.g., 5G) have abundant resources (e.g., computing, memory, energy, and communication) to run complex FL algorithms, which is impractical for real-world, resource-constrained devices and networks. The goal of this research is to overcome this challenge by designing green, efficient, and scalable FL algorithms over resource-constrained devices and wireless systems while promoting the paradigm of computing, communication, and learning system co-design. To this end, this research advances techniques from machine learning, wireless communications, game theory, and mean-field theory to yield three innovations: 1) rigorous analysis of the joint computing, communication, and learning performance tradeoffs (e.g., among energy efficiency, learning accuracy and efficiency, convergence time, and others) as a function of the constrained system resources; 2) optimal design of the joint learning, computing, and communication system architecture and configuration for balancing the performance tradeoffs and enabling efficient and green FL; and 3) novel approaches for scaling the system to millions of devices. This research has tangible practical applications for all products that rely on FL over real-world wireless systems and resource-constrained devices.”


