
Amazon and UCLA announce 2023 Science Hub awards


The Science Hub for Humanity and Artificial Intelligence at UCLA has announced three gift-funded awards and one sponsored project, recognizing researchers who are studying the societal impact of artificial intelligence (AI).

Launched in October 2021, the Science Hub supports projects that explore how AI can help solve humanity’s most pressing challenges while addressing critical issues of bias, fairness, accountability, and responsible AI. The hub seeks to foster collaborations between Amazon scientists and academic researchers across disciplines, including computer science, electrical and computer engineering, and mechanical and aerospace engineering.

Funded by Amazon and housed at the UCLA Samueli School of Engineering, the Science Hub supports a range of research projects and doctoral fellowships. In May 2022, Amazon and UCLA announced the recipients of the hub’s inaugural set of awards, which focused on topics that ranged from computational neuroscience and children’s automatic speech recognition to human-robot collaboration and privacy-preserving machine learning.

The project investigators and the respective projects being supported are as follows:

Kai-Wei Chang, associate professor and Amazon Scholar, and Nanyun (Violet) Peng, assistant professor and Amazon Visiting Academic, department of computer science: “Contextualized document understanding: Learning to comprehend documents through relevant information”

“Documents, such as receipts, tax forms, and resumes, are critical to communication between businesses and individuals,” Chang and Peng write. “However, processing them is tedious, time-consuming, and error-prone for clerks. Automatically extracting information from scanned documents with an AI system is therefore a valuable solution. The variability in document layouts, however, presents challenges for AI in understanding documents.

“In this project, we explore the potential of using contextual information to improve AI’s ability to process, interpret, and extract information from documents,” they continue. “We propose a novel multi-modal foundation model based on denoising sequence-to-sequence pre-training and investigate how contextual information, such as document type, purpose, and filling instructions, can be leveraged to understand documents.”
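The denoising sequence-to-sequence pre-training the authors mention can be illustrated with a toy span-corruption function in the style of T5: a contiguous span of input tokens is replaced by a sentinel, and the model learns to regenerate the hidden span. The sentinel name and the fixed span below are illustrative assumptions, not details of the proposed model.

```python
def corrupt_span(tokens, start, length, sentinel="<extra_id_0>"):
    """Replace one contiguous span of tokens with a sentinel token.

    Returns (corrupted_input, denoising_target): the model sees the
    corrupted sequence and must reconstruct the masked-out span.
    """
    corrupted = tokens[:start] + [sentinel] + tokens[start + length:]
    target = [sentinel] + tokens[start:start + length]
    return corrupted, target
```

For example, masking the span “amount due” in a receipt line yields an input the model must complete from surrounding document context.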

Cho-Jui Hsieh, associate professor, department of computer science: “Making large language models small and efficient”

“Large language models (LLMs) have demonstrated exceptional capabilities across a diverse range of tasks. However, these models come with high computational and memory costs,” Hsieh writes. “The open-sourced T5 model contains 770 million parameters, and state-of-the-art models such as GPT and PaLM usually have hundreds of billions of parameters. The gigantic model size also results in considerable computational overhead during inference, making it challenging to deploy language models in real-time applications, not to mention edge devices with limited capacity. This proposal aims to develop a series of compression algorithms to make large language models small and efficient.

“We will introduce a new family of data-aware compression algorithms, which take into account both the structure and semantics of languages,” he continues. “For example, the importance of the words in a text can vary greatly, creating an opportunity to filter out unimportant tokens in the model. Further, texts often have a strong low-rank or clustering structure, presenting an opportunity to enhance existing compression methods. Based on this novel concept, we will improve existing compression methods by leveraging language structure and develop a new scheme for speeding up inference.”
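The token-filtering idea can be sketched as a top-k selection over per-token importance scores, with original order preserved. In a real compressor the scores might come from attention weights or gradients; both the scoring and the keep ratio below are illustrative assumptions.

```python
def prune_tokens(tokens, importance, keep_ratio=0.5):
    """Keep the most important fraction of tokens, preserving order.

    tokens: list of token strings
    importance: one score per token (higher = more important)
    keep_ratio: fraction of tokens to retain (at least one is kept)
    """
    k = max(1, int(len(tokens) * keep_ratio))
    # Indices of the top-k tokens by score, then restored to input order.
    top = sorted(range(len(tokens)), key=lambda i: importance[i], reverse=True)[:k]
    return [tokens[i] for i in sorted(top)]
```

Dropping low-importance tokens shortens the sequence the model must process, which directly reduces inference cost for attention-based architectures.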

Chenfanfu Jiang, associate professor, department of mathematics: “Differentiable physics augmented neural radiance fields for real-to-sim and manufacture-ready 3D garment reconstruction”

“The core challenge is to digitally reconstruct garments in a way that not only accurately models their 3D shape but also predicts how they move and can be manufactured,” Jiang writes. “Traditional methods capture shape but overlook the fabric’s material properties and sewing patterns, essential for realistic simulation and production. Addressing this gap has broad implications — from faster and waste-reducing design processes in the fashion industry to enhancing realism in virtual worlds like the metaverse.

“We’re integrating physics-aware machine learning models with existing 3D geometry techniques,” he continues. “The aim is to simultaneously recover the 2D sewing patterns and material parameters from images or videos of the garment. This allows for both accurate virtual simulation and real-world manufacturing.”
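The real-to-sim idea of recovering physical parameters from observations can be sketched in miniature: run a simulation forward, compare it with what was observed, and follow the gradient of the error back to the parameter. Here the “garment” is a single Hooke's-law spring with unknown stiffness, a deliberately simple stand-in for a full differentiable cloth simulator.

```python
def fit_stiffness(force, observed_disp, k0=1.0, lr=0.5, steps=500):
    """Recover a spring stiffness k so the simulated displacement
    force / k matches an observed displacement.

    Gradient descent on the squared error; a one-parameter toy version
    of fitting material parameters through a differentiable simulator.
    """
    k = k0
    for _ in range(steps):
        sim = force / k  # forward "simulation": Hooke's law
        # d/dk of (force/k - observed)^2 = 2*(force/k - observed)*(-force/k**2)
        grad = 2 * (sim - observed_disp) * (-force / k ** 2)
        k -= lr * grad
    return k
```

A real pipeline would differentiate through thousands of cloth-simulation steps instead of one algebraic formula, but the optimization loop has the same shape.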

Jens Palsberg, professor, department of computer science: “Learning to prune false positives from a static program analysis”

“Static program analyses can detect security vulnerabilities and support program verification,” Palsberg writes. “If we can prune the false positives these tools sometimes produce, they will become even more useful to developers.

“Our society produces more code for more tasks than ever before, but that code is of mixed quality. Fortunately, we have tools that can discover many of the problems, and if we can make those tools more useful, we will be on a path to fixing more problems,” he continues. “Our idea is to use machine learning to prune false positives. Our goal is to reach a false-positive rate of no higher than 15–20 percent.”
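The pruning idea can be sketched as a binary classifier over features of each warning: keep warnings predicted to be true positives, discard the rest. The single feature, the toy training data, and the hand-rolled logistic regression below are illustrative assumptions, not the project's actual method.

```python
import math

def train_fp_filter(warnings, labels, lr=0.5, epochs=200):
    """Train a logistic-regression filter over static-analysis warnings.

    warnings: feature vectors, one per warning (e.g. [is_in_test_code])
    labels: 1 for a true positive, 0 for a false positive
    Returns learned weights and bias.
    """
    n = len(warnings[0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in zip(warnings, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))  # predicted P(true positive)
            g = p - y                        # gradient of log loss w.r.t. z
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def keep_warning(x, w, b, threshold=0.5):
    """Report the warning only if it looks like a true positive."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z)) >= threshold
```

In practice the features would be richer (analysis kind, code context, call-graph depth), but the shape of the solution is the same: the analysis stays sound, and the learned filter decides which of its reports reach the developer.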


