The UT Austin-Amazon Science Hub has announced the inaugural winners of two gift project awards and a doctoral graduate fellowship. The awards recognize researchers whose work fulfills the goals of the hub: to address current challenges via cutting-edge technological solutions that will benefit society at large.
The Amazon-funded collaboration, launched in April 2023 and hosted in UT Austin’s Cockrell School of Engineering, aims to promote partnership among faculty, students, and other leading scholars and foster a diverse and sustainable pipeline of research talent.
In line with the goals of the hub, this year’s award winners are conducting research in artificial intelligence, machine learning, and large language models (LLMs).
The fellowship provides selected doctoral students at UT Austin with up to one full year of funding to pursue independent research projects. The two research projects selected will be run by UT faculty principal investigators.
The winners of the awards are as follows:
Doctoral-fellowship award
Ajay Jaiswal, PhD candidate, Visual Informatics Group
Jaiswal’s research focuses on efficient and scalable learning, deep-neural-network compression, sparse neural networks, and efficient inference. He is a member of the Visual Informatics Group (VITA) at UT Austin, and his current project explores how to scale multimodal models efficiently on the server while keeping them deployable at the edge. His advisors are Ying Ding, the Bill and Lewis Suit Professor in the School of Information and a former recipient of an Amazon Research Award, and Atlas Wang, the Jack Kilby/Texas Instruments Endowed Assistant Professor in the Chandra Family Department of Electrical and Computer Engineering.
Gift project awards
“Verifying factuality of LLMs, with LLMs”
Greg Durrett, associate professor of computer science, and his team plan to build on their prior work on political fact-checking and large language models to improve the factuality of machine-written text. The team has previously critiqued the outputs of summarization models; this project’s goal is to decompose and verify the claims in paragraph-long responses. The system works in three stages, decomposition, sourcing, and verification, mimicking the process a human uses to fact-check content.
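To make the three-stage idea concrete, here is a minimal, hypothetical sketch of a decompose-source-verify pipeline. The helper functions (naive sentence splitting, keyword-overlap retrieval, and a trivial support check) are illustrative placeholders, not the team’s actual system, which would rely on LLMs at each stage.

```python
# Illustrative sketch of a decompose -> source -> verify fact-checking loop.
# All helpers below are simplified stand-ins for LLM-based components.
from dataclasses import dataclass, field


@dataclass
class Claim:
    text: str
    evidence: list = field(default_factory=list)
    supported: bool = False


def decompose(response: str) -> list:
    """Split a paragraph-long answer into atomic claims (here: naive sentence split)."""
    return [s.strip() for s in response.split(".") if s.strip()]


def source(claim: str, corpus: list) -> list:
    """Retrieve candidate evidence passages (here: simple keyword overlap)."""
    words = set(claim.lower().split())
    return [doc for doc in corpus if len(words & set(doc.lower().split())) >= 2]


def verify(claim: str, evidence: list) -> bool:
    """Judge whether the evidence supports the claim (here: any hit counts as support)."""
    return len(evidence) > 0


def fact_check(response: str, corpus: list) -> list:
    results = []
    for sentence in decompose(response):
        ev = source(sentence, corpus)
        results.append(Claim(sentence, ev, verify(sentence, ev)))
    return results


if __name__ == "__main__":
    corpus = ["The hub was launched in April 2023 at UT Austin."]
    answer = "The hub launched in April 2023. It is hosted on the moon."
    for claim in fact_check(answer, corpus):
        print(claim.supported, "-", claim.text)
```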
“TinyCLIP: Training smaller transferable vision-language models through multimodal”
The transferability of CLIP (contrastive language-image pretraining) models is crucial to many vision-language tasks. Sujay Sanghavi, associate professor of electrical and computer engineering, and his team aim to develop smaller CLIP models that remain fully transferable. The project will draw on several new algorithmic ideas to keep the smaller models effective, and it also involves creating new datasets.
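For readers unfamiliar with CLIP-style training, the sketch below shows the symmetric contrastive objective that aligns image and text embeddings, applied to a deliberately small encoder. The TinyEncoder module and the random features standing in for image and text backbones are hypothetical illustrations, not the team’s proposed method.

```python
# Minimal sketch of a CLIP-style contrastive objective with a small projection head.
# TinyEncoder and the random input features are illustrative assumptions only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TinyEncoder(nn.Module):
    """A small projection head standing in for a compact image or text encoder."""
    def __init__(self, in_dim: int, embed_dim: int = 128):
        super().__init__()
        self.proj = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(), nn.Linear(256, embed_dim))

    def forward(self, x):
        # L2-normalize so dot products act as cosine similarities.
        return F.normalize(self.proj(x), dim=-1)


def clip_contrastive_loss(img_emb, txt_emb, temperature: float = 0.07):
    """Symmetric InfoNCE loss: matched image-text pairs lie on the diagonal."""
    logits = img_emb @ txt_emb.t() / temperature
    targets = torch.arange(logits.size(0))
    return (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets)) / 2


# Toy usage: random features stand in for backbone outputs of a batch of 8 pairs.
img_feats, txt_feats = torch.randn(8, 512), torch.randn(8, 512)
img_encoder, txt_encoder = TinyEncoder(512), TinyEncoder(512)
loss = clip_contrastive_loss(img_encoder(img_feats), txt_encoder(txt_feats))
print(loss.item())
```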