From edge computing and causal reasoning to differential privacy and visual-field mapping, the top blog posts of the year display the range of scientific ...
We explored everything from the science behind the new F1 car to a look back at how the operations science team addressed a challenge of almost unimaginable ...
As a Principal Scientist within Alexa, you are a trusted part of the technical leadership. You bring ...
Most state-of-the-art computer vision models depend on supervised learning, in which labeled data is used for training. But labeling is costly, and the cost ...
Zongyi (Joe) Liu is a principal scientist in the Amazon Customer Experience and Business Trends (CXBT) organization, which evaluates the customer experience ...
In June 2022, Amazon re:MARS, the company’s in-person event that explores advancements and practical applications within machine learning, automation, ...
A quick guide to Amazon’s innovative work at the IEEE Spoken Language Technology Workshop (SLT), which begins next week: Accelerator-aware training for ...
A. While I was studying industrial design at Lund University, I had the opportunity to go to Johnson Space Center and take part in a project with NASA. The ...
Modern AI models, such as those that recognize images and speech, are highly data dependent. While some public-domain data sets are available to train such ...
In an effort to create enduring pipelines of diverse science talent and differentiated research, Amazon today announced a collaboration with Tennessee State ...
In recent years, automatic speech recognition (ASR) has moved to all-neural models. Connectionist-temporal-classification loss functions are an attractive ...
Knowledge distillation is a popular technique for compressing large machine learning models into manageable sizes, to make them suitable for low-latency ...