Recommendation Algorithms & System Designs Of YouTube, Spotify, Airbnb, Netflix And Uber
— ML Review (@ml_review) May 5, 2021
By @theinsaneapp https://t.co/lBYLQI3aEB pic.twitter.com/ChKGS2dqTe
Today we present a systematic study comparing wide and deep #NeuralNetworks through the lens of their hidden representations and outputs. Learn how similarities between model layers can inform researchers about model performance and behavior at https://t.co/2iM9HBDQjC pic.twitter.com/2CNeYNTNyR
— Google AI (@GoogleAI) May 4, 2021
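One common way to quantify such layer-to-layer similarities is centered kernel alignment (CKA). Below is a minimal sketch of linear CKA over collected activations; this is the standard formulation, though the paper's exact setup may differ:

```python
import numpy as np

def linear_cka(X, Y):
    """Linear Centered Kernel Alignment between two activation matrices.

    X: (n_examples, d1) activations from one layer.
    Y: (n_examples, d2) activations from another layer (d1 may differ from d2).
    Returns a similarity in [0, 1]; 1 means the representations match up to
    an orthogonal transform and isotropic scaling.
    """
    X = X - X.mean(axis=0, keepdims=True)  # center each feature
    Y = Y - Y.mean(axis=0, keepdims=True)
    hsic = np.linalg.norm(X.T @ Y, "fro") ** 2   # HSIC with linear kernels
    norm_x = np.linalg.norm(X.T @ X, "fro")
    norm_y = np.linalg.norm(Y.T @ Y, "fro")
    return hsic / (norm_x * norm_y)

# Toy usage: a permuted, rescaled copy of the same activations.
rng = np.random.default_rng(0)
acts_a = rng.normal(size=(512, 64))
acts_b = acts_a[:, ::-1] * 3.0
print(linear_cka(acts_a, acts_b))  # ~1.0: invariant to permutation and scaling
```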
✨📊 ...and it's ready! Proud to announce: https://t.co/FuUDjPUNCJ
— 👩💻 Paige Bailey @ 127.0.0.1 🏡 #BLM (@DynamicWebPaige) May 4, 2021
Thinking in Data is a @Code extension pack designed to help users visualize, understand, and interact with data, inspired by @LostInTangent's brilliant Thinking in Code collection. ⭐️
📁 https://t.co/YpTTiCe4jR
The talks for #MLSys 2021 are now publicly accessible on SlidesLive, free for all, as promised. Check out https://t.co/kvSo6LVaPM
— Alex Smola (@smolix) May 4, 2021
Divorce rates and income https://t.co/kADcNDdYgy pic.twitter.com/excRz3htxk
— Nathan Yau (@flowingdata) May 4, 2021
BERT memorisation and pitfalls in low-resource scenarios
— AK (@ak92501) May 4, 2021
pdf: https://t.co/XS8uUh81bG
abs: https://t.co/Adfmq5Etix
enables BERT to perform well in extremely low-resource scenarios while achieving comparable performance in non-low-resource settings pic.twitter.com/P8dD9oQe2k
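The tweet states only the outcome, not the mechanism. As a hedged illustration of one standard recipe for low-resource classification with a pre-trained encoder, here is a prototypical-network-style classifier over embeddings; everything below is an assumption for illustration, not the paper's actual method or API:

```python
import torch
import torch.nn.functional as F

def prototype_logits(support_emb, support_labels, query_emb, n_classes):
    """Prototypical classification: score queries by negative distance to
    per-class mean embeddings. Works with very few labelled examples,
    which is the low-resource setting targeted here.

    support_emb:    (n_support, d) encoder embeddings of labelled examples
    support_labels: (n_support,) integer class ids
    query_emb:      (n_query, d) embeddings to classify
    """
    prototypes = torch.stack([
        support_emb[support_labels == c].mean(dim=0) for c in range(n_classes)
    ])                                          # (n_classes, d)
    dists = torch.cdist(query_emb, prototypes)  # (n_query, n_classes)
    return -dists                               # higher = closer = more likely

# Toy usage with random stand-ins for BERT embeddings (hypothetical).
d = 768
support = torch.randn(8, d)
labels = torch.tensor([0, 0, 0, 0, 1, 1, 1, 1])
queries = torch.randn(4, d)
probs = F.softmax(prototype_logits(support, labels, queries, n_classes=2), dim=-1)
print(probs.shape)  # torch.Size([4, 2])
```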
One Model to Rule them All: Towards Zero-Shot Learning for Databases
— AK (@ak92501) May 4, 2021
pdf: https://t.co/hoNBr4qONK
abs: https://t.co/Sb9IRmIoYI
a new approach for learned database components that can support new databases without running any training query on that database pic.twitter.com/O7SnSTegGp
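A hedged sketch of the general idea such zero-shot database components rely on: featurize query plans without database-specific identifiers (no table or column names), so a model trained on some databases can transfer to unseen ones. All field and feature names below are hypothetical:

```python
import math

OPERATOR_TYPES = ["seq_scan", "index_scan", "hash_join", "sort", "aggregate"]

def featurize_node(op_type, est_rows, est_width_bytes, table_rows):
    """Database-agnostic features for one query-plan operator."""
    one_hot = [1.0 if op_type == t else 0.0 for t in OPERATOR_TYPES]
    return one_hot + [
        math.log1p(est_rows),           # scale-free cardinality estimate
        math.log1p(est_width_bytes),    # tuple width
        est_rows / max(table_rows, 1),  # selectivity, normalized per table
    ]

# Example: a sequential scan estimated to return 1% of a 1M-row table.
print(featurize_node("seq_scan", est_rows=10_000, est_width_bytes=64,
                     table_rows=1_000_000))
```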
I made a timeline for the movie, #TENET. It was hard https://t.co/u1oK5kXVVB
— Mark O. Riedl (@mark_riedl) May 3, 2021
(Image is of character timelines for just one action sequence, spoiler-free. Link contains full version with spoilers) pic.twitter.com/9t1B91839a
CoCon: Cooperative-Contrastive Learning
— AK (@ak92501) May 3, 2021
pdf: https://t.co/BxFs21DHel
abs: https://t.co/y5vwjgu3QY
github: https://t.co/n0xYs7ZksZ
a cooperative version of contrastive learning, called CoCon, for self-supervised video representation learning pic.twitter.com/35QDE7XGrN
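For context, most such methods build on an InfoNCE-style contrastive objective; below is a generic minimal version, not CoCon's exact cooperative objective, which extends the idea across multiple views:

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.1):
    """Minimal InfoNCE: each clip embedding in z1 should match its positive
    in z2 (same video, different view/augmentation) against the other clips
    in the batch as negatives.

    z1, z2: (batch, d) embeddings of two views of the same videos.
    """
    z1 = F.normalize(z1, dim=-1)
    z2 = F.normalize(z2, dim=-1)
    logits = z1 @ z2.T / temperature    # (batch, batch) similarity matrix
    targets = torch.arange(z1.size(0))  # positives lie on the diagonal
    return F.cross_entropy(logits, targets)

# Toy usage: two augmented views of 16 video clips, 128-d embeddings.
a, b = torch.randn(16, 128), torch.randn(16, 128)
print(info_nce(a, b))
```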
Some writing tips from this fascinating interview https://t.co/j5KJVI5pT6 pic.twitter.com/LaIW3bsUJ4
— Sasha Rush (@srush_nlp) May 2, 2021
It's so intriguing to see the inductive biases that self-attention is able to learn from unlabelled visual data. https://t.co/4dk61zmaI2
— hardmaru (@hardmaru) May 1, 2021
Emerging Properties in Self-Supervised Vision Transformers https://t.co/UbIaCJLi9Q
— Ankur Handa (@ankurhandos) April 30, 2021
Object segmentation emerges from ViT networks trained with self-supervision; this information is directly accessible in the self-attention modules of the last block. pic.twitter.com/KCCpIg87z9
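That claim can be made concrete: take the attention from the [CLS] token to the patch tokens in the final block and reshape it into a patch-grid map. Below is a self-contained sketch with a toy attention layer; the dimensions are illustrative, and the official DINO code exposes a similar helper on its models:

```python
import torch

def cls_attention_maps(tokens, qkv, n_heads):
    """Return per-head attention from the [CLS] token to every patch token.

    tokens: (batch, 1 + n_patches, dim) input to the last ViT block
    qkv:    the block's fused qkv projection, nn.Linear(dim, 3 * dim)
    """
    B, N, D = tokens.shape
    h = n_heads
    q, k, _ = qkv(tokens).reshape(B, N, 3, h, D // h).permute(2, 0, 3, 1, 4)
    attn = (q @ k.transpose(-2, -1)) * (D // h) ** -0.5
    attn = attn.softmax(dim=-1)  # (B, heads, N, N)
    return attn[:, :, 0, 1:]     # CLS row, patch columns only

# Toy usage: ViT-S/16-like shapes on a 224x224 image => 14x14 patch grid.
dim, heads, n_patches = 384, 6, 14 * 14
qkv = torch.nn.Linear(dim, 3 * dim)
tokens = torch.randn(1, 1 + n_patches, dim)
maps = cls_attention_maps(tokens, qkv, heads)  # (1, 6, 196)
print(maps.reshape(1, heads, 14, 14).shape)    # one 14x14 map per head
```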