this is my latest view on how recurrent units and self-attention are related to each other. beautifully summarized and explained by @SewadeOgun https://t.co/17BAHrQivA
— Kyunghyun Cho (@kchonyc) June 13, 2020
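The linked post works out the connection in detail; as a rough illustration only (not the post's own formulation), here is a minimal side-by-side sketch in PyTorch of a gated recurrent update, which processes tokens one step at a time, versus a single-head self-attention update, which lets every position read from every other position at once. All sizes and the toy sequence are hypothetical.

```python
# Illustrative sketch: recurrence vs. self-attention over the same toy sequence.
import torch
import torch.nn.functional as F

d = 8                      # hidden size (hypothetical)
x = torch.randn(5, d)      # toy sequence of 5 token vectors

# Recurrent view: the state is updated sequentially, one token per step.
gru = torch.nn.GRUCell(d, d)
h = torch.zeros(d)
for t in range(x.size(0)):
    h = gru(x[t].unsqueeze(0), h.unsqueeze(0)).squeeze(0)

# Self-attention view: each position attends to all positions in parallel.
Wq, Wk, Wv = (torch.nn.Linear(d, d, bias=False) for _ in range(3))
q, k, v = Wq(x), Wk(x), Wv(x)
attn = F.softmax(q @ k.t() / d ** 0.5, dim=-1)   # (5, 5) attention weights
out = attn @ v                                   # contextualized token vectors

print(h.shape, out.shape)
```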
Today we describe a #MachineLearning approach combining #NaturalLanguageProcessing with #ComputerVision to automatically extract data from structured documents—invoices, receipts, etc.—with the potential to streamline many business workflows. Learn more at https://t.co/ed87XgCTbn pic.twitter.com/6uA9GTVzMT
— Google AI (@GoogleAI) June 12, 2020
Nice post about the technology behind recent major improvements to Google Translate.
— Jeff Dean (@🏡) (@JeffDean) June 9, 2020
I love the animation showing how much translation quality has improved across so many languages over the past 13 yrs (most recent improvements give +5 BLEU points across all 100+ languages!) https://t.co/GZ7oopWmFr
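BLEU scores n-gram overlap between system output and reference translations, so a gain of +5 points across 100+ languages is a large jump. A minimal sketch of computing a corpus BLEU score with the sacrebleu package, on toy strings rather than the Translate evaluation data:

```python
# Toy corpus-level BLEU with sacrebleu (illustrative strings, not real data).
import sacrebleu

# System outputs and one set of references, aligned by position.
hypotheses = ["the cat sat on the mat", "there is a book on the desk"]
references = [["the cat is sitting on the mat", "there is a book on the desk"]]

bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(f"BLEU = {bleu.score:.2f}")
```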
I can think of hundreds of data augmentation strategies for CV, eg https://t.co/JsGj2fRNMa
— Reza Zadeh (@Reza_Zadeh) May 18, 2020
For NLP tasks, there's only a handful, eg https://t.co/aRQiZrh5N3
NLP data is far more brittle.
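For context, here is a minimal sketch of two of the handful of common NLP augmentations (random swap and random deletion, in the spirit of the "easy data augmentation" recipes). The brittleness point is visible even here: these simple edits can change a sentence's meaning in ways an image crop or flip rarely does.

```python
# Two simple text augmentations; both can silently alter meaning.
import random

def random_swap(tokens, n_swaps=1):
    """Swap two random token positions n_swaps times."""
    tokens = tokens.copy()
    for _ in range(n_swaps):
        i, j = random.sample(range(len(tokens)), 2)
        tokens[i], tokens[j] = tokens[j], tokens[i]
    return tokens

def random_deletion(tokens, p=0.1):
    """Drop each token independently with probability p (keep at least one)."""
    kept = [t for t in tokens if random.random() > p]
    return kept if kept else [random.choice(tokens)]

sentence = "the quick brown fox jumps over the lazy dog".split()
print(" ".join(random_swap(sentence)))
print(" ".join(random_deletion(sentence)))
```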
Does model size matter? 🤔 @jxmorris12 does an excellent job comparing @huggingface's BERT and DistilBERT. #machinelearning #deeplearning #100daysofmlcode https://t.co/Y7dHbH7yqk
— Lavanya 🦋 (@lavanyaai) May 13, 2020
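The linked report has the full accuracy and speed comparison; one axis of "model size" is easy to check yourself by counting parameters of the two pretrained checkpoints with the transformers library. A quick sketch (downloads the checkpoints on first run):

```python
# Compare parameter counts of BERT and DistilBERT.
from transformers import AutoModel

for name in ["bert-base-uncased", "distilbert-base-uncased"]:
    model = AutoModel.from_pretrained(name)
    n_params = sum(p.numel() for p in model.parameters())
    print(f"{name}: {n_params / 1e6:.1f}M parameters")
```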
A Review of Using Text to Remove Confounding from Causal Estimates: https://t.co/0mNNL8dvL7
— Judea Pearl (@yudapearl) May 10, 2020
In other words, using text as a noisy proxy for unmeasured confounders, as in https://t.co/fc47co4HOu. A survey of ongoing work and open problems.
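As a toy illustration of the idea only (not the survey's own method): derive bag-of-words features from the text and include them alongside the treatment in an outcome regression, so the text features stand in for the unmeasured confounder. Data, column meanings, and the model choice below are illustrative assumptions.

```python
# Toy sketch: adjust for text-derived features when estimating a treatment effect.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LinearRegression

texts = ["patient reports severe pain", "routine follow-up visit",
         "severe symptoms and pain", "mild symptoms, routine check"]
treatment = np.array([1, 0, 1, 0])          # hypothetical treatment indicator
outcome = np.array([3.1, 1.2, 3.4, 1.0])    # hypothetical outcome

# Bag-of-words features act as a (noisy) proxy for the unmeasured confounder.
X_text = CountVectorizer().fit_transform(texts).toarray()
X = np.column_stack([treatment, X_text])

reg = LinearRegression().fit(X, outcome)
print("adjusted treatment coefficient:", reg.coef_[0])
```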
Exploring Bayesian Optimization — A new Distill article by @ApoorvAgnihotr2 and @nipun_batra https://t.co/xDvuSjPoFU
— Distill (@distillpub) May 5, 2020
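The article explains the method interactively; for a runnable starting point, here is a minimal sketch using scikit-optimize's Gaussian-process-based optimizer on a toy 1-D objective. The objective and search bounds are made up for illustration.

```python
# Bayesian optimization of a toy 1-D objective with a GP surrogate (skopt).
from skopt import gp_minimize

def objective(x):
    # x is a list with one element because we declare one dimension below.
    return (x[0] - 2.0) ** 2 + 0.5

result = gp_minimize(objective, dimensions=[(-5.0, 5.0)],
                     n_calls=20, random_state=0)
print("best x:", result.x, "best value:", result.fun)
```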
I just stumbled upon this Colab by @wightmanr that compares ResNets to EfficientNet. It's super informative and even-handed. It's also consistent with my own experience, if you were curious: ResNets remain very competitive (still my go-to). https://t.co/gHH2nw1Ubc
— Jason #Masks4All Antic (@citnaj) May 4, 2020
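The Colab itself digs into training settings, throughput, and accuracy; a trivial starting point for your own comparison is instantiating both families with @wightmanr's timm package and checking sizes and output shapes. A hedged sketch, with model names that happen to be in timm's registry:

```python
# Instantiate a ResNet and an EfficientNet with timm and compare sizes.
import timm
import torch

for name in ["resnet50", "efficientnet_b0"]:
    model = timm.create_model(name, pretrained=False)
    n_params = sum(p.numel() for p in model.parameters())
    with torch.no_grad():
        out = model(torch.randn(1, 3, 224, 224))
    print(f"{name}: {n_params / 1e6:.1f}M params, output shape {tuple(out.shape)}")
```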
My PhD thesis "Deep Learning with Graph-Structured Representations" is now available for download: https://t.co/hyz0cnoewZ -- It covers a range of emerging topics in Deep Learning: from graph neural nets (and graph convolutions) to structure discovery (objects, relations, events) pic.twitter.com/c5nNhsHGxB
— Thomas Kipf (@thomaskipf) May 4, 2020
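The thesis covers graph convolutions in depth; as a quick refresher (the standard GCN propagation rule H' = relu(D^-1/2 (A + I) D^-1/2 H W), sketched here independently rather than taken from the thesis's code), a minimal layer in PyTorch on a toy graph:

```python
# Minimal graph convolution layer over a toy 4-node chain graph.
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, h, adj):
        a_hat = adj + torch.eye(adj.size(0))              # add self-loops
        deg_inv_sqrt = a_hat.sum(dim=1).pow(-0.5)
        a_norm = a_hat * deg_inv_sqrt.unsqueeze(1) * deg_inv_sqrt.unsqueeze(0)
        return torch.relu(a_norm @ self.linear(h))

adj = torch.tensor([[0., 1., 0., 0.],
                    [1., 0., 1., 0.],
                    [0., 1., 0., 1.],
                    [0., 0., 1., 0.]])
h = torch.randn(4, 8)
print(GCNLayer(8, 16)(h, adj).shape)   # torch.Size([4, 16])
```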
Explainable Deep Learning: A Field Guide for the Uninitiated: https://t.co/xBsIfA17Tp
— Denny Britz (@dennybritz) May 1, 2020
This paper looks like a nice survey for anyone who wants to get started in Deep Learning explainability and trust/safety research. pic.twitter.com/kgwMvDRkQ4
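For a first hands-on taste of one technique such a guide covers, here is a sketch of vanilla gradient saliency: the input gradient of the top class score, which highlights the pixels the prediction is most sensitive to. The model and input are illustrative stand-ins (random weights and a random tensor in place of a real image).

```python
# Vanilla gradient saliency: sensitivity of the top class score to each pixel.
import torch
import torchvision.models as models

model = models.resnet18().eval()   # stand-in; real use would load trained weights
x = torch.randn(1, 3, 224, 224, requires_grad=True)   # stand-in for an image

scores = model(x)
top_class = scores.argmax(dim=1).item()
scores[0, top_class].backward()

# Saliency map: max absolute gradient over the color channels.
saliency = x.grad.abs().max(dim=1).values.squeeze(0)   # shape (224, 224)
print(saliency.shape)
```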
"The Future of Natural Language Processing" https://t.co/PKl4cSSlZ8 from @huggingface, well done quick summary of recent NLP work and lots of good pointers! 👏
— Andrej Karpathy (@karpathy) April 24, 2020
Excited to give a (virtual, recorded) talk about "The Low-resource NLP Toolbox, 2020 Version" at the AfricaNLP workshop at #ICLR2020! Slides: https://t.co/ElPMlarUQK
— Graham Neubig (@gneubig) April 22, 2020
It's somewhat of a bird's-eye view, but it also focuses heavily on our work at @LTIatCMU. pic.twitter.com/VTQy2bPjXK