Re-labeling ImageNet: from Single to Multi-Labels, from Global to Localized Labels
— AK (@ak92501) January 14, 2021
pdf: https://t.co/eb1qvqaO7I
abs: https://t.co/rw45sireIH
github: https://t.co/C6Po5bcKKt pic.twitter.com/KFKVBEbsfA
Encoder-Decoder models are going long-range in 🤗Transformers!
— Hugging Face (@huggingface) January 13, 2021
We just released 🤗Transformers v4.2.0 with Longformer Encoder-Decoder (LED) for long-range summarization from @i_beltagy
Summarize up to 16K tokens with 🤗's pipeline or our inference API: https://t.co/xeykBhHvMU pic.twitter.com/q2sV35V1fS
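What makes 16K-token inputs feasible is Longformer's windowed local attention, which costs O(n·w) instead of full self-attention's O(n²). A back-of-envelope sketch (not from the tweet; the 512-token window is the typical Longformer setting, used here as an assumption):

```python
from typing import Optional

def attention_cost(seq_len: int, window: Optional[int] = None) -> int:
    """Count query-key score computations (ignoring LED's few global tokens)."""
    if window is None:
        return seq_len * seq_len  # full self-attention: O(n^2)
    return seq_len * window       # sliding-window local attention: O(n * w)

n, w = 16_384, 512  # LED's max input length; a typical local-attention window
print(attention_cost(n) // attention_cost(n, w))  # → 32
```

So at 16K tokens the local-attention variant does 32× fewer score computations than full attention would.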
“Has anyone else lost interest in ML research?”
— hardmaru (@hardmaru) January 13, 2021
“My collaborators/advisors are mostly running after papers and don't seem to have interest in doing interesting off-the-track things. Ultimately, research has just become chasing one deadline after another.” https://t.co/oxul2zkmRg
This really is a great tutorial/example for implementing a transformer from scratch, https://t.co/lU9EeIA15b https://t.co/KUNybhM8qz pic.twitter.com/NoSTauAN86
— Sebastian Raschka (@rasbt) January 13, 2021
Jeff actually links to *3* of @timnitGebru's papers in this post (after firing her last month & publicly claiming her latest paper didn't meet "Google standards", even though it was accepted to a top conference)
— Rachel Thomas (@math_rachel) January 13, 2021
Model cards, Saving Face, and Closing the AI Accountability Gap https://t.co/huWK5iy9tZ
Excited to announce that our Deep Bootstrap framework for understanding generalization in deep learning has been accepted @iclr_conf! #ICLR2021 https://t.co/LhcK6VL6zP
— Hanie Sedghi (@HanieSedghi) January 13, 2021
Overview of the amazing progress in deep learning and medical computer vision. Published in @nature #digitalmedicine with the great @AndreEsteva, @katherinechou, @syeung10, @nikhil_ai, @thisismadani, @samottaghi, @yun_liu, @EricTopol, @JeffDean https://t.co/7NYRIQlt5N pic.twitter.com/4GamYwRJOd
— Richard Socher (@RichardSocher) January 12, 2021
Folks are starting to poke around in that #Parler dataset. This map shows where the Parler users were posting from, which is roughly a population density map.
— Randy Olson (@randal_olson) January 12, 2021
The dataset is linked in the source below. #DataScience #dataviz
Source: https://t.co/8rL6jIheOo pic.twitter.com/u0VfyXesAM
Differentiable Vector Graphics Rasterization for Editing and Learning (SIGGRAPH Asia 2020)
— hardmaru (@hardmaru) January 12, 2021
Nice work that allows backpropagation through an image rasterizer, so we can apply the goodies that work on pixel images to vector graphics https://t.co/s6ooUYKsqn https://t.co/9jMZY7lPdS pic.twitter.com/TBEZYI3cVQ
Well this was unexpected: the FTC went after a privacy-violating company and required it to delete not just the data but also the models trained using the data. It reminded me why I work in tech policy—because it's only frustrating 90% of the time. https://t.co/rTCD02wj0N
— Arvind Narayanan (@random_walker) January 12, 2021
Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity
— AK (@ak92501) January 12, 2021
pdf: https://t.co/0i6fcOuy4X
abs: https://t.co/AUKgennqZy
github: https://t.co/8QD4sJ2ckE pic.twitter.com/iDPDXj4bRR
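The "simple sparsity" in the title refers to top-1 routing: each token is sent to a single expert, so compute per token stays roughly constant even as the expert count (and parameter count) grows. An illustrative NumPy sketch of that routing idea (not the paper's code; all shapes and the tiny sizes here are assumptions for demonstration):

```python
import numpy as np

rng = np.random.default_rng(0)
tokens, d_model, n_experts = 8, 16, 4

x = rng.standard_normal((tokens, d_model))          # token representations
router_w = rng.standard_normal((d_model, n_experts))
experts = [rng.standard_normal((d_model, d_model)) for _ in range(n_experts)]

logits = x @ router_w
probs = np.exp(logits) / np.exp(logits).sum(-1, keepdims=True)  # softmax gate
choice = probs.argmax(-1)  # top-1: one expert per token

y = np.empty_like(x)
for e in range(n_experts):
    mask = choice == e  # only these tokens run through expert e
    # scale by the gate probability so routing stays differentiable in training
    y[mask] = (x[mask] @ experts[e]) * probs[mask, e, None]
```

Each token's forward pass touches exactly one expert FFN, which is why parameter count can scale to the trillions while FLOPs per token do not.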
We thank @svlevine for his excellent talk "Data-Driven Reinforcement Learning: Deriving Common Sense from Past Experience" last Friday, now available on our YouTube channel. https://t.co/RfFMxmLivj
— UCL CSML (@uclcsml) January 10, 2021