MaterialGAN: Reflectance Capture using a Generative SVBRDF Model
pdf: https://t.co/rtEC1FadbY
project page: https://t.co/IN6iKOTw9y
— AK (@ak92501) September 25, 2020
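The title suggests reflectance capture by inverse rendering against a generative SVBRDF model: optimize a latent code so that renderings of the generated material maps match captured photos. A hedged sketch of that loop, where `material_gan` and `render` are hypothetical stand-ins for the generator and a differentiable renderer, not the authors' API:

```python
# Hypothetical sketch of SVBRDF capture by latent-space optimization.
# The generator maps a latent code to material maps (albedo, normals,
# roughness, ...); a differentiable renderer compares them to photos.
import torch

def capture_svbrdf(photos, lights, material_gan, render, steps=500):
    z = torch.randn(1, 512, requires_grad=True)   # latent code to optimize
    opt = torch.optim.Adam([z], lr=0.01)
    for _ in range(steps):
        maps = material_gan(z)                    # SVBRDF maps from the GAN
        loss = sum(torch.nn.functional.l1_loss(render(maps, light), photo)
                   for photo, light in zip(photos, lights))
        opt.zero_grad()
        loss.backward()                           # gradients flow through renderer and GAN
        opt.step()
    return material_gan(z.detach())
```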
"A Unifying Review of Deep and Shallow Anomaly Detection" https://t.co/BO6xu6C1uT pic.twitter.com/dpTL77UPIG
— Thomas G. Dietterich (@tdietterich) September 25, 2020
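The review covers both shallow and deep detectors. As a concrete reference point for the "shallow" side, here is a minimal, standard baseline (not a method from the review itself): score each test point by its mean distance to its k nearest neighbors in nominal training data.

```python
# Minimal shallow anomaly detector: kNN-distance scoring.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
train = rng.normal(size=(1000, 8))                    # nominal data
test = np.vstack([rng.normal(size=(5, 8)),            # nominal test points
                  rng.normal(loc=6.0, size=(5, 8))])  # injected anomalies

nn = NearestNeighbors(n_neighbors=5).fit(train)
dist, _ = nn.kneighbors(test)
scores = dist.mean(axis=1)                            # higher = more anomalous
print(scores.round(2))
```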
I am preparing a recent few-shot learning paper for our reading group, one that beats GPT-3: https://t.co/2dic20gHtw. I just realized that the presented algorithm (iPET) is an NLP version of Noisy Student Training. It's nice to see that some algorithms work for both NLP and CV.
— Tim Dettmers (@Tim_Dettmers) September 24, 2020
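The analogy holds because both iPET and Noisy Student are self-training loops: a teacher labels unlabeled data, a student is trained on the confident pseudo-labels under noise, and the cycle repeats. A schematic of that shared loop, with `train_model`, `predict_soft`, and `add_noise` as hypothetical helpers (the real methods differ in details: prompts and ensembling in iPET; augmentation, dropout, and stochastic depth in Noisy Student):

```python
# Schematic self-training loop shared by iPET (NLP) and Noisy Student (CV).
def self_training(labeled, unlabeled, train_model, predict_soft, add_noise,
                  generations=3, threshold=0.9):
    teacher = train_model(labeled)
    for _ in range(generations):
        pseudo = []
        for x in unlabeled:
            probs = predict_soft(teacher, x)       # soft labels from the teacher
            if max(probs) >= threshold:            # keep only confident labels
                pseudo.append((x, probs))
        # The student sees noised inputs, so it must generalize beyond the teacher.
        student = train_model(labeled + [(add_noise(x), y) for x, y in pseudo])
        teacher = student                          # iterate: student becomes teacher
    return teacher
```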
Fine-tuning data-efficient GANs: 100-shot-obama to cartoonset
github: https://t.co/CZt3YeJ5Gd
dataset: https://t.co/IzSvkrsLyk
— AK (@ak92501) September 23, 2020
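Assuming the linked repo is the DiffAugment data-efficient-gans project, the core trick that makes 100-shot training viable is applying the same differentiable augmentation to both real and generated images, inside both the discriminator and generator losses. A hedged sketch with a toy augmentation:

```python
# Sketch of differentiable augmentation for data-efficient GAN training.
import torch
import torch.nn.functional as F

def diff_augment(x):
    # Toy differentiable augmentation: random brightness shift.
    return x + (torch.rand(x.size(0), 1, 1, 1, device=x.device) - 0.5)

def d_loss(D, G, real, z):
    fake = G(z)
    # Non-saturating GAN loss; augmentation applied to reals AND fakes.
    return (F.softplus(-D(diff_augment(real))).mean()
            + F.softplus(D(diff_augment(fake))).mean())

def g_loss(D, G, z):
    # Generator sees gradients through the (differentiable) augmentation.
    return F.softplus(-D(diff_augment(G(z)))).mean()
```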
Today we describe a #NaturalLanguageProcessing model that achieves near BERT-level performance on text classification tasks, while using orders of magnitude fewer model parameters. Learn all about it below: https://t.co/94GZU4GOt3
— Google AI (@GoogleAI) September 21, 2020
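This announcement appears to be Google's projection-based small-model work, where the parameter savings come from replacing the learned embedding table with a hash-based projection of each token into a fixed-width feature vector, so model size no longer scales with vocabulary. A hedged illustration of that projection idea (not Google's actual code):

```python
# Embedding-free token features via hashing: fixed width, no lookup table.
import hashlib
import numpy as np

def project_token(token: str, width: int = 64) -> np.ndarray:
    # Derive `width` pseudo-random bits from the token and map them to +/-1.
    digest = hashlib.md5(token.encode()).digest()
    bits = np.unpackbits(np.frombuffer(digest, dtype=np.uint8))
    reps = int(np.ceil(width / bits.size))
    return np.tile(bits, reps)[:width].astype(np.float32) * 2 - 1

print(project_token("hello")[:8])  # deterministic features, zero embedding parameters
```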
Researchers at Microsoft Semantic Machines are taking a new approach to conversational AI—modeling dialogues with compositional dataflow graphs. Learn how the framework supports flexible, open-ended conversations, and explore the dataset and leaderboard: https://t.co/nPaR8HOhs8
— Microsoft Research (@MSFTResearch) September 21, 2020
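The key idea is representing each dialogue turn as a program in a growing dataflow graph, where later turns can reference earlier computations instead of restating them. A toy illustration of that idea (not the Semantic Machines format):

```python
# Toy dataflow graph: nodes are named function calls; args may reference nodes.
graph = {}

def node(name, fn, *args):
    graph[name] = (fn, args)
    return name

def evaluate(name):
    fn, args = graph[name]
    return fn(*(evaluate(a) if a in graph else a for a in args))

# Turn 1: "When is my meeting with Alice?"
node("event", lambda who: {"who": who, "time": "3pm"}, "Alice")
node("time", lambda e: e["time"], "event")
# Turn 2: "Move *it* one hour later" -- 'it' resolves to the earlier node.
node("moved", lambda t: "4pm" if t == "3pm" else t, "time")
print(evaluate("moved"))  # 4pm
```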
I've received so much criticism for not incorporating reward info into the world model / representations used by RL agents. But the way I see it, rewards are so overvalued…
See this new paper, “Decoupling Representation Learning from Reinforcement Learning”: https://t.co/ADKKpOJLGI https://t.co/APTEJRMmOs
— hardmaru (@hardmaru) September 18, 2020
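The pipeline the tweet argues for: learn the encoder from observations alone, with no reward signal, then run RL on top of the frozen features. The paper's specific objective is Augmented Temporal Contrast; in this schematic, `Encoder`, `contrastive_loss`, and `rl_step` are stand-ins:

```python
# Decoupled pipeline: unsupervised encoder pretraining, then RL on frozen features.
import torch

def pretrain_encoder(obs_pairs, Encoder, contrastive_loss, epochs=10):
    enc = Encoder()
    opt = torch.optim.Adam(enc.parameters(), lr=1e-3)
    for _ in range(epochs):
        for o_t, o_tk in obs_pairs:              # temporally nearby frames as positives
            loss = contrastive_loss(enc(o_t), enc(o_tk))
            opt.zero_grad(); loss.backward(); opt.step()
    for p in enc.parameters():                   # freeze: RL never updates the encoder
        p.requires_grad_(False)
    return enc

def train_policy(env, enc, policy, rl_step, steps=10_000):
    obs = env.reset()
    for _ in range(steps):
        obs = rl_step(policy, enc(obs), env)     # policy only sees frozen features
    return policy
```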
The Hardware Lottery by @SaraHookr
“When a research idea wins because it is suited to the available software and hardware, not because the idea is superior… The advent of specialized hardware makes it increasingly costly to stray off of the beaten path.” https://t.co/6nOT9N3K1A
— hardmaru (@hardmaru) September 18, 2020
"Statistical significance in deep learning papers?" (https://t.co/RB39xlkjrh) -- why is this not a thing in practice? My thoughts: (a) test sets are usually large enough; (b) reviewers trad. didn't ask for it (c) people don't like pitfalls (independence violations). Others?
— Sebastian Raschka (@rasbt) September 17, 2020
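For concreteness, one standard way to test whether an accuracy gap between two models on the *same* test set is significant is McNemar's exact test on the examples where they disagree; note it assumes i.i.d. test examples, exactly the independence caveat raised above.

```python
# McNemar's exact test for paired classifier comparison on one test set.
import numpy as np
from scipy.stats import binom

def mcnemar_exact(correct_a, correct_b):
    a, b = np.asarray(correct_a, bool), np.asarray(correct_b, bool)
    n01 = int(np.sum(~a & b))    # A wrong, B right
    n10 = int(np.sum(a & ~b))    # A right, B wrong
    n, k = n01 + n10, min(n01, n10)
    # Two-sided exact p-value under H0: disagreements split 50/50.
    p = 2 * binom.cdf(k, n, 0.5) if n else 1.0
    return min(p, 1.0)

rng = np.random.default_rng(0)
a = rng.random(10_000) < 0.91    # per-example correctness of model A (simulated)
b = rng.random(10_000) < 0.90    # per-example correctness of model B (simulated)
print(mcnemar_exact(a, b))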
Very cool work from UCL+FAIR-London on neural architectures for reasoning that learn to generate and select rules. https://t.co/vgCcbDWAPj
— Yann LeCun (@ylecun) September 16, 2020
The source code and checkpoints for BYOL are now available, along with an updated version of the BYOL paper, including new theoretical and experimental insights.
Code: https://t.co/6QEUjmAt9L
Paper: https://t.co/qyaSXnPQjN https://t.co/8226fwJBuP
— DeepMind (@DeepMind) September 15, 2020
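For context, the BYOL objective in miniature (a sketch, not DeepMind's released code): an online network predicts the target network's projection of another augmented view, the loss is a normalized MSE, and the target is an exponential moving average of the online weights, so no negative pairs are needed.

```python
import torch
import torch.nn.functional as F

def byol_loss(online_pred, target_proj):
    # Normalized MSE == 2 - 2 * cosine similarity.
    p = F.normalize(online_pred, dim=-1)
    z = F.normalize(target_proj.detach(), dim=-1)  # stop-gradient into the target
    return (2 - 2 * (p * z).sum(dim=-1)).mean()

@torch.no_grad()
def ema_update(target_net, online_net, tau=0.996):
    # Target weights trail the online weights via exponential moving average.
    for t, o in zip(target_net.parameters(), online_net.parameters()):
        t.mul_(tau).add_((1 - tau) * o)
```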
A Comparison of LSTM and BERT for Small Corpus
"bidirectional LSTM models can achieve significantly higher results than a BERT model for a small dataset": https://t.co/sZDlJ2GtSt
— Thomas (@evolvingstuff) September 15, 2020