Generating Textual Adversarial Examples for Deep Learning Models: A Survey
— ML Review (@ml_review) February 4, 2019
By @Macquarie_Uni https://t.co/SZkFlZSvsF pic.twitter.com/R7FRUcfJtD
We tried to use Twitter to detect voter fraud & voter suppression. Here are a few things we learned during our quest:
— Emilio Ferrara (@jabawack) February 4, 2019
Perils and Challenges of Social Media and Election Manipulation Analysis: The 2018 US Midterms https://t.co/vTbKCMFkfA
w/ @ashokkdeb @LucaLuceri @adambbadawy pic.twitter.com/nhwb4odFuF
GLUE now has a human performance estimate for all nine tasks! Overall score 86.9. Thanks to @meloncholist. https://t.co/GNGW55uo5M
— Sam Bowman (@sleepinyourhat) February 3, 2019
Revisiting Self-Supervised Visual Representation Learning
— ML Review (@ml_review) February 3, 2019
By @__kolesnikov__ @XiaohuaZhai @giffmana
Investigates how architecture design influences representation quality
New SoTA for visual representation learned without labels: https://t.co/y5USX7chIL https://t.co/uCeG0s9l3z pic.twitter.com/sFRKltYKok
PyTorch Implementation of XNOR-Net https://t.co/NyQ95DvPtR #deeplearning #machinelearning #ml #ai #neuralnetworks #datascience #pytorch
— PyTorch Best Practices (@PyTorchPractice) February 3, 2019
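For context, XNOR-Net constrains weights (and activations) to ±1 scaled by their mean absolute magnitude, so most multiplications become sign flips, and training relies on a straight-through estimator to pass gradients through the binarization. A minimal PyTorch sketch of the binary-weight idea (illustrative only; the class and variable names are not from the linked repo):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BinarizedConv2d(nn.Conv2d):
    """Conv layer with XNOR-Net-style binary weights (illustrative sketch)."""

    def forward(self, x):
        w = self.weight
        # Per-output-channel scale: mean absolute value of the real-valued weights.
        alpha = w.abs().mean(dim=(1, 2, 3), keepdim=True)
        # Binarize weights to {-1, +1} and rescale.
        w_bin = torch.sign(w) * alpha
        # Straight-through estimator: forward uses w_bin, backward flows to w.
        w_ste = w + (w_bin - w).detach()
        return F.conv2d(x, w_ste, self.bias, self.stride,
                        self.padding, self.dilation, self.groups)

# Usage: drop-in replacement for nn.Conv2d.
layer = BinarizedConv2d(3, 16, kernel_size=3, padding=1)
out = layer(torch.randn(1, 3, 32, 32))
```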
One of those papers where the #nlproc person in me asks, “wait a sec, why wasn’t this a baseline in that community until now?” https://t.co/FlzAZrtyhU
— Delip Rao (@deliprao) February 2, 2019
This is a super cool resource: Papers With Code now includes 950+ ML tasks, 500+ evaluation tables (including SOTA results) and 8500+ papers with code. Probably the largest collection of NLP tasks I've seen, including 140+ tasks and 100 datasets. https://t.co/lTAGE7LGZY pic.twitter.com/wfSyTplBR3
— Sebastian Ruder (@seb_ruder) February 1, 2019
There's a neat new position/analysis paper from @DeepMindAI on cross-task generalization in NLP (https://t.co/0eLCOO897Y; @redpony, @aggielaz are the only authors I can quickly find on Twitter). pic.twitter.com/UHU9K1eEq6
— Sam Bowman (@sleepinyourhat) February 1, 2019
The Evolved Transformer: They perform architecture search on Transformer's stackable cells for seq2seq tasks.
— hardmaru (@hardmaru) February 1, 2019
“A much smaller, mobile-friendly, Evolved Transformer with only ~7M parameters outperforms the original Transformer by 0.7 BLEU on WMT14 EN-DE.” https://t.co/ABtfdTGIYl pic.twitter.com/Rso7GUiDe9
DoWhy – Python library to estimate causal effects by @MSFTResearch
— ML Review (@ml_review) January 31, 2019
Based on a unified language for causal inference, combining causal graphical models and potential outcomes frameworks. https://t.co/wEh7EHwms1
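DoWhy structures an analysis into four explicit steps: model the assumptions as a causal graph, identify an estimand, estimate it with a statistical method, and refute the estimate. A hedged usage sketch on one of the library's synthetic datasets (the estimator and refuter chosen here are just examples):

```python
import dowhy.datasets
from dowhy import CausalModel

# Synthetic data with a known causal effect (beta) of treatment on outcome.
data = dowhy.datasets.linear_dataset(
    beta=10, num_common_causes=5, num_samples=5000, treatment_is_binary=True)

# 1. Model: encode causal assumptions as a graph.
model = CausalModel(
    data=data["df"],
    treatment=data["treatment_name"],
    outcome=data["outcome_name"],
    graph=data["gml_graph"])

# 2. Identify: derive an estimand (e.g. backdoor adjustment) from the graph.
identified_estimand = model.identify_effect(proceed_when_unidentifiable=True)

# 3. Estimate: plug in a statistical estimator.
estimate = model.estimate_effect(
    identified_estimand, method_name="backdoor.propensity_score_matching")
print(estimate.value)

# 4. Refute: stress-test the estimate, e.g. with a placebo treatment.
refutation = model.refute_estimate(
    identified_estimand, estimate, method_name="placebo_treatment_refuter")
print(refutation)
```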
"Using Pre-Training Can Improve Model Robustness and Uncertainty," Hendrycks et al.: https://t.co/AHNWcCTZtB
— Miles Brundage (@Miles_Brundage) January 30, 2019
(in response to "Rethinking ImageNet Pre-training" by He et al.)
Code for the Deep k-Nearest Neighbors (DkNN) was just added to the #CleverHans model zoo: https://t.co/7T0xVX4HIw
— Nicolas Papernot (@NicolasPapernot) January 29, 2019
The paper describing this approach for estimating support for a test-time prediction within the training data is here: https://t.co/dIPb2jEaem pic.twitter.com/jVDFySZMnq
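The core idea of DkNN: for a test input, look up nearest neighbors of its hidden representation at each layer among the training set; when the neighbors' labels agree with the model's prediction, that prediction is well supported by the training data. A small NumPy/scikit-learn sketch of the neighbor-agreement score (not the CleverHans implementation; the calibration step that turns this into p-values is omitted, and the function names are illustrative):

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def fit_dknn(train_layer_reps, train_labels, k=75):
    """Build one nearest-neighbor index per layer of the network.

    train_layer_reps: list with one (n_train, d_layer) array per layer,
    i.e. the training set's hidden activations at that layer."""
    return [(NearestNeighbors(n_neighbors=k).fit(reps), np.asarray(train_labels))
            for reps in train_layer_reps]

def dknn_support(dknn_index, test_layer_reps, predicted_class):
    """Fraction of training neighbors (pooled over layers) whose label matches
    the prediction; low support flags weakly grounded (e.g. adversarial) inputs."""
    agree, total = 0, 0
    for (index, labels), rep in zip(dknn_index, test_layer_reps):
        _, nbr_ids = index.kneighbors(rep.reshape(1, -1))
        agree += np.sum(labels[nbr_ids[0]] == predicted_class)
        total += nbr_ids.shape[1]
    return agree / total
```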