Awesome, our new paper on "Gender Privacy: An Ensemble of Semi Adversarial Networks for Confounding Arbitrary Gender Classifiers" is now available on arxiv: https://t.co/y1KeXDPl5r
— Sebastian Raschka (@rasbt) August 2, 2018
"Auto Keras" is an OSS project that uses neural architecture search to automatically develop Keras models. GitHub code: https://t.co/neGqmQz4g2
— François Chollet (@fchollet) August 2, 2018
Paper detailing the algorithm used: https://t.co/n51QrXFy03 pic.twitter.com/JIU0Kfc1Os
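For readers who want to try it: early Auto-Keras releases wrapped the whole architecture search behind a scikit-learn-style interface. The sketch below follows the project's 0.x README, so treat the exact signatures (`ImageClassifier`, `time_limit`, `final_fit`) as version-dependent assumptions rather than a stable API.

```python
# Minimal Auto-Keras sketch (early 0.x API; signatures may have changed since).
from keras.datasets import mnist
import autokeras as ak

(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train = x_train.reshape(x_train.shape + (1,))  # add a channel dimension
x_test = x_test.reshape(x_test.shape + (1,))

clf = ak.ImageClassifier(verbose=True)
clf.fit(x_train, y_train, time_limit=60 * 60)    # let the architecture search run for an hour
clf.final_fit(x_train, y_train, x_test, y_test, retrain=True)  # retrain the best model found
print(clf.evaluate(x_test, y_test))
```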
Pretty mind-blowing paper: 3D-printed representations of neural networks you can run inference on by shining light through... literal *light speed* inference times.
They only have classification networks so far, but I'd love to see pix2pix or a GAN soon. https://t.co/mTqIzwQTEK pic.twitter.com/bTI0yXvT5F
— Robbie Barrat (@DrBeef_) August 1, 2018
My latest paper introduces local linear forests! Rina Friedberg, Julie Tibshirani, myself, and Stefan Wager. Improves on random forest to remove bias and exploit smooth signals. Establishes asymptotic normality. Useful for both prediction and heterogeneous treatment effects! https://t.co/PjE15tvFBZ
— Susan Athey (@Susan_Athey) August 1, 2018
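The construction behind local linear forests is compact enough to sketch: use the forest purely as an adaptive kernel (weighting training points by how often they share a leaf with the test point), then solve a ridge-penalized linear regression under those weights, centered at the test point so the fitted intercept is the prediction. The sketch below uses a plain scikit-learn forest; the authors' grf implementation adds honesty and principled tuning, so this is illustrative only.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def forest_weights(forest, X_train, x):
    """Adaptive kernel: how often x shares a leaf with each training point."""
    leaves_train = forest.apply(X_train)        # (n_samples, n_trees) leaf ids
    leaves_x = forest.apply(x.reshape(1, -1))   # (1, n_trees)
    w = np.zeros(len(X_train))
    for t in range(leaves_train.shape[1]):
        in_leaf = leaves_train[:, t] == leaves_x[0, t]
        w[in_leaf] += 1.0 / in_leaf.sum()       # normalize within each leaf
    return w / leaves_train.shape[1]

def local_linear_predict(forest, X_train, y_train, x, lam=0.1):
    """Ridge-penalized weighted regression centered at x; the intercept is the prediction."""
    w = forest_weights(forest, X_train, x)
    A = np.hstack([np.ones((len(X_train), 1)), X_train - x])
    reg = lam * np.eye(A.shape[1])
    reg[0, 0] = 0.0                             # leave the intercept unpenalized
    Aw = A * w[:, None]
    beta = np.linalg.solve(A.T @ Aw + reg, Aw.T @ y_train)
    return beta[0]
```

The local regression step is what removes the boundary and smoothness bias that a plain forest average suffers from.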
The latest entry in turning ImageNet into MNIST: 75.8% top-1 test accuracy with ResNet-50 (90 epochs) in 6.6 minutes using 2048 Tesla P40 GPUs https://t.co/AxqiLdIOyQ. 64K "mini"-batch size, mixed-precision training, LARS, BN & bias weight decay set to zero, custom all-reduce.
— Andrej Karpathy (@karpathy) August 1, 2018
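LARS is the ingredient that makes the 64K batch workable: instead of one global learning rate, each layer's step is scaled by a trust ratio between its weight norm and gradient norm, so layers whose gradients are large relative to their weights take proportionally smaller steps. A minimal PyTorch sketch of the update rule from You et al. (2017); the function name, state handling, and hyperparameter values are illustrative, and published variants differ in exactly where weight decay enters.

```python
import torch

def lars_step(params, bufs, lr, eta=1e-3, weight_decay=1e-4, momentum=0.9):
    """One LARS update: a per-layer trust ratio eta * ||w|| / ||g|| scales the step."""
    for p, buf in zip(params, bufs):
        if p.grad is None:
            continue
        g = p.grad + weight_decay * p.data
        w_norm, g_norm = p.data.norm(), g.norm()
        trust = (eta * w_norm / g_norm).item() if w_norm > 0 and g_norm > 0 else 1.0
        buf.mul_(momentum).add_(g, alpha=lr * trust)  # momentum buffer in the scaled direction
        p.data.add_(buf, alpha=-1.0)

# bufs = [torch.zeros_like(p) for p in model.parameters()]  # one momentum buffer per tensor
```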
https://t.co/hYiWI7ntyk TensorFuzz automates the process of finding inputs that cause some specific testable behavior, like disagreement between float16 and float32 implementations of a neural network https://t.co/Nt9YX5kJXU
— Ian Goodfellow (@goodfellow_ian) July 31, 2018
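The objective is easy to state even without TensorFuzz's machinery: mutate inputs and keep whichever mutant drives the fp16 and fp32 outputs further apart, until the predicted labels actually disagree. The toy loop below is not the TensorFuzz algorithm (which is coverage-guided, tracking novelty of activations with approximate nearest neighbors); `predict32` and `predict16` are hypothetical callables returning class probabilities from the two implementations.

```python
import numpy as np

def find_precision_disagreement(predict32, predict16, seed, steps=10_000, sigma=0.01):
    """Greedy fuzz loop: climb the fp16-vs-fp32 output gap until argmax labels differ."""
    rng = np.random.default_rng(0)
    x, gap = seed.copy(), 0.0
    for _ in range(steps):
        cand = x + rng.normal(0.0, sigma, size=x.shape).astype(x.dtype)
        p32, p16 = predict32(cand), predict16(cand)
        if p32.argmax() != p16.argmax():
            return cand                          # found a genuine disagreement
        cand_gap = np.abs(p32 - p16).max()       # numerical divergence as a guidance signal
        if cand_gap > gap:
            x, gap = cand, cand_gap
    return None
```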
"Highly Scalable Deep Learning Training System with Mixed-Precision: Training ImageNet in Four Minutes," Jia and Song et al., Tencent: https://t.co/AH3crhdBQH
— Miles Brundage (@Miles_Brundage) July 31, 2018
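At the core of these fast-ImageNet systems is the now-standard mixed-precision recipe: compute in fp16, keep an fp32 master copy of the weights, and scale the loss so small gradients survive fp16's narrow range. Below is a minimal single-step sketch under those assumptions; it is not the paper's actual system (which also combines LARS-style scaling with its all-reduce pipeline), and real implementations keep batch norm in fp32 and use dynamic rather than static loss scaling.

```python
import torch

def mixed_precision_step(model, master_params, optimizer, loss_fn, x, y, loss_scale=1024.0):
    """fp16 forward/backward with fp32 master weights and static loss scaling."""
    loss = loss_fn(model(x.half()), y)
    (loss * loss_scale).backward()               # scale up so tiny grads don't flush to zero
    for p16, p32 in zip(model.parameters(), master_params):
        p32.grad = p16.grad.float() / loss_scale # unscale into full precision
        p16.grad = None
    optimizer.step()                             # optimizer owns the fp32 master copy
    optimizer.zero_grad()
    with torch.no_grad():
        for p16, p32 in zip(model.parameters(), master_params):
            p16.copy_(p32.half())                # write updated weights back to the fp16 model
    return loss.item()

# model = model.half()
# master_params = [p.detach().float().clone() for p in model.parameters()]
# optimizer = torch.optim.SGD(master_params, lr=0.1, momentum=0.9)
```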
Learning other agents’ goals by putting ourselves in their “shoes”. Nice idea. https://t.co/nb1xw406CT
— Nando de Freitas (@NandoDF) July 30, 2018
Towards Understanding the Role of Over-Parametrization in Generalization of Neural Networks
By @bneyshabur @ylecun
New capacity bound that decreases with the increasing number of hidden units & explains the better generalization of larger networks https://t.co/1R2QD6oWEc pic.twitter.com/KteLTbEihK
— ML Review (@ml_review) July 28, 2018
"Variational Memory Encoder-Decoder," Le et al.: https://t.co/vF2PfCKu6p
— Miles Brundage (@Miles_Brundage) July 27, 2018
Style transfer used to mysteriously work best only on VGG architectures. See how a decorrelated image parameterization allowed us to bring naive style transfer (only content + style loss after Gram matrix transform, à la Gatys et al.) to GoogLeNet: @zzznah @nicolapezzotti @ch402 pic.twitter.com/nS4OZn3YJd
— Ludwig Schubert (@ludwigschubert) July 25, 2018
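The "decorrelated image parameterization" is the trick that transfers: optimize the image in a space where gradient energy is roughly whitened across spatial frequencies, instead of in raw pixels. Below is a PyTorch sketch of the Fourier-space version; the original work uses Lucid/TensorFlow and also decorrelates color channels, and the scaling here is illustrative rather than Lucid's exact constants.

```python
import numpy as np
import torch

def fourier_image(h, w, std=0.01):
    """Image parameterized by its Fourier spectrum, scaled ~1/f per frequency."""
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.rfftfreq(w)[None, :]
    scale = 1.0 / np.maximum(np.sqrt(fx**2 + fy**2), 1.0 / max(h, w))
    scale = torch.tensor(scale, dtype=torch.float32)      # boost low, damp high frequencies
    spectrum = (torch.randn(3, h, w // 2 + 1, 2) * std).requires_grad_()

    def to_image():
        spec = torch.view_as_complex(spectrum) * scale    # apply the frequency whitening
        img = torch.fft.irfft2(spec, s=(h, w))            # back to pixel space, (3, h, w)
        return torch.sigmoid(img)                         # keep pixels in [0, 1]

    return spectrum, to_image

# param, render = fourier_image(224, 224)
# optimize `param` with Adam against the content + style losses; render() yields the image
```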
Big shout out to the (very cool!) similar work by @hiroharu_kato et al. and @anishathalye et al.! https://t.co/troZxrSRp3 https://t.co/3LKdMb7HAD 4/
— Chris Olah (@ch402) July 25, 2018