We’ve released pre-trained BigBiGAN representation learning models https://t.co/Rhm94rOuX5 on TF Hub: https://t.co/E18skH2iRC
Try them out in a Colab at: https://t.co/ixQZJaABRJ pic.twitter.com/Hu7vPpLkgL
— DeepMind (@DeepMindAI) October 8, 2019
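A minimal sketch of loading one of these modules and encoding a batch of images, assuming the TF1-style hub.Module API, the deepmind/bigbigan-resnet50 module path, and an 'encode' signature as shown in the linked Colab; check the TF Hub page for the exact interface and input range.

```python
# Hedged sketch: load a BigBiGAN module from TF Hub and run its encoder.
# Module path, 'encode' signature name, and input size are assumptions.
import numpy as np
import tensorflow.compat.v1 as tf
import tensorflow_hub as hub

tf.disable_eager_execution()

module = hub.Module('https://tfhub.dev/deepmind/bigbigan-resnet50/1')

images = tf.placeholder(tf.float32, [None, 256, 256, 3])  # assumed input size, values in [-1, 1]
z = module(images, signature='encode')                     # image -> latent representation

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(tf.tables_initializer())
    batch = np.random.uniform(-1, 1, size=(2, 256, 256, 3)).astype(np.float32)
    print(sess.run(z, feed_dict={images: batch}).shape)
```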
Very cool research, congrats! Next: generate full video games : ) https://t.co/EOqqshnqLp
— Oriol Vinyals (@OriolVinyalsML) October 7, 2019
Conventional wisdom: "Not enough data? Use classic learners (Random Forests, RBF SVM, ..), not deep nets." New paper: infinitely wide nets beat these and also beat finite nets. Infinite nets train faster than finite nets here (hint: Neural Tangent Kernel)! https://t.co/2qrGyyvCiI
— Sanjeev Arora (@prfsanjeevarora) October 7, 2019
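For a concrete picture of what "infinitely wide net" means in practice, here is a minimal sketch of kernel ridge regression with the Neural Tangent Kernel of an infinite-width MLP, using the neural_tangents JAX library (not the paper's own release); the architecture, toy data, and ridge term are placeholders.

```python
# Hedged sketch: NTK regression with an infinitely wide ReLU MLP via neural_tangents.
import jax.numpy as jnp
from jax import random
from neural_tangents import stax

# Infinite-width two-hidden-layer ReLU network; kernel_fn gives its NTK in closed form.
_, _, kernel_fn = stax.serial(
    stax.Dense(512), stax.Relu(),
    stax.Dense(512), stax.Relu(),
    stax.Dense(1),
)

x_train = random.normal(random.PRNGKey(0), (100, 20))
y_train = jnp.sign(x_train[:, :1])                 # toy labels in {-1, +1}
x_test = random.normal(random.PRNGKey(1), (10, 20))

# Kernel ridge regression with the NTK: f(x*) = K(x*, X) (K(X, X) + reg I)^{-1} y.
k_train_train = kernel_fn(x_train, x_train, 'ntk')
k_test_train = kernel_fn(x_test, x_train, 'ntk')
reg = 1e-4  # small ridge term for numerical stability (placeholder value)
preds = k_test_train @ jnp.linalg.solve(
    k_train_train + reg * jnp.eye(len(x_train)), y_train)
print(preds.shape)  # (10, 1)
```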
In parallel with this paper, @facebookai has released higher, a library for bypassing limitations to taking higher-order gradients over an optimization process.
Library: https://t.co/U5dFLBXTHZ
Docs: https://t.co/2mYODGdI8x
Contributions very welcome. https://t.co/F8S7TsZlfe
— Edward Grefenstette (@egrefen) October 7, 2019
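A minimal sketch of the inner-loop pattern higher enables, following the innerloop_ctx usage documented in the library's README; the toy model, data, and loop lengths are placeholders, not a meta-learning recipe from the paper.

```python
# Hedged sketch: differentiate through an inner optimization loop with higher.
import torch
import torch.nn as nn
import higher

model = nn.Linear(4, 1)
inner_opt = torch.optim.SGD(model.parameters(), lr=0.1)
meta_opt = torch.optim.Adam(model.parameters(), lr=1e-3)

x_support, y_support = torch.randn(8, 4), torch.randn(8, 1)
x_query, y_query = torch.randn(8, 4), torch.randn(8, 1)

meta_opt.zero_grad()
# innerloop_ctx yields a functional copy of the model and a differentiable
# optimizer, so gradients can flow back through the inner-loop updates.
with higher.innerloop_ctx(model, inner_opt) as (fmodel, diffopt):
    for _ in range(5):  # inner adaptation steps
        inner_loss = nn.functional.mse_loss(fmodel(x_support), y_support)
        diffopt.step(inner_loss)
    # Outer (meta) loss evaluated after adaptation; backward() reaches the
    # original parameters through the unrolled inner updates.
    outer_loss = nn.functional.mse_loss(fmodel(x_query), y_query)
    outer_loss.backward()
meta_opt.step()
```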
FaceForensics++: Learning to Detect Manipulated Facial Images
They propose a benchmark for DeepFake detection based on DeepFakes, Face2Face, FaceSwap, and NeuralTextures as representative face manipulation methods, and release a large labeled dataset. https://t.co/L8rSBnBNx9 https://t.co/0mw5CEIIqY pic.twitter.com/M7y8LB8JTU
— hardmaru 😷 (@hardmaru) October 7, 2019
"Is Fast Adaptation All You Need?," Javed et al.: https://t.co/igj2mi69pT
— Miles Brundage (@Miles_Brundage) October 7, 2019
"representations learned by directing minimizing interference are more conducive to incremental learning than those learned by just maximizing fast adaptation."
Due to popular demand, code to reproduce our "Beyond BLEU" paper is now available. Check it out to train your MT models on semantic objectives: https://t.co/C0crMwd25F https://t.co/Exz5pQcUSk
— Graham Neubig (@gneubig) October 4, 2019
"Stabilizing Generative Adversarial Network Training: A Survey" -- imho, training GANs can be extremely frustrating; this is a nice paper to keep handy for these occasions: https://t.co/OsbnxXVhb4
— Sebastian Raschka (@rasbt) October 3, 2019
We are releasing a new benchmark and data set to evaluate performance across various neural code search techniques to make it easier to evaluate a new model on a common set of questions. https://t.co/bpMtJcfY2c pic.twitter.com/rUYHx9awgf
— Facebook AI (@facebookai) October 3, 2019
Just updated our recent paper on BERTScore, a super simple method for evaluating text generation with BERT, with many more experiments. We evaluated with the outputs of 363 MT systems and model selection experiments! --> 41 pages and 29 giant tables :) https://t.co/U3zmEDbCPj pic.twitter.com/jiuEYRJ9ke
— Yoav Artzi (@yoavartzi) October 2, 2019
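A minimal sketch of scoring generated text with the authors' bert-score package; the candidate and reference sentences are toy placeholders, and the underlying model is whatever the library picks by default for English.

```python
# Hedged sketch: compute BERTScore with the bert-score package (pip install bert-score).
from bert_score import score

candidates = ["The cat sat on the mat."]
references = ["A cat was sitting on the mat."]

# Returns per-sentence precision, recall, and F1 computed from BERT token
# embeddings matched greedily between candidate and reference.
P, R, F1 = score(candidates, references, lang="en", verbose=False)
print(f"P={P.mean():.3f} R={R.mean():.3f} F1={F1.mean():.3f}")
```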
We had a "Hamiltonian extravaganza" at @DeepMind. We show how to learn Hamiltonian gen models from pixels https://t.co/iqznICMHig and propose a general method for combining symmetry Lie groups with ODE-Flow generative models in https://t.co/COSkqDWhkn #physics #ML #symmetries https://t.co/3K2vxtSUfl pic.twitter.com/03GsvCxLE2
— Danilo J. Rezende (@DeepSpiker) October 1, 2019
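For context, these models evolve a latent state (q, p) under Hamilton's equations dq/dt = ∂H/∂p, dp/dt = -∂H/∂q. Below is a minimal sketch of a leapfrog integration step for a generic (separable) Hamiltonian; the quadratic H is a toy placeholder, not the papers' learned energy function.

```python
# Hedged sketch: one leapfrog step under a (placeholder) Hamiltonian, in the
# spirit of Hamiltonian generative models; not the papers' architecture.
import torch

def hamiltonian(q, p):
    # Toy separable Hamiltonian H(q, p) = potential(q) + kinetic(p).
    return 0.5 * (q ** 2).sum(-1) + 0.5 * (p ** 2).sum(-1)

def grads(q, p):
    # dH/dq and dH/dp via autograd, so any differentiable H could be plugged in.
    q = q.detach().requires_grad_(True)
    p = p.detach().requires_grad_(True)
    dHdq, dHdp = torch.autograd.grad(hamiltonian(q, p).sum(), (q, p))
    return dHdq, dHdp

def leapfrog_step(q, p, dt=0.1):
    # Symplectic leapfrog: half step in p, full step in q, half step in p.
    dHdq, _ = grads(q, p)
    p_half = p - 0.5 * dt * dHdq
    _, dHdp = grads(q, p_half)
    q_next = q + dt * dHdp
    dHdq_next, _ = grads(q_next, p_half)
    p_next = p_half - 0.5 * dt * dHdq_next
    return q_next, p_next

q, p = torch.randn(4, 2), torch.randn(4, 2)
for _ in range(10):
    q, p = leapfrog_step(q, p)
print(hamiltonian(q, p))  # approximately conserved along the trajectory
```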
Read, Attend and Comment: A Deep Architecture for Automatic News Comment Generation
Because “Automatic news comment generation is beneficial for real applications but has not attracted enough attention from the research community” ... 🤔 https://t.co/XgKOUkNZa8 https://t.co/SQJwRMubxm pic.twitter.com/4PthVEmyfW
— hardmaru (@hardmaru) September 28, 2019