The model tries to generate consistent fonts in a vector SVG format, conditioned on a given latent style vector: pic.twitter.com/5JJfy8HkAt
— hardmaru (@hardmaru) April 5, 2019
Unsupervised Learning by Competing Hidden Units
— hardmaru (@hardmaru) April 4, 2019
By Dmitry Krotov and John Hopfield
Paper: https://t.co/AImIkd9jXi https://t.co/XMwncvXaVu
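To give a flavor of what "learning by competing hidden units" means, here is a minimal numpy sketch of classic winner-take-all competitive learning on synthetic data. This is a simplification, not Krotov & Hopfield's exact rule (theirs also pushes a lower-ranked unit *away* from the input), but the core idea of replacing backprop with local, competition-driven updates is the same.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: 1000 unit-norm input vectors of dimension 64.
X = rng.standard_normal((1000, 64))
X /= np.linalg.norm(X, axis=1, keepdims=True)

n_hidden, lr = 16, 0.05
W = rng.standard_normal((n_hidden, 64)) * 0.1

# Winner-take-all competitive learning: only the hidden unit whose
# weight vector best matches the input updates, pulling its weights
# toward that input. The update is purely local -- no backprop.
for x in X:
    winner = np.argmax(W @ x)          # unit with the largest input current
    W[winner] += lr * (x - W[winner])  # local Hebbian-style update
```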
The good folk at @TeamUpturn have done extensive research and concluded that Facebook can deliver ads to audiences skewed by race and gender **even when advertisers target large, inclusive audiences**: https://t.co/mIkZ9IQdNE
— Ariana Tobin (@Ariana_Tobin) April 4, 2019
Word embeddings quantify 100 years of gender and ethnic stereotypes
— Rachel Thomas (@math_rachel) April 2, 2019
paper: https://t.co/1Wk7kSINpg
code: https://t.co/nBi4WkmG4p
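The paper's core measurement is a relative-similarity bias score between a word (say, an occupation) and two groups of attribute words, computed per decade of embeddings. A hedged sketch of one such score, with random vectors standing in for real embeddings:

```python
import numpy as np

rng = np.random.default_rng(0)

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def bias_score(word, female_words, male_words):
    # Positive = closer on average to the female attribute group,
    # negative = closer to the male group.
    f = np.mean([cosine(word, v) for v in female_words])
    m = np.mean([cosine(word, v) for v in male_words])
    return f - m

# Hypothetical stand-ins for real embeddings of e.g. "nurse",
# {"she", "her", "woman"} and {"he", "him", "man"}.
occupation = rng.standard_normal(300)
female = rng.standard_normal((3, 300))
male = rng.standard_normal((3, 300))
print(bias_score(occupation, female, male))
```

Tracking this score across embeddings trained on different decades of text is what lets the paper quantify how stereotypes shifted over 100 years.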
Reducing BERT Pre-Training Time from 3 Days to 76 Minutes
— Thomas Lahore (@evolvingstuff) April 2, 2019
"we propose the LAMB optimizer, which helps us to scale the batch size to 65536 without losing accuracy"https://t.co/WRRAh7zQFC
"Yet Another Accelerated SGD: ResNet-50 Training on ImageNet in 74.7 seconds," Yamazaki et al.: https://t.co/eqY8aY7zRb
— Miles Brundage (@Miles_Brundage) April 1, 2019
At the same time, people also found that we can simply scale up evolution to solve for millions of weights of a modern neural net, without using backprop. This will be immensely useful if we want to use large nets for non-differentiable problems.
— hardmaru (@hardmaru) March 31, 2019
Deep GA: https://t.co/3jTmnubzdi pic.twitter.com/65kC47qQC4
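What makes the Deep GA result striking is how simple the algorithm is: truncation selection plus Gaussian mutation over the full weight vector, with no crossover. A toy sketch with a placeholder fitness function (in the real setting, fitness would be an episode return from running a neural net policy with weights `theta`):

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(theta):
    # Placeholder objective standing in for an RL episode return.
    return -np.sum(theta ** 2)

n_params, pop_size, n_parents, sigma = 1000, 64, 8, 0.02
pop = [rng.standard_normal(n_params) for _ in range(pop_size)]

for gen in range(100):
    scored = sorted(pop, key=fitness, reverse=True)
    parents = scored[:n_parents]
    # Truncation selection + Gaussian mutation, no crossover; the best
    # individual (the "elite") survives unchanged each generation.
    pop = [parents[0]] + [
        parents[rng.integers(n_parents)] + sigma * rng.standard_normal(n_params)
        for _ in range(pop_size - 1)
    ]
```

Because fitness evaluation needs only a forward pass, the whole loop parallelizes trivially and never touches a gradient, which is exactly why it works on non-differentiable problems.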
Human Performance on SQuAD2.0 has now been surpassed! Been in the making for the last year, but finally achieved! Congratulations Joint Laboratory of HIT and iFLYTEK Research on this milestone! @KCrosner @stanfordnlp pic.twitter.com/MyVU3faDhL
— Pranav Rajpurkar (@pranavrajpurkar) March 27, 2019
"SciBERT: Pretrained Contextualized Embeddings for Scientific Text," Beltagy et al.: https://t.co/WNR8pR99y7
— Miles Brundage (@Miles_Brundage) March 27, 2019
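The SciBERT checkpoints were published by AllenAI and are available on the Hugging Face hub; assuming the `transformers` library (not the authors' original release), extracting contextual embeddings for scientific text looks roughly like this:

```python
from transformers import AutoModel, AutoTokenizer

# Model id as published by AllenAI on the Hugging Face hub.
tok = AutoTokenizer.from_pretrained("allenai/scibert_scivocab_uncased")
model = AutoModel.from_pretrained("allenai/scibert_scivocab_uncased")

inputs = tok("The thermal conductivity of graphene is anomalous.",
             return_tensors="pt")
embeddings = model(**inputs).last_hidden_state  # contextual token vectors
```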
Preprint: On the consistency of supervised learning with missing values https://t.co/BPOTyK9JIJ
— Gael Varoquaux (@GaelVaroquaux) March 26, 2019
• Consistency of mean imputation
• Adapting imputation procedures & formulations to out-of-sample prediction
• Study of missing-data procedures in trees pic.twitter.com/qaaKf7nwJO
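The impute-then-predict pipelines the paper analyzes look like the following scikit-learn sketch (synthetic data with values missing completely at random). The point about out-of-sample prediction is that the imputer must be fit on training data only, so the same fill-in values are reused at prediction time:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.impute import SimpleImputer
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 5))
X[rng.random(X.shape) < 0.2] = np.nan     # ~20% of values missing
y = np.nansum(X, axis=1)

# Mean imputation followed by a tree ensemble; the paper studies when
# this kind of two-stage procedure is consistent.
model = make_pipeline(SimpleImputer(strategy="mean"),
                      RandomForestRegressor(n_estimators=50, random_state=0))
model.fit(X[:400], y[:400])
print(model.score(X[400:], y[400:]))
```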
Current methods for debiasing word embeddings are insufficient. They hide bias, rather than removing it, and a lot of the bias information can be recovered. https://t.co/s7RUtYDSnX pic.twitter.com/Q9h71vsnof
— Rachel Thomas (@math_rachel) March 25, 2019
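The debiasing being critiqued is the projection-based kind: remove each word vector's component along a single learned bias direction. A toy sketch with random vectors as hypothetical embeddings makes the limitation visible; the projection zeroes similarity to that one direction, but the relative geometry among previously biased words survives, which is why classifiers can still recover the bias:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy vectors standing in for real word embeddings,
# with a shared component along a learned "gender direction" g.
g = rng.standard_normal(50)
words = rng.standard_normal((100, 50)) + np.outer(rng.standard_normal(100), g)

def project_out(W, g):
    # "Hard debiasing": remove each vector's component along the
    # (unit-norm) bias direction.
    u = g / np.linalg.norm(g)
    return W - np.outer(W @ u, u)

debiased = project_out(words, g)
# Similarity to the direction itself is now exactly zero...
print(np.allclose(debiased @ (g / np.linalg.norm(g)), 0, atol=1e-8))
# ...but distances among the debiased words are largely unchanged,
# so bias information remains recoverable from neighborhood structure.
```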
Visualizing memorization in RNNs — A new Distill article by @andreas_madsen. https://t.co/urqxBDxiME
— distillpub (@distillpub) March 25, 2019
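One way to visualize this kind of memorization is a gradient-based connectivity measure between an output step and every input position. A hedged PyTorch sketch in that spirit (the article's exact definition may differ):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
emb = nn.Embedding(100, 16)
rnn = nn.LSTM(16, 32, batch_first=True)

tokens = torch.randint(0, 100, (1, 20))
x = emb(tokens)
x.retain_grad()          # keep gradients on the non-leaf embeddings
out, _ = rnn(x)

# How strongly does the final output depend on each input position?
# Large gradient norms at distant timesteps indicate long-range memory.
out[0, -1].sum().backward()
connectivity = x.grad.norm(dim=-1).squeeze()   # one value per timestep
print(connectivity)
```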