We're living in a cyberpunk future:
“Fooling automated surveillance cameras: adversarial patches to attack person detection” https://t.co/xiPDXa6aMy pic.twitter.com/bvDGnH4jMh
— hardmaru (@hardmaru) April 22, 2019
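The paper's core trick is to optimize a printable patch that suppresses a person detector's output. A minimal sketch of that optimization loop in PyTorch, where `loader`, `detector`, and `person_score` are assumed stand-ins for a dataset of images containing people, a frozen detector such as YOLOv2, and a function pulling out its person-class confidence:

```python
# Hedged sketch of adversarial-patch training: learn patch pixels that lower
# a frozen detector's person confidence when pasted onto the image.
# `loader`, `detector`, `person_score` are assumed stand-ins, not real APIs.
import torch

patch = torch.rand(3, 64, 64, requires_grad=True)  # learnable patch pixels
opt = torch.optim.Adam([patch], lr=0.03)

for images in loader:                               # batches with people in them
    patched = images.clone()
    patched[:, :, 100:164, 100:164] = patch.clamp(0, 1)  # paste patch (fixed spot)
    loss = person_score(detector(patched)).mean()   # detection confidence
    opt.zero_grad()
    loss.backward()                                 # gradients flow to the patch
    opt.step()
```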
Neural Painters: A learned differentiable constraint for generating brushstroke paintings. https://t.co/TkcMO3oNj8 pic.twitter.com/ab2RnSxnrP
— arxiv (@arxiv_org) April 19, 2019
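The "differentiable constraint" here is a network trained to imitate a painting program, so pixel losses can be backpropagated into brushstroke parameters. A rough sketch under assumed names (`NeuralPainter`, `n_strokes`, `action_dim`, and `target_image` are all placeholders, not the paper's code):

```python
# Hedged sketch: optimize brushstroke action vectors through a frozen,
# pretrained painter network (stroke compositing elided for brevity).
import torch
import torch.nn.functional as F

painter = NeuralPainter().eval()        # assumed pretrained: actions -> canvas
for p in painter.parameters():
    p.requires_grad_(False)             # the painter is a fixed constraint

actions = torch.randn(n_strokes, action_dim, requires_grad=True)
opt = torch.optim.Adam([actions], lr=0.01)

for step in range(1000):
    canvas = painter(actions)           # differentiable rendering
    loss = F.mse_loss(canvas, target_image)
    opt.zero_grad()
    loss.backward()                     # gradients reach the stroke actions
    opt.step()
```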
Need to fine-tune your #neuralnetwork to a specific task? Check out MorphNet, an open-source technique for neural network model refinement that takes an existing network as input and produces a new one that is smaller and faster with better performance. https://t.co/9fTMuHtlWh
— Google AI (@GoogleAI) April 17, 2019
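MorphNet's shrinking phase trains with a resource-weighted sparsity penalty so that unneeded channels can be pruned. A rough sketch of that idea, not Google's released code; the `layer_flops` mapping and the weight `lam` are assumptions:

```python
# Hedged sketch of a MorphNet-style FLOP regularizer: L1 on batch-norm scale
# factors, weighted by the FLOPs each channel gates, drives cheap-to-remove
# channels toward zero so they can be pruned after training.
import torch.nn as nn

def flop_regularizer(model: nn.Module, layer_flops: dict, lam: float = 1e-8):
    """layer_flops: assumed dict, BN layer name -> FLOPs gated by that layer."""
    reg = 0.0
    for name, m in model.named_modules():
        if isinstance(m, nn.BatchNorm2d):
            reg = reg + layer_flops[name] * m.weight.abs().sum()
    return lam * reg

# during training (task_loss assumed):
# loss = task_loss + flop_regularizer(model, layer_flops)
```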
🚨 NEW RESEARCH ALERT: The diversity crisis in AI has hit a moment of reckoning: the call is coming from inside the house. Study led by @sarahbmyers shows lack of workforce diversity and bias in AI systems are connected. Read here: https://t.co/GXsrgVftOH #AIdiversitycrisis
— Kate Crawford (@katecrawford) April 17, 2019
A PyTorch implementation of "Splitter: Learning Node Representations that Capture Multiple Social Contexts" (WWW 2019). https://t.co/17eVpoWT69 #deeplearning #machinelearning #ml #ai #neuralnetworks #datascience #pytorch
— PyTorch Best Practices (@PyTorchPractice) April 16, 2019
PyTorch implementation of Block Neural Autoregressive Flow https://t.co/6KGOSwLt9N #deeplearning #machinelearning #ml #ai #neuralnetworks #datascience #pytorch
— PyTorch Best Practices (@PyTorchPractice) April 16, 2019
OctConv is a simple replacement for the traditional convolution operation that gets better accuracy with fewer FLOPs https://t.co/5CSylHVdA2 pic.twitter.com/kTK96gNj1i
— Ian Goodfellow (@goodfellow_ian) April 15, 2019
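The idea is to split feature maps into a full-resolution "high-frequency" part and a half-resolution "low-frequency" part that exchange information through four convolution paths. A simplified sketch based on the paper's description (real implementations also handle the alpha=0/1 boundary layers, strides, bias, etc.):

```python
# Hedged sketch of octave convolution: high-freq features stay at full
# resolution, low-freq features live at half resolution, and four conv paths
# (H->H, H->L, L->H, L->L) mix the two, cutting FLOPs on the low branch.
import torch
import torch.nn as nn
import torch.nn.functional as F

class OctConv(nn.Module):
    def __init__(self, in_ch, out_ch, k=3, alpha=0.5):
        super().__init__()
        lo_in, lo_out = int(alpha * in_ch), int(alpha * out_ch)
        hi_in, hi_out = in_ch - lo_in, out_ch - lo_out
        p = k // 2
        self.hh = nn.Conv2d(hi_in, hi_out, k, padding=p)  # high -> high
        self.hl = nn.Conv2d(hi_in, lo_out, k, padding=p)  # high -> low
        self.lh = nn.Conv2d(lo_in, hi_out, k, padding=p)  # low -> high
        self.ll = nn.Conv2d(lo_in, lo_out, k, padding=p)  # low -> low

    def forward(self, x_h, x_l):  # x_l is at half the resolution of x_h
        y_h = self.hh(x_h) + F.interpolate(self.lh(x_l), scale_factor=2)
        y_l = self.ll(x_l) + self.hl(F.avg_pool2d(x_h, 2))
        return y_h, y_l

# e.g.: OctConv(64, 64)(torch.randn(1, 32, 32, 32), torch.randn(1, 32, 16, 16))
```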
Kaiming He's original residual network results in 2015 have not been reproduced, not even by Kaiming He himself. https://t.co/piSvPx9nDz
— /MachineLearning (@slashML) April 12, 2019
This post by the Snorkel team gives a great overview of ingredients that make up a state-of-the-art approach on GLUE:
1) Traditional supervision
2) Transfer learning
3) Multi-task learning
4) Dataset slicing (motivated by error analysis)
5) Ensembling https://t.co/AcVTfjdiPF pic.twitter.com/3i6gu6X88w
— Sebastian Ruder (@seb_ruder) April 11, 2019
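To make ingredient 3 in the list above concrete: a minimal multi-task setup shares one encoder across GLUE tasks and gives each task its own classification head. `encoder` here is an assumed module returning a pooled `[batch, hidden_dim]` representation, not any particular library API:

```python
# Hedged sketch of multi-task learning: shared encoder, one linear head per
# task, so every task's gradients update the shared encoder weights.
import torch
import torch.nn as nn

class MultiTaskModel(nn.Module):
    def __init__(self, encoder: nn.Module, hidden_dim: int, task_num_labels: dict):
        super().__init__()
        self.encoder = encoder  # assumed: inputs -> [batch, hidden_dim]
        self.heads = nn.ModuleDict({
            task: nn.Linear(hidden_dim, n) for task, n in task_num_labels.items()
        })

    def forward(self, inputs, task: str):
        return self.heads[task](self.encoder(inputs))

# e.g. model = MultiTaskModel(encoder, 768, {"mnli": 3, "sst2": 2, "qqp": 2})
```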
From Variational to Deterministic Autoencoders
They show that simple regularized autoencoders can also learn a smooth, meaningful latent space without having to force the bottleneck code to conform to some arbitrarily chosen prior (i.e., a Gaussian for VAEs). https://t.co/rsmiXk8Vw6
— hardmaru (@hardmaru) April 10, 2019
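The regularized-autoencoder recipe the tweet summarizes replaces the VAE's KL-to-prior term with plain deterministic regularizers. A sketch of one such variant, an L2 penalty on the codes plus decoder weight decay; the weights `beta` and `lam` are placeholder values:

```python
# Hedged sketch of a regularized (deterministic) autoencoder loss: no
# sampling and no KL to a fixed prior; smoothness of the latent space comes
# from code and decoder regularization instead.
import torch
import torch.nn.functional as F

def rae_loss(encoder, decoder, x, beta=1e-4, lam=1e-6):
    z = encoder(x)                        # deterministic bottleneck code
    recon = F.mse_loss(decoder(z), x)     # reconstruction term
    z_reg = beta * z.pow(2).sum(dim=1).mean()                      # compact codes
    dec_reg = lam * sum(p.pow(2).sum() for p in decoder.parameters())
    return recon + z_reg + dec_reg
```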
Open Questions about Generative Adversarial Networks — A new Distill article by @gstsdn. https://t.co/eD5ShfiZAR
— distillpub (@distillpub) April 9, 2019
Unsupervised RNNGs (https://t.co/p80QaM7Rt6 from Yoon + DeepMind #naacl2019).
Learn a language model while jointly inducing syntactic structure. Exploits a constrained model (a CFG-CRF) as the posterior approximation in AVI.
(Neat to see grammar induction return 🌴, e.g. PRPN, DIORA, ...) pic.twitter.com/CHzLyGtmu1
— harvardnlp (@harvardnlp) April 9, 2019
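The paper's inference network is a CRF over parse trees; as a very rough, generic stand-in, here is the amortized variational inference objective it plugs into, shown with a simple categorical latent and a score-function gradient estimator. Everything in this sketch is an illustrative assumption, not the paper's model:

```python
# Hedged sketch of the generic AVI objective: maximize a Monte Carlo estimate
# of ELBO = E_q[log p(x, z)] + H(q). A REINFORCE surrogate carries gradients
# through the discrete sample back to the posterior q.
import torch

def elbo_loss(log_joint, q_logits):
    """q_logits: [batch, K] scores over K candidate latent structures (assumed).
    log_joint: assumed callable, z -> log p(x, z) per batch element."""
    q = torch.distributions.Categorical(logits=q_logits)
    z = q.sample()                      # discrete latent (a tree, in the paper)
    logp = log_joint(z)                 # generative model term
    elbo = logp + q.entropy()           # single-sample ELBO estimate
    # score-function surrogate: its gradient matches the ELBO gradient w.r.t. q
    surrogate = q.log_prob(z) * logp.detach() + elbo
    return -surrogate.mean()
```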