Summary of the paper: https://t.co/dZlIAyYIC0
— hardmaru (@hardmaru) December 8, 2018
This work will be presented at the #NeurIPS2018 Workshop on Machine Learning for Creativity and Design by @tkasasagi, in collaboration with @mikb0b, @kitamotoasanobu, Alex Lamb, Kazuaki Yamamoto, and myself. Here are more details about the dataset:
— hardmaru (@hardmaru) December 8, 2018
https://t.co/RJKPZvh5ZA
Our work on “Deep Learning for Classical Japanese Literature” is out!
— hardmaru (@hardmaru) December 8, 2018
We introduce Kuzushiji-MNIST, a drop-in replacement for MNIST, plus 2 other datasets. In this work, we also try more interesting tasks like domain transfer from old Kanji to new Kanji. https://t.co/qmizR9KF1O pic.twitter.com/ELdDDUmEE2
High-resolution PDF available here: https://t.co/5OmlGx91qR pic.twitter.com/tn3KLXZg1S
— Dustin Tran (@dustinvtran) December 7, 2018
code2vec: Learning Distributed Representations of Code #POPL2019
— ML Review (@ml_review) December 6, 2018
By @urialon1, @omerlevy_, @yahave
Demo: https://t.co/OvpSLOtTCp
ArXiv: https://t.co/ZQsImneOsg
GitHub: https://t.co/ylECFYOhwn pic.twitter.com/30qn1hUvTW
I will give a talk next week at "The 2nd International Workshop on Big Data Analytic for Cyber Crime Investigation" in Seattle on the application of #DeepLearning to #forensics based on a solution to a @kaggle challenge.
— Vladimir Iglovikov (@viglovikov) December 6, 2018
The written version of it: https://t.co/L6KiNOi5NZ pic.twitter.com/5t1StP11pF
#NeurIPS2018 Online Videos: https://t.co/5GLsS3c1Dq pic.twitter.com/wjpNun2TYf
— ML Review (@ml_review) December 5, 2018
IntroVAE: Introspective Variational Autoencoders for Photographic Image Synthesis. They propose a clever way to combine VAE+GAN by reusing the encoder of the VAE as the discriminator, allowing VAE to generate hi-res images. Found this gem at #NeurIPS2018. https://t.co/WQl4DdLyyk pic.twitter.com/hrOpPsJOwT
— hardmaru (@hardmaru) December 4, 2018
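The core trick in IntroVAE is that the encoder doubles as the discriminator: the KL term of real samples is pushed down while the KL of generated samples is pushed up to a margin, and the generator is trained against that signal. A minimal numpy sketch of the loss structure, with hypothetical margin `m` and weight `alpha` values (the paper's actual objective also includes reconstruction-sample terms and specific hyperparameters not shown here):

```python
import numpy as np

def kl_divergence(mu, logvar):
    # KL(q(z|x) || N(0, I)) for a diagonal Gaussian, averaged over the batch
    return float(np.mean(0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar, axis=1)))

def introvae_losses(kl_real, kl_fake, recon, m=10.0, alpha=0.25):
    # Encoder-as-discriminator: keep KL of real codes low, and push the KL
    # of generated codes up until it exceeds the margin m (hinge term).
    loss_encoder = kl_real + alpha * max(0.0, m - kl_fake)
    # Generator tries to fool the encoder by making generated codes look
    # "real" (low KL), plus the usual VAE reconstruction term.
    loss_generator = alpha * kl_fake + recon
    return loss_encoder, loss_generator
```

Once the fake KL clears the margin, the hinge term vanishes and the encoder behaves like a plain VAE encoder, which is what lets the two objectives coexist in one network.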
Imitation by watching YouTube: learning features from YouTube videos through self-supervision allows us to solve hard exploration games in Atari.
— DeepMind (@DeepMindAI) December 4, 2018
Paper: https://t.co/8iZr3OUucZ
Video: https://t.co/mvOfxcZm6B
Spotlight talk: Wed 4:20pm, 220CD
Poster session: Wed 5-7pm, 210
Bilinear Attention Networks #NIPS2018
— ML Review (@ml_review) December 4, 2018
By @jnhwkim
Extends the idea of co-attention into bilinear attention, which considers every pair of multimodal channels. Achieves SoTA on VQA 2.0 and Flickr30k. https://t.co/KUV7xx4sdR pic.twitter.com/FpKTONS3O8
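"Every pair of multimodal channels" means the attention map is scored over all (image channel, text channel) pairs via a bilinear form. A toy numpy sketch of that pairwise scoring, with hypothetical shapes; the actual model uses a low-rank factorization of `W` and multiple glimpses rather than a dense matrix:

```python
import numpy as np

def softmax(z):
    z = z - z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def bilinear_attention(X, Y, W):
    """X: (n_x, d) image channel features, Y: (n_y, d) text channel features,
    W: (d, d) learned bilinear weight. Returns an attention map over every
    (image, text) channel pair and an attended joint feature vector."""
    scores = X @ W @ Y.T                                # (n_x, n_y) pairwise scores
    A = softmax(scores.ravel()).reshape(scores.shape)   # normalize over all pairs
    # Attended joint feature: weighted sum of elementwise pair interactions
    joint = np.zeros(X.shape[1])
    for i in range(X.shape[0]):
        for j in range(Y.shape[0]):
            joint += A[i, j] * (X[i] * Y[j])
    return A, joint
```

The dense `W` here costs O(d^2) parameters per attention map, which is exactly why the paper factorizes the bilinear form into low-rank projections.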
Proteins are essential to life. Predicting their 3D structure is a major unsolved challenge in biology and could impact disease understanding and drug discovery. I’m excited to announce that we have won the CASP13 protein folding competition! #AlphaFold https://t.co/jGXR3e0lfh
— Demis Hassabis (@demishassabis) December 3, 2018
Advances in few-shot learning: reproducing results in PyTorch by Oscar Knagg https://t.co/U4g0oTwn6J #deeplearning #machinelearning #ml #ai #neuralnetworks #datascience #pytorch
— PyTorch Best Practices (@PyTorchPractice) December 1, 2018