A lot more interesting papers at #KDD2018. Full list with downloadable [PDF]'s here: https://t.co/FxTlxVM5Lt
— Xavier 🎗🤖🏃 (@xamat) August 20, 2018
Attention Is All You Need, but CNNs and RNNs help.
— hardmaru (@hardmaru) August 20, 2018
CNN: https://t.co/b7UJ5bEDWu
RNN: https://t.co/m0YUVX1ySM
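The attention mechanism behind that paper reduces to a single formula, Attention(Q, K, V) = softmax(QKᵀ/√d_k)V. Below is a minimal numpy sketch of scaled dot-product attention; the shapes and toy data are purely illustrative.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                    # (n_q, n_k) similarity logits
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # row-wise softmax
    return weights @ V                                 # weighted sum of value vectors

# Toy example: 3 queries attending over 4 key/value pairs of width 8.
rng = np.random.default_rng(0)
Q, K, V = rng.normal(size=(3, 8)), rng.normal(size=(4, 8)), rng.normal(size=(4, 8))
print(scaled_dot_product_attention(Q, K, V).shape)  # (3, 8)
```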
FFT-accelerated Interpolation-based t-SNE
— ML Review (@ml_review) August 19, 2018
By @GCLinderman
The most time-consuming step of t-SNE is accelerated by interpolating onto an equispaced grid and subsequently using the FFT to perform the convolution.
Paper: https://t.co/aCL3Fmr0cn
GitHub: https://t.co/V4FTDdxPK1 pic.twitter.com/zxNxvmxQDi
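The core trick is classical: kernel sums over all pairs of points become a convolution once the points sit on an equispaced grid, and grid convolutions are fast with the FFT. Here is a minimal 1-D numpy sketch of that idea; note that FIt-SNE uses higher-order polynomial interpolation and works in 2-D, while this sketch just bins each point to its nearest grid node for brevity.

```python
import numpy as np

def fft_kernel_sums(points, values, n_grid=256):
    """Approximate s_i = sum_j K(x_i - x_j) v_j for the Cauchy kernel
    K(d) = 1/(1 + d^2) that appears in t-SNE's repulsive forces, in 1-D.
    Deposit values onto an equispaced grid, convolve with the kernel via
    FFT, then read the sums back at each point's grid node."""
    lo = points.min()
    h = (points.max() - lo) / (n_grid - 1)
    idx = np.rint((points - lo) / h).astype(int)   # nearest grid node (a simplification)
    charge = np.zeros(n_grid)
    np.add.at(charge, idx, values)                 # bin values onto the grid
    kern = 1.0 / (1.0 + (np.arange(-(n_grid - 1), n_grid) * h) ** 2)
    m = n_grid + len(kern) - 1                     # zero-pad: linear, not circular, convolution
    full = np.fft.irfft(np.fft.rfft(charge, m) * np.fft.rfft(kern, m), m)
    return full[n_grid - 1 : 2 * n_grid - 1][idx]  # grid sums, sampled at the points

# Rough sanity check against the exact O(N^2) sums for the first 100 points.
x = np.random.randn(10_000)
approx = fft_kernel_sums(x, np.ones_like(x))
exact = (1.0 / (1.0 + (x[:100, None] - x[None, :]) ** 2)).sum(axis=1)
print(np.abs(approx[:100] - exact).max() / exact.max())  # small, shrinks as n_grid grows
```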
Another paper out this week looking at the application of deep learning (RNN/LSTM) to activity prediction with tracking data in hockey and basketball. https://t.co/P5dXpE60ui pic.twitter.com/PzinnOMvUy
— Luke Bornn (@LukeBornn) August 18, 2018
Recycle-GAN: Unsupervised Video Retargeting
— ML Review (@ml_review) August 18, 2018
By @aayushbansal
Demonstrates the advantages of spatiotemporal constraints over spatial constraints for image-to-labels and labels-to-image translation. https://t.co/jimO74dQDc #MachineLearning pic.twitter.com/gkhU2dqnq3
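The spatiotemporal constraint is a "recycle" consistency: translate frames into the other domain, predict the next frame there with a temporal model, map that prediction back, and require it to match the true next frame. The sketch below is an assumed simplification (a two-frame predictor window; G_XY, G_YX, P_Y are placeholder modules), not the authors' architecture.

```python
import torch

def recycle_loss(x_seq, G_XY, G_YX, P_Y):
    """Sketch of Recycle-GAN-style spatiotemporal consistency. x_seq is a
    (T, C, H, W) clip; the predictor here sees only the last two translated
    frames, which is an assumption, not the paper's exact windowing."""
    y = torch.stack([G_XY(x) for x in x_seq])             # translate every frame to Y
    loss = x_seq.new_zeros(())
    for t in range(1, x_seq.shape[0] - 1):
        y_next = P_Y(torch.cat([y[t - 1], y[t]], dim=0))  # predict the next Y frame
        x_back = G_YX(y_next)                             # map the prediction back to X
        loss = loss + ((x_seq[t + 1] - x_back) ** 2).mean()
    return loss

# Toy stand-ins so the sketch runs: identity "generators" and a predictor
# that averages the two frames it sees (purely illustrative).
G = lambda x: x
P = lambda y2: 0.5 * (y2[:3] + y2[3:])
x_seq = torch.randn(5, 3, 8, 8)
print(recycle_loss(x_seq, G, G, P))
```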
Deep Convolutional Networks as shallow Gaussian Processes: https://t.co/oyoBqNJyfE
— fastml extra (@fastml_extra) August 17, 2018
Code: https://t.co/6x3Ru5TVLS
0.84% error on MNIST! :P
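The paper's contribution is a kernel equivalent to an infinitely wide CNN; prediction itself is then ordinary GP regression, with classification handled as regression onto one-hot targets (the usual NNGP-style setup). A small numpy sketch of that pipeline, with a generic RBF kernel standing in for the CNN-equivalent kernel so the mechanics are runnable:

```python
import numpy as np

def gp_predict(X_train, y_train, X_test, kernel, noise=1e-3):
    """Standard GP regression posterior mean: K_* (K + sigma^2 I)^{-1} y."""
    K = kernel(X_train, X_train) + noise * np.eye(len(X_train))
    K_star = kernel(X_test, X_train)
    return K_star @ np.linalg.solve(K, y_train)

def rbf(A, B, lengthscale=1.0):
    # Placeholder kernel; the paper derives the CNN-equivalent kernel instead.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

# Classification as one-hot regression: predict class scores, take the argmax.
rng = np.random.default_rng(1)
X = rng.normal(size=(50, 4))
labels = (X[:, 0] > 0).astype(int)
Y = np.eye(2)[labels]
scores = gp_predict(X, Y, X, rbf)
print((scores.argmax(1) == labels).mean())  # train accuracy of the toy sketch
```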
Neural nets are terrible at arithmetic & counting. If you train one on the numbers 1 to 10, it will do okay on 3 + 5 but fail miserably on 1000 + 3000. Resolving this, "Neural Arithmetic Logic Units" can track time, do arithmetic on images of numbers, & extrapolate. https://t.co/15SSRpjMnR pic.twitter.com/Abw2qRnu5I
— Reza Zadeh (@Reza_Zadeh) August 17, 2018
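The unit itself is compact: a neural accumulator (NAC) whose weights are pushed toward {-1, 0, 1} handles addition and subtraction, the same weights applied in log-space handle multiplication and division, and a learned gate mixes the two paths. A PyTorch sketch following the paper's formulas (the initialization here is arbitrary):

```python
import torch
import torch.nn as nn

class NALU(nn.Module):
    """Neural Arithmetic Logic Unit (Trask et al., 2018)."""
    def __init__(self, in_dim, out_dim, eps=1e-8):
        super().__init__()
        self.W_hat = nn.Parameter(torch.empty(out_dim, in_dim).uniform_(-0.1, 0.1))
        self.M_hat = nn.Parameter(torch.empty(out_dim, in_dim).uniform_(-0.1, 0.1))
        self.G = nn.Parameter(torch.empty(out_dim, in_dim).uniform_(-0.1, 0.1))
        self.eps = eps

    def forward(self, x):
        # W = tanh(W_hat) * sigmoid(M_hat) biases weights toward {-1, 0, 1}.
        W = torch.tanh(self.W_hat) * torch.sigmoid(self.M_hat)
        a = x @ W.t()                                              # additive path
        m = torch.exp(torch.log(torch.abs(x) + self.eps) @ W.t())  # multiplicative path
        g = torch.sigmoid(x @ self.G.t())                          # learned gate
        return g * a + (1 - g) * m

cell = NALU(2, 1)
print(cell(torch.tensor([[1000.0, 3000.0]])).shape)  # torch.Size([1, 1])
```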
PinSage: A New Graph Convolutional Neural Network for Web-Scale Recommender Systems - GCNs for network data made it from small academic benchmarks (https://t.co/DjT738oGI3) to a web-scale industry application in less than two years' time https://t.co/iiytbfATEF
— Thomas Kipf (@thomaskipf) August 16, 2018
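For reference, the benchmark-scale starting point was the renormalized GCN propagation rule H' = σ(D̂^{-1/2}(A + I)D̂^{-1/2} H W); PinSage scales the same neighborhood-aggregation idea with sampled neighborhoods instead of dense matrices. A dense numpy sketch of one layer, for illustration only:

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution layer with the renormalization trick of
    Kipf & Welling (2017): add self-loops, symmetrically normalize by
    degree, aggregate neighbor features, transform, apply ReLU."""
    A_hat = A + np.eye(A.shape[0])                              # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]  # D^-1/2 A_hat D^-1/2
    return np.maximum(A_norm @ H @ W, 0.0)                      # aggregate, transform, ReLU

# Toy graph: 4 nodes on a path, 3-dim input features, 2 output channels.
A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], float)
H = np.random.randn(4, 3)
W = np.random.randn(3, 2)
print(gcn_layer(A, H, W).shape)  # (4, 2)
```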
Top-Down Tree Structured Text Generation. https://t.co/4E44PVF5kX pic.twitter.com/79LvEvGOwu
— arxiv (@arxiv_org) August 16, 2018
Deep EHR: Chronic Disease Prediction Using Medical Notes from @narges_razavian, Jingshu Liu and Zachariah Zhang at NYU
— PyTorch (@PyTorch) August 16, 2018
Code & Tool: https://t.co/jEJ4aDfHhI
Read the paper at: https://t.co/uycxypYmdo pic.twitter.com/NDqccr2cYW
Evaluating generative models by estimating a latent skill variable that explains their performance seems like a promising new research direction https://t.co/MyCT97j5G7
— Ian Goodfellow (@goodfellow_ian) August 16, 2018
New paper with @dkaushik96 is up: ***How Much Reading Does Reading Comprehension Require? A Critical Investigation of Popular Benchmarks*** We establish sensible RC baselines, finding question- and passage-only models perform surprisingly well. https://t.co/aaILTfu8WU
— Zachary Lipton (@zacharylipton) August 16, 2018