Neat 📄 from #EuroVis: "Exploring Interactive Linking Between Text and Visualization" ✒️ @beck_fabian & co. https://t.co/tcvXcJvyCv #dataviz #infovis pic.twitter.com/71kWt6zdOh
— Mara Averick (@dataandme) June 6, 2018
Excited to share our recent work showing that pooling is neither necessary nor sufficient for appropriate deformation stability in CNNs! Rather, filter smoothness most directly modulates deformation stability in networks both *with* and *without* pooling. https://t.co/nJvvyGenV0 pic.twitter.com/UAmuXbVXbZ
— Ari Morcos (@arimorcos) June 6, 2018
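The paper's claim suggests a simple empirical probe: compare a network's features on an image with its features on a slightly deformed copy. A minimal sketch of that measurement, assuming only NumPy and a hypothetical `features_fn` standing in for any CNN's feature extractor (none of these names come from the paper):

```python
import numpy as np

def deform(img, max_shift=2, seed=0):
    """A toy 'deformation': a small random translation of the image."""
    rng = np.random.default_rng(seed)
    dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)
    return np.roll(np.roll(img, dy, axis=0), dx, axis=1)

def deformation_sensitivity(features_fn, img):
    """Relative change in a network's features under a small deformation.
    `features_fn` is a placeholder mapping an image to a flat feature
    vector; lower values mean a more deformation-stable representation."""
    f, f_def = features_fn(img), features_fn(deform(img))
    return np.linalg.norm(f - f_def) / np.linalg.norm(f)
```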
"Deep Video Networks" tee-up a future where everyone can fake everyone else. We're about to go through the looking glass in terms of trust in the information space and I don't think anyone understands the ramifications. - Read more in Import AI #97: https://t.co/FnF3vXldxd pic.twitter.com/nJqWrCXBuM
— Jack Clark (@jackclarkSF) June 6, 2018
Interested in AI and cognition? Read @GaryMarcus's 'Deep Learning: A Critical Appraisal'! Crystal-clear summary of current challenges in #DeepLearning, with a positive role to be played by cognitive science going forward. Agree with so much here https://t.co/TMSteZCjNK
— Courtney Hilton (@courtneybhilton) June 6, 2018
"Playing Atari with Six Neurons," Cuccu et al.: https://t.co/o24EGI4AUS
— Miles Brundage (@Miles_Brundage) June 6, 2018
(6-18 for decision-making - additional computation needed for state representation which is separate but jointly trained)
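A rough sketch of the architectural split that parenthetical describes: a jointly trained state-representation module feeding a very small decision network. All names, shapes, and functions here are illustrative assumptions, not the paper's code:

```python
import numpy as np

def encode_state(observation, codebook):
    """Stand-in for the separately learned state representation
    (the paper compresses Atari frames; here it's a toy linear projection)."""
    return codebook @ observation.reshape(-1)

def tiny_policy(state, W, b):
    """The small decision network: a single layer of 6-18 units mapping
    the compressed state to a discrete action."""
    return int(np.argmax(np.tanh(W @ state + b)))
```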
Just finished reading through DeepMind's / @PeterWBattaglia et al.'s new review/position paper on graph neural nets: https://t.co/BtiATZg4jB Timely and highly relevant contribution to the field IMO, couldn't agree more with their motivation for this class of models
— Thomas Kipf (@thomaskipf) June 5, 2018
Cyberattack Detection using Deep Generative Models with Variational Inference https://t.co/Kj68urhjNB
Detect anomalous behavior using the reconstruction error of a variational autoencoder. The model is capable of learning in an online setting, although threshold values may need tuning. pic.twitter.com/0Mulps72El
— Jeremy Jordan (@jeremyjordan) June 5, 2018
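In code, the idea in the tweet amounts to thresholding reconstruction error. A minimal sketch, where `vae.reconstruct` is a hypothetical placeholder for any trained variational autoencoder:

```python
import numpy as np

def reconstruction_error(vae, x):
    """Mean squared error between an input and its VAE reconstruction."""
    return np.mean((x - vae.reconstruct(x)) ** 2)

def is_anomalous(vae, x, threshold):
    """Flag inputs the VAE reconstructs poorly. In an online setting the
    threshold may need periodic re-tuning, e.g. as a high quantile of
    recent errors on traffic presumed to be normal."""
    return reconstruction_error(vae, x) > threshold
```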
"Multi-Modal Methods: Image Captioning" 🌟
— ML Review (@ml_review) June 5, 2018
By The M Tank (w/ @beduffy1)
A reflective narrative of key ideas and milestones in Image Captioning for the last 5 years: from Translation through Attention to Reinforcement Learning and beyond.https://t.co/3pvIrUAOE0 #ComputerVision pic.twitter.com/KnIuUbhK4A
A nice review article about relational reasoning, relational inductive biases in typical deep learning building blocks, and graph networks. By @PeterWBattaglia et al. from DeepMind, Google Brain, MIT, University of Edinburgh. https://t.co/kmypHzcizw
— hardmaru (@hardmaru) June 5, 2018
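For readers new to the class of models the review covers, here is a minimal sketch of one message-passing step over a graph; the function and weight names are illustrative assumptions, not the paper's formulation:

```python
import numpy as np

def message_passing_step(node_feats, edges, W_msg, W_upd):
    """One round of a graph network: pass messages along directed edges,
    aggregate them per node, then update node features.
    node_feats: (N, D); edges: list of (src, dst) index pairs;
    W_msg: (D, D); W_upd: (2*D, D)."""
    agg = np.zeros_like(node_feats)
    for src, dst in edges:
        agg[dst] += node_feats[src] @ W_msg       # message from src to dst
    combined = np.concatenate([node_feats, agg], axis=1)  # (N, 2*D)
    return np.tanh(combined @ W_upd)              # updated features, (N, D)
```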
Making it deliberately difficult to reproduce or evaluate your results is a sure sign that you are writing papers for the wrong reasons. Science may be a job, but before that, it is humanity's engine for knowledge creation. Make it work.
— François Chollet (@fchollet) June 5, 2018
If you care about the truth of your findings, about generating reliable knowledge, then you should *want* as many people as possible to try to reproduce your results. And you should make it frictionless. https://t.co/19C9qOsDYa
— François Chollet (@fchollet) June 5, 2018
Modern deep CNNs are unable to generalize over tiny image transformations such as translations and rescalings, contrary to widespread belief: https://t.co/xtc0Us0b06 @filippie509 @GaryMarcus
— Max Little (@MaxALittle) June 5, 2018
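The claim is easy to probe empirically: shift an image by a single pixel and see how much a classifier's output moves. A hedged sketch, where `predict` is a placeholder for any image classifier returning a vector of class probabilities:

```python
import numpy as np

def shift_sensitivity(predict, img, dx=1):
    """How much the predicted probability of the original top class
    changes after translating the image `dx` pixels horizontally."""
    p = predict(img)
    top = int(np.argmax(p))
    p_shifted = predict(np.roll(img, dx, axis=1))  # one-pixel translation
    return abs(p[top] - p_shifted[top])
```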