Here's a fun 8-minute YouTube video explaining their CoordConv technique. Wish more authors did this! https://t.co/HrnVbjXvQy
— hardmaru (@hardmaru) July 11, 2018
An Intriguing Failing of Convolutional Neural Networks and the CoordConv Solution, from UberAI. Adding CPPN-style coordinate information actually complements and improves the spatial-invariance property of ConvNets. Includes a TF implementation in the Appendix. https://t.co/qDfuJc7JuL pic.twitter.com/y7FEQkF7No
— hardmaru (@hardmaru) July 11, 2018
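The core of CoordConv is simple: concatenate normalized coordinate channels onto the input feature map before the convolution, so filters can condition on position. A minimal NumPy sketch of that step (the paper's own TF implementation is in its appendix; this function and its name are illustrative):

```python
import numpy as np

def add_coord_channels(x):
    """Append normalized i/j coordinate channels to a batch of
    feature maps, CoordConv-style. x has shape (N, C, H, W)."""
    n, c, h, w = x.shape
    ys = np.linspace(-1.0, 1.0, h)               # row coordinate in [-1, 1]
    xs = np.linspace(-1.0, 1.0, w)               # column coordinate in [-1, 1]
    yy, xx = np.meshgrid(ys, xs, indexing="ij")  # each (H, W)
    yy = np.broadcast_to(yy, (n, 1, h, w))
    xx = np.broadcast_to(xx, (n, 1, h, w))
    return np.concatenate([x, yy, xx], axis=1)   # (N, C+2, H, W)

x = np.zeros((2, 3, 4, 5))
out = add_coord_channels(x)
print(out.shape)  # (2, 5, 4, 5)
```

An ordinary convolution applied to the augmented tensor then becomes a CoordConv layer.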
Our latest work is out!
— Aäron van den Oord (@avdnoord) July 11, 2018
Representation Learning with Contrastive Predictive Coding (CPC).
Autoregressive modeling meets contrastive losses in the latent space.
Learn useful representations in an unsupervised way.
-> On Audio, Vision, NLP and RL.
Arxiv: https://t.co/HN0zChI9he pic.twitter.com/3Tnpqt9N0v
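The contrastive loss family used in CPC scores a predicted latent against the true future latent (the positive) and a set of negatives, then applies cross-entropy toward the positive. A rough NumPy sketch of that idea, not the paper's code (function name and dot-product scoring are assumptions):

```python
import numpy as np

def info_nce(z_pred, z_pos, z_negs):
    """Contrastive loss for one prediction: dot-product scores
    against one positive and several negative latents, followed
    by cross-entropy with the positive at index 0."""
    scores = np.array([z_pred @ z_pos] + [z_pred @ n for n in z_negs])
    scores -= scores.max()                       # numerical stability
    probs = np.exp(scores) / np.exp(scores).sum()
    return -np.log(probs[0])                     # low when positive wins
```

Minimizing this pushes the autoregressive model's predictions to be more similar to the true future latents than to distractors, without any labels.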
Smoothing the max operator in a dynamic program recursion induces a random walk on the computational graph. The expected path on that walk can be computed efficiently by backpropagation, which converges to backtracking as smoothing vanishes. https://t.co/AjyGsez1B1 pic.twitter.com/RZELfrmRqn
— Mathieu Blondel (@mblondel_ml) July 10, 2018
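The smoothing in question replaces max with a temperature-scaled log-sum-exp; its gradient is a softmax distribution over the scores, which plays the role of the expected path, and it collapses to a one-hot argmax (hard backtracking) as the temperature vanishes. A small NumPy sketch of just that operator, not the paper's library:

```python
import numpy as np

def smoothed_max(scores, gamma):
    """gamma * logsumexp(scores / gamma): a smooth upper bound on max
    that tends to max(scores) as gamma -> 0."""
    s = np.asarray(scores, dtype=float)
    m = s.max()
    return gamma * np.log(np.exp((s - m) / gamma).sum()) + m

def smoothed_argmax(scores, gamma):
    """Gradient of smoothed_max w.r.t. scores: a softmax that
    concentrates on the argmax as gamma -> 0."""
    s = np.asarray(scores, dtype=float) / gamma
    s -= s.max()
    e = np.exp(s)
    return e / e.sum()
```

Applying this at every max in a DP recursion makes the whole recursion differentiable, so the expected path falls out of ordinary backpropagation.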
Self-Guessing Mapper: Extreme Generalization with Topological Modeling & AIT https://t.co/jo2bWpnLzs
— MLWave (@MLWave) July 10, 2018
Extends deep learners to work with zero data/increases extrapolation power
- Solve @GaryMarcus Challenge
- Learn Fizz Buzz with MLP
- Exploit Iris dataset
- Manifold Imputation pic.twitter.com/LnfiRFLhht
A mechanistic model of connector hubs, modularity, and cognition
— Alessandro Vespignani (@alexvespi) July 9, 2018
“model of hub connectivity accurately predicts the cognitive performance of 476 individuals in four distinct tasks”
https://t.co/IZE1Pihies pic.twitter.com/eEdqOXjz0w
Research published in Nature describes an artificial neural network made out of DNA that can solve a classic machine learning problem: correctly identifying handwritten numbers. The work is a step towards programming AI into synthetic biomolecular circuits https://t.co/fxileiitLi pic.twitter.com/4RzKdAPZFj
— nature (@nature) July 9, 2018
Go check out our interactive demo and upload your face to make it smile, grow old, add a beard, get blonde hair! Wonder how Prof. Hinton looks with a beard... pic.twitter.com/ydAVCZIK3s
— Prafulla Dhariwal (@prafdhar) July 9, 2018
Synthesizing realistic high-resolution images with Glow, a new reversible generative model: https://t.co/WIa9aI6NGU pic.twitter.com/vdCruz17li
— OpenAI (@OpenAI) July 9, 2018
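Glow belongs to the family of flows built from exactly invertible transformations, the standard building block being an affine coupling layer: split the input, and scale-and-shift one half conditioned on the other. A minimal sketch of that block under those assumptions (the `scale_shift` conditioner here is hypothetical; Glow's actual architecture also uses actnorm and invertible 1x1 convolutions):

```python
import numpy as np

def coupling_forward(x, scale_shift):
    """Affine coupling: y1 = x1, y2 = x2 * exp(log_s) + t,
    where (log_s, t) depend only on x1 so the map is invertible."""
    x1, x2 = np.split(x, 2, axis=-1)
    log_s, t = scale_shift(x1)
    y2 = x2 * np.exp(log_s) + t
    return np.concatenate([x1, y2], axis=-1)

def coupling_inverse(y, scale_shift):
    """Exact inverse: recompute (log_s, t) from the untouched half."""
    y1, y2 = np.split(y, 2, axis=-1)
    log_s, t = scale_shift(y1)
    x2 = (y2 - t) * np.exp(-log_s)
    return np.concatenate([y1, x2], axis=-1)
```

Because the inverse is exact, the model can both evaluate exact log-likelihoods and reconstruct inputs, which is what makes the interactive face-editing demo possible.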
Feature-wise transformations - A new Distill article by @dumoulinv, Ethan Perez, @nschucher, Florian Strub, @harm_devries, Aaron Courville, Yoshua Bengio https://t.co/YH4wHHs0X4
— distillpub (@distillpub) July 9, 2018
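The transformation the article surveys (FiLM being the best-known instance) is just a per-channel scale and shift whose parameters come from some conditioning input. A minimal NumPy sketch, assuming gamma and beta have already been produced by a conditioning network:

```python
import numpy as np

def film(features, gamma, beta):
    """Feature-wise linear modulation: scale and shift each feature
    channel. features: (N, C, ...); gamma, beta: (N, C)."""
    shape = gamma.shape + (1,) * (features.ndim - 2)  # broadcast over spatial dims
    return gamma.reshape(shape) * features + beta.reshape(shape)
```

Despite its simplicity, this single operation underlies conditioning mechanisms across vision, language, and generative modeling, which is the article's point.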
What are radiological deep learning models actually learning? When they're looking for pneumonia outside the lungs, we should all be asking this question.https://t.co/qjPxDEIpFa
— John Zech (@johnrzech) July 8, 2018
Code for this paper https://t.co/XolvOYZzXQ
— Tim Vieira (@xtimv) July 8, 2018
- Code is very readable (Python) and includes some neat tricks
- Uses scikit-learn as the base algorithm https://t.co/fXOjPJSJDj