Our second paper, “Generating Albums with SampleRNN to Imitate Metal, Rock, and Punk Bands” published @ MuMe 2018 (ICCC 2018)
— dadabots (@dadabots) June 28, 2018
Paper: https://t.co/WViBdgaVxR
Music: https://t.co/BCqoF1IepC
Unexpected finding while reproducing World Models: Untrained RNNs perform just as well. https://t.co/eugLm1gb4t pic.twitter.com/Nh8iE7Amdr
— Brandon Rohrer (@_brohrer_) June 28, 2018
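A minimal sketch of the idea behind that finding: keep a randomly initialized RNN frozen and train only a linear readout on its hidden states. Everything here (the toy task, dimensions, and the least-squares readout) is illustrative, not from the linked post.

```python
# Sketch: an untrained (frozen, random) RNN as a feature extractor.
# Only the linear readout is fit; the recurrent weights are never trained.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid = 8, 64

# Random, never-trained recurrent weights (kept frozen throughout).
W_in = rng.normal(0, 1 / np.sqrt(n_in), (n_hid, n_in))
W_rec = rng.normal(0, 1 / np.sqrt(n_hid), (n_hid, n_hid))

def rnn_features(xs):
    """Run the frozen random RNN over a sequence, return the final hidden state."""
    h = np.zeros(n_hid)
    for x in xs:
        h = np.tanh(W_in @ x + W_rec @ h)
    return h

# Toy regression task: predict the mean of each sequence's first channel.
X = rng.normal(size=(500, 20, n_in))
y = X[:, :, 0].mean(axis=1)

# Only this linear readout is "trained" (closed-form least squares).
F = np.stack([rnn_features(seq) for seq in X])
w, *_ = np.linalg.lstsq(F, y, rcond=None)

print("train MSE with frozen random RNN:", np.mean((F @ w - y) ** 2))
```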
The challenge of realistic music generation: modelling raw audio at scale
— DeepMind (@DeepMindAI) June 28, 2018
- Paper: https://t.co/0NqNXVBa2n
- Listen to piano music samples here: https://t.co/qOOi34Jl3P
Guided evolutionary strategies: escaping the curse of dimensionality in random search. A principled method to leverage training signals which are not the gradient, but which may be correlated with the gradient. Work with @niru_m @Luke_Metz @georgejtucker. https://t.co/LNPHDUrDFu
— Jascha (@jaschasd) June 28, 2018
This algorithm beats CMA-ES (kind of like the LSTM of the ES world) for a few tasks. CMA-ES (from Nicolaus Hansen) is still my algorithm of choice for blackbox optimisation. I wonder if this algo will consistently beat CMA-ES on a variety of different tasks and make me use it ..
— hardmaru (@hardmaru) June 28, 2018
Guided Evolutionary Strategies: Escaping the curse of dimensionality in random search, from Google Brain: @Niru_M @Luke_Metz @GeorgeJTucker @JaschaSD.
— hardmaru (@hardmaru) June 28, 2018
Paper: https://t.co/6UmZdNyzLz
GitHub: https://t.co/BrKeRlkEBS
Notebook: https://t.co/AzuBF7coAP pic.twitter.com/HIGEEi7FkN
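The core trick, per the abstract: sample ES perturbations from a covariance that mixes the full parameter space with a low-dimensional subspace spanned by the surrogate gradients, so search is biased toward directions correlated with the true gradient. A toy sketch of that idea (my reading of the paper, not the released code; `alpha`, `sigma`, the learning rate, and the quadratic objective are illustrative):

```python
# Guided-ES sketch: antithetic ES with perturbations drawn partly from the
# subspace spanned by a biased-but-correlated surrogate gradient.
import numpy as np

rng = np.random.default_rng(1)
n = 100                       # parameter dimension
def f(x):                     # toy objective
    return 0.5 * np.sum(x ** 2)
def surrogate_grad(x):        # biased "gradient", correlated with the true one
    return x + 0.5 * rng.normal(size=n)

x = rng.normal(size=n)
sigma, alpha, lr, k = 0.1, 0.5, 0.05, 1

for step in range(500):
    # Orthonormal basis U for the subspace spanned by surrogate gradients.
    G = surrogate_grad(x).reshape(n, k)
    U, _ = np.linalg.qr(G)
    # Perturbation: part isotropic, part inside the guiding subspace.
    eps = sigma * (np.sqrt(alpha / n) * rng.normal(size=n)
                   + np.sqrt((1 - alpha) / k) * U @ rng.normal(size=k))
    # Antithetic ES gradient estimate.
    g_est = (f(x + eps) - f(x - eps)) / (2 * sigma ** 2) * eps
    x -= lr * g_est

print("final f(x):", f(x))
```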
Stacking WaveNet autoencoders on top of each other leads to raw audio models that can capture long-range structure in music. Check out our new paper: https://t.co/JGHfu2OkAL
— Sander Dieleman (@sedielem) June 28, 2018
Listen to some minute-long piano music samples: https://t.co/CzF7PnxVC3 pic.twitter.com/snWRKHbnwr
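The stacking idea in one picture: each autoencoder level compresses its input sequence in time, so a model over the top-level codes sees far longer contexts per step. A conceptual PyTorch sketch, with plain strided convolutions standing in for the paper's WaveNet encoders and decoders:

```python
# Conceptual sketch of stacked downsampling autoencoders over raw audio.
# Hyperparameters and the plain conv blocks are stand-ins, not the paper's model.
import torch
import torch.nn as nn

class DownsamplingAE(nn.Module):
    """Autoencoder that maps a sequence to an 8x-slower latent sequence."""
    def __init__(self, ch_in, ch_latent):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv1d(ch_in, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv1d(64, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv1d(64, ch_latent, 4, stride=2, padding=1),
        )
        self.dec = nn.Sequential(
            nn.ConvTranspose1d(ch_latent, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose1d(64, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose1d(64, ch_in, 4, stride=2, padding=1),
        )
    def forward(self, x):
        z = self.enc(x)
        return self.dec(z), z

# Level 1 compresses raw audio; level 2 compresses level-1 latents, so each
# top-level latent step covers 64 audio samples -- longer-range structure.
audio = torch.randn(1, 1, 16384)        # fake raw-audio batch
ae1 = DownsamplingAE(ch_in=1, ch_latent=8)
ae2 = DownsamplingAE(ch_in=8, ch_latent=8)

recon1, z1 = ae1(audio)
recon2, z2 = ae2(z1.detach())           # stack: second AE models the first's codes
print(audio.shape, z1.shape, z2.shape)  # temporal resolution drops 8x per level
```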
Our latest work shows that learning to colorize videos causes visual tracking to emerge automatically!
— Carl Vondrick (@cvondrick) June 27, 2018
Blog: https://t.co/FDVzJmmZ7h
Paper: https://t.co/U4jS83iI7B @alirezafathi @kevskibombom @sguada @abhi2610 pic.twitter.com/R3vMR3raFJ
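Roughly how the tracking emerges: the network colorizes a gray frame by pointing, via attention over learned embeddings, at pixels of a colored reference frame and copying their colors; at test time the same pointer copies a segmentation mask instead of colors, which is tracking. A toy sketch, with random tensors standing in for the learned CNN features:

```python
# Sketch of the colorization-by-pointing mechanism (toy tensors; the random
# features below stand in for embeddings learned by the colorization CNN).
import torch
import torch.nn.functional as F

N, D = 1024, 64                       # pixels per frame, embedding dim
feat_ref = torch.randn(N, D)          # embeddings of the reference frame
feat_tgt = torch.randn(N, D)          # embeddings of the target (gray) frame
colors_ref = torch.rand(N, 3)         # colors of the reference frame

# Attention: each target pixel points at similar reference pixels.
A = F.softmax(feat_tgt @ feat_ref.T, dim=1)        # (N, N)

# Training objective: predicted target colors are a weighted copy
# of reference colors.
colors_pred = A @ colors_ref

# Test time: swap colors for a segmentation mask and the same attention
# propagates the mask from the reference frame to the target -- tracking.
mask_ref = (torch.rand(N, 1) > 0.5).float()
mask_tgt = A @ mask_ref
print(colors_pred.shape, mask_tgt.shape)
```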
"remarkable architecture search efficiency (with 4 GPUs: 2.83% error on CIFAR10 in 1 day; 56.1 perplexity on PTB in 6 hours)"
— PyTorch (@PyTorch) June 26, 2018
Try it now from: https://t.co/8khIix99ma https://t.co/HFuW0II5Hl
Tracking Emerges by Colorizing Videos. Vondrick et al presented this at the GAN tutorial, and you’ve got to see the videos to believe it! https://t.co/WovNHz3F6v #computervision pic.twitter.com/6rCdwGsRhT
— Tomasz Malisiewicz (@quantombone) June 26, 2018
This looks quite encouraging. Still some room for improvement in the results, but a good direction for Neural Architecture Search https://t.co/fJbkOMptAr
— Jeremy Howard (@jeremyphoward) June 26, 2018
Welcome back, gradients! This method is orders of magnitude faster than state-of-the-art non-differentiable techniques.
— Oriol Vinyals (@OriolVinyalsML) June 26, 2018
DARTS: Differentiable Architecture Search by Hanxiao Liu, Karen Simonyan, and Yiming Yang.
Paper: https://t.co/gnKLXx6Pi9
Code: https://t.co/fZYIYNhLzz pic.twitter.com/pIHg3krnAE
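DARTS' central move is a continuous relaxation: each edge in the search cell computes a softmax-weighted mixture of candidate operations, so the architecture parameters receive ordinary gradients instead of requiring blackbox search. A minimal single-edge sketch (the candidate ops and the discretization step are simplified here; the real method alternates weight and architecture updates on separate data splits):

```python
# Minimal sketch of DARTS' continuous relaxation for one edge.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedOp(nn.Module):
    """An edge's output is a softmax-weighted sum over candidate operations."""
    def __init__(self, C):
        super().__init__()
        self.ops = nn.ModuleList([
            nn.Conv2d(C, C, 3, padding=1),   # candidate: 3x3 conv
            nn.Conv2d(C, C, 5, padding=2),   # candidate: 5x5 conv
            nn.Identity(),                   # candidate: skip connection
        ])
        # Architecture parameters: one logit per candidate op, learned by SGD.
        self.alpha = nn.Parameter(torch.zeros(len(self.ops)))

    def forward(self, x):
        w = F.softmax(self.alpha, dim=0)
        return sum(wi * op(x) for wi, op in zip(w, self.ops))

edge = MixedOp(C=16)
x = torch.randn(2, 16, 8, 8)
y = edge(x)                              # differentiable w.r.t. alpha
y.sum().backward()
print(edge.alpha.grad)                   # gradients flow to the architecture

# After search, discretize: keep the op with the largest alpha.
best = edge.ops[int(edge.alpha.argmax())]
```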