New release of Rainbow, with setup for the new data-efficient variant. Results and pretrained models for all 26 games in the paper included: https://t.co/EdYCQMfyOO
– Kaixhin (@KaiLashArul) June 18, 2019
I'm proud to share our work on a new version of capsule networks, called Stacked Capsule Autoencoders, with @sabour_sara, @yeewhye and @geoffreyhinton: https://t.co/tBD8ag0dCF pic.twitter.com/BjixrWK4NG
– Adam Kosiorek (@arkosiorek) June 18, 2019
Fully Interpretable Deep Learning Model of Transcriptional Control
By @UChicago
Every layer is interpretable. Moreover, the DNN is biologically validated and predictive: https://t.co/dkH2fWBVnk pic.twitter.com/1SqwXkK80T
– ML Review (@ml_review) June 17, 2019
Detailed ICML 2019 Notes from David Abel https://t.co/glU9NUdVzD
– /MachineLearning (@slashML) June 17, 2019
Contrastive Multiview Coding
By @YonglongT @dilipkay @phillip_isola
Unsupervised representations learned by maximizing a mutual-information measure between different views of the same data achieve SoTA results on downstream tasks such as object classification.
ArXiv: https://t.co/xlk1C7l1FZ
GitHub: https://t.co/S1vpmgjuIR pic.twitter.com/uCMuyOQJrX
– ML Review (@ml_review) June 17, 2019
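The core objective is contrastive: embeddings of two views of the same sample are pulled together while other samples in the batch serve as negatives. Below is a minimal InfoNCE-style sketch in PyTorch of that idea; the toy encoders, dimensions, and temperature are illustrative assumptions, not the paper's architecture (see the linked GitHub repo for the actual implementation).

```python
# Minimal cross-view contrastive sketch (InfoNCE-style), in the spirit of CMC.
# Encoders and dimensions below are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ViewEncoder(nn.Module):
    """Toy encoder mapping one view to a unit-norm embedding."""
    def __init__(self, in_dim, emb_dim=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(), nn.Linear(256, emb_dim))

    def forward(self, x):
        return F.normalize(self.net(x), dim=-1)

def info_nce(z1, z2, temperature=0.07):
    """Matching pairs (i, i) are positives; all other pairs in the batch are negatives."""
    logits = z1 @ z2.t() / temperature               # (B, B) similarity matrix
    targets = torch.arange(z1.size(0), device=z1.device)
    # Symmetrize over the two views
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))

# Usage: two views (e.g. different channels or augmentations) of the same batch.
enc_a, enc_b = ViewEncoder(in_dim=512), ViewEncoder(in_dim=512)
view_a, view_b = torch.randn(32, 512), torch.randn(32, 512)
loss = info_nce(enc_a(view_a), enc_b(view_b))
loss.backward()
```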
Fairwashing: the risk of rationalization. It is possible to forge a fairer explanation from a truly unfair black box. Paper presented at #ICML2019 by @umaivodj https://t.co/c4PGpAXEAT pic.twitter.com/ePsU5lDHeB
– Rachel Thomas (@math_rachel) June 16, 2019
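To make the risk concrete, here is a small synthetic sketch of the fairwashing idea, not the algorithm from the paper: an interpretable surrogate is fit to an unfair black box's outputs while excluding the protected attribute, so it audits as much fairer (low demographic-parity gap) yet remains reasonably faithful to the black box. The data-generating process, models, and thresholds are all illustrative assumptions.

```python
# Illustrative fairwashing sketch: an unfair black box, plus an interpretable
# surrogate that looks fairer under a demographic-parity audit. All data and
# model choices are assumptions for the sake of the example.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 5000
sensitive = rng.integers(0, 2, n)                     # protected attribute A
skill = rng.normal(size=n)                            # legitimate feature
# Biased ground truth: the label leans heavily on the protected attribute.
y = (skill + 1.5 * sensitive + rng.normal(scale=0.3, size=n) > 0.75).astype(int)
X = np.column_stack([sensitive, skill])

# Unfair black box: free to exploit the protected attribute directly.
black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
bb_pred = black_box.predict(X)

def demographic_parity_gap(pred, a):
    """|P(pred = 1 | A = 1) - P(pred = 1 | A = 0)|."""
    return abs(pred[a == 1].mean() - pred[a == 0].mean())

# "Fairwashed" explanation: an interpretable surrogate fit to the black box's
# outputs, restricted to features that exclude the protected attribute, so it
# audits as much fairer while still agreeing with the black box fairly often.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X[:, [1]], bb_pred)
sur_pred = surrogate.predict(X[:, [1]])

print("black box DP gap :", round(demographic_parity_gap(bb_pred, sensitive), 3))
print("surrogate DP gap :", round(demographic_parity_gap(sur_pred, sensitive), 3))
print("fidelity to black box:", round((sur_pred == bb_pred).mean(), 3))
```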
Best paper award at #ICML2019. Main idea: unsupervised learning of disentangled representations is fundamentally impossible without inductive biases. Verified theoretically & experimentally. https://t.co/pMWJzjbhLg
– Reza Zadeh (@Reza_Zadeh) June 15, 2019
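For reference, the central impossibility result (Theorem 1 of Locatello et al., ICML 2019) is paraphrased from memory below in LaTeX; see the paper for the exact statement and proof.

```latex
% From-memory paraphrase of Theorem 1, Locatello et al., ICML 2019.
For $d > 1$, let $z \sim P$ admit a factorized density $p(z) = \prod_{i=1}^{d} p(z_i)$.
Then there exists an infinite family of bijections
$f : \operatorname{supp}(z) \to \operatorname{supp}(z)$ with
$\partial f_i(u) / \partial u_j \neq 0$ almost everywhere for all $i, j$
(so $z$ and $f(z)$ are completely entangled), yet
$P(z \le u) = P(f(z) \le u)$ for all $u \in \operatorname{supp}(z)$
(so they have identical marginal distributions). Hence, from unsupervised data
alone, the entangled representation $f(z)$ cannot be distinguished from the
disentangled $z$ without additional inductive biases.
```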
Erin LeDell presenting her poster on the open source AutoML benchmark
Details here: https://t.co/OjaBUyiPC9 #icml2019 @ledell @wimlds pic.twitter.com/xTPr1bQjhf
– Rachel Thomas (@math_rachel) June 14, 2019
Paper: https://t.co/V7U4FpYMb9
Dockerized code: https://t.co/gWocV5qmPn
10/10 #nlproc #ACL2019NLP #DeepLearning
– Victor Zhong (@hllo_wrld) June 14, 2019
Our #acl2019nlp paper Entailment-driven Extracting and Editing for Conversational Machine Reading (E3) is out! E3 studies conversational machine reading (CMR), a task-oriented dialogue problem where reasoning rules aren't fixed but implied by procedural text. Paper/code below. 1/10 pic.twitter.com/v6Ro6wOaik
– Victor Zhong (@hllo_wrld) June 14, 2019
Tackling Climate Change with Machine Learning
A 54-page paper consisting of 12 articles written by @david_rolnick and many other collaborators on this important topic. https://t.co/7TnbqrXDij pic.twitter.com/O1dHFirrLq
– hardmaru (@hardmaru) June 14, 2019
Can we use reinforcement learning together with search to solve temporally extended tasks? In Search on the Replay Buffer (w/ Ben Eysenbach and @rsalakhu), we use goal-conditioned policies to build a graph for search.
Paper: https://t.co/qMEHyU06mU
Colab: https://t.co/jqADqWeJEu
– Sergey Levine (@svlevine) June 13, 2019
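As a rough illustration of the search component, the sketch below builds a graph over a handful of buffer states using a stand-in distance function (in the paper this estimate comes from a goal-conditioned agent; here plain Euclidean distance is an assumption) and plans a sequence of waypoint goals with Dijkstra's algorithm. This is a conceptual sketch, not the authors' implementation; see the linked Colab for that.

```python
# Conceptual sketch of search over replay-buffer states: connect states whose
# estimated distance is small, then plan waypoints with shortest-path search.
# `distance_fn` stands in for a learned goal-conditioned distance estimate.
import heapq
import numpy as np

def build_graph(states, distance_fn, max_dist=5.0):
    """Connect buffer states whose estimated distance is below a threshold."""
    n = len(states)
    edges = {i: [] for i in range(n)}
    for i in range(n):
        for j in range(n):
            if i != j:
                d = distance_fn(states[i], states[j])
                if d < max_dist:
                    edges[i].append((j, d))
    return edges

def shortest_path(edges, start, goal):
    """Dijkstra over the buffer graph; returns a list of waypoint indices."""
    dist, prev = {start: 0.0}, {}
    heap = [(0.0, start)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, w in edges[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]

# Toy usage: states are 2D points and the "learned" distance is Euclidean.
states = [np.array(p, dtype=float) for p in [(0, 0), (3, 0), (6, 0), (6, 3), (6, 6)]]
edges = build_graph(states, lambda s, g: float(np.linalg.norm(s - g)))
waypoints = shortest_path(edges, start=0, goal=4)
print("waypoint indices:", waypoints)  # the goal-conditioned policy would chase these in order
```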