arXiv link: https://t.co/u7i5aPCyMU
— hardmaru (@hardmaru) March 2, 2020
Code: https://t.co/lTvtuqeqny
Everyone writing summary and analysis papers like these instead of chasing after SOTA is a true hero 👏💪
— Denny Britz (@dennybritz) February 28, 2020
With the flood of new models and papers, studies like these are invaluable. They save researchers thousands of hours of time. https://t.co/OdQj3orDVI
This is a *really* extensive repo containing ~380 BERT-related papers sorted into downstream tasks, modifications, probes, multilingual models, and more. Nice job, @stomohide! https://t.co/CaY0nZxDhV
— Sebastian Ruder (@seb_ruder) February 28, 2020
CookGAN: Meal Image Synthesis from Ingredients
— roadrunner01 (@ak92501) February 27, 2020
pdf: https://t.co/T63W7JVxEj
abs: https://t.co/8TI8WY80rL pic.twitter.com/PMIoJdgZES
Relabeling goals = inverse RL. This simple insight enables a whole family of inference-based relabeling algs that speed up goal-based RL and RL with other parameterized rewards.
— Sergey Levine (@svlevine) February 26, 2020
"Rewriting History with Inverse RL," w/ B. Eysenbach, X. Geng, @rsalakhu https://t.co/g0VQv7GlBS pic.twitter.com/86zMoTeCoA
Network Randomization: A Simple Technique for Generalization in Deep Reinforcement Learning
— hardmaru (@hardmaru) February 26, 2020
By inserting noise in the feature space, rather than in the input space as is typically done for visual inputs, their agents can generalize better to unseen tasks! https://t.co/mhwA9BA6Ck pic.twitter.com/zZzfHnVuOx
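The core trick is easy to picture: pass intermediate features through a frozen, randomly re-initialized map so the agent cannot latch onto task-specific visual statistics. Below is a rough PyTorch sketch of that idea; the layer placement, sizes, and orthogonal init are assumptions for illustration, not the paper's exact architecture.

```python
import torch.nn as nn

class RandomizedLayer(nn.Module):
    """Sketch of feature-space randomization: a frozen, randomly re-drawn
    linear map is applied to intermediate features during training so the
    policy cannot overfit to the statistics of the training environments."""

    def __init__(self, dim):
        super().__init__()
        self.random_map = nn.Linear(dim, dim, bias=False)
        self.reset()

    def reset(self):
        # Re-draw the random weights (e.g. at the start of every episode)
        # and keep them frozen so gradients never adapt to them.
        nn.init.orthogonal_(self.random_map.weight)
        for p in self.random_map.parameters():
            p.requires_grad_(False)

    def forward(self, features):
        return self.random_map(features)
```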
Happy to see more DL-based program analysis work being open-sourced! Here’s DeepBinDiff from Duan et al. Claims improved diffing compared to BinDiff & recent academic work. https://t.co/dBqCblXQL3
— Brendan Dolan-Gavitt (@moyix) February 26, 2020
Gradient Boosting Neural Networks: GrowNet
— ML Review (@ml_review) February 25, 2020
– Shallow NNs as “weak learners” in gradient boosting framework
– Incorporates 2nd-order stats, a corrective step & a dynamic boost rate to remedy pitfalls of gradient boosting trees
– Outperforms XGBoost https://t.co/OQ045GeeiM pic.twitter.com/Bc6jSqLxND
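To make the "shallow NNs as weak learners" idea concrete, here is a stripped-down boosting loop that fits small MLPs to residuals under squared loss. It uses scikit-learn for brevity and omits GrowNet's corrective step, learned boost rate, and the passing of penultimate-layer features between stages; the function names and hyperparameters are illustrative assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def fit_grownet_like(X, y, n_stages=10, boost_rate=0.3, hidden=(16,)):
    """Gradient boosting with shallow neural nets as weak learners.
    For squared loss, the negative gradient is simply the residual."""
    learners = []
    prediction = np.zeros(len(y), dtype=float)
    for _ in range(n_stages):
        residual = y - prediction                       # negative gradient of 0.5*(y - f)^2
        weak = MLPRegressor(hidden_layer_sizes=hidden, max_iter=500)
        weak.fit(X, residual)
        prediction = prediction + boost_rate * weak.predict(X)
        learners.append(weak)
    return learners

def predict_grownet_like(learners, X, boost_rate=0.3):
    """Sum the scaled predictions of all weak learners."""
    return boost_rate * sum(w.predict(X) for w in learners)
```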
Sketchformer: Transformer-based Representation for Sketched Structure
— hardmaru (@hardmaru) February 25, 2020
“Transformer for sketches” performs multiple tasks: sketch classification, sketch-based image retrieval, and reconstruction and interpolation of sketches. Beats LSTM-based SketchRNN :) https://t.co/d3TVgVvIQ4 pic.twitter.com/fwfi6oqVTy
Our contribution towards differentiable programming: O(n log n) differentiable sorting and ranking operators. Key techniques: projections onto permutahedra & isotonic optimization. Applications to top-k classification, label ranking, least trimmed squares. https://t.co/wjfrxLBFJS pic.twitter.com/7V21zPM8MM
— Mathieu Blondel (@mblondel_ml) February 21, 2020
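As a point of reference, a naive differentiable ranking can be written in O(n²) with pairwise sigmoids; the paper's contribution is an exact O(n log n) operator built from projections onto the permutahedron and isotonic optimization. The toy version below only illustrates what a "soft rank" is and is not the paper's algorithm.

```python
import numpy as np

def soft_rank_naive(values, temperature=1.0):
    """Naive O(n^2) differentiable ranking via pairwise sigmoids; a stand-in
    to illustrate soft ranks, NOT the O(n log n) operator from the paper."""
    values = np.asarray(values, dtype=float)
    diffs = values[:, None] - values[None, :]            # pairwise differences v_i - v_j
    # Each entry approaches 1 when v_i > v_j; row sums give soft ranks in [1, n].
    pairwise = 1.0 / (1.0 + np.exp(-diffs / temperature))
    return pairwise.sum(axis=1) + 0.5                    # +0.5 corrects the diagonal term

print(soft_rank_naive([0.3, -1.2, 2.5], temperature=0.1))  # ~[2., 1., 3.]
```

As the temperature shrinks, the soft ranks approach the hard ranks but the gradients vanish; the trade-off between smoothness and fidelity is what regularized operators like these control.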
The Problem with Metrics is a Fundamental Problem for AI: paper by me and David Uminsky @DataInstituteSF, accepted to EDSC 2020 https://t.co/bg0elsbBd9 cc: @craignewmark pic.twitter.com/UzFP24ySni
— Rachel Thomas (@math_rachel) February 21, 2020
Loved working on this paper: all that I like in one paper (probabilistic generative models, sequence models, and machine translation) https://t.co/swlqiQblEf
— Kyunghyun Cho (@kchonyc) February 19, 2020