Hydra provides the ability to compose and override configuration from the command line and config files, helping PyTorch developers to more easily manage complex ML projects. https://t.co/Kftd5mwp2E
— PyTorch (@PyTorch) February 5, 2020
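For anyone who hasn't tried Hydra, here is a minimal sketch of the compose-and-override workflow the tweet describes (the config.yaml layout and the db.* keys are illustrative placeholders, not from the tweet):

```python
# my_app.py: minimal Hydra sketch. Assumes a config.yaml next to this
# script; the db.* keys are made-up placeholders.
#
# config.yaml:
#   db:
#     driver: mysql
#     user: alice
import hydra
from omegaconf import DictConfig, OmegaConf

@hydra.main(config_path=".", config_name="config")
def my_app(cfg: DictConfig) -> None:
    # cfg is config.yaml merged with any command-line overrides
    print(OmegaConf.to_yaml(cfg))

if __name__ == "__main__":
    my_app()
```

Running `python my_app.py db.user=root` prints the composed config with the override applied, with no argparse plumbing needed.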
François (@madlag) is both an independent ML researcher investigating sparse efficient models and... an early angel investor in 🤗
Happy that he agreed to share some of his knowledge and experience on sparse models in a series of posts!
First one is here https://t.co/LfIL1iPUb4
— Thomas Wolf (@Thom_Wolf) February 4, 2020
A comparison of message queues https://t.co/27OoFrhyG2 (by @archerabi from our tech blog)
— Erik Bernhardsson (@fulhack) January 31, 2020
A well-done video explanation of FixMatch, thanks @CShorten30! https://t.co/scOd1TIU3Q
— David Berthelot (@D_Berthelot_ML) January 31, 2020
Really nice overview of the underpinnings of the Altair project - thanks @eitanlees! https://t.co/nDQpBfJXaI
— Jake VanderPlas (@jakevdp) January 28, 2020
We looked at the Zoo's predictions. Impressive work @ph_singer @dott1718 https://t.co/KueCcSfWHD
— Michael Lopez (@StatsbyLopez) January 27, 2020
Hey machine learning enthusiasts! Want to read about how we use neural networks to recommend easy issues to new OSS contributors? Sure you do! 😃 Check out our latest blog post here https://t.co/JkOP6Fhrp5 pic.twitter.com/ektYXKerZ8
— GitHub (@github) January 22, 2020
Survey of machine-learning experimental methods at #NeurIPS2019 and #ICLR2020 https://t.co/kDexrfjv79
Results of a poll @bouthilx and I ran, with some analysis and discussion of statistical power and reproducibility. pic.twitter.com/FhebPSOCRW
— Gael Varoquaux (@GaelVaroquaux) January 22, 2020
I also added a link to @lilianweng's excellent overview post, in the 'further reading' section. https://t.co/N30sTMqfE3
— Jeremy Howard (@jeremyphoward) January 21, 2020
“Meet AdaMod: a new deep learning optimizer with memory” by Less Wright https://t.co/TmtflP9s6h
— Jeremy Howard (@jeremyphoward) January 11, 2020
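The "memory" in AdaMod bounds Adam's per-parameter step sizes by an exponential moving average of their own history, damping the oversized early steps that otherwise call for learning-rate warmup. A rough NumPy sketch of the update rule from the AdaMod paper (the function name and the beta3 default are assumptions, not from the tweet):

```python
import numpy as np

def adamod_step(theta, grad, state, lr=1e-3, betas=(0.9, 0.999),
                beta3=0.999, eps=1e-8):
    """One AdaMod update (sketch following the AdaMod paper).
    state holds m, v, s (zero-initialized arrays) and step count t."""
    m, v, s, t = state["m"], state["v"], state["s"], state["t"] + 1
    m = betas[0] * m + (1 - betas[0]) * grad        # first moment (as in Adam)
    v = betas[1] * v + (1 - betas[1]) * grad ** 2   # second moment (as in Adam)
    m_hat = m / (1 - betas[0] ** t)                 # bias correction
    v_hat = v / (1 - betas[1] ** t)
    eta = lr / (np.sqrt(v_hat) + eps)               # Adam's per-parameter step size
    s = beta3 * s + (1 - beta3) * eta               # the "memory": EMA of step sizes
    eta = np.minimum(eta, s)                        # clip spikes against the long-term average
    state.update(m=m, v=v, s=s, t=t)
    return theta - eta * m_hat
```

With beta3 = 0 this reduces to plain Adam; larger beta3 gives a longer memory and a tighter bound on sudden step-size spikes.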
Missed this great article on the Clever Hans effect in NLP
Many SOTA results we get with BERT-like models are due to these models "breaking" our datasets – in a bad sense – by exploiting their weaknesses. @benbenhh's piece has a nice overview & advice
Also follow @annargrs on this https://t.co/xPb5Nj87z9
— Thomas Wolf (@Thom_Wolf) January 10, 2020
What I did over my winter break!
It gives me great pleasure to share this summary of some of our work in 2019, on behalf of all my colleagues at @GoogleAI & @GoogleHealth. https://t.co/hGoog8G9QD
— Jeff Dean (@JeffDean) January 9, 2020