Taking face censoring one more step: replace them with a randomly generated face at the correct pose.
— Reza Zadeh (@Reza_Zadeh) September 11, 2019
Paper: https://t.co/T2edC4VGXB
Code: https://t.co/F7iUh2lJVt pic.twitter.com/XqxKo041bG
We release the largest publicly available language model: CTRL has 1.6B parameters and can be guided by control codes for style, content, and task-specific behavior. Incredible generations!
— Richard Socher (@RichardSocher) September 11, 2019
Paper https://t.co/0Wr2XiOl2V
Github https://t.co/PA8GxqtS9V
Blog https://t.co/Q2xQFtKQQE pic.twitter.com/PRidiAzJOM
Their GitHub repo contains the game engine, code for writing client-side bots, and reference RL agent implementations. https://t.co/PPmWc8p7US
— hardmaru (@hardmaru) September 10, 2019
Our new model for word embeddings combines our open source library fastText with a supervised task that embeds misspellings close to their correct variants. Learn more: https://t.co/gk91vqPb06 pic.twitter.com/AtLL9ueNmZ
— Facebook AI (@facebookai) September 7, 2019
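fastText represents a word as the sum of vectors for its character n-grams, which is why a misspelling, sharing most n-grams with its correct variant, naturally lands nearby in embedding space. A toy illustration of that n-gram overlap (the function names are mine, and Jaccard overlap here is only a stand-in for embedding similarity, not part of fastText itself):

```python
def char_ngrams(word, n=3):
    """Character n-grams of a word, with fastText-style boundary markers."""
    w = f"<{word}>"
    return {w[i:i + n] for i in range(len(w) - n + 1)}

def jaccard(a, b):
    """Set overlap: |a & b| / |a | b|."""
    return len(a & b) / len(a | b)

# A misspelling shares most trigrams with the correct word...
print(jaccard(char_ngrams("accommodate"), char_ngrams("accomodate")))  # high
# ...while an unrelated word shares almost none.
print(jaccard(char_ngrams("accommodate"), char_ngrams("banana")))      # low
```

Because the subword vectors are shared, a supervised objective that pulls misspellings toward their correct variants (as in the post) has a strong inductive bias to start from.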
Trained a ResNet-50 that can recognise shot scale in images (Close-Up, Long, Medium, etc.) with a custom dataset. Fascinating heatmaps!
— Rahul Somani (@rsomani95) September 6, 2019
Thanks to @math_rachel @jeremyphoward and @GuggerSylvain for the fastai library
Project: https://t.co/wgcA1TTJ7g
Code: https://t.co/iecxHlfKmr pic.twitter.com/jabs0VY8TX
https://t.co/ApUfadRVDl - End-to-end machine learning project showing key aspects of developing and deploying an ML-driven application https://t.co/r5Qtrg78No
— Python Trending (@pythontrending) September 5, 2019
Pytorch implementation of the paper "Class-Balanced Loss Based on Effective Number of Samples" https://t.co/i8zfOFPiQd #deeplearning #machinelearning #ml #ai #neuralnetworks #datascience #pytorch
— PyTorch Best Practices (@PyTorchPractice) September 5, 2019
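The paper's central quantity is the "effective number of samples" for a class with n examples, E_n = (1 − β^n)/(1 − β); each class's loss weight is proportional to its inverse, so rare classes are up-weighted less aggressively than plain 1/n re-weighting. A minimal pure-Python sketch (the function name and the normalization to sum-to-num-classes are my choices, not taken from the linked repo):

```python
def class_balanced_weights(counts, beta=0.999):
    """Per-class loss weights from 'Class-Balanced Loss Based on
    Effective Number of Samples' (Cui et al., 2019).

    counts: number of training samples per class.
    beta:   hyperparameter in [0, 1); beta -> 1 approaches 1/n weighting,
            beta = 0 gives uniform weights.
    """
    # Effective number of samples: E_n = (1 - beta^n) / (1 - beta)
    effective = [(1.0 - beta ** n) / (1.0 - beta) for n in counts]
    weights = [1.0 / e for e in effective]
    # Normalize so the weights sum to the number of classes
    total = sum(weights)
    return [w * len(counts) / total for w in weights]

# Imbalanced dataset: 1000 samples of class 0, 10 of class 1
print(class_balanced_weights([1000, 10]))
```

These weights can then be passed to a weighted cross-entropy loss (e.g. the `weight` argument of PyTorch's `nn.CrossEntropyLoss`).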
Now that we know it's possible to achieve comparable results to BERT using only 66M parameters, can someone find a way to train a 66M param model from scratch instead of distilling? https://t.co/ycJjMwSwsr
— Chip Huyen (@chipro) August 28, 2019
We're excited to release OpenSpiel: a framework for reinforcement learning in games. It contains over 25 games and 20 algorithms, along with tools for visualisation and evaluation.
— DeepMind (@DeepMindAI) August 27, 2019
GitHub: https://t.co/RKMqy3olet
Paper: https://t.co/gVaCu7PCLQ pic.twitter.com/9atJDrpHHw
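OpenSpiel exposes games through a state object with methods like `legal_actions()`, `apply_action()`, `is_terminal()`, and `returns()`. A toy sketch of that interaction loop using a hand-rolled single-pile Nim state (this class is a hypothetical stand-in mimicking the interface, not the real `pyspiel` object):

```python
import random

class NimState:
    """Toy game state with a pyspiel-style interface: single-pile Nim,
    take 1-3 stones per turn, the player taking the last stone wins."""

    def __init__(self, stones=10):
        self.stones = stones
        self.current = 0  # player to move: 0 or 1

    def legal_actions(self):
        return list(range(1, min(3, self.stones) + 1))

    def apply_action(self, action):
        self.stones -= action
        self.current = 1 - self.current

    def is_terminal(self):
        return self.stones == 0

    def returns(self):
        # The player who just moved (took the last stone) wins.
        winner = 1 - self.current
        return [1.0 if p == winner else -1.0 for p in (0, 1)]

def random_playout(seed=0):
    """Play one game with uniform-random agents; return final returns."""
    rng = random.Random(seed)
    state = NimState()
    while not state.is_terminal():
        state.apply_action(rng.choice(state.legal_actions()))
    return state.returns()

print(random_playout())
```

The same loop structure (query legal actions, apply one, check terminal, read returns) is what OpenSpiel's reference RL agents are written against.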
Targeted Dropout - Finding efficient subnetworks in over-parameterized models
— Mike Tamir, PhD (@MikeTamir) August 27, 2019
https://t.co/t6BzPVSV7r #AI #DeepLearning #MachineLearning #DataScience pic.twitter.com/Lh7nwy0su3
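Targeted dropout applies dropout only to a candidate set of the lowest-magnitude weights (a fraction γ), dropping each candidate with rate α, so the high-magnitude subnetwork that remains is robust to post-hoc pruning. A hand-rolled pure-Python sketch of the masking step (names and the flat-weight-list framing are illustrative; the paper applies this per layer/unit):

```python
import random

def targeted_dropout(weights, gamma=0.5, alpha=0.5, rng=None):
    """Zero each of the gamma-fraction lowest-magnitude weights with
    probability alpha; higher-magnitude weights are never dropped."""
    rng = rng or random.Random(0)
    n = len(weights)
    k = int(gamma * n)  # size of the low-magnitude candidate set
    order = sorted(range(n), key=lambda i: abs(weights[i]))
    candidates = set(order[:k])
    return [0.0 if (i in candidates and rng.random() < alpha) else w
            for i, w in enumerate(weights)]

# With alpha=1.0 the entire bottom-gamma set is zeroed,
# which is exactly magnitude pruning of the smallest half.
print(targeted_dropout([0.1, -0.2, 3.0, -4.0], gamma=0.5, alpha=1.0))
```

Training under this stochastic mask teaches the network not to rely on the small weights, so deleting them afterward costs little accuracy.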
Presenting LXMERT at @EMNLP2019 --> https://t.co/T9SeONSlFO (pronounced 'leksmert'). Top-3 in the GQA & VQA challenges (May 2019), Rank 1 in VizWiz, and very strong generalization to NLVR2 (22% absolute jump)! Awesome effort by @HaoTan5!
— Mohit Bansal (@mohitban47) August 21, 2019
Code + models all public: https://t.co/JWbjEWbhXS; please use and share! 1/2 pic.twitter.com/WvxRirYGoB
Interesting -- the larger the model, the less data it needs to reach the same validation loss; this is the opposite of what classical statistics teaches us https://t.co/88qwts7klV
— Yaroslav Bulatov (@yaroslavvb) August 14, 2019