GPT-3 and A Typology of Hype (by Delip Rao) https://t.co/4G5U0tHf0i
— /MachineLearning (@slashML) July 27, 2020
It's out! The first @pagestlabs issue is on how to think about the buzz in breakthrough technologies like GPT-3 while living in the midst of it. Thanks to everyone who subscribed early. Hope you like reading long posts! https://t.co/Z6zFyK4CuI
— Delip Rao (@deliprao) July 26, 2020
Great work @sh_reya & @notsleepingturk on GPT-3 sandbox: https://t.co/xhAZ0faufP, a flexible tool for building OpenAI-powered apps. https://t.co/zeYmgvXBpj
— Greg Brockman (@gdb) July 25, 2020
I have a joke about neural language models. I have a joke about neural language models. I have a joke about neural language models. I have a joke about neural language models.
— Graham Neubig (@gneubig) July 25, 2020
Reading code is hard! Don't you wish you could just ask the code what it does? To describe its functions, its types.
And maybe... how can it be improved?
Introducing: @Replit code oracle
It's crazy, just got access to @OpenAI API and I already have a working product! pic.twitter.com/HX4MyH9yjm
— Amjad Masad (@amasad) July 22, 2020
Still cropping and modifying BERT diagrams from Devlin et al. (2019)? Maybe don't?
Jimmy's diagram below is super awesome. But for most cases BERT is a (very useful magic) feed-forward network. Draw a box. https://t.co/Gsox1y89Mr pic.twitter.com/DZ6y9rzj07
— Sasha Rush (@srush_nlp) July 21, 2020
I bet GPT-3 will be really good at generating business plans for startups using GPT-3
— Mark O. Riedl (@mark_riedl) July 20, 2020
The GPT-3 hype is way too much. It's impressive (thanks for the nice compliments!) but it still has serious weaknesses and sometimes makes very silly mistakes. AI is going to change the world, but GPT-3 is just a very early glimpse. We have a lot still to figure out.
— Sam Altman (@sama) July 19, 2020
I very much liked this practical, no-nonsense summary of GPT-3 from a product perspective. Thanks @minimaxir for putting it together. https://t.co/HofphHUAzR
— Delip Rao (@deliprao) July 18, 2020
Initial attempts: very impressive QA results (check out the coref in the Gates questions!) but also has some glitches. pic.twitter.com/35hNLJWqWy
— (((ل()(ل() 'yoav)))) (@yoavgo) July 18, 2020
This is quite impressive. If you aren't astonished, you are a frog in a slowly warming pot. https://t.co/KsHqF4nWRm
— timoreilly (@timoreilly) July 18, 2020
Eager to use our newly released PruneBERT models to leverage extremely sparse (>= 95%) networks?
Check out our new collaboration with the @octoml & TVM team!
Get an instant 3x inference speedup from Dense to Sparse models! https://t.co/e2clSVj3Vb
— Hugging Face (@huggingface) July 17, 2020