GPT-3 has 175 billion parameters, trained on 300 billion tokens. https://t.co/rE97CQclwl pic.twitter.com/5tJgwwmABN
— Mark Riedl (@mark_riedl) May 29, 2020
Fun experiment: they tested GPT-3's ability to perform simple arithmetic problems in natural language (without explicitly training it to do arithmetic) pic.twitter.com/KETICaNwxB
— hardmaru (@hardmaru) May 29, 2020
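The arithmetic tests referenced above used few-shot prompting: worked examples are placed in the context and the model completes the final answer. A minimal sketch of such a prompt builder (the exact prompt format used in the GPT-3 paper may differ; this is only illustrative):

```python
def make_arithmetic_prompt(examples, query):
    """Build a few-shot addition prompt from (a, b) example pairs.

    Each example line shows a question with its correct answer; the
    final line poses the query and leaves the answer for the model.
    """
    lines = [f"Q: What is {a} plus {b}? A: {a + b}" for a, b in examples]
    a, b = query
    lines.append(f"Q: What is {a} plus {b}? A:")
    return "\n".join(lines)

prompt = make_arithmetic_prompt([(48, 76), (23, 19)], (17, 34))
print(prompt)
```

The point of the experiment was that GPT-3 was never explicitly trained on this task; the in-context examples alone condition it to continue the pattern.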
After going through GPT-3 paper, I have to remind myself that we can also do amazing things with small compute. https://t.co/pgWqQqgsJf
— hardmaru (@hardmaru) May 29, 2020
Scale *still* delivers! Congrats @OpenAI on showing very nice zero/few-shot language capabilities of GPT-3. #timelesstweet
— Oriol Vinyals (@OriolVinyalsML) May 29, 2020
Paper: https://t.co/SMT1n4eS1N
Endless Samples: https://t.co/arTp3Dxyo3 pic.twitter.com/LMfeR5EL4x
Nice/fun YouTube channel walking through recent papers in deep learning in a video format, this episode on GPT-3. Cool! :) https://t.co/9QyKkgSH8Q
— Andrej Karpathy (@karpathy) May 29, 2020
GPT-3 is terrifying because it's a tiny model compared to what's possible, trained in the dumbest way possible on a single impoverished modality on tiny data, yet the first version already manifests crazy runtime meta-learning—and the scaling curves 𝘴𝘵𝘪𝘭𝘭 are not bending! 😮 https://t.co/hQbW9znm3x
— 𝔊𝔴𝔢𝔯𝔫 (@gwern) May 31, 2020
.@VioletNPeng wrote a paper that produced shockingly #racist and #sexist paragraphs without any cherry picking. For @OpenAI to launch this during #BlackLivesMatters is tone deaf. pic.twitter.com/6q3szp0Mm1
— Prof. Anima Anandkumar (@AnimaAnandkumar) June 11, 2020
This web app by @sushant_kumar generates a tweet given a word using GPT-3. You can try it by using: https://t.co/hfLQSsUzas
— hardmaru (@hardmaru) July 17, 2020
(Replace "hong kong" with your own words in the URL. "%20" is the whitespace character)
Below is the tweet that GPT-3 generated when I put in "hong kong" pic.twitter.com/LWs3Si4bX7
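Since the word is passed in the URL path, spaces must be percent-encoded as %20, as the tweet notes. A minimal sketch of building such a query URL (the base URL here is a placeholder, not the app's actual address):

```python
from urllib.parse import quote

def build_tweet_url(word, base="https://example.com"):
    """Return a query URL with the word percent-encoded in the path.

    quote() encodes spaces as %20 and other reserved characters as
    their %XX escapes, so multi-word inputs like "hong kong" work.
    """
    return f"{base}/{quote(word)}"

print(build_tweet_url("hong kong"))  # spaces become %20 in the path
```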
GPT-3 often performs like a clever student who hasn't done their reading trying to bullshit their way through an exam. Some well-known facts, some half-truths, and some straight lies, strung together in what first looks like a smooth narrative.
— Julian Togelius (@togelius) July 17, 2020
This is quite impressive. If you aren't astonished, you are a frog in a slowly warming pot. https://t.co/KsHqF4nWRm
— timoreilly (@timoreilly) July 18, 2020
initial attempts: very impressive QA results (check out the coref in the gates questions!) but also has some glitches. pic.twitter.com/35hNLJWqWy
— (((ل()(ل() 'yoav)))) (@yoavgo) July 18, 2020
I very much liked this practical, no-nonsense summary of GPT-3 from a product perspective. Thanks @minimaxir for putting it together. https://t.co/HofphHUAzR
— Delip Rao (@deliprao) July 18, 2020