by mark_riedl on 2020-05-29 (UTC).

GPT-3 has 175 billion parameters, trained on 300 billion tokens https://t.co/rE97CQclwl pic.twitter.com/5tJgwwmABN

— Mark Riedl (@mark_riedl) May 29, 2020
research nlp
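
A rough back-of-envelope using the common 6·N·D approximation for transformer training FLOPs (my addition, not from the tweet) shows what those two numbers imply about training compute:

```python
# Back-of-envelope training-compute estimate, using the common
# approximation FLOPs ~= 6 * N * D (N = parameters, D = training tokens).
N = 175e9   # 175 billion parameters
D = 300e9   # 300 billion training tokens

flops = 6 * N * D
print(f"~{flops:.2e} FLOPs")  # ~3.15e+23 FLOPs
```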
by hardmaru on 2020-05-29 (UTC).

Fun experiment: they tested GPT-3's ability to solve simple arithmetic problems posed in natural language (without explicitly training it to do arithmetic) pic.twitter.com/KETICaNwxB

— hardmaru (@hardmaru) May 29, 2020
nlp
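
For context, the evaluation works by few-shot prompting: the model sees a handful of worked examples as plain text and is asked to continue. A minimal sketch of such a prompt (the format here is my own illustration, not the paper's exact template):

```python
# Illustrative few-shot prompt for natural-language arithmetic.
# The model is asked to continue the text; no arithmetic-specific
# training is involved, only pattern completion.
examples = [
    ("What is 23 plus 59?", "82"),
    ("What is 45 plus 17?", "62"),
]
query = "What is 38 plus 44?"

prompt = "\n".join(f"Q: {q}\nA: {a}" for q, a in examples)
prompt += f"\nQ: {query}\nA:"
print(prompt)
```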
by hardmaru on 2020-05-29 (UTC).

After going through the GPT-3 paper, I have to remind myself that we can also do amazing things with small compute. https://t.co/pgWqQqgsJf

— hardmaru (@hardmaru) May 29, 2020
nlp misc
by OriolVinyalsML on 2020-05-29 (UTC).

Scale *still* delivers! Congrats @OpenAI on showing very nice zero/few-shot language capabilities of GPT-3. #timelesstweet

Paper: https://t.co/SMT1n4eS1N
Endless Samples: https://t.co/arTp3Dxyo3 pic.twitter.com/LMfeR5EL4x

— Oriol Vinyals (@OriolVinyalsML) May 29, 2020
nlp research
by karpathy on 2020-05-29 (UTC).

Nice/fun YouTube channel walking through recent papers in deep learning in video format; this episode is on GPT-3. Cool! :) https://t.co/9QyKkgSH8Q

— Andrej Karpathy (@karpathy) May 29, 2020
learning tutorial video
by gwern on 2020-05-31 (UTC).

GPT-3 is terrifying because it's a tiny model compared to what's possible, trained in the dumbest way possible on a single impoverished modality on tiny data, yet the first version already manifests crazy runtime meta-learning—and the scaling curves 𝘴𝘵𝘪𝘭𝘭 are not bending! 😮 https://t.co/hQbW9znm3x

— 𝔊𝔴𝔢𝔯𝔫 (@gwern) May 31, 2020
nlp
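
The "scaling curves" here are the power-law fits of loss against model size and compute from Kaplan et al. (2020), extended in the GPT-3 paper. A minimal sketch of what an unbent power law means; the constants roughly match the published parameter-count fit but should be treated as illustrative:

```python
# Power-law scaling of validation loss with model size:
# L(N) = (Nc / N) ** alpha. On a log-log plot this is a straight
# line; "not bending" means the largest models still sit on it.
Nc, alpha = 8.8e13, 0.076  # illustrative constants

for n_params in (1e8, 1e9, 1e10, 175e9):
    loss = (Nc / n_params) ** alpha
    print(f"N={n_params:.0e} params -> L~{loss:.2f} nats")
```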
by AnimaAnandkumar on 2020-06-11 (UTC).

.@VioletNPeng wrote a paper that produced shockingly #racist and #sexist paragraphs without any cherry picking. For @OpenAI to launch this during #BlackLivesMatter is tone deaf. pic.twitter.com/6q3szp0Mm1

— Prof. Anima Anandkumar (@AnimaAnandkumar) June 11, 2020
bias ethics nlp
by hardmaru on 2020-07-17 (UTC).

This web app by @sushant_kumar generates a tweet given a word using GPT-3. You can try it by using: https://t.co/hfLQSsUzas

(Replace "hong kong" with your own words in the URL; "%20" is the URL-encoded space character.)

Below is the tweet that GPT-3 generated when I put in "hong kong" pic.twitter.com/LWs3Si4bX7

— hardmaru (@hardmaru) July 17, 2020
nlp application
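
A small sketch of the URL construction the tweet describes, using Python's standard library; the real endpoint sits behind the shortened link above, so the base URL below is a hypothetical placeholder:

```python
from urllib.parse import quote

# Build the app URL the tweet describes: spaces in the query word
# become "%20". The base URL is a placeholder, since the tweet only
# gives a shortened link.
BASE_URL = "https://example.com/"  # hypothetical; see the t.co link above

word = "hong kong"
url = BASE_URL + quote(word)  # quote() encodes " " as "%20"
print(url)                    # https://example.com/hong%20kong
```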
by togelius on 2020-07-17 (UTC).

GPT-3 often performs like a clever student who hasn't done their reading, trying to bullshit their way through an exam. Some well-known facts, some half-truths, and some straight lies, strung together in what first looks like a smooth narrative.

— Julian Togelius (@togelius) July 17, 2020
nlp misc thought
by timoreilly on 2020-07-18 (UTC).

This is quite impressive. If you aren't astonished, you are a frog in a slowly warming pot. https://t.co/KsHqF4nWRm

— timoreilly (@timoreilly) July 18, 2020
nlp misc
by yoavgo on 2020-07-18 (UTC).

Initial attempts: very impressive QA results (check out the coref in the Gates questions!), but there are also some glitches. pic.twitter.com/35hNLJWqWy

— (((ل()(ل() 'yoav)))) (@yoavgo) July 18, 2020
nlp misc
by deliprao on 2020-07-18 (UTC).

I very much liked this practical, no-nonsense summary of GPT-3 from a product perspective. Thanks @minimaxir for putting it together. https://t.co/HofphHUAzR

— Delip Rao (@deliprao) July 18, 2020
nlp learning