by sama on 2020-07-19 (UTC).

The GPT-3 hype is way too much. It’s impressive (thanks for the nice compliments!) but it still has serious weaknesses and sometimes makes very silly mistakes. AI is going to change the world, but GPT-3 is just a very early glimpse. We have a lot still to figure out.

— Sam Altman (@sama) July 19, 2020
thought, nlp
by mark_riedl on 2020-07-20 (UTC).

I bet GPT-3 will be really good at generating business plans for startups using GPT-3

— Mark O. Riedl (@mark_riedl) July 20, 2020
humour, nlp
by gdb on 2020-07-25 (UTC).

Great work @sh_reya & @notsleepingturk on GPT-3 sandbox: https://t.co/xhAZ0faufP, a flexible tool for building OpenAI-powered apps. https://t.co/zeYmgvXBpj

— Greg Brockman (@gdb) July 25, 2020
tool, nlp
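(The sandbox above is a layer over the OpenAI API. For context, here is a minimal sketch of the kind of call such an "OpenAI-powered app" makes under the hood, assuming the openai Python package and the 2020-era Completion endpoint; the engine name, prompt, and parameters are illustrative, not taken from the sandbox itself.)

# Minimal sketch of a completion call against the 2020-era OpenAI API.
# Assumes the openai Python package and beta API access; engine name,
# prompt, and parameters are illustrative.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.Completion.create(
    engine="davinci",  # base GPT-3 engine in the 2020 beta
    prompt="Write a one-sentence product tagline for a note-taking app:",
    max_tokens=32,
    temperature=0.7,
)

print(response.choices[0].text.strip())

(If the sandbox works as the tweet suggests, it wraps calls like this behind a prompt-design UI so you can iterate on prompts without touching the API directly.)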
by deliprao on 2020-07-26 (UTC).

It's out! The first @pagestlabs issue is on how to think about the buzz in breakthrough technologies like GPT-3 while living in the midst of it. Thanks everyone who subscribed early. Hope you like reading long posts 😅🖖 https://t.co/Z6zFyK4CuI

— Delip Rao (@deliprao) July 26, 2020
nlp, thought
by slashML on 2020-07-27 (UTC).

GPT-3 and A Typology of Hype (by Delip Rao) https://t.co/4G5U0tHf0i

— /MachineLearning (@slashML) July 27, 2020
nlp, misc
by gdb on 2020-07-29 (UTC).

One of our users is running a rigorous test of OpenAI's utility in copywriting: https://t.co/G6TYXWUNxN.

You can sign up on the landing page if you'd like your site to participate.

Will be exciting to see the results!

— Greg Brockman (@gdb) July 29, 2020
nlp
by hardmaru on 2020-07-31 (UTC).

Philosophers On GPT-3

“It has many limitations and its work is full of glitches and mistakes. But the point is not so much GPT-3 but where it is going. Given the progress from GPT-2 to GPT-3, who knows what we can expect from GPT-4 and beyond?” https://t.co/Rkz6IRueZm

— hardmaru (@hardmaru) July 31, 2020
nlp, thought
by random_walker on 2020-08-01 (UTC).

The futurists predicted a singularity where AI recursively improves AI, but what we've got instead is AI feeding on text generated by AI. https://t.co/wpXCIZ1BA7

— Arvind Narayanan (@random_walker) August 1, 2020
misc, nlp
by fchollet on 2020-08-06 (UTC).

An insightful essay probing to what extent GPT-3 is capable of making analogies. Recommended! https://t.co/LXWKVlCHt2 https://t.co/PRWmPMYD9q

— François Chollet (@fchollet) August 6, 2020
nlp, learning
by rctatman on 2020-08-10 (UTC).

I'm not putting them on blast b/c they're a student, but I just ran across someone implying that GPT-3 correctly answered a medical question by using reasoning and underlying knowledge.

Language models simply do not do that. PLEASE don't use them for medical advice.

— Rachael Tatman (@rctatman) August 10, 2020
nlp, misc
by chrisalbon on 2020-08-11 (UTC).

GPT-3 isn’t some super intelligent AI.

It is a language model desperately trying to say the right things so it fits in.

— Chris Albon (@chrisalbon) August 11, 2020
nlp, humour
by JanelleCShane on 2020-08-17 (UTC).

In today's disturbing news, I discovered that GPT-3 can write AI Weirdness blog posts.

1st paragraph is my prompt. The rest is GPT-3 handily simulating the fumbling of a much less powerful neural net. https://t.co/xOulehDuL8 pic.twitter.com/WHcWmvmUNv

— Janelle Shane (@JanelleCShane) August 17, 2020
misc
