An insightful essay probing to what extent GPT-3 is capable of making analogies. Recommended! https://t.co/LXWKVlCHt2 https://t.co/PRWmPMYD9q
— François Chollet (@fchollet) August 6, 2020
DeLighT: Very Deep and Light-weight Transformer
pdf: https://t.co/3BGksd53Bs
abs: https://t.co/QgKDzHmYy9
github: https://t.co/nZzapo7NCF pic.twitter.com/h0qsg58MRU
— AK (@ak92501) August 4, 2020
DeText: A deep NLP framework for intelligent text understanding https://t.co/HmI5f6RfD3 #Python #NLP #MachineLearning pic.twitter.com/iEtqIi1PPJ
— Python Weekly (@PythonWeekly) August 3, 2020
The futurists predicted a singularity where AI recursively improves AI, but what we've got instead is AI feeding on text generated by AI. https://t.co/wpXCIZ1BA7
— Arvind Narayanan (@random_walker) August 1, 2020
I've gotten a lot of questions about this recently, so:
I would not recommend using neural language generation (BERT, GPT-3, etc.) to generate text you send to users.
Why?
It *will* produce plausible sounding but factually incorrect output. Not if but when.
— Rachael Tatman (@rctatman) July 31, 2020
Philosophers On GPT-3
“It has many limitations and its work is full of glitches and mistakes. But the point is not so much GPT-3 but where it is going. Given the progress from GPT-2 to GPT-3, who knows what we can expect from GPT-4 and beyond?” https://t.co/Rkz6IRueZm
— hardmaru (@hardmaru) July 31, 2020
One of our users is running a rigorous test of OpenAI's utility in copywriting: https://t.co/G6TYXWUNxN.
You can sign up on the landing page if you'd like your site to participate.
Will be exciting to see the results!
— Greg Brockman (@gdb) July 29, 2020
Big Bird: Transformers for Longer Sequences https://t.co/vUZJj7SLTK
— /MachineLearning (@slashML) July 29, 2020
Big Bird: Transformers for Longer Sequences 🐦
pdf: https://t.co/1ZH5oC2T2e
abs: https://t.co/DLt59rpbps pic.twitter.com/XHuvaqPahM
— AK (@ak92501) July 29, 2020
"algorithm changes gender prediction from female to male when dr. is added" is so bafflingly bad that I am ALMOST surprised that such a bad algorithm was knowingly released into the world, that no one thought to test for exactly this if only in anticipation of bad PR.
— Casey Fiesler, PhD, JD, geekD (@cfiesler) July 29, 2020
Almost. https://t.co/WxwdgZs00X
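The failure mode described above suggests a simple invariance check any such model could be run through before release. A hypothetical sketch (`predict_gender` stands in for whatever model is under test; it is not a real API):

```python
# A hypothetical invariance test for the failure Fiesler describes:
# adding a professional title to a name should not change the model's
# gender prediction. `predict_gender` is a stand-in, not a real API.
TITLES = ["Dr.", "Prof.", "Eng."]

def check_title_invariance(predict_gender, names):
    """Return (name, title) pairs where adding the title flips the prediction."""
    failures = []
    for name in names:
        baseline = predict_gender(name)
        for title in TITLES:
            if predict_gender(f"{title} {name}") != baseline:
                failures.append((name, title))
    return failures
```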
Good post on the use of BPE (byte pair encodings) for I/O of language models, pointing out a subtle under-the-hood detail with unintuitive repercussions https://t.co/vZ5R5lqteP e.g. hello, Hello, and HELLO all tokenize completely differently, and possibly into a different # of tokens each
— Andrej Karpathy (@karpathy) July 28, 2020
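The casing effect Karpathy describes is easy to reproduce. A minimal sketch using the GPT-2 tokenizer from Hugging Face `transformers` (an assumption: the linked post may use a different BPE implementation, but the behavior is the same):

```python
# A small demonstration of the BPE casing quirk: the same word in
# different casings maps to entirely different token sequences,
# often of different lengths. Requires `pip install transformers`.
from transformers import GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")

for word in ["hello", "Hello", "HELLO"]:
    tokens = tokenizer.tokenize(word)
    ids = tokenizer.encode(word)
    # Each casing typically yields different subword pieces and counts.
    print(f"{word!r}: {len(ids)} token(s) -> {tokens}")
```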
We just added new links to the transformers README so you can test inference on some of the most popular models: https://t.co/eSqQ8V5Fm9 pic.twitter.com/gcFxOi29hF
— Hugging Face (@huggingface) July 28, 2020
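For readers who want to reproduce such an inference test locally rather than through the linked pages, a minimal sketch using the `transformers` pipeline API; the fill-mask task and the model name are illustrative choices, not necessarily among the models linked in the tweet:

```python
# A quick local inference check with the transformers pipeline API.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="distilbert-base-uncased")

# DistilBERT's mask token is [MASK]; the pipeline returns the
# top predictions with their scores.
for prediction in fill_mask("The goal of NLP is to [MASK] language."):
    print(f"{prediction['token_str']:>12}  score={prediction['score']:.3f}")
```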