by OpenAI on 2019-02-14 (UTC).

We've trained an unsupervised language model that can generate coherent paragraphs and perform rudimentary reading comprehension, machine translation, question answering, and summarization — all without task-specific training: https://t.co/sY30aQM7hU pic.twitter.com/360bGgoea3

— OpenAI (@OpenAI) February 14, 2019
nlp research
by seb_ruder on 2019-02-14 (UTC).

A new bigger, better language model by @OpenAI:
- Scaled-up version of their Transformer (10x params)
- Trained on 10x more curated data (40 GB of Reddit out links w/ >2 karma)
- SOTA on many LM-like tasks
- Discuss potential for malicious use https://t.co/4hnXa8DsJx

— Sebastian Ruder (@seb_ruder) February 14, 2019
nlp
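The "10x params" scale-up Ruder mentions can be sanity-checked with a back-of-the-envelope parameter count for a standard Transformer decoder. This is only a rough sketch: the hyperparameters (48 layers, d_model = 1600, 50,257-token vocabulary, 1024-token context) are those reported for the largest GPT-2, but the formula ignores biases and LayerNorm parameters, which contribute well under 1% at this scale.

```python
def transformer_params(n_layer, d_model, vocab, n_ctx=1024):
    """Rough parameter count for a GPT-style Transformer decoder.

    Per block: ~4*d^2 for attention (Q, K, V, and output projections)
    plus ~8*d^2 for the 4x-wide feed-forward MLP. Biases and LayerNorm
    gains are omitted from this estimate.
    """
    per_block = 12 * d_model ** 2
    embeddings = vocab * d_model + n_ctx * d_model  # token + position embeddings
    return n_layer * per_block + embeddings

# Largest GPT-2 config reported in the paper: 48 layers, d_model = 1600.
print(transformer_params(48, 1600, 50257))  # ~1.56e9, i.e. the reported ~1.5B
```

Plugging in the original GPT's config (12 layers, d_model = 768) gives roughly 124M by the same formula, consistent with the "10x" framing.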
by seb_ruder on 2019-02-14 (UTC).

We've seen before that LMs can do zero-shot learning. What I find most exciting about this is the quality of the generated samples. These are the most coherent, long-form LM samples I've seen so far. https://t.co/asivaWrkG6 pic.twitter.com/Rgpzchb5rJ

— Sebastian Ruder (@seb_ruder) February 14, 2019
nlp misc
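The zero-shot behaviour Ruder refers to comes from framing downstream tasks as plain text for the model to continue. A minimal sketch of that prompt construction follows; "TL;DR:" is the summarization cue actually used in the GPT-2 paper, while the other templates here are illustrative assumptions.

```python
def as_lm_prompt(task, text):
    """Frame a downstream task as a next-word-prediction problem:
    the model 'answers' simply by continuing the prompt text."""
    templates = {
        "summarize": "{text}\nTL;DR:",            # cue used in the GPT-2 paper
        "translate": "English: {text}\nFrench:",  # illustrative template
        "qa": "{text}\nQ: Who is mentioned?\nA:", # illustrative template
    }
    return templates[task].format(text=text)

print(as_lm_prompt("summarize", "GPT-2 is a large language model."))
```

No task-specific head or fine-tuning is involved: the same next-word objective serves every task, which is exactly what makes the zero-shot results notable.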
by gdb on 2019-02-14 (UTC).

Agree that the implications are scary! That’s why we didn’t release the model (https://t.co/LN3EDFtHGq). We’ve anticipated the need to restrict future publishing in our charter (https://t.co/udexZDbHcP) and co-authored a report on specific future threats (https://t.co/wc1agFCE5R)

— Greg Brockman (@gdb) February 14, 2019
misc
by jackclarkSF on 2019-02-14 (UTC).

This project is a good example of how Policy is integrated into @OpenAI - we worked very closely with the language and safety teams on this release, and I personally did multiple trips to speak with multiple stakeholders ahead of release. Will be doing a post-release campaign. https://t.co/z7zuh1QeXE

— Jack Clark (@jackclarkSF) February 14, 2019
misc
by deliprao on 2019-02-14 (UTC).

looks like we automated a typical redditor. https://t.co/Eptwd9pV8g

— Delip Rao (@deliprao) February 14, 2019
misc
by jackclarkSF on 2019-02-14 (UTC).

If you're interested in the @OpenAI language model, GPT-2 (https://t.co/z7zuh1QeXE), then take a look at this dump of tons of samples from it - gives you a sense of its diversity: https://t.co/SSwulKg3wB

— Jack Clark (@jackclarkSF) February 14, 2019
nlp
by kchonyc on 2019-02-14 (UTC).

"Due to our concerns about malicious applications of [Our model ... trained simply to predict the next word], we are not releasing the trained model" for the humanity, i feel now obliged to remove all the pretrained model weights i've made public so far. 😅 https://t.co/0gTqeXTZHg

— Kyunghyun Cho (@kchonyc) February 14, 2019
nlp
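Cho is quoting the release note's point that the model is "trained simply to predict the next word". As a toy illustration of that objective, here is a bigram model that learns next-word counts from a tiny corpus and generates text by repeatedly emitting the most likely successor. This sketches only the decoding loop; GPT-2 itself predicts byte-pair-encoded tokens with a Transformer, not bigram counts.

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count, for each word, which words follow it and how often."""
    nxt = defaultdict(Counter)
    words = corpus.split()
    for a, b in zip(words, words[1:]):
        nxt[a][b] += 1
    return nxt

def generate(nxt, start, max_words=7):
    """Greedy decoding: always emit the most frequent next word."""
    out = [start]
    for _ in range(max_words - 1):
        if not nxt[out[-1]]:
            break  # no known successor: stop generating
        out.append(nxt[out[-1]].most_common(1)[0][0])
    return " ".join(out)

model = train_bigram("the model predicts the next word and the next word")
print(generate(model, "the"))  # → "the next word and the next word"
```

Everything else in GPT-2 — scale, architecture, data — serves this one objective; the sampled paragraphs people are sharing fall out of it for free.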
by Thom_Wolf on 2019-02-14 (UTC).

A personal❤️ in GPT2 paper: the discussion on language-modeling & multi-tasking. We still see papers being rejected bc LM is not considered "useful"(ex Transfo-XL)😢 If your dataset is diverse enough LM is a proxy for a *huge* multi-task objective over pretty much every NLP task! https://t.co/MeJJVrXdIg

— Thomas Wolf (@Thom_Wolf) February 14, 2019
misc
by Smerity on 2019-02-14 (UTC).

In terms of the underlying techniques, little has changed about the model itself. The lovely thing about deep learning is that a slightly tweaked problem at a new scale can reveal an entirely new result. It's the "scale it until it breaks (or we do)" line of research ;)

— Smerity (@Smerity) February 14, 2019
nlp misc
by jeremyphoward on 2019-02-15 (UTC).

It's great to hear from informed and thoughtful reporters like @jjvincent, who are working to put AI advances in context, and consider their wider implications.

His latest piece on @OpenAI's new language model is very nicely done. https://t.co/HduGSutCLL pic.twitter.com/0g7X3Gh9vc

— Jeremy Howard (@jeremyphoward) February 15, 2019
misc
by Smerity on 2019-02-15 (UTC).

Today's meta-Twitter summary for machine learning:
None of us have any consensus on what we're doing when it comes to responsible disclosure, dual use, or how to interact with the media.
This should be concerning for us all, in and out of the field.

— Smerity (@Smerity) February 15, 2019
nlp
