by yoavgo on 2019-02-15 (UTC).

Just wanted to give you all a heads up: our lab found an amazing breakthrough in language understanding. But we also worry it may fall into the wrong hands. So we decided to scrap it and only publish the regular *ACL stuff instead. Big respect to the team for their great work.

— (((ل()(ل() 'yoav)))) (@yoavgo) February 15, 2019
humour
by zacharylipton on 2019-02-17 (UTC).

Perhaps what's *most remarkable* about the @OpenAI controversy is how *unremarkable* the technology is. Despite their outsize attention & budget, the research itself is perfectly ordinary—right in the main branch of deep learning NLP research https://t.co/bmMkkL3KKJ

— Zachary Lipton (@zacharylipton) February 17, 2019
misc
by zacharylipton on 2019-02-17 (UTC).

***OpenAI Trains Language Model, Mass Hysteria Ensues*** New post on Approximately Correct digesting the code release debate and media shitstorm. https://t.co/19Gbzxi6EW

— Zachary Lipton (@zacharylipton) February 17, 2019
misc
by jeremyphoward on 2019-02-17 (UTC).

This is the most thoughtful analysis of the @OpenAI model release issue I've seen.

I'm not sure I agree with it.

But I'm sure of this: those NLP academics who responded with shallow snarky toxic uninformed envy are an embarrassment to the field. https://t.co/fdQZ6fLBR3

— Jeremy Howard (@jeremyphoward) February 17, 2019
thought
