by slashML on 2019-07-21 (UTC).

BERT's success in some benchmark tests may be simply due to the exploitation of spurious statistical cues in the dataset. Without them it is no better than random. https://t.co/o3i8FtmC8z

— /MachineLearning (@slashML) July 21, 2019
nlp research
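The claim above can be made concrete with a toy illustration (this is a hypothetical dataset, not the benchmark from the linked paper): when a surface token such as "not" happens to correlate with the label, a trivial rule that ignores the actual reasoning can still score far above random, and a large model is free to latch onto the same shortcut.

```python
# Toy dataset where the token "not" spuriously correlates with label 1.
# (Hypothetical examples for illustration only.)
data = [
    ("the warrant does not support the claim", 1),
    ("the claim is not backed by evidence", 1),
    ("this is not a valid argument", 1),
    ("the evidence supports the claim", 0),
    ("the warrant links evidence to claim", 0),
    ("the argument holds", 0),
]

def cue_classifier(text):
    # Predict purely from the presence of "not" -- no comprehension involved.
    return 1 if "not" in text.split() else 0

accuracy = sum(cue_classifier(t) == y for t, y in data) / len(data)
print(accuracy)  # 1.0 on this toy set, despite never reading the argument
```

A model that reaches high accuracy this way tells us about the dataset's statistics, not about language understanding.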
by hardmaru on 2019-07-22 (UTC).

Contrary to popular belief, training a gigantic model on a humongous dataset of human text will not lead to AGI. 🙈🧠

Probing Neural Network Comprehension of Natural Language Arguments: https://t.co/3AtozIV03O https://t.co/Y5zg0Dfgke

— hardmaru (@hardmaru) July 22, 2019
nlp research thought
by hardmaru on 2019-07-22 (UTC).

Here's the link to their adversarial dataset for argument comprehension. Would've been nice (and made the paper more interesting) if they also provided more interesting adversarial examples than just one (about Google 😉), perhaps in the Appendix section. https://t.co/WUhuTR0gq6

— hardmaru (@hardmaru) July 22, 2019
research nlp w_code
by peteskomoroch on 2019-07-23 (UTC).

Data leakage and model cheating are hot topics this week: https://t.co/QdKNwDmTin

— Peter Skomoroch (@peteskomoroch) July 23, 2019
misc