by ch402 on 2019-05-09 (UTC).

“Adversarial Examples Are Not Bugs, They Are Features” by Ilyas et al. is pretty interesting.

📝Paper: https://t.co/8B8eqoywzl
💻Blog: https://t.co/eJlJ4L8nhA

Some quick notes below.

— Chris Olah (@ch402) May 9, 2019
research

(2) The claim which seems to me really remarkable, if it holds up, is that you can use this process to turn robust models into robust datasets, for which normal training creates robust models. pic.twitter.com/WwC4okBahs

— Chris Olah (@ch402) May 9, 2019
research thought
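The robust-dataset construction can be illustrated with a deliberately toy linear sketch. Everything below is my own illustrative assumption, not the paper's actual deep-network procedure: the "robust model" is a fixed linear classifier that uses only one large-margin coordinate, "robustification" is projection onto that coordinate, and least squares stands in for normal training.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: coordinate 0 is a large-margin ("robust") feature; the rest
# are weakly predictive features a small perturbation can flip.
n, d = 512, 8
y = rng.integers(0, 2, size=n)
s = 2.0 * y - 1.0                     # labels as +/-1
X = rng.normal(size=(n, d))
X[:, 0] += 2.0 * s                    # robust feature: margin 2.0
X[:, 1:] += 0.5 * s[:, None]          # non-robust features: margin 0.5

# Hypothetical "robust model": relies only on the robust coordinate.
w_robust = np.zeros(d)
w_robust[0] = 1.0

# "Robust dataset": re-synthesize each input so it carries only the
# features the robust model uses (here: projection onto w_robust).
X_robustified = np.outer(X @ w_robust, w_robust)

# Normal (non-adversarial) training on the robust dataset.
w_std, *_ = np.linalg.lstsq(X_robustified, s, rcond=None)

# An eps-bounded attack on the inputs no longer flips predictions,
# because w_std ignores the non-robust coordinates entirely.
eps = 0.6
X_adv = X - eps * s[:, None] * np.sign(w_std)[None, :]
adv_acc = (((X_adv @ w_std) > 0).astype(int) == y).mean()
print(f"accuracy under attack: {adv_acc:.2f}")
```

The point of the sketch is only the mechanism: once the dataset exposes nothing but the robust feature, ordinary training has no non-robust signal to latch onto.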

(3) The other interesting result is that you can create a different dataset of adversarial attacks, where you try to predict the attack class.

They find this model - trained on adversarial attacks - generalizes to clean data, which I probably wouldn’t have predicted in advance. pic.twitter.com/ood8JL1dVy

— Chris Olah (@ch402) May 9, 2019
research thought
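A toy version of this relabeling experiment, under my own simplifying assumptions rather than the paper's setup: a fixed linear model and an FGSM-style one-step attack stand in for trained deep networks and PGD, and least squares stands in for SGD training.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-class data: the true label is the sign of coordinate 0.
n, d = 512, 8
X = rng.normal(size=(n, d))
y = (X[:, 0] > 0).astype(int)

# Stand-in for a trained classifier: a fixed linear model (assumption).
w = np.zeros(d)
w[0] = 1.0

# Build the attack dataset: push each input toward the *other* class
# with an FGSM-style step, and label it with the attack's target class.
eps = 2.5
target = 1 - y
step = np.where(target == 1, 1.0, -1.0)
X_adv = X + eps * step[:, None] * np.sign(w)[None, :]

# Train a fresh model on (X_adv, target) only.
w_new, *_ = np.linalg.lstsq(X_adv, 2.0 * target - 1.0, rcond=None)

# Evaluate on clean, held-out data: the perturbation that flips the
# label carries the predictive feature direction, so the model trained
# purely on attacks still classifies clean inputs.
X_test = rng.normal(size=(2048, d))
y_test = (X_test[:, 0] > 0).astype(int)
clean_acc = (((X_test @ w_new) > 0).astype(int) == y_test).mean()
print(f"clean accuracy after training on attacks only: {clean_acc:.2f}")
```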

Tags

learning tutorial misc nlp rstats gan ethics research dataviz survey python tool security kaggle video thought bayesian humour tensorflow w_code bias dataset pytorch cv tip application javascript forecast swift golang rl jax julia gnn causal diffusion