Tweeted By @NicolasPapernot
CleverHans blog post with @nickfrosst: we explain how the Deep k-Nearest Neighbors (DkNN) and soft nearest-neighbor loss (SNNL) help recognize data that is not from the training distribution. The post includes an interactive figure (credit goes to Nick): https://t.co/aajpf8NOib pic.twitter.com/MKKc4WX8Rp
— Nicolas Papernot (@NicolasPapernot) May 21, 2019
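The soft nearest-neighbor loss mentioned in the tweet measures how entangled different classes are in a representation: for each point in a batch, it compares the temperature-weighted similarity mass of same-class neighbors to that of all neighbors. Below is a minimal NumPy sketch of one common batch-wise formulation of that loss; the function name, fixed temperature, and small epsilon are illustrative choices and not taken from the post.

```python
import numpy as np

def soft_nearest_neighbor_loss(embeddings, labels, temperature=1.0, eps=1e-12):
    """Illustrative soft nearest-neighbor loss (SNNL) over one batch.

    embeddings: (b, d) array of points (e.g. activations of one layer).
    labels:     (b,) array of integer class labels.
    Lower values mean each point's soft nearest neighbors mostly share
    its label; higher values mean classes are more entangled.
    """
    # Pairwise squared Euclidean distances between all points in the batch.
    diffs = embeddings[:, None, :] - embeddings[None, :, :]
    sq_dists = np.sum(diffs ** 2, axis=-1)            # shape (b, b)

    # Similarity kernel; zero the diagonal so a point is not its own neighbor.
    sims = np.exp(-sq_dists / temperature)
    np.fill_diagonal(sims, 0.0)

    # Numerator restricts the similarity mass to same-class neighbors.
    same_class = labels[:, None] == labels[None, :]
    numerator = np.sum(sims * same_class, axis=1)
    denominator = np.sum(sims, axis=1)

    # Average negative log-ratio over the batch.
    return -np.mean(np.log(eps + numerator / (eps + denominator)))

# Toy usage with random embeddings and two classes (illustrative only).
rng = np.random.default_rng(0)
x = rng.normal(size=(8, 4))
y = np.array([0, 0, 1, 1, 0, 1, 0, 1])
print(soft_nearest_neighbor_loss(x, y, temperature=0.5))
```

The connection to the Deep k-Nearest Neighbors (DkNN) approach referenced in the tweet is that DkNN inspects nearest neighbors of a test input in each layer's representation; a representation trained with a neighbor-aware loss of this kind is meant to make those neighborhoods more informative for flagging inputs that do not come from the training distribution.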