Is poisoning ML possible without inserting poison in the model's training set? Yes. @iliaishacked et al. just introduced "data ordering attacks", which can target both the integrity and availability of ML simply by *reordering* points during SGD https://t.co/4rErkWugiP
— Nicolas Papernot (@NicolasPapernot) April 21, 2021
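The intuition behind the tweet is that per-sample SGD updates do not commute, so presenting the *same* points in a different order steers the model to different weights. A minimal sketch of my own (not the paper's code, and far simpler than a real attack) makes this concrete with a one-parameter squared-loss model:

```python
# Illustration only: SGD updates do not commute, so merely reordering
# the same training points changes where the learner ends up.

def sgd(points, lr=0.1, w=0.0):
    # One epoch of per-sample SGD on the squared loss (w*x - y)^2;
    # each step applies w -= lr * d/dw (w*x - y)^2 = lr * 2*(w*x - y)*x.
    for x, y in points:
        w -= lr * 2 * (w * x - y) * x
    return w

data = [(1.0, 2.0), (3.0, 1.0), (0.5, -1.0)]
w_fwd = sgd(data)                  # original order
w_rev = sgd(list(reversed(data)))  # same points, reversed order

print(w_fwd, w_rev)  # the two runs yield different weights
```

Each update is an affine map of `w`, and compositions of affine maps depend on their order; an adversary who controls only the batch/sample ordering can exploit exactly this freedom, which is the leverage the "data ordering attacks" paper formalizes.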