by jb_cordonnier on 2020-01-10 (UTC).

Very happy to share our latest work accepted at #ICLR2020: we prove that a Self-Attention layer can express any CNN layer. 1/5

📄Paper: https://t.co/Cm61A3PWRA
🍿Interactive website: https://t.co/FTpThM3BQc
🖥Code: https://t.co/xSfmFCy0U2
📝Blog: https://t.co/3bp59RfAcj pic.twitter.com/X1rNS1JvPt

— Jean-Baptiste Cordonnier (@jb_cordonnier) January 10, 2020
research w_code cv learning
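
The construction behind that claim can be sketched in a few lines. In the paper, a head's attention score depends only on the relative position delta between the query and key pixels, via a quadratic term roughly of the form -alpha * ||delta - Delta||^2; as alpha grows, the softmax concentrates on the single pixel at offset Delta, so each head picks out one fixed neighbour. The snippet below is my own minimal illustration of that pattern, not the released code; names such as positional_attention, Delta, and alpha are illustrative.

import numpy as np

def positional_attention(H, W, Delta, alpha):
    """Attention weights (H*W x H*W) for one head with preferred shift Delta."""
    coords = np.array([(i, j) for i in range(H) for j in range(W)])   # pixel grid, row-major
    delta = coords[None, :, :] - coords[:, None, :]                   # delta[q, k] = position(k) - position(q)
    scores = -alpha * ((delta - np.array(Delta)) ** 2).sum(-1)        # quadratic positional score
    scores -= scores.max(axis=1, keepdims=True)                       # numerically stable softmax
    weights = np.exp(scores)
    return weights / weights.sum(axis=1, keepdims=True)

A = positional_attention(H=5, W=5, Delta=(0, 1), alpha=50.0)
q = 2 * 5 + 2                          # query = centre pixel (2, 2)
print(A[q].round(3).reshape(5, 5))     # ~1 at pixel (2, 3): the head attends one step to the right

With a large alpha, each head's attention is effectively a hard one-hot selection of a neighbouring pixel, which is the ingredient the proof builds on.
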
by hardmaru on 2020-01-11 (UTC).

On the Relationship between Self-Attention and Convolutional Layers

This work shows that attention layers can perform convolution and that they often learn to do so in practice. They also prove that a self-attention layer is as expressive as a conv layer. https://t.co/44I1uOd4LF pic.twitter.com/iqioR9eXzU

— hardmaru (@hardmaru) January 11, 2020
research cv
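
To make the "as expressive as a conv layer" claim concrete, here is a small numerical check (again my own sketch under assumptions, not the authors' implementation): give the attention layer one head per relative shift of a KxK window, let each head attend with one-hot weights to its shift, and fold the matching kernel slice into that head's value/output projection. Summing the heads then reproduces a KxK convolution exactly; variable names (kernel, shifts, and so on) are illustrative.

import numpy as np

rng = np.random.default_rng(0)
H, W, C_in, C_out, K = 6, 6, 3, 4, 3                            # image size, channels, kernel size
shifts = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)]   # one head per shift, K*K = 9 heads

kernel = rng.normal(size=(K, K, C_in, C_out))                   # the convolution we want to express
X = rng.normal(size=(H, W, C_in))                               # input feature map

# Reference: ordinary KxK convolution with zero padding.
Xp = np.pad(X, ((1, 1), (1, 1), (0, 0)))
conv = np.zeros((H, W, C_out))
for i in range(H):
    for j in range(W):
        patch = Xp[i:i + K, j:j + K, :]                         # KxKxC_in window around (i, j)
        conv[i, j] = np.einsum('abc,abcd->d', patch, kernel)

# "Attention" view: head (dy, dx) attends with one-hot weights to the pixel at its shift,
# and its value/output projection carries the matching kernel slice.
attn = np.zeros((H, W, C_out))
for dy, dx in shifts:
    W_h = kernel[dy + 1, dx + 1]                                # C_in x C_out projection for this head
    for i in range(H):
        for j in range(W):
            y, x = i + dy, j + dx
            if 0 <= y < H and 0 <= x < W:                       # out-of-frame keys contribute zero, like padding
                attn[i, j] += X[y, x] @ W_h

print(np.allclose(conv, attn))                                  # True: the heads sum to the convolution
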
