by hllo_wrld on 2018-08-25 (UTC).

If you squint, a transformer is like a densely connected factor graph. Network depth approximates the number of rounds of loopy belief propagation.

— Victor Zhong (@hllo_wrld) August 25, 2018
thought
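A minimal sketch of the analogy, not from the tweet itself: a self-attention layer can be read as one round of message passing on a fully connected graph, where every token-node aggregates messages from every other token-node, and stacking layers runs more rounds. The function and variable names below are illustrative only.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention_round(X, Wq, Wk, Wv):
    """One message-passing round: node i aggregates the value 'messages'
    V_j from every node j, weighted by dense pairwise attention scores."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])  # dense graph: all pairs (i, j)
    A = softmax(scores, axis=-1)             # per-node message weights
    return A @ V                             # aggregate incoming messages

rng = np.random.default_rng(0)
n_tokens, d = 5, 8
X = rng.normal(size=(n_tokens, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))

# "Depth ~ rounds of loopy belief propagation": stacking layers is running
# more rounds over the same dense graph. (Real transformers use fresh
# weights per layer; they are shared here only for brevity.)
for _ in range(3):
    X = self_attention_round(X, Wq, Wk, Wv)
print(X.shape)  # (5, 8)
```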
by williamleif on 2018-08-25 (UTC).

Yes! And this is basically the idea behind graph neural networks / relational networks (e.g., https://t.co/SeihrvMIPs or https://t.co/SThMV0E8So). The whole “loopy message passing” with neural nets thing is a great idea (imo) :)

— Will Hamilton (@williamleif) August 25, 2018
thought research
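And a companion sketch of the generic graph-neural-network view the reply points to: one round of neural message passing over an explicit adjacency matrix. With a fully connected adjacency this reduces to the dense, transformer-like case above. Again, all names here are assumptions for illustration, not any particular library's API.

```python
import numpy as np

def mp_layer(H, adj, W_msg, W_upd):
    """One round of message passing: each node sums messages from its
    neighbors (given by the rows of `adj`), then applies a neural update."""
    messages = adj @ (H @ W_msg)          # aggregate neighbor messages
    return np.tanh(H @ W_upd + messages)  # update node states

rng = np.random.default_rng(1)
n, d = 6, 4
H = rng.normal(size=(n, d))
adj = (rng.random((n, n)) < 0.4).astype(float)  # sparse random graph
np.fill_diagonal(adj, 0.0)                      # no self-loops
W_msg, W_upd = rng.normal(size=(d, d)), rng.normal(size=(d, d))

# Repeated rounds ~ loopy message passing with learned (neural) updates.
for _ in range(2):
    H = mp_layer(H, adj, W_msg, W_upd)
print(H.shape)  # (6, 4)
```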
