by filippie509 on 2018-07-13 (UTC).

Just read another one of a myriad of #DeepLearning papers and realized that the authors are reporting trivia, but covering it up with an obscene amount of deep learning fluff and burned GPU time... I shall tear it apart in my next blog post. Stay tuned.

— Filip Piekniewski (@filippie509) July 13, 2018
misc
by filippie509 on 2018-07-15 (UTC).

OK, as promised here is a small autopsy of a paper I recently came across: https://t.co/uv7Vm0W4YX #Deeplearning #AI https://t.co/6d0GafR84q

— Filip Piekniewski (@filippie509) July 15, 2018
misc
by dennybritz on 2018-07-16 (UTC).

So true, unfortunately. In NLP, people have been adding position features to embeddings for a long time. I don’t think anyone has managed to write a whole paper about a one-line feature to make it sound academic. https://t.co/cb9AsRZWMO

— Denny Britz (@dennybritz) July 16, 2018
misc
by dennybritz on 2018-07-16 (UTC).

I really enjoyed the paper presentation, but the TLDR basically is: If your task cares about absolute positions it makes sense to add a position feature to your input! Do we need a paper for this?

— Denny Britz (@dennybritz) July 16, 2018
misc
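The trick the tweets are discussing, appending explicit position features to the network's input, is simple enough to show directly. Below is a minimal NumPy sketch (my own illustration, not code from the paper in question): it concatenates normalized y/x coordinate channels onto an image batch, which is essentially the one-line feature being debated.

```python
import numpy as np

def add_coord_channels(batch):
    """Append normalized (y, x) coordinate channels to an image batch.

    batch: array of shape (N, H, W, C).
    Returns an array of shape (N, H, W, C + 2), where the two extra
    channels hold each pixel's row and column position scaled to [-1, 1].
    """
    n, h, w, _ = batch.shape
    # Per-axis coordinates in [-1, 1], shaped for broadcasting.
    ys = np.linspace(-1.0, 1.0, h).reshape(1, h, 1, 1)
    xs = np.linspace(-1.0, 1.0, w).reshape(1, 1, w, 1)
    y_chan = np.broadcast_to(ys, (n, h, w, 1))
    x_chan = np.broadcast_to(xs, (n, h, w, 1))
    return np.concatenate([batch, y_chan, x_chan], axis=-1)
```

A downstream convolution over this augmented input can then condition on absolute position, which is exactly why the feature helps only on tasks where absolute position matters.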

Tags

learning tutorial misc nlp rstats gan ethics research dataviz survey python tool security kaggle video thought bayesian humour tensorflow w_code bias dataset pytorch cv tip application javascript forecast swift golang rl jax julia gnn causal diffusion