by chrisalbon on 2021-01-29 (UTC).

Still the best learning path for machine learning:

- A Mathematics Course for Political and Social Research
- Introduction to Statistical Learning
- Elements of Statistical Learning
- Deep Learning

Lots of directions you can take your learning after that.

— Chris Albon (@chrisalbon) January 29, 2021
misc thought
by simongerman600 on 2021-01-29 (UTC).

South Korea is one of the few developed countries still reporting rabies cases (albeit extremely low numbers—no human infection since 2004 and no animal infection since 2014). Officials blame unvaccinated animals from North Korea crossing the porous DMZ. https://t.co/E7fkHPN0n0 pic.twitter.com/1RapcjqjZF

— Simon Kuestenmacher (@simongerman600) January 29, 2021
dataviz
by evolvingstuff on 2021-01-28 (UTC).

Three rules of thumb for machine learning:

1) be skeptical of good results
2) be skeptical of bad results
3) be skeptical of mediocre results

— Thomas (@evolvingstuff) January 28, 2021
humour misc
by chrisalbon on 2021-01-28 (UTC).

Piece of advice that helped me: Pay attention to how you feel when interacting with folks. Do they make you feel excited? Smart? Stupid? Lazy?

One of the highest value things you can do for yourself is shadowbanning from your life folks who make you feel worse about yourself.

— Chris Albon (@chrisalbon) January 28, 2021
misc thought
by gneubig on 2021-01-28 (UTC).

There has been much interest in ML methods that generate source code (e.g. Python) from English commands. But does this actually help software developers? We asked 31 developers to use a code generation plugin, and found some interesting results: https://t.co/ifiG3EYK3J 1/7 pic.twitter.com/u0mgY0LSSj

— Graham Neubig (@gneubig) January 28, 2021
research nlp
by Miles_Brundage on 2021-01-28 (UTC).

"The results ... suggest that decision-makers can actually rid themselves of guilt more easily by delegating to machines than by delegating to other people." 😬

"Hiding Behind Machines: When Blame Is Shifted to Artificial Agents," Feier et al.: https://t.co/fpNTNdkY7A

— Miles Brundage (@Miles_Brundage) January 28, 2021
research ethics
by distillpub on 2021-01-27 (UTC).

High-Low Frequency Detectors — A new Distill article by @ludwigschubert, @csvoss, and @ch402. This is the fifth article in the circuits thread. https://t.co/b8Bfl0heOo

— Distill (@distillpub) January 27, 2021
learning
by ylecun on 2021-01-27 (UTC).

Want Transformers to perform long chains of reasoning and to remember stuff?
Use Feedback Transformers.
Brought to you by a team from FAIR-Paris. @tesatory https://t.co/dpCzt2yiPf

— Yann LeCun (@ylecun) January 27, 2021
research
In a group with 1 other tweet.
by kchonyc on 2021-01-27 (UTC).

what a simple yet effective idea! :)

looking at it from the architectural depth perspective (https://t.co/DM4MvzqFQW by zheng et al.,) the depth (# of layers between a particular input at time t' and output at time t) is now (t-t') x L rather than (t-t') + L. https://t.co/GkkuMH7a5u pic.twitter.com/eXWeqoEKxx

— Kyunghyun Cho (@kchonyc) January 27, 2021
research
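The depth comparison in the tweet above can be sketched as a tiny arithmetic check. This is a hypothetical illustration of the claim, not code from the paper: in a standard L-layer Transformer, the longest computation path from an input at time t' to the output at time t grows additively, while attending to the previous step's top layer (as Feedback Transformers do, per the tweet) makes every elapsed step contribute all L layers, so depth grows multiplicatively.

```python
# Sketch of the effective-depth claim from the tweet (illustrative only).
# Standard Transformer: path length from input at t_prime to output at t
# is roughly (t - t_prime) + L, since information moves one step in time
# or one layer up per hop.
def standard_depth(t: int, t_prime: int, num_layers: int) -> int:
    return (t - t_prime) + num_layers

# Feedback Transformer (as described in the tweet): each step reads the
# *top-layer* output of the previous step, so every elapsed time step
# adds a full stack of num_layers, giving (t - t_prime) * num_layers.
def feedback_depth(t: int, t_prime: int, num_layers: int) -> int:
    return (t - t_prime) * num_layers

# Example: 8 elapsed steps, 6 layers.
print(standard_depth(10, 2, 6))  # 14
print(feedback_depth(10, 2, 6))  # 48
```

The multiplicative depth is what lets the model carry long chains of reasoning across time steps without adding layers.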
In a group with 1 other tweet.
by HamelHusain on 2021-01-27 (UTC).

Have you heard about this nbdev thing but are wondering what it's really like to write code in it?

Watch me as I live-code a nbdev project from scratch 😆 👇✨

Thanks to @NorthwesternU for hosting the event. https://t.co/sOXb2unMGg

— Hamel Husain (@HamelHusain) January 27, 2021
learning tool tutorial video
by GoogleAI on 2021-01-26 (UTC).

Recent updates to the live transcription feature in the Google Translate app reduce revisions, providing a noticeable improvement to the user experience. Learn how masking and biasing enable high accuracy with low erasure and minimal lag at: https://t.co/9aV4pjcZj1 pic.twitter.com/ZsNObLw141

— Google AI (@GoogleAI) January 26, 2021
nlp survey
by alxndrkalinin on 2021-01-26 (UTC).

So cool to see Set Transformer being used for smart model ensembling. Kaggle competitions are becoming more interesting and Transformer models are getting more attention! https://t.co/6OKOutqOKQ pic.twitter.com/cMp2NSBCQu

— Alexandr Kalinin (@alxndrkalinin) January 26, 2021
application w_code