by jbhuang0604 on 2021-10-12 (UTC).

It seems that everyone wants to publish many papers but no one wants to read others’ papers…

(thoughts after attending a poorly attended poster session)

— Jia-Bin Huang (@jbhuang0604) October 12, 2021
misc
by JohnHolbein1 on 2021-10-12 (UTC).

Look at the gender breakdown of who speaks in popular films! pic.twitter.com/Jq5CEerex2

— John B. Holbein (@JohnHolbein1) October 12, 2021
dataviz
by ak92501 on 2021-10-12 (UTC).

Causal ImageNet: How to discover spurious features in Deep Learning?
abs: https://t.co/NvelZA7Aa4 pic.twitter.com/OhFgarP17L

— AK (@ak92501) October 12, 2021
research cv
by ak92501 on 2021-10-12 (UTC).

Yuan 1.0: Large-Scale Pre-trained Language Model in Zero-Shot and Few-Shot Learning
abs: https://t.co/2bT9if0KTH

a singleton language model with 245B parameters, achieving SOTA results on natural language processing tasks; trained on a high-quality Chinese corpus of 5TB of text pic.twitter.com/4p6qxxDm0Y

— AK (@ak92501) October 12, 2021
nlp research
by ak92501 on 2021-10-12 (UTC).

CLIP-Adapter: Better Vision-Language Models with Feature Adapters
abs: https://t.co/keCbjWjil8 pic.twitter.com/Z2uqatQeIH

— AK (@ak92501) October 12, 2021
research nlp
by ak92501 on 2021-10-12 (UTC).

A Few More Examples May Be Worth Billions of Parameters
abs: https://t.co/UaR7ANxkzq
github: https://t.co/U00DR6CMn5 pic.twitter.com/S30zjBTLm3

— AK (@ak92501) October 12, 2021
research
by jeremyphoward on 2021-10-11 (UTC).

Has anyone else noticed that walking whilst learning gives better results? Are there any studies of this phenomenon? https://t.co/oORKggBKXI pic.twitter.com/OkZoeAccdU

— Jeremy Howard (@jeremyphoward) October 11, 2021
misc
by ak92501 on 2021-10-11 (UTC).

ViDT: An Efficient and Effective Fully Transformer-based Object Detector
abs: https://t.co/rOytM75swG

obtains the best AP and latency trade-off among existing fully transformer-based object detectors, and achieves 49.2AP owing to its high scalability for large models pic.twitter.com/CVAzoT3dNh

— AK (@ak92501) October 11, 2021
research cv
by ak92501 on 2021-10-11 (UTC).

Token Pooling in Visual Transformers
abs: https://t.co/0Jr3cJqvRe

Applied to DeiT, achieves the same ImageNet top-1 accuracy using 42% fewer computations pic.twitter.com/lusPaB9Bns

— AK (@ak92501) October 11, 2021
research cv
by unsorsodicorda on 2021-10-10 (UTC).

Impressive 172 pp. paper from @DeepMind & @GoogleAI: train deep nets on ImageNet with SGD w/o batch norm, and even w/o skip connection if you substitute SGD with a better optimizer s.a. K-FAC or Shampoo. Shocking! And probably very useful for theory. https://t.co/CoPGVN3Csl pic.twitter.com/hmJ3N2ZMl1

— andrea panizza (@unsorsodicorda) October 10, 2021
research
by karpathy on 2021-10-08 (UTC).

Nice new paper improving image generation and (generative) unsupervised representation learning https://t.co/OYB21fe3sm uses ViT instead of CNN to improve VQGAN into a new "ViT-VQGAN" image patch tokenizer. Tokens are then fed into a GPT for image generation, or linear probing. pic.twitter.com/cepvrv55Jk

— Andrej Karpathy (@karpathy) October 8, 2021
research
by Tim_Dettmers on 2021-10-08 (UTC).

I am excited to share my latest work: 8-bit optimizers – a replacement for regular optimizers. Faster 🚀, 75% less memory 🪶, same performance📈, no hyperparam tuning needed 🔢. 🧵/n

Paper: https://t.co/V5tjOmaWvD
Library: https://t.co/JAvUk9hrmM
Video: https://t.co/TWCNpCtCap pic.twitter.com/qyItEHeB04

— Tim Dettmers (@Tim_Dettmers) October 8, 2021
research video
