by kara_woo on 2021-06-05 (UTC).

There are tons of things you can make with R Markdown at different points on the spectra of relevance and complexity. @seankross' postcards package aims to be high relevance and low complexity for students and folks new to R Markdown. https://t.co/ZHQLtYFjgj #cascadiaRconf

— Kara Woo (@kara_woo) June 5, 2021
rstats, tool
by ak92501 on 2021-06-04 (UTC).

When Vision Transformers Outperform ResNets without Pretraining or Strong Data Augmentations
pdf: https://t.co/GYknaVoNAM
abs: https://t.co/kaUxIdMVNQ

+5.3% and +11.0% top-1 accuracy on ImageNet for ViT-B/16 and Mixer-B/16, with the simple Inception-style preprocessing pic.twitter.com/EI1ZSUccUn

— AK (@ak92501) June 4, 2021
research, cv
by hardmaru on 2021-06-02 (UTC).

Cloud TPU Virtual Machines finally released

These VMs run on TPU host machines that are directly attached to TPU accelerators, so everything feels like it's run locally.

The new libtpu library supports TensorFlow, PyTorch, JAX, and soon, Julia. 🔥 https://t.co/Yhatj1xaKz pic.twitter.com/N6u3ix0MI0

— hardmaru (@hardmaru) June 2, 2021
tool, misc
by ak92501 on 2021-06-02 (UTC).

You Only Look at One Sequence: Rethinking Transformer in Vision through Object Detection
pdf: https://t.co/LThmFQ2a6g
abs: https://t.co/XhhOip5bOw
github: https://t.co/crirLeiGGI

a series of object detection models based on the naïve Vision Transformer pic.twitter.com/Ml38kzqdtt

— AK (@ak92501) June 2, 2021
research, cv
by huggingface on 2021-06-01 (UTC).

🚀 And merged to Transformers!

We are excited to welcome ByT5 as the first tokenizer-free model!

👉 All available checkpoints can be accessed on the 🤗 hub here: https://t.co/WEpfzu6uMN

👇 Demo (on master): https://t.co/2qgmt7YQO8 pic.twitter.com/m5pXe9qETU

— Hugging Face (@huggingface) June 1, 2021
research, nlp, w_code
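"Tokenizer-free" here means ByT5 operates directly on raw UTF-8 bytes instead of a learned subword vocabulary. A minimal Python illustration of the byte-level idea (this is the general concept, not ByT5's exact vocabulary, which reserves additional ids for special tokens):

```python
# Byte-level "tokenization": every string maps to integers in [0, 256)
# with no learned vocabulary, merge rules, or out-of-vocabulary tokens.
def to_byte_ids(text: str) -> list[int]:
    return list(text.encode("utf-8"))

def from_byte_ids(ids: list[int]) -> str:
    return bytes(ids).decode("utf-8")

ids = to_byte_ids("héllo")
# Round-trips losslessly for any Unicode text.
assert from_byte_ids(ids) == "héllo"
```

The trade-off is longer sequences (one id per byte, so multi-byte characters cost several ids), in exchange for robustness to typos, rare words, and any language or script.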
by ak92501 on 2021-06-01 (UTC).

StyTr^2: Unbiased Image Style Transfer with Transformers
pdf: https://t.co/H3OraPsolh
abs: https://t.co/PM8dZiCuct pic.twitter.com/8ld0m4SDyN

— AK (@ak92501) June 1, 2021
research, gan, cv
by ak92501 on 2021-06-01 (UTC).

An Attention Free Transformer
pdf: https://t.co/iOURQMubTR
abs: https://t.co/6TSsVXmjww

an efficient variant of the Transformer that eliminates the need for dot-product self-attention pic.twitter.com/ZfaIbdmvnL

— AK (@ak92501) June 1, 2021
research
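In AFT-full, each output is a sigmoid-gated, position-biased weighted average of the values, so keys and values interact element-wise rather than through query–key dot products. A rough NumPy sketch of that formula (variable names and shapes are my own; the actual model adds learned projections and more memory-efficient variants):

```python
import numpy as np

def aft_full(Q, K, V, w):
    """Sketch of AFT-full for a single sequence.

    Q, K, V: (T, d) query/key/value projections.
    w: (T, T) learned pairwise position biases.
    Y_t = sigmoid(Q_t) * sum_t' exp(K_t' + w[t, t']) * V_t'
                       / sum_t' exp(K_t' + w[t, t'])
    Note: no T x T query-key dot products are formed.
    """
    # exp(K_{t'} + w_{t,t'}) for every target position t: (T, T, d)
    weights = np.exp(K[None, :, :] + w[:, :, None])
    num = (weights * V[None, :, :]).sum(axis=1)   # (T, d)
    den = weights.sum(axis=1)                     # (T, d)
    return 1.0 / (1.0 + np.exp(-Q)) * (num / den)  # sigmoid gate

rng = np.random.default_rng(0)
T, d = 5, 8
Y = aft_full(rng.normal(size=(T, d)), rng.normal(size=(T, d)),
             rng.normal(size=(T, d)), rng.normal(size=(T, T)))
```

Each output row is a convex combination of value rows scaled by a gate in (0, 1), which is why the elementwise formulation can stand in for softmax attention.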
by ak92501 on 2021-06-01 (UTC).

Less is More: Pay Less Attention in Vision Transformers
pdf: https://t.co/ydo2bFvxsH
abs: https://t.co/baTSDrBpEd

a hierarchical vision transformer that pays less attention in early stages to ease the huge computational cost of self-attention modules over high-resolution representations pic.twitter.com/6J5xdAO0mc

— AK (@ak92501) June 1, 2021
research, cv
by randal_olson on 2021-05-31 (UTC).

The quickest route along primary roadways to Washington D.C. from any point in the contiguous United States. #travel #dataviz

Source: https://t.co/135uBbkwFu pic.twitter.com/T5yPHImNQl

— Randy Olson (@randal_olson) May 31, 2021
dataviz
by chrmanning on 2021-05-29 (UTC).

The way I would improve spreadsheets is by allowing row numbering to start from any integer (including negative). Then row numbers could count what it makes sense to count. Books have done this forever via Roman numerals for front matter. Surely this isn't so hard to do in 2021? pic.twitter.com/WMopl7uihp

— Christopher Manning (@chrmanning) May 29, 2021
misc
by randal_olson on 2021-05-28 (UTC).

Animated demographic pyramid of #Sweden from 1860-2020. #dataviz

Source: https://t.co/ndxA55dzNI pic.twitter.com/I8an9Q05sg

— Randy Olson (@randal_olson) May 28, 2021
dataviz
by dustinvtran on 2021-05-28 (UTC).

What gripes do you have with LaTeX's defaults, and what do you always add to papers? Here are mine: 1. Cleveref. Don't use "Section \ref{sec:intro}". Use \Cref{sec:intro}. This makes writing less error-prone and it makes "Section" part of the hyperlink! https://t.co/SbJkDXJwjK

— Dustin Tran (@dustinvtran) May 28, 2021
tip, misc, tool
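The cleveref tip above in a minimal sketch (the section label and option choice are illustrative; cleveref must be loaded after hyperref):

```latex
\documentclass{article}
\usepackage{hyperref}
\usepackage[capitalize]{cleveref} % load after hyperref

\begin{document}
\section{Introduction}\label{sec:intro}

% Instead of: Section \ref{sec:intro}
As discussed in \Cref{sec:intro}, ... % typesets "Section 1",
                                      % with "Section" inside the hyperlink
\end{document}
```

Because \Cref picks the word ("Section", "Figure", "Theorem", ...) from the label's context, changing a section into an appendix, or a figure into a table, updates every cross-reference automatically.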
