Attention-based network for low-light image enhancement
— roadrunner01 (@ak92501) May 21, 2020
pdf: https://t.co/JKYNguJfYg
abs: https://t.co/iSajmMQlq7 pic.twitter.com/LLdftcrVST
Contextual Residual Aggregation for Ultra High-Resolution Image Inpainting
— roadrunner01 (@ak92501) May 21, 2020
pdf: https://t.co/8zVmxoYnmY
abs: https://t.co/65wqyUQoq7 pic.twitter.com/KQAlirDlbN
Sketch-RNN, Sketch-Transformer, and now Sketch-BERT
— hardmaru (@hardmaru) May 20, 2020
“According to Gestalt principles of perceptual grouping, humans can easily perceive a sketch as a sequence of data points.”
Nice way to apply masked language models from NLP to model freehand sketches. https://t.co/EACbtnQOas https://t.co/tC8QKzEdZO
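The mask-and-predict recipe carries over quite directly: treat each sketch as a sequence of stroke points, hide a fraction of them, and train a transformer to reconstruct the hidden points. Below is a minimal PyTorch sketch of that objective; the (dx, dy, pen_state) point encoding, the model sizes, and the 15% masking rate are illustrative assumptions rather than the paper's exact setup.

```python
import torch
import torch.nn as nn

class MaskedSketchModel(nn.Module):
    """BERT-style masked modeling over (dx, dy, pen_state) stroke sequences."""
    def __init__(self, d_model=128, n_heads=4, n_layers=4, max_len=250):
        super().__init__()
        self.input_proj = nn.Linear(3, d_model)      # point -> embedding
        self.pos_emb = nn.Embedding(max_len, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, 3)            # reconstruct masked points
        self.mask_token = nn.Parameter(torch.zeros(d_model))

    def forward(self, points, mask):
        # points: (B, T, 3) stroke offsets; mask: (B, T) bool, True = hidden
        x = self.input_proj(points)
        x[mask] = self.mask_token                    # blank out masked positions
        pos = torch.arange(points.size(1), device=points.device)
        return self.head(self.encoder(x + self.pos_emb(pos)))

model = MaskedSketchModel()
pts = torch.randn(2, 250, 3)                         # two toy sketches
msk = torch.rand(2, 250) < 0.15                      # BERT-style 15% masking
loss = nn.functional.mse_loss(model(pts, msk)[msk], pts[msk])
```

As in BERT, the loss is computed only on the masked positions, so the model must use the surrounding strokes to infer the hidden ones.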
Adversarial Colorization Of Icons Based On Structure And Color Conditions
— roadrunner01 (@ak92501) May 18, 2020
pdf: https://t.co/6tIoJZiXye
abs: https://t.co/2LakM2d1bk
github: https://t.co/hV7v3wlzvU pic.twitter.com/OQ550tvGgp
Excited to release rank-1 Bayesian neural nets, achieving new SOTA on uncertainty & robustness across ImageNet, CIFAR-10/100, and MIMIC. We do extensive ablations to disentangle BNN choices.@dusenberrymw @Ghassen_ML @JasperSnoek @kat_heller @balajiln et al https://t.co/aMfBvVkl0v pic.twitter.com/T5sQDkO0xR
— Dustin Tran (@dustinvtran) May 18, 2020
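The core trick in rank-1 BNNs is to keep a single deterministic weight matrix and place the variational posterior only on a rank-1 multiplicative perturbation r sᵀ, costing O(d_in + d_out) extra parameters per layer instead of O(d_in · d_out). Below is a minimal PyTorch sketch of that parameterization; the Gaussian posteriors, the initialization, and the omission of the ELBO's KL term and the paper's mixture posterior are simplifying assumptions.

```python
import torch
import torch.nn as nn

class Rank1Linear(nn.Module):
    """Shared deterministic weights; Gaussian posterior on rank-1 factors r, s."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(d_out, d_in) * d_in ** -0.5)
        self.r_mu = nn.Parameter(torch.ones(d_out))
        self.r_logsig = nn.Parameter(torch.full((d_out,), -3.0))
        self.s_mu = nn.Parameter(torch.ones(d_in))
        self.s_logsig = nn.Parameter(torch.full((d_in,), -3.0))

    def forward(self, x):
        # Reparameterization trick: sample the rank-1 factors, not the weights.
        r = self.r_mu + self.r_logsig.exp() * torch.randn_like(self.r_mu)
        s = self.s_mu + self.s_logsig.exp() * torch.randn_like(self.s_mu)
        # y = (W ∘ r sᵀ) x computed as r ∘ (W (s ∘ x)),
        # so no d_out x d_in weight sample is ever materialized.
        return (x * s) @ self.weight.t() * r

layer = Rank1Linear(16, 8)
y = layer(torch.randn(4, 16))   # each call draws a fresh rank-1 perturbation
```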
[3/4] Unsupervised MLM scores from BERT narrow the human gap on the BLiMP minimal pairs set (@a_stadt, @sleepinyourhat), suggesting left-to-right bias in GPT-2 has an outsized effect.
— Julian Salazar (@JulianSlzr) May 15, 2020
(Yes, we ran these experiments when the dataset came out, 1 week before the ACL deadline 😌) pic.twitter.com/I5ng2NobSd
[1/4] “Masked Language Model Scoring” is in #acl2020nlp! Score sentences with any BERT variant via mask+predict (works w/ @huggingface). Improves ASR, NMT, acceptability.
— Julian Salazar (@JulianSlzr) May 15, 2020
Paper: https://t.co/j2iJlIJbVQ
Code: https://t.co/zVbCkAhi9P
(w/ @LiangDavis, @toannguyen177, Katrin K.) pic.twitter.com/LhLYdXC3FS
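Mask+predict scoring computes a pseudo-log-likelihood (PLL): mask each token in turn and sum the masked tokens' log-probabilities under the model. A minimal sketch with Hugging Face transformers, assuming bert-base-uncased and batching one masked copy of the sentence per position:

```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased").eval()

def pll(sentence):
    """Pseudo-log-likelihood: sum of log P(token | rest) with each token masked."""
    ids = tok(sentence, return_tensors="pt").input_ids[0]
    positions = torch.arange(1, ids.size(0) - 1)        # skip [CLS] and [SEP]
    batch = ids.repeat(positions.size(0), 1)            # one copy per position
    batch[torch.arange(positions.size(0)), positions] = tok.mask_token_id
    with torch.no_grad():
        logp = model(batch).logits.log_softmax(-1)
    rows = torch.arange(positions.size(0))
    return logp[rows, positions, ids[positions]].sum().item()

# BLiMP-style minimal pair: the acceptable sentence should score higher.
print(pll("the cats sleep"), pll("the cats sleeps"))
```

Because every prediction conditions on both left and right context, PLL sidesteps the left-to-right bias the thread points to in GPT-2 scoring.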
Flowtron: Improved Text-to-Speech Engine from NVIDIA
— PyTorch (@PyTorch) May 14, 2020
Try it out now!
Paper: https://t.co/1vMnpQSBbi
Code: https://t.co/O0sYbd2LTX https://t.co/B0I74FmxIY
Two papers I really appreciated from ICLR2020 were https://t.co/JHwl6VkA9m and https://t.co/0QnPZk8HJL. Both are great studies in deeply analyzing representations in bio-inspired NNs, and are a breath of fresh air compared to the more typical "state of the art black box" works. pic.twitter.com/jdGhzggTyQ
— Arthur Juliani (@awjuliani) May 13, 2020
A Review of Using Text to Remove Confounding from Causal Estimates: https://t.co/0mNNL8dvL7
— Judea Pearl (@yudapearl) May 10, 2020
In other words, using text as a noisy proxy for unmeasured confounders, as in https://t.co/fc47co4HOu. A survey of ongoing work and open problems.
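As a toy illustration of the proxy idea: featurize the text, then include those features alongside treatment in an outcome regression so they absorb (part of) the confounding variation. The data, the TF-IDF features, and the plain regression adjustment below are illustrative assumptions, not a method from the review.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LinearRegression

docs = ["severe symptoms reported at intake", "routine follow-up, stable",
        "severe pain noted, urgent case", "mild discomfort only"]
treatment = np.array([1, 0, 1, 0])         # binary treatment assignment
outcome = np.array([3.1, 1.0, 2.8, 1.2])   # observed outcome

Z = TfidfVectorizer().fit_transform(docs).toarray()  # text proxy for confounder
X = np.column_stack([treatment, Z])

# Regression adjustment: the treatment coefficient is the adjusted effect
# estimate once the text features stand in for the unmeasured confounder.
effect = LinearRegression().fit(X, outcome).coef_[0]
print(f"adjusted treatment effect estimate: {effect:.2f}")
```

The caveat the survey turns on: the text is only a *noisy* proxy, so residual confounding remains unless the measurement error is modeled.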
Zero-padding may not be as benign as we thought.
— Brandon Rohrer (@_brohrer_) May 8, 2020
ICLR 2020 paper shows it indirectly encodes positional information in CNNs and boosts performance quite a bit. https://t.co/bs1310BLj9
Md Amirul Islam, Sen Jia, Neil D. B. Bruce
h/t @MSalvaris pic.twitter.com/sWLsY0NSA5
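The mechanism is easy to see directly: on a constant input, a zero-padded convolution responds differently at the borders than in the interior, so feature maps carry absolute position; circular padding removes that signal. A minimal PyTorch sketch:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
x = torch.ones(1, 1, 8, 8)                 # constant image: no content signal

zero_pad = nn.Conv2d(1, 1, 3, padding=1, bias=False, padding_mode="zeros")
circ_pad = nn.Conv2d(1, 1, 3, padding=1, bias=False, padding_mode="circular")
circ_pad.weight.data = zero_pad.weight.data.clone()  # identical kernels

with torch.no_grad():
    print(zero_pad(x)[0, 0, 4])  # border columns differ -> position leaks in
    print(circ_pad(x)[0, 0, 4])  # uniform row -> no positional signal
```

Downstream layers can read those border responses off as absolute coordinates, which is how the paper argues position information enters otherwise translation-equivariant CNNs.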
Use machine learning to track and automatically extract results from machine learning papers. https://t.co/wXBBSheMRd pic.twitter.com/kX0STizR5f
— hardmaru (@hardmaru) May 8, 2020