New topic on datamethods: Bayesian power and avoidance of unobservables. Discussions longer than a few words should be added there, not here, please: https://t.co/3RuV9oGs3R
— Frank Harrell (@f2harrell) July 7, 2019
Collinearity in Bayesian models https://t.co/ULytnzwEgJ
— Andrew Gelman (@StatModeling) July 7, 2019
Conditional Density Estimation with Neural Networks: Best Practices and Benchmarks
— hardmaru (@hardmaru) July 6, 2019
This looks like a pretty useful toolkit for using neural networks for density estimation approaches commonly used in finance and econometrics. https://t.co/l8hGXY6yuB https://t.co/9dkilZza3Z https://t.co/2oKCxKkMd2
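As a rough illustration of the kind of model such a toolkit wraps, here is a minimal mixture density network sketch in PyTorch; this is hypothetical code, not taken from the linked paper or toolkit:

```python
# Hypothetical minimal mixture density network (MDN), one common neural
# conditional density estimator; not the linked toolkit's implementation.
import torch
import torch.nn as nn

class MDN(nn.Module):
    def __init__(self, in_dim=1, hidden=32, n_components=5):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(in_dim, hidden), nn.Tanh())
        self.pi = nn.Linear(hidden, n_components)         # mixture weight logits
        self.mu = nn.Linear(hidden, n_components)         # component means
        self.log_sigma = nn.Linear(hidden, n_components)  # component log-scales

    def log_prob(self, x, y):
        h = self.body(x)
        log_pi = torch.log_softmax(self.pi(h), dim=-1)
        dist = torch.distributions.Normal(self.mu(h), self.log_sigma(h).exp())
        # log p(y|x) = logsumexp_k [ log pi_k + log N(y; mu_k, sigma_k) ]
        return torch.logsumexp(log_pi + dist.log_prob(y), dim=-1)

model = MDN()
x, y = torch.randn(8, 1), torch.randn(8, 1)
loss = -model.log_prob(x, y).mean()  # train by minimizing negative log-likelihood
```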
Sad that @dl_iss came to an end. But it was closing with a great talk & very educational tutorial on (biomedical) image segmentation in PyTorch by @alxndrkalinin #DLISS19 Check out his hands-on tutorial material (with a competition-winning solution) at https://t.co/hLRr2ZWoY8 pic.twitter.com/7cMMCbbURB
— Sebastian Raschka (@rasbt) July 5, 2019
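For readers who have not seen per-pixel segmentation before, a toy encoder-decoder sketch in PyTorch (an illustrative stand-in, not the tutorial's competition-winning model) looks like this:

```python
# Hypothetical toy segmentation model: downsample, process, upsample,
# then predict a class logit for every pixel.
import torch
import torch.nn as nn

n_classes = 3
model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),                 # downsample
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Upsample(scale_factor=2),     # restore resolution
    nn.Conv2d(16, n_classes, 1),     # per-pixel class logits
)

images = torch.randn(4, 1, 64, 64)                  # batch of grayscale images
masks = torch.randint(0, n_classes, (4, 64, 64))    # per-pixel labels
logits = model(images)                              # (4, n_classes, 64, 64)
loss = nn.functional.cross_entropy(logits, masks)   # per-pixel loss
```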
TanH Activation Function https://t.co/eZ2bbpDzwV pic.twitter.com/4h4cbkTp6h
— Chris Albon (@chrisalbon) July 4, 2019
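For reference, tanh(x) = (e^x - e^-x) / (e^x + e^-x), which squashes inputs into (-1, 1); a quick numpy check:

```python
# The tanh activation written out from its definition, verified against
# numpy's built-in implementation.
import numpy as np

def tanh(x):
    return (np.exp(x) - np.exp(-x)) / (np.exp(x) + np.exp(-x))

x = np.linspace(-3, 3, 7)
assert np.allclose(tanh(x), np.tanh(x))
```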
Amazing talk + tutorial on Deep Learning for NLP by @DanielPressel today at @dl_iss #DLISS19. From basic LSTMs to implementing BERT and achieving SOTA results; everything augmented by a rich set of code examples. Check out his slides & tutorial material at https://t.co/wnPipEqzQC pic.twitter.com/QlFQ7kgqps
— Sebastian Raschka (@rasbt) July 4, 2019
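The "basic LSTM" starting point that such tutorials build from looks roughly like this minimal PyTorch text classifier (an illustrative sketch, not the tutorial's code):

```python
# Hypothetical minimal LSTM text classifier: embed tokens, run an LSTM,
# classify from the final hidden state.
import torch
import torch.nn as nn

class LSTMClassifier(nn.Module):
    def __init__(self, vocab_size=1000, embed_dim=64, hidden=128, n_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, token_ids):
        _, (h_n, _) = self.lstm(self.embed(token_ids))
        return self.head(h_n[-1])  # logits from the last layer's final state

tokens = torch.randint(0, 1000, (4, 20))  # batch of 4 sequences, length 20
logits = LSTMClassifier()(tokens)         # shape (4, 2)
```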
New to contributing to open source, and confused about git and GitHub? Read @WillingCarol's post and it will all fall into place! https://t.co/HwezfpNv0L
— Guido van Rossum (@gvanrossum) July 3, 2019
Learn how to accelerate data modeling in #finance with @rapidsai by feeding data to #XGboost models for @nvidia #GPU training: https://t.co/lQdRpqNLC0 pic.twitter.com/WGCOdfoxdz
— NVIDIA AI (@NvidiaAI) July 3, 2019
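The core of GPU training with XGBoost is a one-parameter change; a minimal sketch using the standard xgboost Python API of that era (RAPIDS/cuDF data loading omitted, with numpy arrays as a stand-in):

```python
# Hypothetical minimal GPU-accelerated XGBoost training run; in a RAPIDS
# pipeline the numpy arrays would instead come from cuDF dataframes.
import numpy as np
import xgboost as xgb

X = np.random.rand(1000, 10).astype(np.float32)
y = (X.sum(axis=1) > 5).astype(np.float32)

dtrain = xgb.DMatrix(X, label=y)
params = {
    "objective": "binary:logistic",
    "tree_method": "gpu_hist",  # build trees on an NVIDIA GPU
}
booster = xgb.train(params, dtrain, num_boost_round=50)
```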
How to read (in quantitative social science). And by implication, how to write. https://t.co/366i0Wleyw
— Andrew Gelman (@StatModeling) July 3, 2019
Neural Machine Reading Comprehension: Methods and Trends https://t.co/C7gRor6TR0
— Thomas Lahore (@evolvingstuff) July 3, 2019
Receiver Operating Characteristic https://t.co/eZ2bbpDzwV pic.twitter.com/gkq7rI1ziW
— Chris Albon (@chrisalbon) July 2, 2019
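A short worked example of an ROC curve with scikit-learn, tracing true positive rate against false positive rate as the decision threshold varies:

```python
# ROC curve and AUC for a toy set of labels and model scores.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

y_true = np.array([0, 0, 1, 1, 1, 0])
y_score = np.array([0.1, 0.4, 0.35, 0.8, 0.7, 0.2])  # predicted probabilities

fpr, tpr, thresholds = roc_curve(y_true, y_score)
print(roc_auc_score(y_true, y_score))  # area under the ROC curve
```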
Gradient descent will take any shortcut available to map inputs to targets. Human perception works differently: it starts from a different input (embodied stream vs static images), it doesn't have a target for each input, and it isn't trained with SGD. https://t.co/coj5YmMwB3
— François Chollet (@fchollet) July 1, 2019