Great work, thanks for sharing! I'm wondering how fast this is compared to our Keras implementation https://t.co/gx7q4guN7B
— Hassan Ismail Fawaz (@hassanfawaz93) September 27, 2019
To learn more about this, and get involved with time series analysis in fastai more generally, be sure to check out the time series discussion here: https://t.co/WLXtrGuuks
— Jeremy Howard (@jeremyphoward) September 27, 2019
Shape and Time Distortion Loss for Training Deep Time Series Forecasting Models
— Thomas Lahore (@evolvingstuff) September 20, 2019
paper: https://t.co/jgRT3tC5Df
code: https://t.co/NIC6XjXlyn pic.twitter.com/dopIL5qYYu
Forecaster: A Graph Transformer for Forecasting Spatial and Time-Dependent Data
— ML Review (@ml_review) September 16, 2019
Uses Gaussian Markov Random Field to find dependency graph among locations
Sparsifies Transformer architecture based on the dependency graph
Taxi Demand Forecasting SoTA https://t.co/gcK2c7eets pic.twitter.com/tirBdkVK7O
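The sparsification idea in the tweet can be sketched as masking attention so each location only attends to its neighbours in the dependency graph. A minimal single-head sketch in plain Python (the adjacency matrix here is a hypothetical stand-in for the graph the GMRF step would produce; each location is assumed to have at least one permitted neighbour, typically itself):

```python
import math

def sparse_attention(queries, keys, values, adj):
    """Dot-product attention where location i may only attend to
    locations j with adj[i][j] truthy (an edge in the dependency graph)."""
    out = []
    for i, q in enumerate(queries):
        # score only the neighbours permitted by the graph
        scores = {j: sum(a * b for a, b in zip(q, keys[j]))
                  for j in range(len(keys)) if adj[i][j]}
        # softmax over the surviving scores (max-shifted for stability)
        m = max(scores.values())
        weights = {j: math.exp(s - m) for j, s in scores.items()}
        total = sum(weights.values())
        # weighted sum of neighbour values
        out.append([sum(weights[j] * values[j][d] for j in weights) / total
                    for d in range(len(values[0]))])
    return out
```

With an identity adjacency (self-edges only), each location simply returns its own value vector; a denser graph mixes in neighbours' values, which is the point of learning the graph first.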
Simple statistical methods are shown to do much better than fancy machine learning on a whole bunch of real-world sequence-prediction datasets. The reason: the time series used are tiny by ML standards, and all the ML methods overfit. https://t.co/QVpxEQFwyS
— Julian Togelius (@togelius) September 15, 2019
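The "simple statistical methods" in question can be as basic as simple exponential smoothing, which has only one parameter and so has little room to overfit a short series. A minimal sketch (not from the linked study, just an illustration of the kind of baseline it refers to):

```python
def ses_forecast(series, alpha=0.3):
    """Simple exponential smoothing: recursively update a single level
    estimate and use it as a flat forecast for all future steps."""
    level = series[0]
    for y in series[1:]:
        # new level is a blend of the latest observation and the old level
        level = alpha * y + (1 - alpha) * level
    return level

print(ses_forecast([12.0, 13.0, 12.5, 14.0, 13.5], alpha=0.3))
```

On a series of a few dozen points, fitting one smoothing constant is a far better-posed problem than fitting thousands of neural-network weights.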
Huh, just hearing about it for the first time, looks handy & convenient: "sktime -- A scikit-learn compatible Python toolbox for learning with time series data" https://t.co/XdV1RdQanq
— Sebastian Raschka (@rasbt) September 12, 2019
FFORMPP: Feature-based forecast model performance prediction. https://t.co/ymyKcvK7U2 pic.twitter.com/m3tu35nLKO
— arxiv (@arxiv_org) September 3, 2019
⏰ Great post on new ts pkg by @nj_tierney & @visnut!
— Mara Averick (@dataandme) August 17, 2019
📝 "Explore longitudinal data w/ {brolgar}" https://t.co/XRwaKzgYKv #rstats
/* 🖍 mine */ pic.twitter.com/KinkyvgNbj
M5 will start Jan. 2020. Unless things change, the plan is to run it in Kaggle with real, hierarchical data from big companies. There will be plenty of announcements in the fall. https://t.co/iO5m5vmHxM
— Spyros Makridakis (@spyrosmakrid) July 30, 2019
Recurrent Neural Processes is a generalization of Neural Processes to sequences by introducing a notion of latent time, proposed by a team at @nnaisense. They show some nice results on predicting real-world time-series data, such as electricity consumption. https://t.co/HshrSlXJBB pic.twitter.com/pgpROILdAL
— hardmaru (@hardmaru) July 14, 2019
ICYMI, 🔥 From the basics to predictive analytics, this is fire!
— Mara Averick (@dataandme) June 14, 2019
📕 "UC Business Analytics R Programming Guide" by @bradleyboehmke https://t.co/3n3GcJl4q6 via @UC_Rstats #rstats pic.twitter.com/KgM0bdphqw
📈 Forecasting noisy time series w/ {nnfor} & {tsintermittent}
— Mara Averick (@dataandme) June 13, 2019
"Intermittent demand, Croston, and Die Hard" ✏ @brodriguesco https://t.co/RmhycfIQ4N #rstats
[gif: Die Hard - welcome to the party, pal] pic.twitter.com/IpFvvyt6TQ
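Croston's method, the one named in the post above, handles intermittent demand by smoothing nonzero demand sizes and inter-demand intervals separately, then forecasting their ratio. A minimal Python sketch (the R packages in the tweet provide tuned implementations; this just shows the mechanics):

```python
def croston(demand, alpha=0.1):
    """Flat Croston forecast: exponentially smooth nonzero demand sizes
    and the intervals between them; the forecast is size / interval."""
    z = None  # smoothed demand size
    p = None  # smoothed inter-demand interval
    q = 1     # periods since the last nonzero demand
    for y in demand:
        if y > 0:
            if z is None:  # initialise on the first nonzero demand
                z, p = y, q
            else:
                z = z + alpha * (y - z)
                p = p + alpha * (q - p)
            q = 1
        else:
            q += 1
    if z is None:  # all-zero series: nothing to forecast
        return 0.0
    return z / p

print(croston([0, 3, 0, 0, 4, 0, 2, 0, 0, 0, 5], alpha=0.2))
```

Unlike plain exponential smoothing, the forecast does not decay toward zero during long runs of zero demand, which is why Croston's method is the standard baseline for sparse series.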