Use @PrometheusIO and @Grafana to instrument your @rapidsai @dask_dev deployments and monitor how your workflows are performing on your systems. https://t.co/uCLRqyi2WJ
— RAPIDS AI (@RAPIDSai) April 9, 2021
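Dask's distributed scheduler exposes a Prometheus-compatible `/metrics` endpoint on its dashboard when `prometheus_client` is installed, which is the hook Prometheus and Grafana scrape. A minimal sketch, assuming a local cluster and the default dashboard port (both illustrative):

```python
# Minimal sketch: expose Dask scheduler metrics for Prometheus to scrape.
# Assumes `distributed` and `prometheus_client` are installed; the /metrics
# endpoint is served by the scheduler dashboard when prometheus_client is
# available. The port and cluster size here are illustrative.
from dask.distributed import Client, LocalCluster
import requests

cluster = LocalCluster(n_workers=2, dashboard_address=":8787")
client = Client(cluster)

# Prometheus would scrape this endpoint on a schedule; we fetch it once
# to inspect the exported metrics.
metrics = requests.get("http://127.0.0.1:8787/metrics").text
print(metrics[:500])
```

In a real deployment you would point a Prometheus scrape job at the scheduler's address and build Grafana dashboards on top of the collected series.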
Last week, EleutherAI released two checkpoints for GPT Neo, an *Open Source* replication of OpenAI's GPT-3.
These checkpoints, of sizes 1.3B and 2.7B, are now available in 🤗 Transformers!
The generation capabilities are truly 🤯, try it now on the Hub: https://t.co/LbLXuCNYip pic.twitter.com/eHq45J5tdu
— Hugging Face (@huggingface) March 31, 2021
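Trying the checkpoints locally is a one-liner with the `pipeline` API. A minimal sketch, assuming the `EleutherAI/gpt-neo-1.3B` Hub checkpoint and a recent `transformers` release that includes the GPT Neo architecture:

```python
# Minimal sketch: text generation with the GPT Neo 1.3B checkpoint.
# Requires a transformers version that includes GPT Neo support.
from transformers import pipeline

generator = pipeline("text-generation", model="EleutherAI/gpt-neo-1.3B")
output = generator(
    "The future of open source AI is",
    max_length=50,
    do_sample=True,  # sample instead of greedy decoding
)
print(output[0]["generated_text"])
```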
Introducing torch.profiler! The new PyTorch Profiler collects both GPU and framework-related info, correlates them, automatically detects bottlenecks in the model, generates recommendations on how to resolve them, and visualizes the results.
Read 👉 https://t.co/Ottly5CtF4 pic.twitter.com/zkFCts1lzn
— PyTorch (@PyTorch) March 25, 2021
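The new profiler lives in `torch.profiler` (PyTorch 1.8.1+). A minimal sketch; the model and input shape are placeholders, and only CPU activity is profiled so the snippet runs without a GPU:

```python
# Minimal sketch of the torch.profiler API. Profiling CUDA activity as well
# only needs ProfilerActivity.CUDA added to the activities list.
import torch
import torchvision.models as models
from torch.profiler import profile, ProfilerActivity

model = models.resnet18()
inputs = torch.randn(8, 3, 224, 224)

with profile(activities=[ProfilerActivity.CPU], record_shapes=True) as prof:
    model(inputs)

# Print the operators that dominated CPU time.
print(prof.key_averages().table(sort_by="cpu_time_total", row_limit=10))
```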
Fast, differentiable sorting and ranking in PyTorch https://t.co/xaLlhOaejy #deeplearning #machinelearning #ml #ai #neuralnetworks #datascience #pytorch
— PyTorch Best Practices (@PyTorchPractice) March 22, 2021
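The t.co link is shortened, so the exact library isn't visible; one well-known PyTorch implementation of fast differentiable sorting and ranking is the `torchsort` package. A minimal sketch, assuming its `soft_sort`/`soft_rank` API:

```python
# Minimal sketch, assuming the `torchsort` package (pip install torchsort).
# soft_sort / soft_rank return differentiable approximations of sorting
# and ranking; inputs are 2-D (batch, n).
import torch
import torchsort

x = torch.randn(1, 8, requires_grad=True)

ranks = torchsort.soft_rank(x)     # differentiable ranking
sorted_x = torchsort.soft_sort(x)  # differentiable sorting

# Gradients flow through the soft sorting operation.
sorted_x.sum().backward()
print(x.grad)
```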
Who needs floats? I-BERT doesn't!
I-BERT: a quantized Transformer with INT8 *only*.
Get the best parameters with Transformers and use them in TensorRT for a 4x (!!) speedup!
Contributed by @sehoonkim418, @amir__gholami, and @ZheweiYao.
Try it on the hub: https://t.co/00w4evcRUe pic.twitter.com/eM1LJKgGAX
— Hugging Face (@huggingface) March 22, 2021
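I-BERT loads through the standard auto classes. A minimal sketch, assuming the `kssteven/ibert-roberta-base` Hub checkpoint and a `transformers` release that includes the IBert model:

```python
# Minimal sketch: load an I-BERT checkpoint like any other Transformer.
# The checkpoint name is an assumption based on the Hub link in the tweet.
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("kssteven/ibert-roberta-base")
model = AutoModel.from_pretrained("kssteven/ibert-roberta-base")

inputs = tokenizer("Who needs floats?", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)
```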
Avalanche: End-to-End Library for Continual Learning based on PyTorch
By @ContinualAI
Covers:
– Benchmarks
– Training
– Evaluation
– Models
Project: https://t.co/rZr2x2a7OE
GitHub: https://t.co/WzHDV632Rv pic.twitter.com/FBRmEsupDC
— ML Review (@ml_review) March 19, 2021
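Avalanche's core loop pairs a benchmark's stream of "experiences" with a training strategy. A minimal sketch; class paths follow the early (v0.0.x) releases and may differ in later versions:

```python
# Minimal sketch of Avalanche's benchmark/strategy pattern (early releases).
import torch
from avalanche.benchmarks.classic import SplitMNIST
from avalanche.models import SimpleMLP
from avalanche.training.strategies import Naive

benchmark = SplitMNIST(n_experiences=5)   # MNIST split into 5 tasks
model = SimpleMLP(num_classes=10)
strategy = Naive(
    model,
    torch.optim.SGD(model.parameters(), lr=0.001),
    torch.nn.CrossEntropyLoss(),
    train_mb_size=32,
    train_epochs=1,
)

# Train on each experience in sequence, evaluating on the test stream.
for experience in benchmark.train_stream:
    strategy.train(experience)
    strategy.eval(benchmark.test_stream)
```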
⚠️ New in the v0.0.7 release of `huggingface_hub` ⚠️
Community member @7vasudevgupta added a ModelHubMixin class that can be mixed into *any PyTorch model* so it can be saved, uploaded, and loaded from the https://t.co/ZygXVJ8qCM hub:
🔥🔥 https://t.co/1jZ4Y6GPwo pic.twitter.com/mfCwH0f5sF
— Julien Chaumond (@julien_c) March 18, 2021
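A minimal sketch of the mixin pattern; the class is exposed as `PyTorchModelHubMixin` in current `huggingface_hub` releases (the tweet's `ModelHubMixin` name dates from the early versions):

```python
# Minimal sketch: add Hub save/load support to any nn.Module via the mixin.
import torch.nn as nn
from huggingface_hub import PyTorchModelHubMixin

class MyModel(nn.Module, PyTorchModelHubMixin):
    def __init__(self):
        super().__init__()
        self.layer = nn.Linear(4, 2)

    def forward(self, x):
        return self.layer(x)

model = MyModel()
model.save_pretrained("my-model")   # writes weights + config locally
# model.push_to_hub("my-model")     # uploads to the Hub (requires auth)
reloaded = MyModel.from_pretrained("my-model")
```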
Freshly merged in the @scikit_learn main branch: periodic spline features: https://t.co/g6MTtmMKSY
This avoids introducing a jump between Dec 31st and Jan 1st when doing non-linear, smooth feature engineering on a "day of year" input feature, for instance.
Thanks Malte Londschien! pic.twitter.com/g638Sl6sZa
— Olivier Grisel (@ogrisel) March 17, 2021
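The feature landed in `SplineTransformer` via the `extrapolation="periodic"` option (released in scikit-learn 1.0). A minimal sketch on the "day of year" example from the tweet:

```python
# Minimal sketch: periodic spline features for a day-of-year input.
import numpy as np
from sklearn.preprocessing import SplineTransformer

day_of_year = np.arange(365).reshape(-1, 1)
spline = SplineTransformer(
    degree=3,
    n_knots=6,
    extrapolation="periodic",  # features wrap smoothly from Dec 31 to Jan 1
)
features = spline.fit_transform(day_of_year)
print(features.shape)
```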
ConSelfSTransDRLIB: Contrastive Self-supervised Transformers for Disentangled Representation Learning with Inductive Biases is All you need, and where to find them.
The current state of deep learning research summarized in one sentence.
(Credit: https://t.co/RTuht7Lkj0)
— Sebastian Raschka (@rasbt) March 13, 2021
🔥 Fine-Tuning @facebookai's Wav2Vec2 for Speech Recognition is now possible in Transformers 🔥
Not only for English, but for 53 languages 🤯
Check out the tutorials:
👉 Train Wav2Vec2 on TIMIT: https://t.co/33Bx8Nj4mN
👉 Train XLSR-Wav2Vec2 on Common Voice: https://t.co/xOoEQV3Krn pic.twitter.com/rxp2hAbaLS
— Hugging Face (@huggingface) March 12, 2021
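Fine-tuning starts from a pretrained checkpoint wrapped in `Wav2Vec2ForCTC`. A minimal inference-side sketch; the `facebook/wav2vec2-base-960h` checkpoint and the random audio tensor are placeholders for a real dataset sample:

```python
# Minimal sketch: load Wav2Vec2 with a CTC head and decode a transcription.
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")

# One second of fake 16 kHz audio standing in for a real recording.
audio = torch.randn(16000)
inputs = processor(audio, sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    logits = model(inputs.input_values).logits
transcription = processor.batch_decode(torch.argmax(logits, dim=-1))
print(transcription)
```

For actual fine-tuning, the linked tutorials swap the random tensor for TIMIT or Common Voice samples and train the CTC head on labeled transcriptions.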
Yay! differentiable FFT! https://t.co/BXGOFhFih7
— Yann LeCun (@ylecun) March 10, 2021
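The `torch.fft` module (PyTorch 1.8+) supports autograd, so gradients flow through the transform. A minimal sketch:

```python
# Minimal sketch: differentiate through an FFT with torch.fft.
import torch

x = torch.randn(64, requires_grad=True)
spectrum = torch.fft.fft(x)  # complex-valued output

# Differentiate a simple function of the spectrum w.r.t. the input signal.
loss = spectrum.abs().sum()
loss.backward()
print(x.grad.shape)
```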
Today we announce the release of new sparsity features in the #XNNPACK acceleration library that is powering #TensorFlowLite! Sparse inference improves efficiency without degrading quality in applications like Google Meet's background effects. https://t.co/QAcVvSNk5L pic.twitter.com/aYsByJ0Eiz
— Google AI (@GoogleAI) March 9, 2021