What if you could predict with your own BERT models in 1ms on GPU and 3ms on CPU, in a Docker container that runs anywhere? Please meet Infinity by @huggingface. Please watch https://t.co/uCrCcWPIy0 and sign up for the trial at https://t.co/cdh5b2RowR #MachineLearning #NLP #MLOps
— Julien Simon (@julsimon) October 4, 2021