Since the inception of Transformers, NLP has seen tremendous progress in fine-tuning models for various downstream tasks. However, due to computational inefficiency, many of these models are impractical to serve in production. Reformer is an efficient architecture that significantly reduces compute time and memory usage, both during training and at inference. The Colab notebook linked below contains benchmark results obtained with Reformer.
Colab Notebook:
https://colab.research.google.com/drive/1N2ckPEPzbmk_VPd5UiG1dFM99wkF_8em
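For context, here is a minimal sketch of the kind of latency measurement such a benchmark involves. It assumes the Hugging Face transformers library and the publicly available google/reformer-crime-and-punishment checkpoint; the actual benchmarks in the notebook may use different models, sequence lengths, and measurement code.

```python
# Minimal Reformer inference-latency sketch (assumes the Hugging Face
# `transformers` library and the public google/reformer-crime-and-punishment
# checkpoint; the linked notebook's actual benchmark setup may differ).
import time

import torch
from transformers import ReformerModel, ReformerTokenizer

tokenizer = ReformerTokenizer.from_pretrained("google/reformer-crime-and-punishment")
model = ReformerModel.from_pretrained("google/reformer-crime-and-punishment")
model.eval()

# A long input: Reformer's LSH attention is designed to keep compute and
# memory manageable as sequence length grows.
text = "The quick brown fox jumps over the lazy dog. " * 200
inputs = tokenizer(text, return_tensors="pt")

# Warm-up run so one-time allocation costs don't skew the timing.
with torch.no_grad():
    model(**inputs)

# Time a handful of forward passes and report the average latency.
n_runs = 5
start = time.perf_counter()
with torch.no_grad():
    for _ in range(n_runs):
        model(**inputs)
elapsed = (time.perf_counter() - start) / n_runs

print(f"Sequence length: {inputs['input_ids'].shape[1]} tokens")
print(f"Average forward-pass latency: {elapsed:.3f} s")
```

A memory benchmark would follow the same pattern, for example by reading torch.cuda.max_memory_allocated() after the forward passes when running on a GPU.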