Learn how to install and configure a GPU for use with TensorFlow and R on a cloud provider such as Google Cloud or Amazon Web Services, using Docker. We then use this environment to run OpenAI's large (1.5B-parameter) GPT-2 model, asking it a few questions to verify that the configuration works on an NVIDIA Tesla V100 GPU.
Docker Repo: https://github.com/mlverse/mlverse-docker
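As a minimal sketch of the Docker step, assuming Docker 19.03+ with the NVIDIA Container Toolkit installed on the cloud VM (the image tag below is a placeholder, not a confirmed tag from the repo):

```shell
# Verify the container can see the GPU before launching R/TensorFlow.
# --gpus all exposes the host GPU(s) to the container (requires the
# NVIDIA Container Toolkit); the image name/tag here is hypothetical.
docker run --rm --gpus all mlverse/mlverse-base:<tag> nvidia-smi
```

If `nvidia-smi` lists the Tesla V100 from inside the container, TensorFlow in R should be able to find it as well (e.g. via `tf$config$list_physical_devices("GPU")`).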