***The Dask XGBoost library has been deprecated; its functionality is now built into XGBoost itself***
Dataset sizes are growing at an exponential rate, and data scientists now need to scale their computations to handle hundreds of gigabytes or even terabytes of data. In the previous video tutorial, we showed how to GPU-accelerate the popular XGBoost algorithm to train models faster and iterate more quickly. In this video tutorial, we'll show how to use Dask with XGBoost to distribute computations across multiple GPU nodes, letting data scientists work faster on increasingly large datasets.
Subscribe to RAPIDS YouTube - https://nvda.ws/2JidAvb
#MachineLearning #DataScience #DistributedComputing