Protein design compute explained: when to use GPU vs CPU
This session breaks down one of the most important concepts in modern computational biology: the difference between CPUs and GPUs, and why it matters for protein design. If you're working with protein design tools or planning compute resources for your lab, this is a must-watch for understanding performance tradeoffs and making smarter decisions.

You'll learn how traditional tools like Rosetta rely on CPU-based, sequential processing, while newer machine learning tools (like AlphaFold) leverage GPUs for massive parallelization. The video explains core architectural differences, including clock speed vs. core count and memory design, and why GPUs excel at matrix-heavy workloads. It also covers practical considerations such as cloud costs, hardware purchasing decisions, and when CPUs are still the better choice.

Mythbusters video on GPU vs CPU performance using paintballs: https://www.reddit.com/r/pcmasterrace/comments/13yv3q8/mythbusters_and_nvidia_demonstrate_gpu_vs_cpu/

More info about this Bootcamp: https://rosettamlbootcamp2025.github.io/

Instructor: Ian Anderson
Affiliation: UC Davis

Chapters
00:00 Introduction: why CPUs vs GPUs matters
00:29 Shift from CPU-based tools (Rosetta) to GPU tools
01:00 CPU vs GPU architecture basics
02:30 Parallel processing and why GPUs exist
03:12 GPUs and machine learning (CUDA, matrix math)
04:14 Inside a CPU vs inside a GPU
05:45 Apple vs Nvidia GPUs explained
07:45 Hands-on exercise: CPU vs GPU performance
08:48 When to use CPU vs GPU (Rosetta vs AlphaFold)
11:23 Cost and availability: cloud GPUs vs CPUs
14:56 Hardware recommendations for labs
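To get a feel for why "matrix-heavy workloads" favor parallel hardware, here is a minimal sketch (not from the video, and CPU-only): it compares a naive Python triple-loop matrix multiply against NumPy's `@` operator, which dispatches to optimized, parallel BLAS routines. The speedup from vectorized matrix math on a CPU is the same effect, in miniature, that GPUs push much further.

```python
import time
import numpy as np

def naive_matmul(a, b):
    """Sequential triple-loop matrix multiply, one scalar op at a time."""
    n, k = a.shape
    _, m = b.shape
    out = np.zeros((n, m))
    for i in range(n):
        for j in range(m):
            for p in range(k):
                out[i, j] += a[i, p] * b[p, j]
    return out

rng = np.random.default_rng(0)
a = rng.standard_normal((64, 64))
b = rng.standard_normal((64, 64))

t0 = time.perf_counter()
slow = naive_matmul(a, b)          # interpreted, sequential
t_naive = time.perf_counter() - t0

t0 = time.perf_counter()
fast = a @ b                       # vectorized BLAS matmul
t_blas = time.perf_counter() - t0

# Same result, very different cost.
assert np.allclose(slow, fast)
print(f"naive loop: {t_naive:.4f}s  vectorized: {t_blas:.6f}s")
```

On a typical machine the vectorized call is orders of magnitude faster even at this tiny size; a GPU extends the same idea by running thousands of these multiply-accumulate operations concurrently, which is why inference tools like AlphaFold benefit so much from one.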