How to Install Folding@home with NVIDIA CUDA GPU Support

If you want to maximize your contribution to Folding@home, enabling CUDA GPU acceleration is one of the best ways to boost performance. NVIDIA GPUs can dramatically speed up protein folding calculations compared to CPU-only setups.

This tutorial walks you through installing Folding@home with NVIDIA CUDA support on Linux, using Docker and the NVIDIA Container Toolkit. By the end, you'll have Folding@home running on your GPU, crunching numbers at full speed.

📖 Source: NVIDIA Container Toolkit Install Guide


Why Use Folding@home with CUDA?

  • Faster Work Units: NVIDIA GPUs can process Folding@home tasks much faster than CPUs.
  • Better PPD (Points Per Day): More points mean a bigger impact for your team and the global project.
  • Efficient Resource Use: If you already have an NVIDIA GPU, CUDA lets you unlock its full potential.

Prerequisites

Before starting, make sure you have the following (a quick verification is shown after the list):

  1. A Linux system with an NVIDIA GPU.
  2. Latest NVIDIA GPU drivers installed.
  3. Docker or another container runtime installed.
  4. Sudo/root access to configure system settings.
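
To verify these prerequisites, the commands below should all succeed. This is a minimal sanity check, assuming nvidia-smi and docker are already on your PATH; the versions reported will vary by system.

# Confirm the NVIDIA driver is loaded and the GPU is visible
nvidia-smi

# Confirm Docker is installed and its daemon is running
docker --version
docker info > /dev/null && echo "Docker daemon is running"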

Step 1: Install the NVIDIA Container Toolkit

The NVIDIA Container Toolkit is required to give Docker access to your GPU. Installation steps vary by distribution.

Ubuntu / Debian

# Add NVIDIA GPG key and repository
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey \
  | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg

curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list \
  | sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' \
  | sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list

# Install toolkit
sudo apt-get update
sudo apt-get install -y nvidia-container-toolkit

Fedora / RHEL / CentOS

# Add NVIDIA repo
curl -s -L https://nvidia.github.io/libnvidia-container/stable/rpm/nvidia-container-toolkit.repo \
  | sudo tee /etc/yum.repos.d/nvidia-container-toolkit.repo

# Install toolkit
sudo dnf install -y nvidia-container-toolkit
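
Whichever distribution you use, you can confirm the toolkit is installed by checking the CLI it ships with; it should print a version string (the exact number will differ):

# Verify the NVIDIA Container Toolkit CLI is available
nvidia-ctk --version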

Step 2: Configure Docker for NVIDIA Runtime

Once installed, configure Docker to use NVIDIA's runtime:

sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker

This step ensures containers can access your GPU devices.
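
Behind the scenes, nvidia-ctk registers an nvidia runtime in Docker's daemon configuration (typically /etc/docker/daemon.json). After the restart, you can confirm Docker picked it up; the exact output format depends on your Docker version:

# The nvidia runtime should now be listed alongside the default runc runtime
docker info --format '{{json .Runtimes}}'

# Optionally, inspect the file nvidia-ctk modified
cat /etc/docker/daemon.json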


Step 3: Verify GPU Access in Containers

Run a test container to check that your GPU is reachable from inside Docker. The NVIDIA install guide uses a plain Ubuntu image, because the toolkit mounts nvidia-smi into the container; the nvidia/cuda:11.0-base tag found in older guides has since been removed from Docker Hub, though any current CUDA base image works as well.

docker run --rm --runtime=nvidia --gpus all ubuntu nvidia-smi

If you see your GPU details in the output, the container runtime can reach your GPU and CUDA-enabled containers will work.
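
On multi-GPU machines you can also restrict which devices a container sees. This is optional, and the device index follows the ordering reported by nvidia-smi:

# Expose only the first GPU (index 0) to the test container
docker run --rm --runtime=nvidia --gpus '"device=0"' ubuntu nvidia-smi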


Step 4: Run Folding@home with NVIDIA CUDA

Now you can launch Folding@home with GPU acceleration:

docker run -d \
  --gpus all \
  --name foldingathome \
  foldingathome/fahclient:latest \
  --user YourUsername \
  --team YourTeamNumber \
  --gpu true

  • --gpus all enables CUDA GPU access.
  • Replace YourUsername and YourTeamNumber with your actual Folding@home details.
  • --gpu true ensures GPU folding is turned on.
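
Day-to-day management happens through normal Docker commands. Here is a short sketch, using the container name from the run command above; note that removing the container discards any in-progress work unit unless the client's data directory is on a mounted volume.

# Pause folding by stopping the container; the client checkpoints its work
docker stop foldingathome

# Resume folding later
docker start foldingathome

# Pull a newer client image, then recreate the container with the same run command as above
docker pull foldingathome/fahclient:latest
docker rm -f foldingathome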

Step 5: Monitor Folding@home Performance

You can monitor your Folding@home container logs:

docker logs -f foldingathome

You should see work units assigned to your GPU. You'll also notice much higher PPD compared to CPU folding.
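
Besides the container logs, it helps to watch GPU utilization on the host to confirm the folding core is actually using the card. A couple of standard nvidia-smi invocations:

# Refresh GPU stats every five seconds; the folding core should show up in the process list
watch -n 5 nvidia-smi

# Or query utilization, power draw, and temperature directly
nvidia-smi --query-gpu=utilization.gpu,power.draw,temperature.gpu --format=csv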


Troubleshooting

  • If CUDA isn't detected inside the container, recheck the NVIDIA driver installation.
  • Ensure Docker is using the NVIDIA runtime; nvidia-smi must work inside a container (see the checks after this list).
  • Check compatibility between CUDA version and GPU driver.
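
The checks below cover the most common failure points, reusing the commands from the earlier steps:

# 1. Driver sanity check on the host
nvidia-smi

# 2. Confirm Docker has the nvidia runtime registered
docker info --format '{{json .Runtimes}}'

# 3. Re-run the container test from Step 3
docker run --rm --runtime=nvidia --gpus all ubuntu nvidia-smi

# 4. Check the installed driver version (plain nvidia-smi also shows the highest CUDA version it supports in its header)
nvidia-smi --query-gpu=driver_version --format=csv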

Conclusion

By following this guide, you've successfully set up Folding@home with NVIDIA CUDA GPU support. This setup unlocks the full power of your NVIDIA GPU, helping you contribute to groundbreaking research faster and more efficiently.

Whether you're folding solo or with a team, enabling GPU acceleration ensures your contributions make a bigger impact.
