If you’re working on complex Machine Learning projects, you’ll need a good Graphics Processing Unit (or GPU) to power everything. And Nvidia is a popular option these days, as it has great compatibility and widespread support.
If you’re new to Machine Learning and are just getting started, then a free Kaggle or Colab notebook might be enough for you. But that won’t be the case when you want to go deeper. You’ll need a GPU, which can get costly if you’re continuously renting one in the cloud.
But there’s some good news: you can utilize your computer’s Nvidia GPU (GTX/RTX) quite easily and perform machine learning-related tasks right on your local machine. The cool thing is, it won’t cost you anything other than the electricity it uses!
When you’re running Machine Learning models on your local machines, the most suitable operating system is a Linux-based one, like Ubuntu. But Windows has improved a lot for this purpose. If you’re using the latest Windows 11, you can leverage Windows Subsystem for Linux (WSL) and use your GPU directly for Machine Learning-related workflows.
This process can be quite tricky, though, as can making two popular Machine Learning frameworks, TensorFlow and PyTorch, compatible with your system GPU in Windows 11. That’s why I have written this comprehensive guide to ease your pain.
In it, I’ll help you set up CUDA on Windows Subsystem for Linux 2 (WSL2) so you can leverage your Nvidia GPU for machine learning tasks.
By following these steps, you’ll be able to run ML frameworks like TensorFlow and PyTorch with GPU acceleration on Windows 11.
Keep in mind that this guide assumes you have a compatible Nvidia GPU. Make sure to check Nvidia's official compatibility list before proceeding.
I have also prepared a video that walks through the steps in this article.
Also, if this tutorial helps you, then don’t forget to add a star to the GitHub repository CUDA-WSL2-Ubuntu-v2. If you face any issues or have any suggestions/improvements, then please raise an issue in the GitHub repository. Currently, the live website is available at ml-win11-v2.fahimbinamin.com.
Prerequisites
Before you begin, make sure you have the following requirements met:
Windows 11 operating system
Nvidia GPU (GTX/RTX series)
Administrator access to your PC
At least 30 GB of free disk space
Internet connection for downloads
Latest Nvidia drivers installed
Windows Terminal
First, you’ll need to ensure that you have Windows Terminal installed properly in your operating system. It is Microsoft’s modern terminal application for command-line users, hosting shells like Command Prompt, PowerShell, and WSL. You can download it from the Microsoft Store.

After ensuring that it’s installed properly, you can proceed to the next steps.
Windows PowerShell (Latest & Greatest)
PowerShell (version 7 and later) is Microsoft’s modern, cross-platform command-line shell. You can even use some Linux-style commands directly in it, and it comes with built-in command suggestions. You can download it from the official GitHub page.

Download the latest x64 installer and install it. After ensuring that it is installed properly, you can proceed to the next steps.
Configure Windows Terminal
Now you’ll need to configure Windows Terminal to use PowerShell as the default shell. This step is optional, so you can skip it, but I recommend doing it for a better experience.
Open Windows Terminal. Click on the down arrow icon in the title bar and select "Settings".

In the Settings tab, under "Startup", find the "Default profile" dropdown menu. Select "PowerShell" from the list.
Now for the "Default terminal application", select "Windows Terminal".
By default, Windows PowerShell always shows the version number in the title bar. If you want to disable it, select the "PowerShell" profile from the left sidebar. Click on the "Command Line" field and add an --nologo argument at the end of the command. After this, the line becomes "C:\Program Files\PowerShell\7\pwsh.exe" --nologo.

If you don’t use other shells frequently and want to hide them in the dropdown, then you’ll need to select those profiles one by one from the left sidebar. Scroll down to the bottom and find the "Hide profile from dropdown" toggle and enable it. It will hide that specific shell from the dropdown menu.
For example, I am hiding the Azure Cloud Shell profile as I don't use it frequently:

Now click on the "Save" button at the bottom right corner to apply the changes. Close the Windows Terminal for now.
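If you prefer editing configuration by hand, the same toggle lives in Windows Terminal’s `settings.json` (you can open it from Settings via "Open JSON file"). Here is a minimal sketch of the relevant fragment — the GUID below is a placeholder, and yours will differ:

```json
{
  "profiles": {
    "list": [
      {
        "name": "Azure Cloud Shell",
        "guid": "{00000000-0000-0000-0000-000000000000}",
        "hidden": true
      }
    ]
  }
}
```

Setting `"hidden": true` on a profile removes it from the dropdown without deleting it, which is exactly what the GUI toggle does.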
Configuration of My Computer
I figured it’d be helpful to share my current computer’s configuration so you can have a clear idea of which setup I’m using in this guide. Here are the details:
| Component | Specification |
| --- | --- |
| Processor | AMD Ryzen 7 7700 (8 cores, 16 threads) |
| RAM | 64 GB DDR5 6000 MHz |
| Storage | 1 TB Samsung 980 NVMe SSD, 4 TB HDD, 2 TB SATA SSD |
| GPU | NVIDIA GeForce RTX 3060 12 GB GDDR6 |
| Operating System | Windows 11 Pro, Version 25H2 |
Now that you have an idea about my computer’s configuration, we can proceed to the next steps.
CPU Virtualization
As we are going to use WSL2, we’ll need to make sure that the CPU virtualization is enabled. To check whether virtualization is enabled or not from Windows, simply open the Windows Task Manager. Go to the Performance tab and select CPU from the left sidebar. In the bottom right corner, you will see the Virtualization status. If it shows "Enabled", then you are good to go. If it shows "Disabled", then you need to enable it from the BIOS.

⚠️ You have to ensure that CPU Virtualization is enabled in your BIOS settings. Different manufacturers have different ways to access the BIOS. Usually, you can access the BIOS by pressing the Delete or F2 key during the boot process. Once in BIOS, look for settings related to "Virtualization Technology" or "Intel VT-x"/"AMD-V" and make sure it is enabled. Save the changes and exit the BIOS.
Install WSL2
Open the Windows Terminal or Windows PowerShell as an administrator. Run the following command to install WSL2 along with the latest Ubuntu LTS distribution:
wsl.exe --install
It will install Windows Subsystem for Linux 2 (WSL2). After the installation is complete, you will be prompted to restart your computer. Do so to finalize the installation.

⚠️ If you encounter any issues during installation, refer to the official Microsoft documentation for troubleshooting WSL installation problems.
Install Latest LTS Ubuntu via WSL2
Open the Windows Terminal or Windows PowerShell again with administrator privileges. If you want to check the available Linux distributions you can install via WSL, run the following command:
wsl.exe --list --online

For installing any specific distribution, run the following command:
wsl.exe --install <DistroName>
We are going to install the latest LTS Ubuntu distribution. As of now, the latest LTS version is Ubuntu 24.04. But I prefer to install Ubuntu directly, as the Ubuntu package always points to the latest LTS version. So, run the following command:
wsl.exe --install Ubuntu
You need to give it a default user account name. For me, I am going with fahim.

It also comes with a nice GUI management tool for WSL.

From its settings window, you can configure many options, including limits on CPU cores, RAM, and disk space.
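The same limits can also be set from a plain-text `.wslconfig` file in your Windows user profile folder. Here is a sketch with example values — the limits below are assumptions, so adjust them for your machine:

```ini
# %UserProfile%\.wslconfig -- applies to all WSL2 distributions
# after you run `wsl --shutdown` and restart WSL
[wsl2]
memory=32GB      # cap WSL2 RAM usage
processors=8     # cap the number of virtual processors
swap=8GB         # swap file size
```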

Update & Upgrade Ubuntu Packages
Open your Ubuntu terminal from Windows Terminal. First, we need to update and upgrade the existing packages to their latest versions.
To update the Ubuntu system, simply use the following command:
sudo apt update -y

To upgrade all the packages at once, simply use the following command:
sudo apt upgrade -y

⚠️ Make sure that you have a stable internet connection during the update and upgrade process to avoid any interruptions.
Install and Configure Miniconda
In Machine Learning, we need to manage multiple environments with different package versions. Conda is a popular package and environment management system that makes it easy to create and manage isolated environments for different projects. We will install Miniconda, a minimal installer for Conda, to manage our Python environments. But if you prefer Anaconda, you can install it instead.
Go to the official website of Miniconda. Currently, the Miniconda installer is hosted on the Anaconda website here. If the official website gets updated, you can always search for "Miniconda installer" on Google to find the latest version. Also, you can create an issue in the official GitHub repository of this project to notify me about it.

As we are installing it inside WSL, we have to select the macOS/Linux installation. Then select Linux Terminal Installer and choose the Linux x86_64 installer to download.
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
It will download the installer to your WSL directory. Then use the following command to install it properly:
bash ~/Miniconda3-latest-Linux-x86_64.sh
⚠️ Make sure that you are in the correct directory where the installer is downloaded. If you downloaded it to a different location, adjust the path accordingly. Also, replace bash with zsh or sh if you are using a different shell.

Make sure to choose the initialization option properly. I prefer to keep the conda env active whenever I open a new shell. Therefore, I chose "Yes".

Make sure that the installation succeeds without any errors.

For the changes to take effect, you can close and reopen the current shell. You can also do it without closing the shell by running the command below.
source ~/.bashrc
⚠️ If you’re using a different shell like zsh or fish, make sure to source the appropriate configuration file (e.g., ~/.zshrc for zsh).
Install Jupyter & Ipykernel
I prefer to use Jupyter Notebook for running my machine learning experiments. It provides an interactive environment for coding and data analysis. We’ll install Jupyter Notebook and ipykernel to run Jupyter notebooks in our conda environments, starting with the base environment. Installing ipykernel also makes each conda environment available as a kernel inside Jupyter Notebook.
First, make sure that you are in the base conda environment. You will see (base) on the left side of the terminal.

Now install Jupyter and Ipykernel both by applying the following command:
conda install jupyter ipykernel -y
Make sure that you accept the terms of service of Conda.

Now, I will create a separate conda environment for TensorFlow GPU and PyTorch GPU. You can instead install them in the base environment or any other environment as per your preference. I am not specifying a Python version while creating the environment, so it will automatically install the latest stable version of Python.
conda create --name ml -y

To activate any specific conda environment, you have to use the following command:
conda activate <conda-env-name>
For example, if I want to activate my newly created ml environment, I will use this command:
conda activate ml
If you’re not sure which conda environments exist on your system, you can list them all by running the following command:
conda env list
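If you ever need to work with this listing from a script, here is a small sketch of how you might parse it. The format it assumes — `#` lines are comments and `*` marks the active environment — matches typical `conda env list` output:

```python
def parse_conda_envs(listing: str) -> list[str]:
    """Parse `conda env list` output into environment names.

    Comment lines start with '#'; the active environment is marked
    with '*' between its name and its path.
    """
    names = []
    for line in listing.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        names.append(line.split()[0])
    return names

sample = """\
# conda environments:
#
base                  *  /home/fahim/miniconda3
ml                       /home/fahim/miniconda3/envs/ml
"""
print(parse_conda_envs(sample))  # ['base', 'ml']
```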
Nvidia Driver
Ensure that you have the latest Nvidia drivers installed on Windows. WSL2 uses the Windows driver, so no separate driver installation is needed in Ubuntu. You can download the latest drivers from the official Nvidia website.

After installing the driver, restart your computer to ensure the changes take effect. You can use either the GeForce Game Ready Driver or the NVIDIA Studio Driver, but I recommend the Studio Driver for better stability with creative and ML applications.
Install CUDA Dependencies
You might face some issues if you do not have the CUDA dependencies installed properly. I recommend that you install the required dependencies before proceeding further:
sudo apt install gcc g++ build-essential
After installing the dependencies, you can then verify the CUDA installation if you had any issues earlier.
CUDA Toolkit
TensorFlow GPU is very picky about the CUDA version. So we need to install a specific version of CUDA Toolkit that is compatible with the TensorFlow version we are going to install.
To understand exactly which CUDA version is compatible with which TensorFlow version, you can check the official TensorFlow GPU support matrix here.
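To give a concrete sense of how tightly the versions are coupled, here is an illustrative snippet with a few rows from the tested-build matrix as it stood when I wrote this. Always confirm against the official table before installing, since these pairings change with every release:

```python
# A few rows from TensorFlow's tested-build matrix at the time of writing.
# Treat this as a snapshot for illustration, not a source of truth.
tested_builds = {
    "2.16": {"cuda": "12.3", "cudnn": "8.9"},
    "2.15": {"cuda": "12.2", "cudnn": "8.9"},
    "2.14": {"cuda": "11.8", "cudnn": "8.7"},
}

print(tested_builds["2.16"]["cuda"])  # 12.3
```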

At the time I’m writing this article, the TensorFlow GPU documentation says that we should have CUDA Toolkit 12.3. So I will ensure that I install exactly that version. You can simply click on that version link in the official docs and it will redirect you to the official Nvidia CUDA Toolkit download page. But if the link gets updated in the future, you can always search for "Nvidia CUDA Toolkit" on Google to find the latest version.

As TensorFlow GPU requires exactly version 12.3, I will select version 12.3.0.
In the CUDA Toolkit download page, make sure to choose the operating system as Linux, Architecture as x86_64, Distribution as WSL-Ubuntu, Version as 2.0 and the Installer type as runfile(local).
⚠️ As we are using Ubuntu in our WSL2, you can also choose Ubuntu as your operating system. But I prefer to choose WSL-Ubuntu for better compatibility.

After selecting those, it will give you the download commands, which you have to run sequentially. Make sure to uncheck "Kernel Objects" while installing CUDA.

⚠️ Make sure to copy and paste the commands one by one in your WSL Ubuntu terminal to download and install the CUDA Toolkit properly. If you face any issues related to CUDA dependency, then quickly go through the Install CUDA dependencies section, where I have explained how to install the CUDA dependencies properly.
Add Path to Shell Profile for CUDA
After installing CUDA Toolkit, we need to add the CUDA binaries to our shell profile for easy access. This will allow us to run CUDA commands from any directory in the terminal.
Note that, depending on the shell you are using (bash, zsh, and so on), you need to add the CUDA path to the appropriate configuration file. Make sure to replace .bashrc with .zshrc or other configuration files if you are using a different shell.
To add the CUDA binary path, follow the command below:
echo 'export PATH=/usr/local/cuda-12.3/bin:$PATH' >> ~/.bashrc
Use the actual path where CUDA was installed. Your terminal will show it after the CUDA installation completes:

Next, add the CUDA library directory to the LD_LIBRARY_PATH environment variable. Again, use the exact path where you installed CUDA; your terminal will list it.
echo 'export LD_LIBRARY_PATH=/usr/local/cuda-12.3/lib64:$LD_LIBRARY_PATH' >> ~/.bashrc

After adding those paths, you need to source the shell profile for the changes to take effect. You can do that by running the following command:
source ~/.bashrc
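To double-check that the new entries actually landed in your environment, here is a quick sketch in Python. The `/usr/local/cuda-12.3` prefix is the default install location assumed throughout this guide, so adjust it if yours differs:

```python
import os


def has_path_entry(path_value: str, entry: str) -> bool:
    """Return True if `entry` appears as one component of a colon-separated path."""
    return entry in path_value.split(":")


# Check the current process environment (run inside WSL after sourcing ~/.bashrc).
print(has_path_entry(os.environ.get("PATH", ""), "/usr/local/cuda-12.3/bin"))
print(has_path_entry(os.environ.get("LD_LIBRARY_PATH", ""), "/usr/local/cuda-12.3/lib64"))
```

Both lines should print `True` once the exports have been sourced.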
nvcc Version
NVCC stands for Nvidia CUDA Compiler. It is the compiler driver for the CUDA platform that lets developers build parallel programs to run on Nvidia GPUs. Since we have already installed the CUDA Toolkit, let’s verify that the compiler is working by checking its version.
Verify that CUDA is properly installed by checking the version:
nvcc --version

If the output shows the correct CUDA version, then you have successfully installed CUDA Toolkit in your WSL2 Ubuntu environment.
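If you want to check the installed release from a script rather than by eye, here is a small sketch that shells out to `nvcc` and extracts the release number. The parsing pattern is an assumption based on the typical `nvcc --version` banner format:

```python
import re
import shutil
import subprocess


def nvcc_release(banner):
    """Extract the CUDA release (e.g. '12.3') from an `nvcc --version` banner."""
    match = re.search(r"release (\d+\.\d+)", banner)
    return match.group(1) if match else None


# Only invoke nvcc if it is actually on PATH.
if shutil.which("nvcc"):
    out = subprocess.run(["nvcc", "--version"], capture_output=True, text=True).stdout
    print("Installed CUDA release:", nvcc_release(out))
```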
cuDNN SDK
The cuDNN (CUDA Deep Neural Network) SDK is a GPU accelerated library of primitives for deep neural networks, developed by Nvidia. It provides highly optimized building blocks for common deep learning operations, significantly speeding up the training and inference processes of AI models on Nvidia GPUs.
Note: Even though TensorFlow GPU suggests a specific cuDNN version, it’s often compatible with multiple versions. Because of this, I recommend downloading the latest cuDNN version that is compatible with your installed CUDA version. You can find the cuDNN download page here.
Select the Operating System as Linux, Architecture as x86_64, Distribution as Ubuntu, Version as 24.04, Installer Type as deb (local), Configuration as FULL. After selecting those, it will give you the download commands. You have to apply them sequentially.

⚠️ Make sure to copy and paste the commands one by one in your WSL Ubuntu terminal to download and install the cuDNN SDK properly. If you face any issues related to CUDA dependency, then quickly go through the Install CUDA dependencies section, where I have explained how to install the CUDA dependencies properly.
TensorFlow GPU
Now, we are going to install TensorFlow GPU in our conda environment. Make sure that you have activated the conda environment where you want to install it. I’m going to install it in my previously created ml environment. To activate it, I’ll use the following command:
conda activate ml
⚠️ Make sure that you have activated the correct conda environment before installing TensorFlow GPU. You will see the environment name in the terminal prompt.

I will install ipykernel and jupyter in this new environment.
conda install jupyter ipykernel -y
Now, to install TensorFlow GPU, I will simply use the following command:
pip install tensorflow[and-cuda]
It might take a few minutes depending on your internet speed, so be patient while the installation finishes.
Check TensorFlow GPU
After installing TensorFlow GPU, we need to verify that it is working properly with GPU support. Run the following one-line check from your Ubuntu terminal:
python3 -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
If the output shows a list of available GPU devices, then TensorFlow GPU is successfully installed and working properly.
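For slightly friendlier output, here is a tiny helper sketch that summarizes the device list. It only assumes the devices expose a `.name` attribute, so the function itself can be defined and tested without TensorFlow installed:

```python
def describe_gpus(devices):
    """Summarize the list returned by tf.config.list_physical_devices('GPU').

    Works with any objects exposing a `.name` attribute.
    """
    if not devices:
        return "No GPU detected; TensorFlow will fall back to CPU."
    names = ", ".join(d.name for d in devices)
    return f"{len(devices)} GPU(s) detected: {names}"


# Usage (inside an environment with TensorFlow installed):
#   import tensorflow as tf
#   print(describe_gpus(tf.config.list_physical_devices("GPU")))
```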

PyTorch GPU
Now, we’re going to install PyTorch GPU in our conda environment. Make sure that you have activated the conda environment where you want to install it. I’m going to install it in my previously created ml environment. To activate it, I will use the following command:
conda activate ml
Installing PyTorch GPU is very straightforward. You can use the official PyTorch installation command generator here.
Make sure to select the PyTorch Build as the latest stable one, Your OS as Linux, Package as Pip, and Language as Python. For the Compute Platform, select the CUDA version that matches your installed CUDA Toolkit, or the closest available one. For me that would be CUDA 12.3, but as it is no longer listed, I am choosing CUDA 12.6. (The pip wheels bundle their own CUDA runtime, so a minor version mismatch is fine.)
After selecting those, it will give you the installation command. You have to apply it in your WSL Ubuntu terminal.

It might take a few minutes depending on your internet speed, so be patient while the installation finishes.

Check PyTorch GPU
After installing PyTorch GPU, verify that it is working properly with GPU support by running the following commands from your Ubuntu terminal:
python3 - << 'EOF'
import torch
print(torch.cuda.is_available())
print(torch.cuda.device_count())
print(torch.cuda.current_device())
print(torch.cuda.device(0))
print(torch.cuda.get_device_name(0))
EOF
The output should look similar to the screenshot, showing:
True: GPU is available for PyTorch
1: Number of detected CUDA devices
0: Index of the current active CUDA device
A device object representation
NVIDIA GeForce RTX 3060 (or your GPU name)

Check PyTorch & TensorFlow GPU inside Jupyter Notebook
Now that the environment is fully configured, we will verify GPU support directly inside Jupyter Notebook. This ensures both PyTorch and TensorFlow can successfully detect and use your GPU.
1. Test PyTorch GPU
Create a new Jupyter Notebook and run the following commands one by one:
import torch
print(torch.cuda.is_available())
print(torch.cuda.device_count())
print(torch.cuda.current_device())
print(torch.cuda.device(0))
print(torch.cuda.get_device_name(0))
If everything is configured correctly, you will see your GPU (for example NVIDIA GeForce RTX 3060) detected properly:

2. Test TensorFlow GPU
Next, run the following code to check whether TensorFlow detects your GPU:
import tensorflow as tf
print(tf.config.list_physical_devices('GPU'))
You can also check the number of GPUs detected:
print("Num GPUs Available:", len(tf.config.list_physical_devices('GPU')))
Finally, run TensorFlow’s built-in GPU validation (these `tf.test` APIs are deprecated in newer releases, so deprecation warnings are normal):
import tensorflow as tf
assert tf.test.is_gpu_available()
assert tf.test.is_built_with_cuda()

If TensorFlow logs show your GPU model (such as RTX 3060), then TensorFlow GPU is successfully installed and fully working inside Jupyter Notebook.
Conclusion
Thank you so much for reading all the way through. I hope you have been able to configure your Windows 11 computer properly for running almost any kind of Machine Learning-based experiments.
To get more content like this, you can follow me on LinkedIn and X. You can also check my website and follow me on GitHub if you are into open source and development.