Anaconda Accelerates AI Development and Deployment with NVIDIA CUDA Toolkit

We are pleased to announce that NVIDIA CUDA Toolkit 12 is now available on our main (also known as "defaults") channel, a significant update from our previous support for CUDA Toolkit 11.8.


CUDA is NVIDIA's parallel computing platform and programming model for its GPUs. The CUDA Toolkit gives users a development environment for building high-performance, GPU-accelerated applications. It enables developers to create, optimize, and deploy applications across a wide range of platforms, including GPU-accelerated desktops, cloud platforms, and more.


Previously, Anaconda distributed only the runtime libraries (such as cudart, cublas, and cusolver), the components of the CUDA Toolkit that are required to run CUDA-enabled software. With this update, Anaconda additionally distributes the compilation and development tools (such as nvcc, nvrtc, cccl, and nsight) that are used to develop that software. Components of the CUDA Toolkit are now packaged individually, enabling users to fetch only the components they need, saving them time and disk space.
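For example, a user who only needs to compile CUDA code could install just the compiler and runtime development files rather than the full toolkit. The commands below are a sketch; the individual package names (such as cuda-nvcc and cuda-cudart-dev) assume NVIDIA's conda packaging scheme and may differ on the defaults channel:

```shell
# Install only the CUDA compiler and the runtime development files
# (package names are illustrative -- check `conda search cuda*` first).
conda install cuda-nvcc cuda-cudart-dev

# Or pull in the entire toolkit via a single meta-package:
conda install cuda-toolkit
```

Because each component is its own package, conda only downloads and links the pieces requested, rather than the full multi-gigabyte toolkit.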


Through Anaconda, users can manage both Python packages and non-Python software in a single environment. This means they can take advantage of our other GPU-accelerated packages without spending time figuring out how to get system-level software like CUDA to work. And developers of software that runs on NVIDIA GPUs can now easily develop and test CUDA-enabled code in a conda environment, managing CUDA tools and libraries alongside their other low-level dependencies and focusing on development rather than on system configuration. With more than 5,000 packages in Anaconda's repositories, developers working on AI initiatives can create and deploy secure Python solutions faster and through a more streamlined process.
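That workflow might look like the following sketch, in which a developer creates an isolated environment, installs the CUDA 12 toolchain, and compiles a kernel with nvcc entirely inside the environment (the package name cuda-toolkit and the version numbers are assumptions for illustration):

```shell
# Create a fresh environment holding Python and the CUDA 12 toolchain
# (package and version names are illustrative).
conda create -n cuda-dev python=3.11 cuda-toolkit
conda activate cuda-dev

# Confirm that nvcc resolves to the compiler conda installed
nvcc --version

# Compile and run a CUDA source file without touching the system toolchain
nvcc hello.cu -o hello
./hello
```

Because the compiler, headers, and libraries all live inside the environment, removing or upgrading CUDA is as simple as editing the environment, with no system-wide installer involved.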


Recognized as a key software distributor, Anaconda partners with organizations across the ecosystem to keep the important software in our platform as up to date as possible.


Key features of CUDA Toolkit 12.0 include: 

NVIDIA Ampere and NVIDIA Hopper architecture support, as well as Arm support 

Multi-OS drivers and runtime kernels 

Multi-Instance GPU (MIG), Tensor Cores, CUDA graphs, confidential computing, and NVIDIA NVLink

C++ and Fortran compilers, and OpenACC directives

Extensive set of libraries for scientific computing, data science, AI/ML, and graphics 

Nsight Compute and Nsight Systems developer tools