Microsoft is releasing a preview build of Windows that offers more personalised AI running on Nvidia, AMD and Intel GPUs

Nvidia RTX 4070 Super Founders Edition.

Nvidia has announced it is collaborating with Microsoft to power personalised AI applications on Windows through Copilot. The collaboration will extend to other GPU vendors, too, meaning AMD and Intel will also benefit.

The Windows Copilot Runtime will receive support for GPU acceleration, which means GPUs will be able to apply their AI smarts to apps on the OS a little more easily.

"The collaboration will provide application developers with easy application programming interface (API) access to GPU-accelerated small language models (SLMs) that enable retrieval-augmented generation (RAG) capabilities that run on-device powered by Windows Copilot Runtime."

In simpler terms, it allows developers to use an API to have GPUs accelerate heavily personalised AI jobs on Windows, such as content summaries, automation, and generative AI.

Nvidia currently offers one RAG application, Chat with RTX, which runs on its own graphics cards. In theory, further applications like this are possible with the Copilot runtime support, and Nvidia has at least one more of interest to PC gamers: Project G-Assist. It also has a new RTX AI Toolkit, "a suite of tools and SDKs for model customization."
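For a sense of how RAG works under the hood, here is a minimal sketch of the pattern: retrieve the most relevant local text for a question, then hand it to a model as context. The keyword scoring and the generate() stub are deliberately simplified stand-ins, not Chat with RTX's actual implementation.

```python
# Minimal RAG sketch: naive keyword retrieval plus a stand-in generate() call.
# In a real app the retriever would use embeddings, and the generator would be
# an on-device SLM (e.g. one exposed via Windows Copilot Runtime).

def retrieve(query: str, documents: list[str], top_k: int = 2) -> list[str]:
    """Rank documents by how many query terms they share (toy scoring)."""
    terms = set(query.lower().split())
    ranked = sorted(
        documents,
        key=lambda doc: len(terms & set(doc.lower().split())),
        reverse=True,
    )
    return ranked[:top_k]

def generate(prompt: str) -> str:
    """Placeholder for a local model call."""
    return f"[model response grounded in: {prompt[:60]}...]"

docs = [
    "The RTX AI Toolkit is a suite of tools and SDKs for model customization.",
    "Project G-Assist is an AI assistant demo aimed at PC gamers.",
    "Taipei 101 is a skyscraper in Taiwan.",
]

question = "What is the RTX AI Toolkit?"
context = "\n".join(retrieve(question, docs))
print(generate(f"Context:\n{context}\n\nQuestion: {question}"))
```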

This is potentially a promising move for Nvidia and other GPU vendors. Right now the fight for dominance in client AI inference (i.e. local AI processing) is being fought by Intel, AMD and Qualcomm in laptops. Yet GPUs are fantastically powerful for AI.

Developers can choose where to run their AI workloads: on the CPU, on the NPU (a dedicated AI accelerator block), or on the GPU. Easier access to all three through an API means developers can make better use of these components and build more powerful applications.
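As a rough sketch of what that choice can look like in practice today, the snippet below uses ONNX Runtime's execution providers as a stand-in, since the Copilot Runtime API itself has not been published yet. The model filename is a placeholder, and which providers are actually available depends on the machine and the installed onnxruntime build.

```python
# Illustrative only: pick the "best" available execution target for a local
# model using ONNX Runtime, roughly mirroring the CPU/NPU/GPU choice above.
# "model.onnx" is a placeholder; provider availability varies per machine.
import onnxruntime as ort

preferred = [
    "QNNExecutionProvider",   # Qualcomm NPU (onnxruntime-qnn builds)
    "DmlExecutionProvider",   # any DirectX 12 GPU, via DirectML
    "CPUExecutionProvider",   # always-available fallback
]

available = ort.get_available_providers()
providers = [p for p in preferred if p in available]

session = ort.InferenceSession("model.onnx", providers=providers)
print("Running on:", session.get_providers()[0])
```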

And it's not just Nvidia set to benefit. GPU acceleration through Copilot Runtime will be open to other GPUs.


"These AI capabilities will be accelerated by Nvidia RTX GPUs, as well as AI accelerators from other hardware vendors, providing end users with fast, responsive AI experiences across the breadth of the Windows ecosystem."

Notably, however, Microsoft still requires at least 40 TOPS of NPU processing for entry into its AI-ready computer club, known as Copilot+. That requirement does not currently extend to GPUs, despite them offering more TOPS of performance than any NPU available today. But with so many rumours bouncing around, not least from such luminaries as Mr. Dell, about Nvidia making its own ARM-based SoC, you've got to believe Windows on ARM will be running its Copilot AI business on Nvidia's integrated GPUs. GPUs and NPUs are, after all, essentially a similar sort of parallel processing silicon, so it shouldn't be a huge leap.

A preview API for GPU acceleration on Copilot Runtime will be available later this year in a Windows developer build.