A hidden GPU feature is coming soon to Intel Core Ultra laptops – you'll want to try this
Intel's Core Ultra laptops are getting a new Shared GPU Memory Override feature that lets you allocate system memory to the GPU.

Variable Graphics Memory – an AMD feature on its APUs (accelerated processing units) that's been around for a while now – is attractive not just to gamers, but also to people who like to use local AI. Now, it seems that Intel is going to follow suit and add a similar feature to its Core Ultra chips.
Intel's Bob Duffy revealed the news, along with the fact that the new Shared GPU Memory Override feature will arrive with the latest version of the Arc drivers.
Basically, this works just like on AMD's recent APUs: you decide how much of your total system memory is reserved for the GPU. Of course, this is super useful for gaming, but it's also a welcome addition if you use local LLMs (Large Language Models) on your laptop.
If you have Intel Core Ultra and are doing AI, you're going to want to update to latest Intel Arc driver... because this pic.twitter.com/4BlTqW1RCo
— bobduffy (@bobduffy) August 14, 2025
Such models can run without you manually reserving more memory for the GPU. However, there are benefits in doing so.
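For example, local runtimes such as llama.cpp let you choose how many of a model's layers are offloaded to the GPU, and the more memory the GPU can address, the more layers fit there. Here's a minimal sketch using the llama-cpp-python bindings – it assumes a build with a GPU backend for your hardware (llama.cpp offers a SYCL backend for Intel GPUs), and the model path and settings are placeholders:

```python
# Minimal sketch with llama-cpp-python. Assumes a GPU-enabled build
# (e.g. the SYCL backend for Intel Arc integrated graphics).
# The model path and numbers below are placeholders, not recommendations.
from llama_cpp import Llama

llm = Llama(
    model_path="models/example-7b-q4.gguf",  # hypothetical local GGUF file
    n_gpu_layers=-1,  # -1 asks the runtime to offload every layer it can to the GPU
    n_ctx=4096,       # context window; a larger context also consumes GPU memory
)

out = llm("Explain what Shared GPU Memory Override does in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```

The more system memory you hand over to the GPU, the more layers (and the larger the context) you can keep on it before the runtime has to fall back to slower paths.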
Intel's Core Ultra chips don't yet have true Unified Memory, like the kind you find on Apple Macs or AMD's latest Strix Halo chips. Unified Memory means the CPU (the main processor) and GPU (the graphics processor) share the same memory pool, which makes exchanging data faster and simpler. Intel's approach sounds similar, but it isn't quite the same yet; still, giving the GPU a larger portion of system memory to work with should improve performance.
Intel now lets Core Ultra users do something similar. In the Intel Graphics Software, there's now a slider where you can decide how much system memory is reserved for the GPU.
With Intel's new Shared GPU Memory Override feature, you can choose how much system memory to reserve for the GPU. For example:
- On a system with 32GB of RAM, splitting 16GB for the GPU and 16GB for the rest of the system allows the AI model to load fully into GPU memory while leaving enough RAM for the operating system and other programs.
- On a system with 16GB of total RAM, only a portion can be given to the GPU – at least 8GB should remain for the OS and applications (see the sizing sketch below).
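If you want to estimate whether a given model will fit in the amount of memory you reserve, the arithmetic is straightforward. Below is a rough Python sketch – the model size, quantization level, and overhead allowance are illustrative assumptions, not measured figures:

```python
# Rough sizing check: does a quantized LLM fit in the memory reserved for the GPU?
# All numbers are illustrative assumptions, not measurements.

def model_footprint_gb(params_billion: float, bits_per_weight: float, overhead_gb: float = 2.0) -> float:
    """Approximate need: weights plus a crude allowance for KV cache and runtime overhead."""
    weights_gb = params_billion * 1e9 * bits_per_weight / 8 / 1e9
    return weights_gb + overhead_gb

def fits(gpu_reserved_gb: float, params_billion: float, bits_per_weight: float) -> bool:
    return model_footprint_gb(params_billion, bits_per_weight) <= gpu_reserved_gb

# A 13B-parameter model quantized to ~4 bits per weight needs roughly 6.5GB for
# weights plus overhead, so a 16GB reservation on a 32GB laptop holds it comfortably,
# while an 8GB reservation on a 16GB laptop does not once overhead is counted.
print(fits(16, 13, 4))  # True
print(fits(8, 13, 4))   # False
```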
This feature is only available in the latest Intel drivers and applies to systems with integrated Intel Arc graphics. Dedicated GPUs don't need it, since they come with their own, much faster VRAM.
So, if you're running AI models locally on a Core Ultra system, this is a nice way to get a little extra speed from your GPU. In my opinion, it's a simple tweak that can really help performance without any risk.