A hidden GPU feature is coming soon to Intel Core Ultra laptops – you'll want to try this

Intel's Core Ultra laptops are getting a new Shared GPU Memory Override feature that lets you allocate system memory to the GPU.

Variable Graphics Memory – a feature AMD has offered on its APUs (accelerated processing units) for a while now – is attractive not just to gamers, but also to people who run AI models locally. Now, it seems Intel is going to follow suit and add a similar feature to its Core Ultra chips.

Intel's Bob Duffy revealed the news, adding that the new Shared GPU Memory Override feature will arrive with the latest version of the Arc drivers.

It works much like it does on AMD's recent APUs: you'll be able to decide how much of your total system memory is reserved for the GPU. That's obviously useful for gaming, but it's also a welcome addition if you run local LLMs (large language models) on your laptop.

Such models can run without you manually reserving more memory for the GPU, but there are benefits to doing so.

Intel's Core Ultra chips don't yet have true Unified Memory of the kind you find on Apple's Macs or AMD's latest Strix Halo chips. Unified Memory means the CPU (the main processor) and the GPU (the graphics processor) share the same memory pool, which makes data exchange faster and simpler. Intel's approach sounds similar, but it isn't quite the same yet. Still, giving the GPU a larger portion of memory to work with should improve performance.

Intel now lets Core Ultra users do something similar. In the Intel Graphics Software, there's now a slider where you can decide how much system memory is reserved for the GPU.

Here's how a split with the Shared GPU Memory Override could look in practice (a rough sizing check follows this list):
  • On a system with 32GB of RAM, splitting 16GB for the GPU and 16GB for the rest of the system allows the AI model to load fully into GPU memory while leaving enough RAM for the operating system and other programs.
  • On a system with 16GB of total RAM, only a portion can be given to the GPU – at least 8GB should remain for the OS and applications.
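If you want to sanity-check a split before you move the slider, some back-of-the-envelope arithmetic is enough. Here's a minimal Python sketch of that reasoning; the model size, the KV-cache buffer, and the 8GB OS reserve are illustrative assumptions, not figures from Intel.

```python
# Rough check of whether a local LLM fits in the memory share you give the GPU.
# All numbers here are illustrative assumptions, not Intel-published figures.

def fits_in_gpu_share(model_file_gb: float,
                      total_ram_gb: float,
                      gpu_share_gb: float,
                      kv_cache_gb: float = 2.0,
                      os_reserve_gb: float = 8.0) -> bool:
    """Return True if the model plus a working buffer fits in the GPU share
    while leaving enough RAM for the OS and other applications."""
    gpu_needed = model_file_gb + kv_cache_gb           # weights + context cache
    ram_left_for_system = total_ram_gb - gpu_share_gb  # what the CPU side keeps
    return gpu_needed <= gpu_share_gb and ram_left_for_system >= os_reserve_gb

# The 32GB example above: a ~13GB quantized model with a 16GB GPU share.
print(fits_in_gpu_share(model_file_gb=13, total_ram_gb=32, gpu_share_gb=16))  # True

# On a 16GB laptop, an 8GB GPU share only leaves room for smaller models.
print(fits_in_gpu_share(model_file_gb=13, total_ram_gb=16, gpu_share_gb=8))   # False
```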

This feature is only available in the latest Intel drivers and applies to systems with integrated Intel Arc graphics. Dedicated GPUs don't need it, since they already come with their own pool of fast VRAM.

So, if you're running AI models locally on a Core Ultra system, this is a nice way to get a little extra speed out of the GPU and lean less on the CPU. In my opinion, it's a simple tweak that can really help performance, with little downside as long as you leave enough RAM for the rest of the system.
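To make that concrete, here's a minimal sketch of loading a local model with the llama-cpp-python bindings and asking them to offload every layer to the GPU. It assumes a build of llama-cpp-python with a GPU backend (for example SYCL or Vulkan) that can use the Arc iGPU, and the model path is a placeholder; whether the whole model actually fits depends on the GPU share you've set.

```python
# Minimal sketch: run a local GGUF model with llama-cpp-python, offloading
# all layers to the GPU. Assumes a GPU-enabled build (e.g. SYCL or Vulkan)
# that can target the Arc iGPU; the model path below is a placeholder.
from llama_cpp import Llama

llm = Llama(
    model_path="models/llama-3-8b-instruct.Q4_K_M.gguf",  # placeholder path
    n_gpu_layers=-1,   # -1 asks the backend to offload every layer to the GPU
    n_ctx=4096,        # context window; a larger window needs more GPU memory
)

output = llm("Q: Why would I give my iGPU more shared memory? A:", max_tokens=64)
print(output["choices"][0]["text"])
```

With a larger GPU share set in the Intel Graphics Software, more of those layers (and the context cache) can sit in GPU-visible memory instead of falling back to the CPU.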
