DV Hardware - bringing you the hottest news about processors, graphics cards, Intel, AMD, NVIDIA, hardware and technology!
AMD Kaveri APU to feature unified memory

Posted on Tuesday, April 30 2013 @ 18:59:29 CEST

AMD announced hUMA (heterogeneous Uniform Memory Access), a new technology that will debut with the introduction of AMD's Kaveri APU later this year. Kaveri is based on the Steamroller architecture and will be AMD's first APU with fully shared memory between the CPU and the integrated graphics. Shared memory increases computing and power efficiency and makes it easier for programmers to code their applications. Further details can be read at Ars Technica.

Until now, the CPU and the GPU have each had their own separate pool of memory. This means that whenever a program wants to do some computation on the GPU, it has to copy all the data from the CPU's memory into the GPU's memory. When the GPU computation is finished, all the data has to be copied back. This need to copy back and forth wastes time and makes it difficult to mix and match code that runs on the CPU with code that runs on the GPU.
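The overhead of that copy-in/copy-out model can be sketched in plain C. This is not a real GPU API: the device_buf array below merely stands in for a discrete GPU's memory, and the function name is hypothetical.

```c
#include <string.h>

#define N 1024

static float device_buf[N];   /* stands in for discrete GPU memory */

/* Doubles each element "on the device". Returns the number of bytes
   that had to cross the bus: one copy in, one copy out. Under a
   shared-memory scheme like hUMA, both copies would disappear. */
size_t double_on_device(float *host, size_t n) {
    memcpy(device_buf, host, n * sizeof *host);   /* host -> device */
    for (size_t i = 0; i < n; i++)
        device_buf[i] *= 2.0f;                    /* "kernel" runs   */
    memcpy(host, device_buf, n * sizeof *host);   /* device -> host  */
    return 2 * n * sizeof *host;                  /* traffic spent copying */
}
```

Every call pays for two full transfers of the working set, regardless of how little computation the "kernel" actually does.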

The need to copy data also means that the GPU can't use the same data structures that the CPU is using. While the exact terminology varies from programming language to programming language, CPU data structures make extensive use of pointers: essentially, memory addresses that refer (or, indeed, point) to other pieces of data. These structures can't simply be copied into GPU memory, because CPU pointers refer to locations in CPU memory. Since GPU memory is separate, these locations would be all wrong when copied.
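The pointer problem is easy to reproduce in plain C: byte-copying a node into a second buffer (standing in here for separate GPU memory) leaves its internal pointer aimed at the original address rather than at the relocated neighbor.

```c
#include <string.h>

/* A minimal pointer-based structure: one node referring to another. */
struct node {
    int value;
    struct node *next;   /* a CPU memory address */
};

/* Builds a two-node list, bitwise-copies it into a second buffer, and
   reports whether the copied node's pointer still refers to the
   ORIGINAL node instead of the copy sitting right next to it. */
int copy_keeps_stale_pointer(void) {
    struct node b = { 2, NULL };
    struct node a = { 1, &b };

    struct node copy[2];
    memcpy(&copy[0], &a, sizeof a);   /* naive bitwise copy */
    memcpy(&copy[1], &b, sizeof b);

    /* copy[0].next was not rewritten, so the copied structure is
       broken in the new address space. */
    return copy[0].next == &b && copy[0].next != &copy[1];
}
```

Making the copy valid would require walking the whole structure and rewriting every pointer, which is exactly the bookkeeping shared memory avoids.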

hUMA is the way AMD proposes to solve this problem. With hUMA, the CPU and GPU share a single memory space. The GPU can directly access CPU memory addresses, allowing it to both read and write data that the CPU is also reading and writing.
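As a loose analogy only (this is ordinary host code, not GPU code), the worker thread below plays the role of the GPU: because it shares the process's address space, the CPU hands it a pointer and later reads the results in place, with no copies in either direction.

```c
#include <pthread.h>
#include <stddef.h>

#define N 8

/* The "GPU": dereferences the very addresses the CPU wrote. */
static void *gpu_like_worker(void *arg) {
    int *data = arg;
    for (size_t i = 0; i < N; i++)
        data[i] *= 2;             /* in-place, zero-copy update */
    return NULL;
}

int sum_after_shared_update(void) {
    int data[N];
    for (size_t i = 0; i < N; i++)
        data[i] = (int)i;         /* CPU fills the buffer */

    pthread_t t;
    pthread_create(&t, NULL, gpu_like_worker, data);  /* hand off a pointer */
    pthread_join(&t, NULL);

    int sum = 0;
    for (size_t i = 0; i < N; i++)
        sum += data[i];           /* CPU reads results directly */
    return sum;
}
```

The handoff is a single pointer; nothing is serialized, relocated, or copied back, which is the programming model hUMA aims to offer across the CPU/GPU boundary.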
AMD hUMA features

AMD isn't the only one working on unified memory access, though. NVIDIA's Maxwell GPUs promise to let other processors access the GPU's memory via Unified Virtual Memory, while Intel is prepping a DirectX extension named InstantAccess for its Haswell CPUs.



All logos and trademarks are property of their respective owner.
The comments are property of their posters, all the rest © 2002-2021 DM Media Group bvba