AMD announced hUMA (heterogeneous uniform memory access), a new technology that will debut with the introduction of AMD's Kaveri APU later this year. Kaveri is based on the Steamroller architecture and will be AMD's first APU with fully shared memory between the CPU and the integrated graphics. Shared memory increases computing and power efficiency and makes it easier for programmers to code their applications. Further details can be read at Ars Technica.
Historically, the CPU and the GPU have had separate pools of memory. This means that whenever a CPU program wants to run some computation on the GPU, it has to copy all the input data from the CPU's memory into the GPU's memory, and when the GPU computation is finished, all the results have to be copied back. This copying back and forth wastes time and makes it difficult to mix and match code that runs on the CPU with code that runs on the GPU.
The need to copy data also means that the GPU can't use the same data structures that the CPU is using. While the exact terminology varies from programming language to programming language, CPU data structures make extensive use of pointers: essentially, memory addresses that refer (or, indeed, point) to other pieces of data. These structures can't simply be copied into GPU memory, because CPU pointers refer to locations in CPU memory. Since GPU memory is separate, these locations would be all wrong when copied.
hUMA is the way AMD proposes to solve this problem. With hUMA, the CPU and GPU share a single memory space. The GPU can directly access CPU memory addresses, allowing it to both read and write data that the CPU is also reading and writing.
AMD isn't the only one working on unified memory access, though. NVIDIA's Maxwell GPUs promise to let other processors access the GPU's memory via Unified Virtual Memory, while Intel is prepping a DirectX extension named InstantAccess for its Haswell CPUs.