One of the main challenges in scaling multi-core for the future is that of migrating programming tools, build environments, and millions of lines of existing code to new parallel programming models or compilers. To help this transition, Intel researchers are developing "Ct," or C/C++ for Throughput Computing.
In principle, Ct works with any standard C++ compiler because it is a standards-compliant C++ library (with a substantial runtime behind the scenes). Initializing the Ct library loads a runtime that includes the compiler, threading runtime, and memory manager: essentially all the components needed for threaded and/or vectorized code generation. Ct code is dynamically compiled, so the runtime tries to aggregate as many smaller tasks or data-parallel work quanta as possible, minimizing threading overhead and controlling granularity according to run-time conditions. With the Ct dynamic engine, one gets precise inter-procedural traces to compile, which is extremely useful in the highly modularized and indirect world of object-oriented programming.
Intel Ct whitepaper published
Posted on Tuesday, April 14 2009 @ 2:31 CEST by Thomas De Maesschalck