The Best of the First Half
The most popular articles on Dr. Dobb's for the first half of the year, sprinkled with editors' choices of particularly meritorious pieces. Enjoy!
Will Parallel Code Ever Be Embraced?
The advent of the many-core era is not going to push developers to write more parallel code. That hasn't happened as we've gone from 1- to 2- to 4- to 8-core processors, has it? Writing parallel code...
Improving Futures and Callbacks in C++ To Avoid Synching by Waiting
In C++, futures are a great way of decomposing a program into concurrent parts, but a poor way of composing those parts into a responsive and scalable program. Microsoft's Parallel Pattern Library...
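
A quick illustration of the "synching by waiting" problem the article tackles, sketched with plain std::async and std::future rather than the PPL extensions the article discusses:

    // Minimal sketch (standard C++ only, not the article's PPL-based code):
    // decomposing work into a future is easy, but the caller then blocks at get().
    #include <future>
    #include <iostream>

    int expensive_work() { return 42; }   // stand-in for real work

    int main() {
        std::future<int> f = std::async(std::launch::async, expensive_work);

        // Composition problem: to use the result we wait here, tying up a
        // thread -- exactly the pattern continuation-style futures avoid.
        int result = f.get();
        std::cout << result << '\n';
    }
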
Parallel Evolution, Not Revolution
Not all parallel programming is fine-grained. But it's still parallel.
The OpenACC Execution Model
In this second part of the introduction to OpenACC, the OpenMP-style library for GPU programming, the execution model is explained and samples are benchmarked against straight OpenMP parallelism.
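
For readers new to the comparison, here is a rough sketch (not the article's benchmark code) of the same loop written once with an OpenACC pragma and once with OpenMP:

    // Illustrative only; the article benchmarks its own kernels.
    // Build with an OpenACC compiler (e.g. -acc) or an OpenMP compiler (-fopenmp).
    void saxpy_acc(int n, float a, const float *x, float *y) {
        #pragma acc parallel loop copyin(x[0:n]) copy(y[0:n])
        for (int i = 0; i < n; ++i)
            y[i] = a * x[i] + y[i];
    }

    void saxpy_omp(int n, float a, const float *x, float *y) {
        #pragma omp parallel for
        for (int i = 0; i < n; ++i)
            y[i] = a * x[i] + y[i];
    }
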
What's New in .NET Framework 4.5
From arrays that can now exceed 2 GB to enhanced background garbage collection, changes in this release of .NET provide immediately useful capabilities.
Parallel In-Place Merge
Merging sorted arrays in parallel and in place can be done very efficiently, using this algorithm. Comparisons with the performance of similar STL functions are included.
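
For context, the sequential STL baseline such an algorithm is typically measured against looks roughly like this (a sketch, not the article's code):

    // Sketch of the in-place STL baseline: two already-sorted halves of one
    // buffer merged with std::inplace_merge.
    #include <algorithm>
    #include <vector>

    void merge_sorted_halves(std::vector<int> &v, std::ptrdiff_t mid) {
        // Precondition: [0, mid) and [mid, v.size()) are each sorted.
        std::inplace_merge(v.begin(), v.begin() + mid, v.end());
    }
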
Creating and Using Libraries with OpenACC
How to write reusable methods (libraries, subroutines, and functions) that can transparently call optimized CPU and GPU libraries using OpenACC pragmas.
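
The general idea can be sketched as follows (simplified, assumed code rather than the article's own): a small routine is marked with #pragma acc routine so it can be called from inside an accelerated loop:

    // Illustrative sketch only; the function names are hypothetical.
    #pragma acc routine seq
    float scale(float x, float a) { return a * x; }

    void scale_array(int n, float a, float *data) {
        #pragma acc parallel loop copy(data[0:n])
        for (int i = 0; i < n; ++i)
            data[i] = scale(data[i], a);   // reusable routine called on the device
    }
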
AMD's Bold ARM Server Gambit
By combining 64-bit ARM processors with server-side technology, the company that led the x86 architecture into the 64-bit world is hoping to reinvent the data center and give itself new life.
Cache-Friendly Code: Solving Manycore's Need for Faster Data Access
As the number of cores in multicore chips grows (Intel is poised to release the 50+ core Xeon Phi), ensuring that program data can be delivered fast enough to be consumed by so many processors is a...
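
A textbook example of the kind of data-access issue involved (not code from the article): traversing a matrix along its rows keeps accesses within cache lines, while column-order traversal strides through memory:

    // Classic cache-friendliness illustration; assumes a non-empty,
    // rectangular matrix stored row by row.
    #include <vector>

    long long sum_row_major(const std::vector<std::vector<int>> &m) {
        long long s = 0;
        for (std::size_t i = 0; i < m.size(); ++i)       // sequential, cache-friendly
            for (std::size_t j = 0; j < m[i].size(); ++j)
                s += m[i][j];
        return s;
    }

    long long sum_col_major(const std::vector<std::vector<int>> &m) {
        long long s = 0;
        for (std::size_t j = 0; j < m[0].size(); ++j)    // strided, cache-hostile
            for (std::size_t i = 0; i < m.size(); ++i)
                s += m[i][j];
        return s;
    }
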
Intel's 50-Core Xeon Phi: The New Era of Inexpensive Supercomputing
The advent of Intel's massively parallel coprocessor will make every server a supercomputer.
Scaling Up And Out
Most attention today is focused on adding nodes or cloud instances to scale out systems. Guest editor Nikita Shamgunov emphasizes the importance of scaling systems vertically as well.
Introduction to CUDA C/C++
Nvidia's Mark Ebersole introduces the core concepts of heterogeneous computing with CUDA C/C++ in this 30-minute tutorial.
Heterogeneous Programming
AMD's Ben Sander shares details about the Heterogeneous System Architecture (HSA) and how it will change the way people program in the future.
Programming Intel's Xeon Phi: A Jumpstart Introduction
Reaching one teraflop on Intel's new 60-core coprocessor requires a little know-how.
Introduction to OpenCL [video]
Ben Gaster from AMD Research talks about the design and use of the OpenCL language, which Apple, Intel, and Nvidia, among other companies, have embraced to accelerate programs.
CUDA vs. Phi: Phi Programming for CUDA Developers
Both CUDA-capable GPUs and Phi coprocessors provide high degrees of parallelism that can deliver excellent application performance. For the most part, CUDA programmers with existing application code have already...
The Best of 2012
The most popular articles of the past 12 months from Dr. Dobb's, plus some additional pieces chosen for your thoughtful consideration by our staff.
Comparing OpenCL, CUDA, and OpenACC [video]
Rob Farber takes you on a tour of the paths to massively parallel x86, multi-GPU, and CPU+GPU applications.