The current trend in microprocessor technology is to take advantage of Moore's Law (which states that the number of transistors per unit area on a chip doubles approximately every 18 months) by including an increasing number of processor cores on a single integrated circuit. Here's an interesting article by a group of Berkeley researchers that discusses parallel computing and why continuing to double the number of processor cores is likely to meet with diminishing returns.
A couple of quotes I found interesting:
"Multicore will obviously help multiprogrammed workloads, which contain a mix of independent sequential tasks, but how will individual tasks become faster? Switching from sequential to modestly parallel computing will make programming much more difficult without rewarding this greater effort with a dramatic improvement in power-performance. Hence, multicore is unlikely to be the ideal answer."
"Since real world applications are naturally parallel and hardware is naturally parallel, what we need is a programming model, system software, and a supporting architecture that are naturally parallel."
This paper has me thinking about parallel programming models and the meaning of “naturally parallel.” For example, there are existing languages, such as Verilog, that are used to describe hardware systems and, as such, must be naturally parallel. However, I certainly wouldn’t want to write application software in a language like that. Instead, we'd want a naturally parallel language at a much higher level of abstraction. Even so, can we ever remove the essential complexity involved in coordinating multiple simultaneous threads of execution? And will we, as developers, forever need to explicitly parallelize applications, or should we eventually be able to work in a comfortable sequential programming model and have tools analyze the inherent parallelism and take care of the translation automatically?
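To make the explicit-parallelism burden concrete, here's a minimal sketch in Python (the function names and the choice of `concurrent.futures` are my own illustration, not anything from the article). Even for a trivially data-parallel task, today's model forces the programmer to partition the work and manage a worker pool by hand:

```python
# Hypothetical illustration: even a trivially data-parallel task requires
# the programmer to explicitly create workers and distribute the work.
from concurrent.futures import ThreadPoolExecutor

def square(n):
    return n * n

def parallel_squares(numbers, workers=4):
    # Nothing in the sequential expression [n * n for n in numbers] implies
    # a pool size or a work distribution strategy -- the programmer must
    # supply both explicitly.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(square, numbers))

if __name__ == "__main__":
    data = list(range(10))
    print(parallel_squares(data))
```

The hope expressed above is that tooling would eventually derive the `ThreadPoolExecutor` plumbing from the sequential version automatically.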
Perhaps these issues are a symptom of relying on imperative programming languages, and the solution will lie instead with declarative programming languages. To take advantage of the increasing number of processor cores, we may need to move from specifying how to perform a computation to specifying what should be computed. It’ll be interesting to see what develops in this area over the next few years. After all, 16- and 32-way processors are not that far off and, if this paper is right, we will experience diminishing returns at that point, and software developers will be in for some big changes.