An Introduction to Multiprocessor Systems

Updated: 12-11-2006

 

Conclusion

Modern microprocessors are incredibly complex by any standard. However, multiprocessor systems take this complexity to a new level. Not only must the individual parts work correctly, but all of their interactions must be carefully studied; odd behavior in one part of a system can lead to seemingly unrelated problems elsewhere that are devilishly hard to track down. This complexity is precisely why most discussions of multiprocessor systems are somewhat incomplete or otherwise abridged. Given that multicore systems are now the norm, it is essential to understand the major aspects of multiprocessor design. This article has mainly dealt with the fundamentals: the memory hierarchy, cache coherency and scalability issues, along with a brief discussion of system topology.

However, there are many other issues that we have not addressed: deadlock and assorted race conditions, the cost of various features, and, perhaps most important of all, reliability. As systems grow larger, the probability of failure rises much faster than the component count, because failures can arise not just from individual parts but from the combinatorially growing number of interactions between them. This means that errors which occur once every 10 years on a commodity 4-processor system might be a monthly event on a larger 64-processor machine; the rough sketch below gives a sense of this scaling. Mainstream microprocessors from Intel and AMD currently host 2-4 cores, while some specialized processors, notably Sun's Niagara, feature up to 8. Thanks to Moore's Law, many of the scalability problems facing the system designers of today will arrive at the doorstep of microprocessor architects in the future, as processors host 8-16 cores or more. In any case, readers should now be able to look at multiprocessor designs more intelligently and understand the trade-offs that engineers make to balance performance, power, cost and other factors in the world of servers and microprocessors.
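To put a rough number on that reliability claim, the short sketch below compares two simple failure models: one where the error rate grows linearly with processor count, and one where it grows with the number of processor pairs, a crude stand-in for interaction-driven failures. The 10-year baseline and both scaling models are illustrative assumptions for this example, not figures from any vendor or from the article itself.

```python
# Back-of-the-envelope failure-rate scaling.
# Illustrative assumptions only: the 10-year baseline and both scaling
# models (linear in processor count, or proportional to processor pairs)
# are hypothetical, chosen to show how quickly rare errors become common.
from math import comb

DAYS_PER_YEAR = 365.25

def scaled_mtbf_days(base_mtbf_years, base_cpus, cpus, pairwise=False):
    """Scale a small system's mean time between errors up to a larger system.

    pairwise=False: the error rate grows linearly with processor count.
    pairwise=True:  the error rate grows with the number of processor pairs
                    (n choose 2), a crude proxy for interaction-driven bugs.
    """
    if pairwise:
        scale = comb(cpus, 2) / comb(base_cpus, 2)
    else:
        scale = cpus / base_cpus
    return base_mtbf_years * DAYS_PER_YEAR / scale

# An error seen once every 10 years on a 4-processor system...
print(f"64 CPUs, linear scaling:   one error every {scaled_mtbf_days(10, 4, 64):.0f} days")
print(f"64 CPUs, pairwise scaling: one error every {scaled_mtbf_days(10, 4, 64, pairwise=True):.0f} days")
```

Even the milder linear model turns a once-a-decade error into a several-times-a-year event (roughly every 228 days), and counting pairwise interactions pushes it to about every 11 days, which is the month-or-worse scale described above.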
