Sandeep Shrestha
Dr. Wiedemeier
CSCI 4012
10/24/2018
Article Review of “The Future of Microprocessors” by Olukotun and Hammond
“The Future of Microprocessors” provides insight into the development of powerful, extremely fast microprocessors and explains the various techniques used to increase their speed. It also highlights the challenges microprocessors currently face and their future prospects. The article begins with the factors that have improved the performance of modern microprocessors since their invention: first, the improvement in transistor speed over time, and second, the increasing number of transistors on processor chips, as described by Moore's Law. The performance of microprocessors has grown exponentially over the past decades, driven by transistor speed and energy scaling, as well as by architectural advances that exploited the transistor density gains of Moore's Law. Microprocessors were invented in the 1970s, and it is difficult today to believe that any of the early inventors could have conceived of their extraordinary development in structure and use over the past 40 years. Microprocessors today not only involve complex microarchitectures and multiple execution engines (cores) but have grown to include all sorts of additional functions, including floating-point units, caches, memory controllers, and media-processing engines. However, the defining characteristics of a microprocessor remain the same, with very few exceptions and modifications: microprocessors continue to implement the conventional von Neumann computational model. In this article, the authors draw on substantial research to broaden readers' understanding of the technologies behind microprocessors and how they function.
The article outlines a variety of modifications within processors designed to increase the number of instructions processed per cycle. The first is pipelining, which divides the execution of each instruction into a sequence of stages. Slicing instructions into larger numbers of increasingly small steps reduces the amount of logic that must switch during each clock cycle, which in turn allows designers to raise clock rates. Superscalar processing is another technique, one that executes multiple instructions from a single, conventional instruction stream on each cycle. Superscalar processors function by dynamically examining sets of instructions from the instruction stream to find ones capable of parallel execution on each cycle. However, further advances in pipelining and superscalar processing are limited because they require ever larger numbers of transistors to be integrated into the high-speed central logic within each processor core. Another issue that arises in their advancement is power: as these techniques developed, systems relying on deep pipelines and superscalar execution began to consume more and more watts, and the overall effect was that each subsequent processor generation required exponentially more power....
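To make the throughput benefit of pipelining concrete, the short sketch below (my own illustration, not code from the article) models an idealized five-stage pipeline with no stalls or hazards; the stage names and the instruction count of 100 are assumptions chosen only for this example.

```python
# Illustrative sketch, not from the article: counting clock cycles for an
# idealized 5-stage in-order pipeline with no stalls or hazards.
# The stage names and instruction count are assumptions for the example.

STAGES = ["IF", "ID", "EX", "MEM", "WB"]  # fetch, decode, execute, memory, write-back

def cycles_unpipelined(num_instructions: int) -> int:
    # Without pipelining, each instruction occupies the whole datapath by itself,
    # so the total is instructions x stages.
    return num_instructions * len(STAGES)

def cycles_pipelined(num_instructions: int) -> int:
    # With pipelining, a new instruction enters every cycle; once the pipeline
    # is full, one instruction completes per cycle, so N instructions take
    # N + (stages - 1) cycles.
    return num_instructions + len(STAGES) - 1

if __name__ == "__main__":
    n = 100
    print("unpipelined:", cycles_unpipelined(n), "cycles")  # 500
    print("pipelined:  ", cycles_pipelined(n), "cycles")    # 104
```

The same toy model also hints at what superscalar execution adds: issuing several independent instructions per cycle would, in the ideal case, divide the pipelined cycle count further, which is exactly the kind of instruction-level parallelism the authors describe as increasingly expensive to extract.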