ACM-BOK

Text References:

BO = Computer Systems: A Programmer's Perspective, Bryant and O'Hallaron

UD Core:

Topics listed in italics are currently not covered as core material but may be covered at the instructor's discretion.

AR/Interfacing and I/O Strategies [core] - 3 hours - BO6.1, 8.1

Topics:

  • I/O fundamentals: handshaking and buffering
  • Interrupt mechanisms: vectored and prioritized, interrupt acknowledgment
  • Buses: protocols, arbitration, direct-memory access (DMA)
  • Examples of modern buses: e.g., PCIe, USB, Hypertransport

Learning objectives:

  • Appreciate the need for open- and closed-loop communication and the use of buffers to control dataflow.
  • Explain how interrupts are used to implement I/O control and data transfers.
  • Identify various types of buses in a computer system and understand how devices compete for a bus and are granted access to the bus.
  • Be aware of the progress in bus technology and understand the features and performance of a range of modern buses (both serial and parallel).
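The buffering and flow-control ideas above can be sketched in code. The following is a minimal illustrative model (not from the text): a bounded ring buffer decouples a fast producer, such as a device, from a slower consumer, with the full/empty checks standing in for handshaking and back-pressure. All names are hypothetical.

```python
class RingBuffer:
    """Toy bounded buffer between an I/O producer and a consumer."""

    def __init__(self, capacity):
        self.buf = [None] * capacity
        self.capacity = capacity
        self.head = 0   # next slot to read
        self.tail = 0   # next slot to write
        self.count = 0  # items currently buffered

    def put(self, item):
        """Producer side: refuse data when full (closed-loop flow control)."""
        if self.count == self.capacity:
            return False          # caller must wait and retry: back-pressure
        self.buf[self.tail] = item
        self.tail = (self.tail + 1) % self.capacity
        self.count += 1
        return True

    def get(self):
        """Consumer side: return None when empty."""
        if self.count == 0:
            return None
        item = self.buf[self.head]
        self.head = (self.head + 1) % self.capacity
        self.count -= 1
        return item
```

A rejected `put` is the software analogue of a device withholding its ready signal until the receiver has drained the buffer.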

AR/MemoryArchitecture [core] 5 hours - BO6

Topics:

  • Storage systems and their technology (semiconductor, magnetic)
  • Storage standards (CD-ROM, DVD)
  • Memory hierarchy, latency and throughput
  • Cache memories - operating principles, replacement policies, multilevel cache, cache coherency

Learning objectives:

  • Identify the memory technologies found in a computer and be aware of the way in which memory technology is changing.
  • Appreciate the need for storage standards for complex data storage mechanisms such as DVD.
  • Understand why a memory hierarchy is necessary to reduce the effective memory latency.
  • Appreciate that most data on the memory bus is cache refill traffic
  • Describe the various ways of organizing cache memory and appreciate the cost-performance tradeoffs for each arrangement.
  • Appreciate the need for cache coherency in multiprocessor systems
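The cache operating principles listed above (index selection, tags, replacement on conflict) can be made concrete with a toy direct-mapped cache model. This is an illustrative sketch, not any real cache's geometry; the line count, block size, and traces are made-up values.

```python
def simulate_direct_mapped(addresses, num_lines=4, block_size=16):
    """Return (hits, misses) for a sequence of byte addresses
    run through a direct-mapped cache with num_lines lines of
    block_size bytes each."""
    lines = [None] * num_lines   # each entry holds a tag, or None if empty
    hits = misses = 0
    for addr in addresses:
        block = addr // block_size   # which memory block this byte is in
        index = block % num_lines    # which cache line it must use
        tag = block // num_lines     # distinguishes blocks sharing a line
        if lines[index] == tag:
            hits += 1
        else:
            misses += 1
            lines[index] = tag       # direct-mapped: no replacement choice
    return hits, misses
```

Sequential byte addresses hit within each block (spatial locality), while two addresses that map to the same line evict each other on every access, which is the conflict-miss behavior that multilevel and set-associative organizations mitigate.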

AR/FunctionalOrganization [core] - 6 hours - BO4,5

Topics:

  • Review of register transfer language to describe internal operations in a computer
  • Microarchitectures - hardwired and microprogrammed realizations
  • Instruction pipelining and instruction-level parallelism (ILP)
  • Overview of superscalar architectures
  • Processor and system performance
  • Performance measures and their limitations
  • The significance of power dissipation and its effects on computing structures

Learning objectives:

  • Review the use of register transfer language to describe internal operations in a computer.
  • Understand how a CPU’s control unit interprets a machine-level instruction – either directly or as a microprogram.
  • Appreciate how processor performance can be improved by overlapping the execution of instructions through pipelining.
  • Understand the difference between processor performance and system performance (i.e., the effects of memory systems, buses and software on overall performance).
  • Appreciate how superscalar architectures use multiple arithmetic units to execute more than one instruction per clock cycle.
  • Understand how computer performance is measured by metrics such as MIPS or SPECmarks and the limitations of such metrics.
  • Appreciate the relationship between power dissipation and computer performance and the need to minimize power consumption in mobile applications.
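Two of the objectives above reduce to simple arithmetic that is worth working once: MIPS follows from clock rate and CPI, and an ideal k-stage pipeline finishes n instructions in k + (n - 1) cycles instead of n·k. The functions below are an illustrative sketch; the numbers in the comments are made up, not from the text.

```python
def mips(clock_hz, cpi):
    """Millions of instructions per second for a given clock rate and
    average cycles per instruction (CPI)."""
    return clock_hz / (cpi * 1e6)

def pipeline_speedup(n_instructions, k_stages):
    """Ideal pipeline speedup: n*k cycles unpipelined versus
    k + (n - 1) cycles pipelined (no stalls or hazards assumed)."""
    return (n_instructions * k_stages) / (k_stages + n_instructions - 1)
```

For example, a 2 GHz processor averaging 2 cycles per instruction delivers 1000 MIPS, and a 5-stage pipeline approaches but never reaches a speedup of 5, which is one reason raw MIPS figures mislead: they say nothing about how much work each instruction does.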

AR/Multiprocessing [core] - 6 hours - BO12, Supplemental

Topics:

  • Amdahl’s law
  • Short vector processing (multimedia operations)
  • Multicore and multithreaded processors
  • Flynn’s taxonomy: Multiprocessor structures and architectures
  • Programming multiprocessor systems
  • GPU and special-purpose graphics processors
  • Introduction to reconfigurable logic and special-purpose processors

Learning objectives:

  • Discuss the concept of parallel processing and the relationship between parallelism and performance.
  • Appreciate that multimedia values (e.g., 8-/16-bit audio and visual data) can be operated on in parallel in 64-bit registers to enhance performance.
  • Understand how performance can be increased by incorporating multiple processors on a single chip.
  • Appreciate the need to express algorithms in a form suitable for execution on parallel processors.
  • Understand how special-purpose graphics processors, GPUs, can accelerate performance in graphics applications.
  • Understand the organization of computer structures that can be electronically configured and reconfigured.

This section is covered loosely, in 1-2 hours total.
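Amdahl's law, the first topic in the multiprocessing unit above, is compact enough to state directly in code. This sketch uses the standard formulation: if a fraction p of the work can be sped up by a factor s, overall speedup is 1 / ((1 - p) + p/s). The example values are illustrative.

```python
def amdahl_speedup(p, s):
    """Overall speedup when a fraction p (0..1) of the work
    is accelerated by a factor s and the rest runs unchanged."""
    return 1.0 / ((1.0 - p) + p / s)
```

Even with 90% of a program parallelized, ten processors yield only about a 5.3x speedup, and no number of processors can exceed 10x: the serial fraction dominates, which motivates the unit's emphasis on expressing algorithms in a form suitable for parallel execution.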

AR/PerformanceEnhancements [elective] - 6 hours - BO5, Supplemental

Topics:

  • Branch prediction
  • Speculative execution
  • Superscalar architecture
  • Out-of-order execution
  • Multithreading
  • Scalability
  • Introduction to VLIW and EPIC architectures
  • Memory access ordering

Learning objectives:

  • Explain the concept of branch prediction and its use in enhancing the performance of pipelined machines.
  • Understand how speculative execution can improve performance.
  • Provide a detailed description of superscalar architectures and the need to ensure program correctness when executing instructions out-of-order.
  • Explain speculative execution and identify the conditions that justify it.
  • Discuss the performance advantages that multithreading can offer along with the factors that make it difficult to derive maximum benefits from this approach.
  • Appreciate the nature of VLIW and EPIC architectures, the differences between them, and how they differ from superscalar processors.
  • Understand how a processor re-orders memory loads and stores to increase performance
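The branch-prediction objective above can be illustrated with the classic 2-bit saturating counter. This is a minimal sketch of one counter; real predictors index a table of such counters by branch address. The initial state and the state encoding (0-3, predict taken when the counter is 2 or more) are conventional illustrative choices, not taken from the text.

```python
class TwoBitPredictor:
    """One 2-bit saturating counter: 0 = strongly not-taken,
    1 = weakly not-taken, 2 = weakly taken, 3 = strongly taken."""

    def __init__(self):
        self.state = 2  # start weakly taken (an arbitrary choice)

    def predict(self):
        """True means predict taken."""
        return self.state >= 2

    def update(self, taken):
        """Shift toward the actual outcome, saturating at 0 and 3."""
        if taken:
            self.state = min(3, self.state + 1)
        else:
            self.state = max(0, self.state - 1)
```

The two-bit hysteresis is the point: a single outcome against the trend does not flip the prediction, so a loop-closing branch mispredicts only once per loop exit rather than twice.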

UD - Optional

AR/Directions in Computing [elective] - 7 hours - Supplemental

Topics:

  • Semiconductor technology and Moore’s law
  • Limitations to semiconductor technology
  • Quantum computing
  • Optical computing
  • Molecular (biological) computing
  • New memory technologies

Learning objectives:

  • Appreciate the underlying physical basis of modern computing.
  • Understand how the physical properties of matter impose limitations on computer technology
  • Appreciate how the quantum nature of matter can be exploited to permit massive parallelism
  • Appreciate how light can be used to perform certain types of computation
  • Understand how the properties of complex molecules can be exploited by organic computers
  • Gain insight into trends in memory design, such as ovonic memory and ferromagnetic memories.
Page last modified on August 25, 2010, at 09:38 PM