Exam 2 Study Guide

Exam 2 is closed-book and closed-notes. No electronic devices are allowed. You will be provided with a Reference Card listing RISC-V instructions and other useful information for the exam.

This study guide highlights some of the major topics we have covered so far, but it is not an exhaustive list. Material covered in lectures and labs but not included here may also appear on the exam.

The focus of the exam will be on the material since the first exam, but you should also be familiar with the material in the Exam 1 study guide.

Logic Design

Digital Signals & Abstraction

Logic Gates and Combinational Circuits

Logic gates are the fundamental building blocks of digital circuits; each gate computes a Boolean function of its inputs. Their outputs respond to input changes continuously (after a small propagation delay).

A combinational circuit is an acyclic (loop-free) network of logic gates; its outputs depend only on its current inputs.

Standard Digital Components

These are common circuits built from logic gates.


Sequential Logic

1. Combinational vs. Sequential Logic

2. The Hierarchy of Storage Elements

The Bistable Element (The Physics)

SR Latch (Set-Reset)

D Latch (Level-Sensitive)

D Flip-Flop (Edge-Triggered)

Registers and Clocks

The RISC-V Register File

The RISC-V Pipelined Processor

Pipelining Core Concept

Divide long combinational logic into smaller independent stages, separated by pipeline registers.

Clock rate is limited by the slowest (longest-delay) pipeline stage.
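As a worked example, the clock period must cover the slowest stage. The stage delays and register overhead below are illustrative assumptions, not values from the course:

```python
# Hypothetical stage delays in picoseconds (illustrative, not course data).
stage_delays_ps = {"IF": 250, "ID": 150, "EX": 300, "MEM": 280, "WB": 100}
register_overhead_ps = 20  # assumed pipeline-register setup + clock-to-Q delay

# The clock period is set by the slowest stage plus register overhead.
clock_period_ps = max(stage_delays_ps.values()) + register_overhead_ps  # 320 ps
clock_rate_ghz = 1000 / clock_period_ps  # 3.125 GHz
```

Note that speeding up any stage other than EX (the slowest here) would not change the clock rate at all.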

Pipelining divides instruction execution into 5 stages (IF → ID → EX → MEM → WB), overlapping multiple instructions to increase throughput while slightly increasing latency due to pipeline register overhead.

Key metrics: Throughput and Latency. In our pipelined architecture, each instruction has a latency of 5 cycles. Assuming no bubbles are injected, the throughput is one instruction per cycle.
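These two metrics can be sketched with a small cycle count (5-stage depth assumed, ideal pipeline):

```python
def pipeline_cycles(n_instructions, depth=5, bubbles=0):
    """Total cycles to complete n_instructions on a pipeline:
    `depth` cycles for the first instruction to drain through,
    then one cycle per additional instruction, plus any bubbles."""
    return depth + (n_instructions - 1) + bubbles

print(pipeline_cycles(1))     # 5  -> latency of a single instruction
print(pipeline_cycles(1000))  # 1004 -> throughput approaches 1 instr/cycle
```

For large instruction counts the `depth - 1` startup cost is negligible, which is why throughput is quoted as one instruction per cycle.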

The 5 Stages

Stage What happens
IF Fetch instruction from memory using PC; compute PC+4
ID Decode instruction, read registers, extract immediate
EX ALU operation, branch evaluation, jump target calculation
MEM Read/write data memory (loads and stores)
WB Write result back to register file

Hazards & Solutions

Data Hazard (RAW): A later instruction reads a register that an earlier instruction hasn’t written yet. → Forwarding bypasses the value from EX, MEM, or WB back to ID (zero penalty). Nearest stage has priority.
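The nearest-stage-wins priority can be sketched as a simple selection, checking EX before MEM before WB. This is a simplification (a hypothetical helper that ignores whether the instruction in each stage actually writes a register):

```python
def forward_source(src_reg, ex_dest, mem_dest, wb_dest):
    """Pick the nearest pipeline stage whose destination register matches
    src_reg; return the stage name, or None to read the register file.
    x0 (register 0) is hardwired to zero and never forwarded."""
    if src_reg == 0:
        return None
    for stage, dest in (("EX", ex_dest), ("MEM", mem_dest), ("WB", wb_dest)):
        if dest == src_reg:
            return stage  # nearest (youngest) matching stage wins
    return None
```

If the same register is pending in several stages, EX holds the youngest value, so it must take priority.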

Load-Use Hazard: Special type of RAW hazard. A lw produces its value in MEM, but the next instruction needs it in ID — forwarding alone can’t bridge this gap. → Stall 1 cycle (IF/ID stall, EX gets bubble), then forward from MEM.

Control Hazard (Branch Misprediction): Branch outcome isn’t known until EX stage. We predict not taken (fetch PC+4). If wrong: → 2-cycle penalty — inject bubbles into ID and EX to remove mispredicted instructions from the pipeline.

Pipeline Register Modes

Performance

CPI = 1.0 + B/I, where B is the total number of bubbles and I is the number of instructions (i.e., 1 plus the average number of bubbles per instruction).

CPI contribution of each hazard = (fraction of instructions of that type) × (rate at which the hazard occurs) × (bubbles per occurrence). Summing these contributions onto the base CPI of 1 gives a typical CPI ≈ 1.2.
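A worked CPI calculation using this formula, with a hypothetical instruction mix and hazard rates (the numbers below are illustrative, not from the course):

```python
# Assumed mix: 25% loads, of which 40% cause a load-use stall (1 bubble);
# 15% branches, of which 50% are mispredicted (2 bubbles each).
load_frac, load_use_rate, load_bubbles = 0.25, 0.40, 1
branch_frac, mispredict_rate, branch_bubbles = 0.15, 0.50, 2

cpi = (1.0
       + load_frac * load_use_rate * load_bubbles        # +0.10
       + branch_frac * mispredict_rate * branch_bubbles) # +0.15

print(cpi)  # 1.25
```

Each hazard type adds its own term, so improving branch prediction reduces only the branch term.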


The Memory Subsystem

SRAM vs DRAM

Feature SRAM DRAM
Speed Faster Slower
Cost More expensive Cheaper
Structure 6-transistor cell (cross-coupled inverters) 1 transistor + 1 capacitor per bit
Volatility Volatile Volatile
Refreshing No need Requires periodic refreshing
Use Case Cache memory Main memory

Principle of Locality

Locality refers to the tendency of programs to access the same set of memory locations repeatedly over a short period of time.

Temporal Locality

Spatial Locality
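A classic loop exhibits both kinds of locality. (A sketch: in Python the memory layout is only suggestive, but in a language with contiguous arrays this access pattern maps directly to consecutive addresses.)

```python
data = list(range(1024))

total = 0
for x in data:   # elements visited in order -> spatial locality
    total += x   # `total` reused every iteration -> temporal locality

print(total)  # 523776
```

Caches exploit exactly this pattern: fetching a whole block serves the sequential accesses, and keeping recently used data serves the repeated ones.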


Role of Caches


Types of Cache Misses

Miss Type Description
Cold (Compulsory) The first time data is accessed; it’s not yet in the cache.
Conflict A block is evicted because multiple blocks map to the same set (in direct-mapped or set-associative caches), even though the cache as a whole is not full.
Capacity Cache cannot hold all the data being used; some data is evicted and reloaded.

LRU Replacement Algorithm
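LRU evicts the line that has gone unused the longest. A toy model of a single cache set, tracking recency with an ordered dictionary (names like `LRUSet` are illustrative, not from the course):

```python
from collections import OrderedDict

class LRUSet:
    """Toy model of one cache set with LRU replacement."""
    def __init__(self, n_ways):
        self.n_ways = n_ways
        self.lines = OrderedDict()  # tag -> data; least recently used first

    def access(self, tag):
        """Return True on a hit, False on a miss (filling the line)."""
        if tag in self.lines:
            self.lines.move_to_end(tag)      # hit: mark most recently used
            return True
        if len(self.lines) >= self.n_ways:
            self.lines.popitem(last=False)   # miss: evict least recently used
        self.lines[tag] = None               # fill with the new block
        return False
```

Real hardware approximates this ordering with a few recency bits per set rather than a full list.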


Performing a Cache Lookup

To perform a lookup:

  1. Extract the set index from the address.
  2. Search the set for a line where the valid bit is set and the tag matches.
  3. If found, it’s a hit. If not, it’s a miss.
  4. On a miss, fetch from lower memory and update the cache using the replacement policy.
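The steps above can be sketched as a lookup function. The address split (tag / set index / block offset) is parameterized; the representation of `cache` as a dict of sets is an assumption for illustration:

```python
def lookup(cache, addr, block_bits, set_bits):
    """Set-associative cache lookup following the steps above.
    `cache` maps set index -> list of (valid, tag) lines."""
    set_index = (addr >> block_bits) & ((1 << set_bits) - 1)  # step 1
    tag = addr >> (block_bits + set_bits)
    for valid, line_tag in cache.get(set_index, []):          # step 2
        if valid and line_tag == tag:
            return "hit"                                      # step 3
    return "miss"  # step 4 (fetch + replacement) would happen here
```

For example, with 16-byte blocks (`block_bits=4`) and 4 sets (`set_bits=2`), address 212 decomposes into tag 3, set 1, offset 4.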

Cache Terminology

Term Definition
Line Smallest unit of data storage in a cache.
Set A group of one or more cache lines.
Block A chunk of data transferred between memory and cache.
Tag Part of address used to identify memory blocks stored in cache lines.
Valid Bit Indicates whether the data in the cache line is valid.

Cache Mapping Techniques

Type Description
Direct-Mapped Each memory block maps to exactly one cache line. Fast but prone to conflict misses.
Fully Associative Any memory block can go into any line. Low conflict misses but slower and more hardware.
n-Way Set Associative Cache is divided into sets, each with n lines. A compromise between the two extremes.
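For a concrete (hypothetical) configuration, the address split follows from the geometry. The cache parameters below are assumed for illustration:

```python
import math

# Hypothetical cache: 4 KiB total, 64-byte blocks, 4-way set associative,
# 32-bit addresses.
cache_bytes, block_bytes, ways, addr_bits = 4096, 64, 4, 32

n_sets = cache_bytes // (block_bytes * ways)       # 16 sets
offset_bits = int(math.log2(block_bytes))          # 6 bits select byte in block
index_bits = int(math.log2(n_sets))                # 4 bits select the set
tag_bits = addr_bits - offset_bits - index_bits    # 22 bits identify the block
```

Making the cache fully associative would shrink `index_bits` to 0 and push those bits into the tag; direct-mapped is the other extreme (ways = 1).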

Write Policies

Policy Description
Write-Through Updates are written to both the cache and main memory immediately. Easy to implement, slower writes.
Write-Back Updates are only written to cache initially. Written back to memory when line is evicted. Faster, but requires modified (dirty) bit tracking.

Cache Summary