Chapter 4: Computer Organization and Architecture (Set-9)

A CPU has a 32-bit address bus and is byte-addressable; ignoring reserved regions, the theoretical maximum directly addressable memory space is

A) 2 GB
B) 8 GB
C) 16 GB
D) 4 GB

If a system uses word-addressable memory where each address refers to a 4-byte word, how much memory can a 16-bit address bus directly address?

A) 256 KB
B) 64 KB
C) 1 MB
D) 4 MB
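The two addressability questions above can be verified with a quick calculation (a minimal Python sketch; the variable names are illustrative):

```python
# Byte-addressable, 32-bit addresses: each address names one byte.
byte_capacity = 2 ** 32                    # total bytes
print(byte_capacity // 2 ** 30, "GB")      # 4 GB

# Word-addressable, 16-bit addresses, 4-byte words:
# each address names a whole 4-byte word.
word_capacity = 2 ** 16 * 4                # total bytes
print(word_capacity // 2 ** 10, "KB")      # 256 KB
```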

In a 4 KB direct-mapped cache with 16-byte blocks, the number of cache lines is

A) 64 lines
B) 128 lines
C) 256 lines
D) 512 lines

For the same 4 KB cache with 16-byte blocks, how many bits does the block offset require?

A) 2 bits
B) 8 bits
C) 16 bits
D) 4 bits

In that 4 KB direct-mapped cache, how many bits does the index field need?

A) 8 bits
B) 4 bits
C) 6 bits
D) 12 bits
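The three direct-mapped cache questions above all follow from the same geometry; a minimal Python sketch of the arithmetic (names are my own):

```python
import math

cache_size = 4 * 1024       # 4 KB cache
block_size = 16             # 16-byte blocks

lines = cache_size // block_size          # number of cache lines
offset_bits = int(math.log2(block_size))  # bits selecting a byte within a block
index_bits = int(math.log2(lines))        # bits selecting a cache line

print(lines, offset_bits, index_bits)     # 256 4 8
```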

In a 4-way set-associative cache of 4 KB with 16-byte blocks, the number of sets is

A) 16 sets
B) 64 sets
C) 32 sets
D) 256 sets

In that 4-way set-associative cache, how many bits does the set index need?

A) 4 bits
B) 8 bits
C) 10 bits
D) 6 bits
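For the set-associative version, the only change is that lines are grouped four per set; a sketch under the same assumptions:

```python
import math

cache_size = 4 * 1024      # 4 KB cache
block_size = 16            # 16-byte blocks
ways = 4                   # 4-way set associative

sets = cache_size // (block_size * ways)  # lines grouped 4 per set
set_index_bits = int(math.log2(sets))     # bits selecting a set

print(sets, set_index_bits)               # 64 6
```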

Which cache organization requires comparing the tag of a memory block with the tags in all cache lines during lookup?

A) Fully associative
B) Direct mapped
C) Set associative
D) Write back

A key reason fully associative caches are uncommon for large caches is that they

A) Need no tags
B) Increase disk space
C) Require many comparators
D) Reduce bus width

In virtual memory, a page fault occurs when

A) Cache hit happens
B) Disk becomes full
C) CPU changes opcode
D) Page not in RAM

A major performance cost of a page fault is that it involves

A) Register transfer only
B) Disk access delay
C) Faster cache lookup
D) Higher clock speed

Which concept best explains why caches improve average access time without increasing RAM speed?

A) Monitor resolution
B) Disk formatting
C) Network routing
D) Temporal locality

Spatial locality means that when a memory location is accessed, nearby locations are likely to be

A) Accessed soon
B) Never accessed
C) Stored in ROM
D) Sent to printer

In a pipeline, a control hazard is most closely related to

A) Data dependency
B) Cache replacement
C) Branch instruction
D) Disk scheduling

A typical method to reduce the control hazard penalty is

A) Lower bus width
B) Remove cache
C) Reduce word length
D) Branch prediction

A pipeline data hazard can be reduced using forwarding because forwarding

A) Deletes instructions
B) Sends results early
C) Slows down ALU
D) Removes registers

If instruction latency stays the same but throughput increases, the most likely technique used is

A) Pipelining
B) Disk compression
C) File encryption
D) Screen scaling

Which metric better reflects performance for continuous workloads with many tasks?

A) Latency
B) Address width
C) Word size
D) Throughput

Which metric is more critical for a single quick response, such as reading one memory value once?

A) Throughput
B) Cache size
C) Latency
D) MIPS

A CPU with the same clock speed but a better cache hit rate often performs faster mainly because it has

A) Fewer memory waits
B) More stalls
C) Less register use
D) Smaller bus width

When comparing two CPUs using MIPS, a major limitation is that MIPS

A) Measures cache size
B) Always equals FLOPS
C) Measures bus width
D) Ignores instruction mix

In CISC vs RISC, a key RISC feature that helps pipelining is often

A) Variable-length instructions
B) Fixed-length instructions
C) No registers available
D) No branch instructions

In cache write policies, “write-through” means

A) Write cache and memory
B) Write only in cache
C) Write only to disk
D) Never update memory

In cache write policies, write-back typically reduces memory traffic because it

A) Writes every time
B) Disables cache hits
C) Doubles bus width
D) Writes on eviction

In a write-back cache, a “dirty bit” indicates that a cache block

A) Is empty now
B) Has wrong tag
C) Differs from memory
D) Has parity error

A common replacement policy used in set-associative caches to reduce misses is

A) LRU
B) FIFO
C) ASCII
D) JPEG

If the cache line size is increased too much, one possible negative effect is

A) More spatial locality
B) More address bits
C) Faster disk access
D) More wasted bandwidth

In a byte-addressable system, the lowest address bits are commonly used as

A) Tag bits
B) Set bits
C) Offset bits
D) Opcode bits

During an interrupt, which CPU action helps ensure a correct return to the interrupted program?

A) Clears cache
B) Saves PC and flags
C) Changes clock speed
D) Flushes disk buffers

A non-maskable interrupt (NMI) is typically used for

A) Normal keyboard input
B) Screen refresh control
C) Printer status check
D) Critical hardware events

In DMA, the CPU is freed because the DMA controller handles

A) Data transfer cycles
B) Instruction decode
C) Cache replacement
D) Register renaming

DMA is most beneficial for transferring

A) Single byte input
B) One instruction only
C) Large data blocks
D) Small register values

In multiprocessor systems, “shared memory” means processors

A) Access common RAM
B) Use separate RAM
C) Share only cache
D) Share only registers

A major challenge in shared-memory multiprocessors is ensuring

A) Screen brightness
B) Printer alignment
C) Disk formatting
D) Cache coherence

In general, increasing the number of CPU cores improves performance most when

A) Program is single-threaded
B) Program is parallel
C) Disk is full
D) Cache is disabled

Which best describes why RISC may need more instructions than CISC for the same task?

A) No memory access
B) No control unit
C) No addressing modes
D) Simpler instructions

In the instruction cycle, the “effective address” is typically finalized during the

A) Execute stage
B) Fetch stage
C) Decode stage
D) Store stage

For an indexed addressing mode, the effective address is commonly formed by

A) Tag plus index
B) Opcode plus flags
C) Offset minus cache
D) Base plus index

In a pipeline, structural hazards occur when

A) Branch taken
B) Hardware resources conflict
C) Data dependency exists
D) Cache miss happens

A key benefit of Harvard architecture over von Neumann in basic design is

A) Separate code and data
B) No registers needed
C) No interrupts possible
D) Only one bus used

The von Neumann bottleneck mainly refers to the limitation caused by

A) Small keyboard
B) Large cache size
C) Shared bus for code/data
D) Many registers

In CPU performance, “CPI” refers to

A) Cache per index
B) Cores per interrupt
C) Clocks per input
D) Cycles per instruction

If CPI decreases while clock speed stays the same, the CPU’s performance generally

A) Increases
B) Decreases
C) Always stays constant
D) Always becomes slower
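The CPI question follows from the classic performance equation, time = instructions × CPI / clock rate. A small Python sketch with made-up numbers (the instruction count and 2 GHz clock are hypothetical):

```python
def cpu_time(instructions, cpi, clock_hz):
    # Classic CPU performance equation: time = IC * CPI / f
    return instructions * cpi / clock_hz

ic, f = 1_000_000, 2e9              # hypothetical program and 2 GHz clock
t_high_cpi = cpu_time(ic, 2.0, f)
t_low_cpi = cpu_time(ic, 1.5, f)    # lower CPI, same clock
print(t_low_cpi < t_high_cpi)       # True: lower CPI means less time
```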

In cache, a “hit” means the requested data

A) Is on disk
B) Found in cache
C) Not in memory
D) Causes page fault

In cache, a “miss penalty” is mainly the time to

A) Print the result
B) Increase bus width
C) Change instruction set
D) Fetch from lower level

A common reason L1 cache is smaller than L2 cache is that L1 must be

A) Very large capacity
B) Always non-volatile
C) Extremely fast
D) Stored on disk

In multicore CPUs, “cache coherence” protocols are needed because

A) Caches may hold different data
B) Cores share keyboard
C) RAM never changes
D) ALU becomes slow

A CPU that supports simultaneous multithreading (SMT) improves utilization mainly by

A) Adding more RAM
B) Removing cache memory
C) Slowing clock speed
D) Sharing core resources

In performance analysis, Amdahl’s law mainly highlights that speedup is limited by

A) Fastest component only
B) Serial portion of work
C) Cache size always
D) Disk speed only

If a program is 80% parallelizable, the maximum theoretical speedup as cores approach infinity is

A) 5×
B) 1.25×
C) 10×
D) 80×
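The 5× figure follows directly from Amdahl’s law: with serial fraction s, speedup on n cores is 1 / (s + (1 − s)/n), which tends to 1/s as n grows. A minimal Python check:

```python
def amdahl_speedup(parallel_fraction, cores):
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# 80% parallel: the serial 20% caps speedup at 1 / 0.2 = 5x.
for n in (2, 8, 1_000_000):
    print(n, round(amdahl_speedup(0.8, n), 2))
```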
