Chapter 4: Computer Organization and Architecture (Set-10)

A system is byte-addressable and uses 48-bit virtual addresses; the theoretical size of its virtual address space is

A 1 PB
B 16 TB
C 256 TB
D 4 GB
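To verify this kind of question quickly, note that a byte-addressable machine with n address bits can name 2^n bytes. A minimal Python sketch (the function name is our own):

```python
# Size of a byte-addressable virtual address space: 2**bits bytes.
def virtual_space_bytes(address_bits: int) -> int:
    return 2 ** address_bits

size = virtual_space_bytes(48)
terabytes = size // 2 ** 40  # 1 TB = 2**40 bytes
# 2**48 bytes = 2**8 TB = 256 TB, matching option C.
```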

A 32 KB cache with 64-byte blocks has how many cache lines

A 128 lines
B 512 lines
C 256 lines
D 1024 lines
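The arithmetic here is simply cache size divided by block size, as this sketch shows:

```python
# Number of cache lines = cache size / block size.
cache_size = 32 * 1024   # 32 KB in bytes
block_size = 64          # bytes per block
lines = cache_size // block_size
# 32768 / 64 = 512 lines, matching option B.
```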

For 64-byte cache blocks, how many block offset bits are required

A 4 bits
B 5 bits
C 6 bits
D 8 bits
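The offset field must be wide enough to address every byte in a block, so it needs log2(block size) bits. A quick check:

```python
import math

# Block offset bits = log2(block size in bytes).
block_size = 64
offset_bits = int(math.log2(block_size))
# log2(64) = 6 bits, matching option C.
```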

A 32 KB 4-way set-associative cache with 64-byte blocks has how many sets

A 128 sets
B 64 sets
C 256 sets
D 512 sets
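For a set-associative cache, divide total size by the bytes per set (associativity times block size). Sketched in Python:

```python
# Number of sets = cache size / (associativity * block size).
cache_size = 32 * 1024   # 32 KB in bytes
ways = 4                 # 4-way set associative
block_size = 64          # bytes per block
sets = cache_size // (ways * block_size)
# 32768 / (4 * 64) = 128 sets, matching option A.
```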

In a 4-way set-associative cache with 128 sets, the number of set index bits is

A 6 bits
B 8 bits
C 10 bits
D 7 bits
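As with the offset field, the index field needs log2(number of sets) bits:

```python
import math

# Set index bits = log2(number of sets).
sets = 128
index_bits = int(math.log2(sets))
# log2(128) = 7 bits, matching option D.
```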

In a write-back cache, a dirty block must be written to memory

A On every write
B On eviction only
C On every read
D Never written

In write-through caching, the biggest disadvantage compared to write-back is usually

A More cache hits
B Less memory traffic
C More memory writes
D No tag checks

Page fault service time is large mainly because it requires

A Register access
B ALU operation
C Cache hit check
D Disk or SSD access

The most direct purpose of a TLB in virtual memory systems is to

A Increase disk capacity
B Speed address translation
C Lower clock speed
D Replace cache memory

A TLB miss typically causes the CPU to

A Consult page table
B Skip the instruction
C Flush the cache
D Reset program counter

In a pipeline, a structural hazard happens when

A Branch changes flow
B Data dependency exists
C Hardware resource conflicts
D Cache hit occurs

A control hazard is mostly caused by

A Register shortage
B Branch instruction
C Cache replacement
D DMA transfer

A data hazard occurs when

A ALU is too fast
B Disk is full
C Instruction depends on result
D Bus is too wide

Forwarding reduces data hazard stalls because it

A Bypasses register write-back
B Delays all writes
C Removes branching
D Increases ROM size

Branch prediction improves pipeline performance mainly by

A Increasing cache size
B Changing ISA rules
C Lowering bus width
D Reducing control stalls

In a von Neumann design, the main bottleneck arises because

A Separate memories exist
B Same path for code/data
C ALU has no flags
D Registers are missing

Harvard architecture reduces contention mainly by

A No CPU clock
B No interrupts used
C Separate code/data paths
D No memory hierarchy

CPI is best defined as

A Cycles per instruction
B Cores per interrupt
C Cache per index
D Clocks per input

If CPI falls from 2.0 to 1.5 while the clock rate stays the same, the performance change is about

A 25% faster
B 33% faster
C 50% slower
D No change
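With a fixed clock, execution time scales with CPI, so the speedup is the ratio of old to new CPI. A worked check:

```python
# With a fixed clock rate, execution time is proportional to CPI,
# so speedup = old CPI / new CPI.
speedup = 2.0 / 1.5
percent_faster = (speedup - 1) * 100
# 2.0 / 1.5 is about 1.33, i.e. roughly 33% faster (option B).
```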

Amdahl’s law states overall speedup is limited mainly by the

A Cache size only
B Disk speed only
C Serial program part
D Bus width only

If 90% of a program is parallelizable, the maximum speedup with infinite cores is

A 9×
B 100×
C 90×
D 10×
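Amdahl's law gives speedup = 1 / ((1 - p) + p/n) for parallel fraction p and n cores; as n grows without bound, only the serial fraction remains. A small sketch (the helper function is our own):

```python
# Amdahl's law: speedup = 1 / ((1 - p) + p / n).
def amdahl(parallel_fraction: float, cores: float) -> float:
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

# As cores -> infinity, the parallel term vanishes and the
# serial fraction (here 10%) sets the ceiling.
limit = 1.0 / (1.0 - 0.9)
# limit = 10x (option D), no matter how many cores are added.
```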

In shared-memory multiprocessors, cache coherence is needed because

A RAM never changes
B Caches may hold stale data
C CPU has no registers
D Bus has no control lines

The main purpose of an instruction set (ISA) is to define

A Physical motherboard design
B Screen resolution rules
C Software-visible behavior
D Disk partition format

A typical RISC advantage for pipelining is that RISC often uses

A Fixed-length formats
B Variable-length formats
C No load/store
D No branch instructions

In classic RISC, memory is usually accessed using

A Any instruction type
B ALU instructions only
C Load/store only
D Branch instructions only

In cache terms, miss penalty mainly depends on

A Printer speed
B Lower-level access time
C Monitor refresh rate
D Keyboard latency
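The role of miss penalty is easiest to see in the average memory access time (AMAT) formula; the numbers below are illustrative assumptions, not from the question:

```python
# AMAT = hit time + miss rate * miss penalty.
# Example values are assumed for illustration only.
hit_time = 1.0        # cycles for a cache hit
miss_rate = 0.05      # 5% of accesses miss
miss_penalty = 100.0  # cycles to reach the lower level (e.g. DRAM)
amat = hit_time + miss_rate * miss_penalty
# amat = 6.0 cycles; the lower level's access time dominates the penalty.
```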

Increasing block size can improve hit rate due to

A Spatial locality
B Temporal locality
C Disk locality
D Printer locality

Too large a cache block may hurt performance mainly by causing

A More tag bits
B Faster RAM access
C More cache pollution
D Less miss penalty

In cache mapping, conflict misses are most common in

A Fully associative
B Direct mapped
C Larger RAM
D Write-through only

In set-associative caches, LRU replacement tends to work well mainly due to

A Spatial locality only
B Disk caching
C CPU word length
D Temporal locality

In a byte-addressable system, the lowest address bits are used for

A Tag selection
B Set selection
C Block offset
D Opcode decode
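Putting the earlier cache-geometry questions together, an address splits into tag, set index, and block offset, with the offset in the lowest bits. A sketch using the 32 KB, 4-way, 64-byte-block cache from above (the example address is arbitrary):

```python
import math

# Split a byte address into tag / set index / block offset fields.
block_size, sets = 64, 128
offset_bits = int(math.log2(block_size))  # 6: lowest bits pick the byte
index_bits = int(math.log2(sets))         # 7: next bits pick the set

address = 0x1234ABCD  # arbitrary example address
offset = address & (block_size - 1)
index = (address >> offset_bits) & (sets - 1)
tag = address >> (offset_bits + index_bits)
```

Reassembling `(tag << 13) | (index << 6) | offset` recovers the original address, which confirms the field boundaries.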

A non-maskable interrupt is typically used for

A Critical hardware faults
B Normal timer ticks
C Keyboard typing
D Printer ready

In interrupt handling, saving PC and flags is required mainly to

A Speed up DMA
B Resume correctly later
C Increase cache hits
D Change word length

A microcontroller differs from a microprocessor mainly because microcontrollers usually

A Have no control unit
B Use no ROM
C Include on-chip peripherals
D Use no ALU

SMT (simultaneous multithreading) improves utilization mainly because it

A Fills idle execution slots
B Doubles cache always
C Removes branch hazards
D Lowers RAM latency

A “cache hit” means the data is found in

A Disk storage
B Page file only
C Cache memory
D ROM firmware

A “TLB hit” means the required translation is found in

A Hard disk
B TLB cache
C ALU register
D Control bus

If both a TLB miss and a page fault occur, the main reason for the large delay is

A Extra ALU steps
B Larger bus width
C More registers used
D Disk read required

In a CPU, instruction throughput increases most directly when

A RAM decreases
B Disk increases
C CPI decreases
D ROM decreases

If clock speed increases but CPI increases equally, the instruction throughput

A Increases a lot
B Stays roughly same
C Drops to zero
D Doubles always
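Instruction throughput is roughly clock rate divided by CPI, so scaling both by the same factor cancels out. A quick sketch (the function name is our own):

```python
# Throughput (instructions/second) ~ clock rate / CPI.
def throughput(clock_hz: float, cpi: float) -> float:
    return clock_hz / cpi

base = throughput(2e9, 1.0)
scaled = throughput(3e9, 1.5)  # clock and CPI both up 1.5x
# base == scaled: throughput stays roughly the same (option B).
```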

In DMA operations, which component temporarily becomes bus master to control transfers

A CPU control unit
B Output unit
C Cache memory
D DMA controller

In caches, “write allocate” means on a write miss the cache

A Loads block into cache
B Ignores the write
C Writes only to disk
D Flushes entire cache

A common pairing is “write-back” with

A No allocation
B Read through
C Write allocate
D Disk paging

Which cache miss type happens when the cache is too small for the working set

A Compulsory miss
B Capacity miss
C Conflict miss
D TLB miss

The first access to a block causing a miss is called

A Conflict miss
B Capacity miss
C Compulsory miss
D Dirty miss

In a multi-level cache, L1 is usually smaller than L2 mainly because L1 must be

A Non-volatile
B Very high capacity
C Located on disk
D Extremely low latency

A CPU core that stalls frequently on memory accesses would benefit most from

A Smaller cache
B Better cache hierarchy
C Lower address width
D Fewer registers

A classic advantage of microprogrammed control is that it

A Removes control unit
B Eliminates memory access
C Simplifies adding instructions
D Eliminates registers

Hardwired control is often faster than microprogrammed control because it

A Uses fixed logic gates
B Uses disk storage
C Uses larger RAM
D Uses more paging

The best statement about “hard” CPU performance questions is that real speed depends on

A Clock only
B Many architecture factors
C MIPS only
D Cache only
