Chapter 4: Computer Organization and Architecture (Set-8)
In a system bus, if the address bus is 16-bit wide, the maximum number of unique memory addresses it can represent is
A 16 addresses
B 32,768 addresses
C 1,024 addresses
D 65,536 addresses
A 16-bit address bus can represent 2^16 unique addresses. That equals 65,536 different locations. Actual memory size also depends on how many bytes are stored per address, but the address count is fixed.
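The address count follows directly from the bus width; a minimal sketch of the calculation (the helper name is illustrative):

```python
# Number of unique addresses an n-bit address bus can generate: 2**n.
def address_count(bus_width_bits: int) -> int:
    return 2 ** bus_width_bits

print(address_count(16))  # 65536 addresses
print(address_count(20))  # 1048576 addresses (1 MB at one byte per address)
```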
In CPU design, increasing word length from 32-bit to 64-bit directly increases the size of
A CPU registers
B Monitor resolution
C Keyboard buffer
D Printer memory
Word length reflects how many bits the CPU processes at once and typically matches general-purpose register size. A 64-bit CPU uses wider registers, supporting larger integers and wider address handling.
Which cache mapping method allows a memory block to be placed in any cache line, giving maximum flexibility
A Direct mapped
B Set associative
C Fully associative
D Write through
Fully associative cache allows any memory block to go into any cache line. This reduces conflict misses compared to direct mapping, but it requires more complex hardware for tag comparison.
In a direct-mapped cache, two different memory blocks can repeatedly remove each other from cache mainly due to
A Higher bandwidth
B Larger word length
C Faster clock speed
D Mapping conflict
Direct mapping forces each memory block to a single cache line. If two frequently used blocks map to the same line, they continuously replace each other, causing conflict misses and lowering cache effectiveness.
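The eviction ping-pong can be shown with a toy simulation; this sketch assumes an 8-line cache and uses block-number modulo as the line index:

```python
# Minimal direct-mapped cache sketch: each block maps to exactly one line
# (line = block_number % NUM_LINES). Two blocks sharing an index evict
# each other even while other lines sit empty.
NUM_LINES = 8
cache = [None] * NUM_LINES  # each entry holds the block number stored there

def access(block: int) -> bool:
    """Return True on a hit, False on a miss (and fill the line)."""
    line = block % NUM_LINES
    if cache[line] == block:
        return True
    cache[line] = block  # evict whatever was there
    return False

# Blocks 3 and 11 both map to line 3, so alternating accesses always miss.
pattern = [3, 11, 3, 11, 3, 11]
results = [access(b) for b in pattern]
print(results)  # all False: every access evicts the other block
```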
In set-associative cache, the “set” for a block is usually determined by
A Opcode field
B Index bits
C Carry flag
D Clock ticks
A cache address is split into tag, index, and offset fields. The index bits select the set, then the tag is compared against each line in that set, allowing limited placement flexibility and fewer conflicts than direct mapping.
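The field split is pure bit slicing; a sketch assuming 32-byte blocks (5 offset bits) and 64 sets (6 index bits), which are illustrative parameters:

```python
# Splitting a byte address into tag, index, and offset fields.
# Field widths below are assumed for illustration.
OFFSET_BITS = 5   # 32-byte blocks
INDEX_BITS = 6    # 64 sets

def split_address(addr: int):
    offset = addr & ((1 << OFFSET_BITS) - 1)
    index = (addr >> OFFSET_BITS) & ((1 << INDEX_BITS) - 1)
    tag = addr >> (OFFSET_BITS + INDEX_BITS)
    return tag, index, offset

print(split_address(0x12345))  # (36, 26, 5)
```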
Compared to polling, interrupt-driven I/O is more efficient mainly because the CPU
A Runs slower always
B Increases disk capacity
C Avoids constant checking
D Uses more ROM
Polling wastes CPU time by repeatedly checking device status. Interrupt-driven I/O lets the CPU perform other work until the device signals that service is needed, improving overall efficiency.
A typical reason DMA improves system performance is that DMA
A Transfers blocks quickly
B Removes cache memory
C Lowers CPU frequency
D Shortens instruction set
DMA moves large blocks of data between an I/O device and main memory without the CPU handling each byte. This reduces CPU overhead and speeds up I/O-heavy tasks such as disk transfers.
When a DMA transfer completes, the CPU is commonly notified using a
A Cache hit signal
B Address bus width
C Opcode field
D Interrupt signal
The DMA controller typically raises an interrupt when the transfer finishes or needs attention. This tells the CPU that the data movement is complete, so it can continue processing or handle the next step.
During the fetch step, the CPU normally places the Program Counter value on the
A Data bus
B Control bus
C Address bus
D I/O bus
The address bus carries the memory address to access. During fetch, the PC value is sent on the address bus so memory can output the instruction stored at that location.
In the instruction cycle, “decode” mainly determines
A Monitor output format
B Required micro-operations
C Disk partition type
D Network routing path
Decoding interprets opcode and operand fields to decide what internal actions are needed. The control unit then generates a sequence of micro-operations and control signals to execute the instruction.
In a basic CPU, which element mainly generates control signals that coordinate datapath operations
A Control unit
B ALU
C Cache memory
D Data bus
The control unit sequences operations by generating control signals for registers, ALU, memory, and buses. These signals ensure each micro-operation occurs at the correct time for correct instruction execution.
The instruction register is important because it holds
A Next instruction address
B Memory address always
C Final program output
D Current instruction bits
The instruction register stores the fetched instruction so decoding and execution can proceed. The control unit reads its opcode and operand fields to decide how to control the CPU hardware.
Which register is most directly updated during a conditional branch if the condition is true
A MAR
B MDR
C Program Counter
D Accumulator
A taken branch changes program flow by loading a new target address into the Program Counter. If the condition is false, the PC continues with the next sequential instruction address.
For memory read/write operations, the correct address-data register pair is
A MAR and MDR
B PC and IR
C IR and ALU
D CU and cache
MAR holds the memory location being accessed, while MDR holds the data being transferred. This separation helps manage correct memory operations during instruction fetch and data access steps.
In performance terms, CPU speed is not determined by clock speed alone because performance also depends on
A Wallpaper color
B Keyboard size
C Screen brightness
D Instructions per cycle
Overall performance depends on clock speed and how much work is done per clock cycle (IPC). Cache behavior, pipeline efficiency, and memory latency also strongly affect real execution time.
A CPU with higher IPC but lower clock speed can still be faster because it
A Uses less power
B Has bigger keyboard
C Does more per cycle
D Uses more ROM
IPC means instructions completed per clock cycle. If a CPU completes more useful work per cycle, it can outperform a higher-clock CPU that stalls more due to memory delays or pipeline issues.
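The trade-off follows from the basic performance equation, execution time = instructions / (IPC × clock rate); the CPU figures below are hypothetical:

```python
# Execution time = instruction_count / (IPC * clock_hz).
# Hypothetical CPUs: A has the higher clock, B the higher IPC.
def exec_time(instructions: int, ipc: float, clock_hz: float) -> float:
    return instructions / (ipc * clock_hz)

n = 1_000_000_000  # one billion instructions
t_a = exec_time(n, ipc=1.0, clock_hz=4.0e9)  # 4 GHz, IPC 1.0
t_b = exec_time(n, ipc=2.0, clock_hz=3.0e9)  # 3 GHz, IPC 2.0
print(t_a, t_b)  # B finishes first despite the lower clock
```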
Which measure best matches “work completed per unit time” in a system
A Latency
B Throughput
C Addressing
D Word length
Throughput refers to total work done per time, like tasks per second. It is improved by parallelism, efficient pipelines, and reduced waiting from memory or I/O bottlenecks.
Which term best matches “delay before the response begins” for memory or I/O
A Bandwidth
B Capacity
C Latency
D MIPS
Latency is the time delay between a request and the start of the response. Lower latency improves responsiveness, especially for frequent small accesses like instruction fetches and cache lookups.
A common reason for a CPU to stall during execution is
A Fast cache hits
B Larger monitor
C Extra keyboard keys
D Cache miss delay
When data or instructions are not found in cache, the CPU must wait for slower memory. This stall increases execution time and reduces effective IPC, even if clock speed is high.
MIPS is often considered an imperfect comparison metric because
A Instruction mixes differ
B It measures disk space
C It equals cache size
D It ignores clock speed
Different CPUs may execute different types of instructions with different complexities. A million simple instructions may do less work than fewer complex ones, so MIPS can mislead across architectures.
FLOPS is most relevant for evaluating performance in
A Text editing work
B File naming tasks
C Scientific calculations
D Keyboard response
FLOPS measures floating-point operations per second. Floating-point math is heavily used in science, simulations, and graphics, so FLOPS is useful for comparing math-heavy computing performance.
A bottleneck in a balanced computer system often occurs when
A CPU is idle always
B One resource is slow
C Cache hit rate is 100%
D Bus is unused
A bottleneck happens when a slower component like memory, storage, or bus limits system speed. The CPU may wait for data, reducing throughput even if the processor is powerful.
In RISC design, a typical characteristic is
A Many complex instructions
B No registers present
C No pipelining possible
D Simple fixed instructions
RISC typically uses simpler, often fixed-length instructions that are easier to decode and pipeline. This supports efficient execution, though more instructions may be needed for complex operations.
CISC design typically includes instructions that
A Always one cycle
B Avoid memory access
C Perform multiple steps
D Use no addressing
CISC instructions can do more work per instruction, sometimes including memory access and arithmetic together. This may reduce instruction count, but decoding and pipelining can be more complex.
The stored-program concept is important because it allows
A Programs as memory data
B Fixed hardware only
C No RAM required
D No CPU needed
The stored-program concept keeps instructions in memory like data. This allows the CPU to fetch and execute different programs by changing memory contents, enabling flexible general-purpose computing.
Which statement best describes Instruction Set Architecture (ISA)
A Keyboard layout rules
B Monitor color settings
C Disk partition method
D Software-visible CPU rules
ISA defines the instructions, registers, addressing modes, and data formats that software uses. It is the interface between hardware and compiled programs, ensuring compatibility across CPUs implementing that ISA.
A common effect of increasing bus width is
A Less data per cycle
B Smaller cache always
C More data per cycle
D Lower clock always
A wider data bus can transfer more bits in one transfer. This can improve memory and I/O bandwidth, reducing transfer time for large data blocks when other parts can keep up.
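Peak transfer rate scales with bus width; a sketch with illustrative numbers not tied to any real bus standard:

```python
# Peak bus bandwidth = (bus width in bytes) * (transfers per second).
# Figures below are assumed for illustration only.
def peak_bandwidth_bytes(width_bits: int, transfers_per_sec: float) -> float:
    return (width_bits / 8) * transfers_per_sec

narrow = peak_bandwidth_bytes(32, 100e6)  # 32-bit bus at 100 million transfers/s
wide = peak_bandwidth_bytes(64, 100e6)    # doubling width doubles the peak
print(narrow, wide)  # 400 MB/s vs 800 MB/s (decimal megabytes)
```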
If cache size increases, a likely benefit is
A More cache hits
B More disk latency
C Less CPU cores
D Less address space
A larger cache can store more frequently used data and instructions. This increases the probability of cache hits, reducing slow main memory accesses and improving average execution performance.
In a memory hierarchy, registers are placed at the top mainly due to their
A Very low speed
B Very high capacity
C Very cheap cost
D Very fast access
Registers are inside the CPU and provide the fastest access for operands and results. Their small size is offset by speed, making them essential for high-performance execution of instructions.
Which memory is usually slower than RAM but non-volatile and large-capacity
A Cache
B SSD or HDD
C Registers
D ALU
Secondary storage like SSDs and HDDs is non-volatile and provides large capacity, but access is slower than RAM. It stores the OS, programs, and files permanently for long-term use.
A typical “conflict miss” is most strongly associated with
A Fully associative cache
B Larger main memory
C Direct-mapped cache
D Faster ALU
Direct-mapped cache forces each block to one specific line. Different blocks mapping to the same line can repeatedly replace each other, causing conflict misses even if cache still has free space elsewhere.
In a pipelined CPU, a key idea is to
A Run one instruction at a time
B Remove instruction decode
C Remove control signals
D Overlap instruction stages
Pipelining overlaps stages like fetch, decode, execute, and write-back across multiple instructions. This increases throughput by keeping different CPU units busy, though hazards must be handled carefully.
A pipeline hazard that occurs due to dependent instructions is called a
A Data hazard
B Power hazard
C Cache hazard
D Output hazard
Data hazards happen when an instruction depends on the result of a previous instruction that has not completed. The pipeline may need stalling or forwarding to maintain correct execution.
A CPU may use an interrupt to handle
A Wallpaper change
B Disk label rename
C Keyboard press event
D Font selection
Hardware devices like keyboards generate interrupts to signal events. The CPU runs an interrupt service routine to handle the event quickly, then returns to the interrupted task, improving responsiveness.
Which interrupt type is generated by executing a special instruction to request OS service
A Hardware interrupt
B Thermal interrupt
C Cache interrupt
D Software interrupt
Software interrupts are triggered by program instructions, often for system calls. They request services from the operating system in a controlled way, such as file access or device operations.
When a CPU handles an interrupt, it typically first
A Deletes current program
B Saves current state
C Clears all registers
D Formats the disk
Before servicing an interrupt, the CPU saves key state like PC and flags so it can return correctly afterward. After handling, it restores state and continues the interrupted program safely.
Register Transfer Language (RTL) is mainly used to describe
A Internet routing
B File encryption keys
C CPU micro-operations
D Monitor refresh logic
RTL represents internal operations like register transfers and ALU actions, such as “R1 ← R2 + R3.” It helps explain how instructions execute at the hardware level through micro-operations.
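The statement “R1 ← R2 + R3” can be modeled as a single transfer step; this sketch uses a plain dictionary for the register file and is not a hardware description:

```python
# RTL statement "R1 <- R2 + R3" modeled as one register-transfer step.
regs = {"R1": 0, "R2": 4, "R3": 6}

def rtl_add(dst: str, src_a: str, src_b: str) -> None:
    """Perform dst <- src_a + src_b in one transfer step."""
    regs[dst] = regs[src_a] + regs[src_b]

rtl_add("R1", "R2", "R3")
print(regs["R1"])  # 10
```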
A microprocessor differs from a microcontroller mainly because a microcontroller usually
A Integrates peripherals
B Has no CPU
C Has no memory
D Has no registers
Microcontrollers combine CPU, memory, and I/O peripherals on one chip for embedded control. Microprocessors focus mainly on CPU and typically require external memory and peripherals for full operation.
Multiprocessor systems can improve throughput mainly by
A Reducing RAM size
B Lowering bus width
C Parallel task execution
D Removing cache memory
With multiple processors, tasks or threads can run in parallel. This increases total work completed per time when workloads can be divided, improving throughput in servers and compute-heavy environments.
Which statement best describes “benchmarking basics”
A Free disk space increase
B Screen resolution upgrade
C Keyboard replacement plan
D Standard performance measurement
Benchmarking uses standard workloads to measure performance consistently. It helps compare systems fairly and identify bottlenecks, though results depend on the benchmark type and configuration used.
In cache addressing, the “tag” field is mainly used to
A Choose cache set
B Identify correct block
C Count clock cycles
D Store operand value
Tag bits distinguish which memory block is stored in a selected cache line or set. After selecting a set using index bits, tag comparison confirms whether the requested block is present.
A wider address bus directly allows
A Higher FLOPS rate
B More cache lines
C Larger address space
D Faster ALU only
Address bus width determines how many unique memory addresses can be generated. More address bits mean the CPU can address more memory locations, increasing maximum directly addressable memory space.
In a system where each memory address stores 1 byte, 20 address bits can address up to
A 1 KB memory
B 1 GB memory
C 1 TB memory
D 1 MB memory
With 20 address bits, the number of addresses is 2^20 = 1,048,576. If each address stores 1 byte, total addressable memory is 1,048,576 bytes, which is 1 MB.
An advantage of set-associative cache over direct-mapped cache is
A Fewer conflict misses
B More conflict misses
C No tags required
D No index bits
Set-associative cache lets a block go into any line within its set, reducing conflicts. Direct-mapped cache forces one fixed line, causing more conflict misses when multiple active blocks map to the same line.
When comparing two CPUs, a higher benchmark score usually indicates
A Bigger storage only
B Smaller instruction set
C Better tested performance
D Lower address space
A benchmark score reflects performance on a specific standard workload. A higher score usually means faster execution for that test, though real-world performance depends on workload similarity and system configuration.
If memory latency is high, which effect is most likely seen in CPU performance
A More cache hits
B More CPU stalls
C More disk space
D More instruction width
High memory latency makes the CPU wait longer for data and instructions when cache misses occur. This increases stall cycles and reduces effective IPC, lowering real performance despite high clock speed.
In a typical CPU, which factor best improves responsiveness for a single quick operation
A Higher latency
B Lower throughput
C Smaller cache lines
D Lower latency
Responsiveness for a single operation depends mainly on how quickly the first response arrives. Lower latency reduces waiting time, making actions like opening a small file or fetching data feel faster.
Which technique can reduce data hazard delays in pipelines by passing results directly to later stages
A Spooling
B Formatting
C Forwarding
D Paging
Forwarding (bypassing) sends results from one pipeline stage directly to a later stage without waiting for register write-back. This reduces stalls caused by dependent instructions and improves pipeline throughput.
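The benefit can be counted in cycles; this sketch uses the textbook 5-stage pipeline case where a dependent ALU pair stalls 2 cycles without forwarding and 0 with EX-to-EX forwarding (the stall counts are assumptions of that classic model):

```python
# Cycle count for n instructions in a classic 5-stage pipeline:
# pipeline fill + one instruction per cycle + stall cycles.
STALLS_WITHOUT_FORWARDING = 2  # consumer waits for write-back
STALLS_WITH_FORWARDING = 0     # EX result bypassed straight to next EX

def pipeline_cycles(n_instructions: int, stalls: int, stages: int = 5) -> int:
    return stages + (n_instructions - 1) + stalls

print(pipeline_cycles(2, STALLS_WITHOUT_FORWARDING))  # 8 cycles
print(pipeline_cycles(2, STALLS_WITH_FORWARDING))     # 6 cycles
```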
In CPU terms, increasing cache hit rate mainly helps to reduce
A Memory access delays
B Screen brightness
C Disk capacity
D Keyboard lag
Higher cache hit rate means more accesses are served by fast cache instead of slow RAM. This reduces average memory delay and stalls, improving overall execution time and effective CPU performance.
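The effect is captured by the average memory access time formula, AMAT = hit time + miss rate × miss penalty; the latencies below are illustrative, not from any real CPU:

```python
# Average memory access time: AMAT = hit_time + miss_rate * miss_penalty.
# Latencies in nanoseconds, assumed for illustration.
def amat(hit_time_ns: float, miss_rate: float, miss_penalty_ns: float) -> float:
    return hit_time_ns + miss_rate * miss_penalty_ns

print(amat(1.0, 0.10, 100.0))  # 11.0 ns at a 90% hit rate
print(amat(1.0, 0.02, 100.0))  # 3.0 ns at a 98% hit rate
```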
Which statement is most accurate about clock speed and performance
A Higher clock always wins
B Clock irrelevant always
C Performance depends on more
D Only MIPS matters
Clock speed affects how fast cycles occur, but real performance also depends on IPC, cache behavior, memory latency, and workload. A balanced system and efficient architecture often matter more than GHz alone.