Chapter 4: Computer Organization and Architecture (Set-6)
In a computer’s functional units, which unit is directly responsible for holding currently executing program code and active data for fast CPU access
A Optical disk
B Output unit
C Input unit
D Main memory
Main memory (RAM) stores the instructions and data currently needed by the CPU. This close, fast storage allows smooth program execution, but contents are lost when power is turned off.
When you save a file and can reopen it after restarting the computer, the data is mainly stored in
A CPU registers
B Cache memory
C Secondary storage
D Control unit
Secondary storage like SSDs and HDDs is non-volatile, so it keeps data without power. It stores the operating system, applications, and user files for long-term use.
Which CPU component mainly selects the sequence of micro-operations needed to execute a decoded instruction
A Control unit
B ALU
C Output unit
D Address bus
The control unit decodes the instruction and generates a timed sequence of control signals. These signals trigger micro-operations such as register transfers, ALU actions, and memory read/write steps.
In a typical CPU datapath, which element provides the arithmetic result after an ADD instruction
A Control bus
B ALU
C MAR
D ROM
The ALU performs arithmetic operations like addition and places the computed result on internal paths. The result is then written to a destination register during the write-back stage.
Which register is most directly used to hold the address of the next instruction, especially during sequential execution
A Accumulator
B Instruction Register
C Program Counter
D Memory Data Register
The Program Counter holds the memory address of the next instruction to fetch. It increments for sequential flow or is updated to a new value for branch, jump, or interrupt handling.
Which register holds the instruction that the CPU is currently decoding and executing
A Instruction Register
B MAR
C MDR
D Stack Pointer
The Instruction Register stores the fetched instruction. The control unit decodes its opcode and operands from this register and then generates control signals to execute it correctly.
In memory read/write operations, which register stores the memory address being accessed
A MDR
B IR
C PC
D MAR
The Memory Address Register holds the address of the memory location to read from or write to. It ensures the correct cell is targeted during data transfer operations.
Which register temporarily holds the data value being transferred between CPU and main memory
A MAR
B PC
C MDR
D CU
The Memory Data Register contains the actual data being read from memory or written to memory. It acts as a buffer between the CPU’s internal circuitry and the memory system.
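The MAR/MDR roles described above can be sketched as a toy memory read. This is a minimal illustrative model, not real hardware; the dict-backed memory and addresses are assumptions for the example.

```python
# Toy model of a memory read using MAR and MDR (illustrative only).
memory = {0x100: 42, 0x104: 7}   # address -> stored value

def memory_read(address):
    mar = address          # MAR holds the address being accessed
    mdr = memory[mar]      # MDR buffers the value arriving from memory
    return mdr             # the CPU then uses the buffered value

print(memory_read(0x100))  # -> 42
```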
Which internal bus type mainly carries actual data values between CPU and memory
A Data bus
B Address bus
C Control bus
D Expansion bus
The data bus transfers data bits and instruction bits between CPU, memory, and I/O. A wider data bus can move more bits per transfer, improving data throughput.
Which bus type carries signals such as memory read, memory write, and interrupt acknowledge
A Address bus
B Data bus
C Control bus
D Video bus
The control bus carries command and timing signals that coordinate operations. Signals like read/write and interrupt control ensure the correct direction and timing of data transfers across the system.
Which statement best describes an I/O interface in a basic computer system
A Makes CPU faster
B Connects device to bus
C Stores program permanently
D Replaces main memory
An I/O interface allows peripheral devices to communicate with the system bus and CPU. It handles control lines, buffering, and speed differences so data moves reliably between device and memory/CPU.
A key reason for using registers instead of RAM for operands during execution is that registers
A Hold more data
B Are non-volatile
C Are cheaper per bit
D Are much faster
Registers are inside the CPU and offer the fastest access. Using registers reduces delays from memory access, allowing the CPU to perform operations quickly during the instruction execution process.
Cache memory mainly improves performance by
A Increasing disk space
B Slowing CPU clock
C Reducing average access
D Removing RAM need
Cache stores frequently used data and instructions near the CPU. This reduces the average time to access memory, so the CPU waits less often for slower main memory.
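The "reduced average access" idea is often quantified with the average memory access time formula. The timing numbers below are hypothetical, chosen only to make the arithmetic visible.

```python
def amat(hit_time, miss_rate, miss_penalty):
    """Average memory access time = hit time + miss rate * miss penalty."""
    return hit_time + miss_rate * miss_penalty

# Hypothetical numbers: 1 ns cache hit, 5% miss rate, 60 ns miss penalty.
print(amat(1.0, 0.05, 60.0))  # -> 4.0 ns average, far below the 60 ns penalty
```

A higher hit rate pulls the average toward the fast hit time, which is exactly why caching pays off.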
Which memory type is designed to store firmware instructions that remain available at startup without power
A ROM
B RAM
C Registers
D Cache
ROM is non-volatile memory that stores firmware like BIOS/UEFI. It is used during startup to initialize hardware and begin the boot process before the operating system loads into RAM.
Virtual memory is mainly useful when
A CPU is overheated
B Cache is empty
C RAM is limited
D Disk is full
Virtual memory uses secondary storage as an extension of RAM by swapping pages. It allows programs larger than physical RAM to run, but access becomes slower than direct RAM access.
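Paging works by splitting each virtual address into a page number (looked up in a page table) and an offset within the page. A minimal sketch, assuming the common 4 KiB page size:

```python
PAGE_SIZE = 4096  # 4 KiB pages, a common (assumed) size

def split_virtual_address(va):
    """Split a virtual address into (page number, offset within page)."""
    return va // PAGE_SIZE, va % PAGE_SIZE

print(split_virtual_address(8195))  # -> (2, 3): page 2, byte 3 of that page
```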
In the memory hierarchy, which order is correct from fastest to slowest for typical access
A Disk → RAM → cache
B Cache → RAM → disk
C RAM → cache → disk
D Disk → cache → RAM
Cache is faster than RAM and holds frequently used items. Secondary storage is much slower but larger and non-volatile, so it sits lower in the hierarchy for long-term storage.
Which term best describes the delay before memory begins delivering requested data
A Bandwidth
B Capacity
C Refresh
D Latency
Latency is the waiting time between requesting data and the start of data availability. Lower latency improves responsiveness, especially when many small memory accesses happen during program execution.
Memory bandwidth refers to
A Address size only
B Disk rotation rate
C Bits per second
D Instruction length
Bandwidth is the amount of data that can be transferred per unit time, such as GB/s. Higher memory bandwidth supports faster bulk transfers, reducing delays in data-heavy operations.
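Latency and bandwidth combine in a simple transfer-time model: a fixed startup delay plus the data size divided by the transfer rate. The numbers below are hypothetical.

```python
def transfer_time(bytes_moved, latency_s, bandwidth_bps):
    """Simple model: total time = fixed latency + bytes / bandwidth."""
    return latency_s + bytes_moved / bandwidth_bps

# Hypothetical: 100 ns latency, 10 GB/s bandwidth, one 4 KiB transfer.
t = transfer_time(4096, 100e-9, 10e9)
print(round(t * 1e9, 1))  # total time in nanoseconds
```

For small transfers latency dominates; for bulk transfers bandwidth dominates, which is why the two metrics are reported separately.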
In the instruction cycle, the fetch step usually uses which register to locate the next instruction
A Program Counter
B Instruction Register
C Accumulator
D MDR
During fetch, the CPU places the Program Counter value on the address bus to read the next instruction from memory. The instruction then loads into the Instruction Register for decoding.
In a machine instruction format, the opcode mainly indicates
A Operand value
B Cache size
C Operation type
D Clock rate
The opcode tells the CPU what action to perform, such as add, load, store, or jump. The control unit decodes the opcode to generate control signals for execution.
Which instruction-cycle stage mainly interprets the opcode and prepares control signals for execution
A Fetch
B Decode
C Execute
D Store
Decode identifies the instruction meaning and operand details. The control unit then prepares the needed micro-operations and control signals so the correct hardware steps occur in the execute stage.
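The fetch, decode, and execute stages described in the last few explanations can be combined into one tiny loop. The two-instruction ISA here is invented purely for illustration.

```python
# A minimal fetch-decode-execute loop over a toy ISA (illustrative only).
program = [("LOADI", 5), ("ADDI", 3), ("HALT", 0)]  # instruction memory

def run(program):
    pc, acc = 0, 0                 # Program Counter, accumulator
    while True:
        ir = program[pc]           # fetch: PC locates the next instruction
        pc += 1                    # PC advances for sequential flow
        opcode, operand = ir       # decode: opcode selects the action
        if opcode == "LOADI":      # execute: perform the micro-operations
            acc = operand
        elif opcode == "ADDI":
            acc += operand
        elif opcode == "HALT":
            return acc

print(run(program))  # -> 8
```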
A key feature of the stored-program concept is that
A Programs stored in ROM
B Only data stored in memory
C Instructions stored in memory
D CPU stores all files
Stored-program concept means instructions are kept in memory like data. The CPU fetches instructions from memory, enabling flexible computing where changing memory contents changes the program.
A clock cycle is best described as
A One timing tick
B One instruction always
C One disk transfer
D One file operation
A clock cycle is a single pulse of the CPU clock used to synchronize internal operations. Instructions may take multiple cycles depending on micro-operations, memory access, and pipeline behavior.
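Because instructions take multiple cycles, CPU time is usually estimated with the classic performance equation. The workload numbers below are hypothetical.

```python
def cpu_time(instruction_count, cpi, clock_hz):
    """Classic equation: time = instruction count * CPI / clock rate."""
    return instruction_count * cpi / clock_hz

# Hypothetical: 1 billion instructions, average CPI of 2, 2 GHz clock.
print(cpu_time(1e9, 2, 2e9))  # -> 1.0 second
```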
Micro-operations are best described as
A Large program modules
B Internet packets
C Hardware failures
D Small CPU actions
Micro-operations are tiny internal steps like register transfers, ALU selection, and PC increment. A full instruction is carried out through an organized sequence of these actions.
Which performance term means “work completed per unit time” for a system
A Latency
B Addressing
C Throughput
D Word length
Throughput measures total output or completed tasks per time, like jobs per second. It improves with parallel cores, effective caching, and reduced waiting due to memory or I/O delays.
Which factor can improve throughput for parallel programs without necessarily increasing single-thread speed
A More cores
B Lower RAM size
C Smaller cache
D Slower bus
Multiple cores allow parallel execution of tasks. If software is multi-threaded, cores can run threads simultaneously, increasing total work done per time even if one thread’s speed stays similar.
Which metric is most associated with floating-point calculation performance
A MIPS
B RPM
C FLOPS
D DPI
FLOPS measures floating-point operations per second, important for scientific computing, simulations, and graphics. It indicates how effectively a processor handles floating-point (real-number) arithmetic.
MIPS is commonly used as a rough measure of
A Cache hit rate
B Instruction execution rate
C Disk temperature
D Memory size
MIPS means Million Instructions Per Second. It provides a rough performance estimate, but comparisons across different architectures can be misleading because instruction complexity varies.
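The MIPS figure follows directly from an instruction count and an execution time. The workload below is hypothetical.

```python
def mips(instruction_count, exec_time_s):
    """MIPS = instructions executed / (execution time * 1e6)."""
    return instruction_count / (exec_time_s * 1e6)

# Hypothetical: 400 million instructions completed in 2 seconds.
print(mips(400e6, 2))  # -> 200.0 MIPS
```

The same caveat from the explanation applies: 200 MIPS on a RISC machine and 200 MIPS on a CISC machine do not represent the same amount of work.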
A bottleneck in a system means
A All parts equal
B CPU never waits
C Cache always hits
D One part limits speed
A bottleneck is a slow component that restricts overall performance, such as slow storage or memory. The CPU may stay idle waiting for the bottleneck, reducing system throughput.
RISC and CISC mainly differ in
A Instruction complexity
B Screen resolution
C File system type
D Keyboard language
RISC generally uses simpler, fixed-length instructions optimized for pipelining. CISC often provides more complex instructions that can do more work per instruction but may need more decoding steps.
Which statement matches RISC design more accurately
A Many complex instructions
B No registers used
C Simple instruction set
D Only microcontrollers
RISC emphasizes simple instructions, efficient pipelining, and heavy register usage. This often leads to faster execution per instruction stage, though complex tasks may require more instructions.
A typical advantage of CISC is that it may
A Use fewer instructions
B Need more instructions
C Remove main memory
D Remove control unit
CISC instructions can perform multi-step operations, sometimes reducing the number of instructions needed for a program. This can reduce program size, though decoding and execution may be more complex.
DMA is mainly used to
A Increase CPU clock
B Decode instructions
C Transfer data directly
D Reduce ROM size
DMA lets an I/O device transfer data to or from main memory without CPU handling every byte. This improves efficiency in large transfers, with the CPU only managing setup and completion.
Interrupt-driven I/O mainly helps by
A Forcing constant polling
B Increasing disk noise
C Decreasing bus width
D Reducing CPU polling
Interrupt-driven I/O allows devices to notify the CPU only when service is needed. This avoids wasteful polling loops and lets the CPU perform other processing until an interrupt arrives.
A hardware interrupt is typically caused by
A External device event
B Program instruction
C File rename
D Screen saver
Hardware interrupts come from peripherals such as keyboard, timer, disk, or network devices. They alert the CPU to handle urgent events, improving responsiveness without constant checking.
Immediate addressing mode means the operand is
A Inside memory
B Inside disk
C Inside instruction
D Inside ROM only
Immediate addressing places the operand value in the instruction itself. This avoids an extra memory access, making it fast for constants, though limited by the size of the operand field.
Direct addressing mode means the instruction contains
A Operand constant
B Operand address
C Only opcode bits
D Cache block tag
Direct addressing includes the memory address where the operand is stored. The CPU uses this address to fetch or store the operand during execution, requiring at least one memory access.
Indirect addressing mode uses
A Immediate constant
B Output buffer
C Cache policy
D Pointer address
Indirect addressing means the instruction points to a location holding the effective address. This supports pointer-based structures, but often needs additional memory access to obtain the final operand address.
Register addressing mode means the operand is taken from
A Hard disk block
B RAM page file
C CPU register
D ROM firmware
Register addressing uses operands stored in CPU registers. This speeds execution because register access is much faster than main memory access, reducing delays during ALU operations and data movement.
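The four addressing modes covered above differ only in how the operand is located, which a toy operand fetch makes concrete. The memory contents and register names are illustrative assumptions.

```python
# Toy operand fetch for four addressing modes (illustrative only).
memory = {10: 99, 20: 10}    # address -> value; 20 holds a pointer to 10
registers = {"R1": 7}

def fetch_operand(mode, field):
    if mode == "immediate":          # operand is inside the instruction
        return field
    if mode == "direct":             # instruction holds the operand address
        return memory[field]
    if mode == "indirect":           # instruction points to a pointer
        return memory[memory[field]]
    if mode == "register":           # operand lives in a CPU register
        return registers[field]

print(fetch_operand("immediate", 5))    # -> 5
print(fetch_operand("direct", 10))      # -> 99
print(fetch_operand("indirect", 20))    # -> 99 (20 -> 10 -> 99)
print(fetch_operand("register", "R1"))  # -> 7
```

Note the extra dictionary lookup in the indirect case, mirroring the extra memory access the explanation mentions.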
Instruction Set Architecture (ISA) mainly defines
A CPU instruction rules
B Monitor display rules
C Disk format rules
D Network routing rules
ISA defines the instructions, registers, addressing modes, and data formats visible to software. It is the specification that compilers target and that CPU hardware must implement for compatibility.
A microprocessor is typically
A Complete computer case
B CPU on chip
C Memory-only chip
D Input-only device
A microprocessor integrates CPU functions on a single chip. It generally needs external memory and peripherals to form a complete system, and it is designed for general-purpose processing.
A microcontroller usually includes
A Only ALU section
B Only cache module
C CPU with peripherals
D Only control bus
A microcontroller integrates CPU, memory, and I/O peripherals on one chip. It is ideal for embedded control tasks, like appliances and sensors, where compact design and low power are important.
Cache mapping is mainly about
A Cache placement method
B Screen color mapping
C Keyboard layout map
D File permission map
Cache mapping decides where a memory block can be placed in cache. Methods include direct-mapped and set-associative, balancing speed, hardware complexity, and the chance of conflict misses.
Direct-mapped cache means a block maps to
A Any cache line
B Any set line
C One fixed line
D Two random lines
In direct mapping, each memory block has exactly one cache location determined by address bits. It is simple and fast but may suffer conflict misses when blocks compete for the same line.
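The "one fixed line" rule is just modular arithmetic on the block number. The cache dimensions below are hypothetical, chosen small enough to show a conflict.

```python
BLOCK_SIZE = 16   # bytes per block (assumed)
NUM_LINES = 8     # cache lines in a tiny direct-mapped cache (assumed)

def cache_line(address):
    """Direct mapping: line = block number mod number of lines."""
    block = address // BLOCK_SIZE
    return block % NUM_LINES

print(cache_line(0x40))   # block 4  -> line 4
print(cache_line(0xC0))   # block 12 -> line 4 as well: a conflict miss
```

Two blocks landing on the same line evict each other even if the rest of the cache is empty, which is the conflict-miss weakness mentioned above.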
Set-associative cache means a block can be placed in
A Any cache line
B Only main memory
C Only registers
D A specific set
Set-associative cache divides cache into sets. A block maps to one set but can be stored in any line within that set, reducing conflicts compared to direct-mapped caches.
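Set-associative mapping uses the same modular index, but over sets rather than lines. The 2-way geometry below is an illustrative assumption.

```python
BLOCK_SIZE, NUM_SETS, WAYS = 16, 4, 2  # hypothetical 2-way cache

def cache_set(address):
    """A block maps to one set but may occupy any of WAYS lines in it."""
    return (address // BLOCK_SIZE) % NUM_SETS

print(cache_set(0x40))  # block 4 -> set 0
print(cache_set(0x80))  # block 8 -> set 0 too, but both fit (2 ways)
```

The two blocks that conflicted in the direct-mapped case can now coexist within one set, which is how associativity reduces conflict misses.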
In system organization, an I/O interface helps mainly with
A CPU instruction creation
B Device speed matching
C Increasing word length
D Reducing clock ticks
I/O interfaces handle communication between the fast system bus and slower devices. They provide buffering and control signals, preventing data loss and ensuring reliable transfers despite speed differences.
A common advantage of cache memory is that it helps reduce
A Memory latency
B Disk capacity
C Monitor pixels
D Keyboard noise
Cache reduces average memory access delay by keeping frequently used items close to the CPU. With more cache hits, the CPU waits less for main memory, improving overall performance.
The ALU is best known for performing
A Input conversion
B Disk scheduling
C Arithmetic and logic
D File encryption
The ALU performs calculations and logical operations like comparisons and bitwise AND/OR. It works with registers and flags, supporting both numeric computation and decision-making instructions.
The control unit is best known for
A Storing user files
B Printing output pages
C Sending internet packets
D Coordinating CPU actions
The control unit directs the sequence and timing of operations in the CPU. It decodes instructions and issues control signals so registers, ALU, memory, and I/O perform the correct actions.
In CPU operation, a common reason for using interrupts is to
A Increase disk space
B Handle urgent events
C Reduce RAM size
D Change ISA format
Interrupts let hardware or software request immediate CPU service. The CPU temporarily pauses current work, runs an interrupt service routine for the event, then resumes the interrupted task safely.