Chapter 4: Computer Organization and Architecture (Set-5)
In basic architecture, which unit mainly stores the operating system files and user documents permanently even after shutdown
A CPU registers
B Cache memory
C Control unit
D Secondary storage
Secondary storage like SSDs and HDDs keeps data permanently without power. It stores the operating system, applications, and user files, providing large capacity though slower access than RAM or cache.
When the CPU reads an instruction from memory, the address used for that fetch is taken from the
A Program Counter
B Accumulator register
C Memory Data Register
D Instruction Register
The Program Counter holds the address of the next instruction to fetch. During the fetch step, this address is placed on the address bus so memory can provide the instruction to the CPU.
In a simple CPU, which register is most directly loaded with the instruction fetched from memory
A Memory Address Register
B Program Counter
C Instruction Register
D Stack Pointer
The fetched instruction is placed into the Instruction Register. The control unit then decodes it to understand the operation, operands, and required control signals for execution.
During a memory read operation, which pair correctly matches register roles for address and data
A MAR=address, MDR=data
B IR=address, PC=data
C MDR=address, MAR=data
D PC=address, IR=data
MAR stores the memory location to access, while MDR stores the actual data being transferred. This separation makes memory operations organized and reliable during instruction fetch and data read/write.
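The register roles in a fetch can be sketched as a toy simulation; all names (PC, MAR, MDR, IR, mem) and the pretend instruction strings are illustrative, not any specific CPU.

```python
# Toy sketch of the fetch step's micro-operations (illustrative only).
mem = {0: "LOAD 5", 1: "ADD 3"}   # pretend memory: address -> instruction

PC = 0            # Program Counter: address of the next instruction
MAR = MDR = IR = None

# Micro-operations of one instruction fetch:
MAR = PC          # 1. the fetch address comes from the PC
MDR = mem[MAR]    # 2. memory returns the word at that address into MDR
IR = MDR          # 3. the fetched instruction lands in the IR
PC = PC + 1       # 4. PC advances to point at the next instruction

print(IR)  # "LOAD 5"
```

This same MAR=address, MDR=data pairing applies to ordinary data reads and writes, not just instruction fetches.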
A computer bus is best described as
A Cooling fan path
B Shared communication pathway
C Printer ink channel
D Battery charging wire
A bus is a set of lines used to transfer data, addresses, and control signals between components like CPU, memory, and I/O. It enables coordinated communication within the system.
Which bus type is mainly responsible for carrying control commands like read, write, and interrupt signals
A Control bus
B Data bus
C Address bus
D Display bus
The control bus carries signals that coordinate operations, such as memory read/write, I/O control, and interrupt requests. These signals ensure correct timing and direction of data transfers.
In a standard system bus, which bus selects the memory location that will be accessed
A Data bus
B Control bus
C Address bus
D Audio bus
The address bus carries the address of the target memory or I/O location. It identifies where the CPU wants to read from or write to, enabling correct selection of storage location.
Which unit mainly performs arithmetic and logical comparisons inside the CPU
A Cache controller
B ALU
C Output unit
D Input unit
The Arithmetic Logic Unit performs operations like addition, subtraction, AND/OR logic, and comparisons. It works with registers and status flags to support calculations and decision-making instructions.
Which CPU component primarily decodes instructions and issues control signals to execute them
A Secondary storage
B Output unit
C Data bus
D Control unit
The control unit interprets opcode and addressing details from the instruction register. It then generates control signals for ALU, registers, memory, and I/O to perform the instruction steps correctly.
A register is best described as
A Fast CPU storage
B Slow long-term storage
C External input device
D Network connection cable
Registers are very small and very fast storage locations inside the CPU. They hold data, addresses, and intermediate results during instruction execution, reducing the need for slower memory access.
Which register commonly holds intermediate results from ALU operations in many basic CPU designs
A Program Counter
B Instruction Register
C Accumulator
D Memory Address Register
The accumulator often stores intermediate results of arithmetic and logic operations. Many instructions use it as an implied operand or destination, simplifying CPU design and speeding common computations.
Word length of a CPU mainly affects
A Printer quality
B Mouse pointer speed
C Monitor brightness
D Bits processed together
Word length is the number of bits the CPU can handle as a unit, such as 32-bit or 64-bit. It influences register size, data range, and address handling capability.
Clock speed is best defined as
A Cycles per second
B Memory size in GB
C Disk rotation time
D Cache line length
Clock speed measures how many clock cycles occur per second, usually in GHz. It affects how quickly CPU steps occur, but real performance also depends on architecture and memory behavior.
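The relationship between clock speed and timing can be shown with simple arithmetic; the 3 GHz figure and 4-cycle instruction are assumed example values.

```python
# Clock speed to cycle time: a 3 GHz clock ticks 3e9 times per second,
# so one cycle lasts 1/f seconds. Example figures only.
freq_hz = 3e9                        # 3 GHz (assumed)
cycle_time_ns = 1 / freq_hz * 1e9    # period of one clock cycle, in ns

# An instruction that needs 4 cycles on this clock:
instr_time_ns = 4 * cycle_time_ns

print(round(cycle_time_ns, 3), round(instr_time_ns, 3))
```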
In the instruction cycle, the fetch stage mainly
A Executes arithmetic work
B Writes result to disk
C Reads instruction from memory
D Clears CPU registers
During fetch, the CPU uses the Program Counter address to read the next instruction from memory and load it into the Instruction Register, preparing the instruction for decoding and execution.
Which stage of instruction cycle interprets opcode and prepares the required micro-operations
A Fetch
B Decode
C Execute
D Store
The decode stage identifies the operation and operand information from the instruction. The control unit then prepares the sequence of micro-operations and control signals needed to carry out the instruction.
The execute stage of an instruction typically
A Performs required operation
B Displays output only
C Increments disk sectors
D Formats memory cells
The execute stage performs the main action of the instruction, such as an ALU calculation, memory access, or branch. It completes the essential processing needed for that instruction.
The store or write-back stage mainly
A Turns off CPU power
B Removes cache memory
C Updates final destination
D Changes clock speed
Write-back stores the computed result to a destination register or memory location. This makes the result available for later instructions and ensures the instruction’s effect is saved in system state.
A clock cycle is best explained as
A One full program
B One hard disk track
C One keyboard press
D One timing pulse
A clock cycle is a single tick of the CPU clock that synchronizes internal operations. Instructions require one or more cycles depending on complexity and the number of required micro-operations.
Micro-operations are best described as
A Tiny internal actions
B Large software upgrades
C Internet traffic signals
D Monitor color changes
Micro-operations are small steps like transferring data between registers, incrementing the PC, or selecting an ALU function. A complete instruction is carried out as a sequence of micro-operations.
A machine cycle commonly refers to
A Full OS installation
B Basic memory operation
C Printer paper loading
D Screen refresh cycle
A machine cycle is a basic hardware activity such as opcode fetch, memory read, or memory write. Multiple machine cycles together often form a complete instruction cycle.
The main goal of memory hierarchy is to
A Increase screen size
B Reduce keyboard keys
C Balance speed and cost
D Improve printer ink
Memory hierarchy arranges storage from fastest and smallest to slowest and largest. It provides high speed using registers and cache while still offering large capacity through RAM and secondary storage.
Which memory is closest to CPU execution units and fastest to access
A Registers
B RAM
C SSD storage
D Optical disk
Registers are located inside the CPU and have the fastest access time. They hold immediate operands and results, allowing the CPU to execute instructions without waiting for slower memory.
Which memory is non-volatile and typically stores firmware code
A RAM
B Cache
C Registers
D ROM
ROM retains data without power and stores firmware like BIOS/UEFI. It helps start the computer by initializing hardware and loading the boot process before the operating system runs.
Cache memory mainly helps by reducing
A Disk capacity
B Monitor power use
C Average memory delay
D Printer noise level
Cache stores frequently used data and instructions near the CPU. This reduces the average time needed to access memory, preventing the CPU from waiting often for slower main memory.
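The effect on average delay can be quantified with the standard average-memory-access-time formula; the latency and miss-rate numbers below are assumed for illustration.

```python
# Average memory access time (AMAT): cache turns most accesses into
# fast hits, so the *average* delay drops. Example numbers assumed.
hit_time_ns = 1        # cache hit latency
miss_penalty_ns = 100  # main-memory latency paid on a miss
miss_rate = 0.05       # 5% of accesses miss the cache

amat = hit_time_ns + miss_rate * miss_penalty_ns  # classic AMAT formula
print(amat)  # 6.0 ns -- far below the 100 ns of uncached memory
```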
Virtual memory works mainly by
A Using cache as disk
B Using disk as RAM
C Using ROM as RAM
D Using ALU as RAM
Virtual memory extends available memory by using secondary storage for swapping pages. It allows large programs to run even when RAM is limited, though performance is slower than true RAM.
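Page-based swapping can be sketched as a lookup: resident pages translate to RAM frames, while swapped-out pages trigger a fault. The page size, table contents, and function name are all invented for illustration.

```python
# Minimal paging sketch: virtual pages map to RAM frames; pages not in
# RAM live on disk and cause a "page fault" on access. Values assumed.
PAGE_SIZE = 4096
page_table = {0: 7, 1: 3}   # virtual page -> RAM frame (resident pages)
on_disk = {2}               # virtual pages currently swapped to disk

def translate(vaddr):
    page, offset = divmod(vaddr, PAGE_SIZE)
    if page in page_table:
        return page_table[page] * PAGE_SIZE + offset  # physical address
    if page in on_disk:
        return "page fault: load page from disk, then retry"
    raise MemoryError("invalid address")

print(translate(4100))  # page 1, offset 4 -> frame 3 -> 12292
print(translate(8200))  # page 2 is on disk -> page fault
```

The fault path is the slow one: servicing it means a disk transfer, which is why heavy swapping degrades performance.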
Throughput is best described as
A Delay before response
B Bus wire length
C Memory chip count
D Work done per time
Throughput measures how much processing a system completes per unit time, such as tasks per second. It improves with parallel cores, good caching, and reduced waiting for memory or I/O.
Latency is best described as
A Waiting time delay
B Total storage size
C CPU instruction count
D Cache line size
Latency is the delay between requesting data or service and receiving a response. Lower latency improves responsiveness, especially for memory accesses and real-time device operations.
Which factor generally increases system throughput when tasks can run in parallel
A Smaller cache size
B Lower bus width
C More CPU cores
D Slower clock rate
Multiple cores can execute multiple threads or processes at the same time. When software supports parallelism, more cores increase total work completed per unit time, improving throughput.
MIPS is a rough measure of
A Memory access latency
B Instruction rate
C Disk capacity growth
D Screen refresh speed
MIPS means Million Instructions Per Second and estimates instruction execution speed. It is a rough metric and can be misleading across different architectures because instructions vary in complexity.
FLOPS mainly measures
A Floating-point speed
B Integer operations only
C Network packet rate
D Printer pages count
FLOPS measures floating-point operations per second, important in scientific computing and graphics. Higher FLOPS generally indicates better ability to handle heavy mathematical calculations efficiently.
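Both metrics are simple rates, work count divided by elapsed time; the counts and timing below are assumed figures, not measurements of any real CPU.

```python
# MIPS and FLOPS as rates: operations completed divided by time taken.
# All figures are assumed examples.
instructions = 8_000_000_000   # instructions retired during the run
flops_done = 2_000_000_000     # floating-point operations performed
seconds = 4.0

mips = instructions / seconds / 1e6    # millions of instructions / s
gflops = flops_done / seconds / 1e9    # billions of FP operations / s
print(mips, gflops)  # 2000.0 MIPS, 0.5 GFLOPS
```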
A bottleneck in system performance means
A CPU always fastest
B All parts equal speed
C One part limits overall
D Cache never used
A bottleneck occurs when one component, such as slow storage or memory, restricts overall system speed. Even with a fast CPU, the system slows while waiting for the limiting component.
Benchmarking is mainly used to
A Increase RAM size
B Repair motherboard faults
C Update BIOS settings
D Compare performance results
Benchmarking runs standard tests to measure and compare system speed under known conditions. It helps identify faster systems and also shows weak areas, though results vary with workload type.
RISC architecture generally uses
A Complex instructions
B Simple instructions
C No instruction set
D No registers used
RISC focuses on simple, often fixed-length instructions that execute efficiently and support pipelining. It relies on registers for fast access and usually needs more instructions to do complex work.
CISC architecture commonly provides
A Fewer addressing modes
B Only one instruction
C Complex instructions
D No memory access
CISC includes instructions that perform multiple steps in a single instruction, such as memory access plus arithmetic. This can reduce program instruction count but increases decoding complexity.
An interrupt is best described as
A Signal needing attention
B CPU speed booster
C Memory storage location
D Disk formatting action
An interrupt is a signal from hardware or software that requests CPU attention. The CPU pauses the current task, services the interrupt routine, and then returns to continue earlier execution.
Interrupt-driven I/O mainly reduces
A Screen resolution
B RAM capacity
C CPU polling work
D Clock frequency
Interrupt-driven I/O allows the CPU to do other tasks until a device signals it needs service. This reduces constant polling and improves efficiency, especially when devices operate slower than the CPU.
DMA mainly benefits I/O by
A Increasing ALU speed
B Increasing ROM size
C Lowering cache hits
D Reducing CPU involvement
DMA transfers bulk data between memory and an I/O device without CPU handling each byte. The CPU is freed for other work and is typically interrupted only when the transfer completes.
Immediate addressing mode means
A Operand is constant
B Operand is pointer
C Operand in memory
D Operand in disk
Immediate addressing includes the actual value inside the instruction itself. This is fast because it avoids extra memory access, making it useful for constants used in calculations and comparisons.
Direct addressing mode means
A Operand in instruction
B Operand always in register
C Address given in instruction
D Address never used
In direct addressing, the instruction contains the memory address of the operand. The CPU uses this address to fetch the operand from memory during execution.
Indirect addressing mode means
A Operand equals opcode
B Address points to address
C Data in register only
D Cache stores address
Indirect addressing uses a pointer. The instruction gives an address that contains the effective address of the operand. It supports flexible memory use but often needs extra memory access.
Register addressing mode uses operand from
A CPU register
B Main memory
C Disk storage
D ROM only
Register addressing specifies that the operand is stored in a CPU register. Since registers are fastest, this mode improves performance by avoiding slower memory access during execution.
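The four modes above can be contrasted in one toy operand-fetch routine; the instruction format ("mode", value), the register names, and the memory layout are invented purely for illustration.

```python
# Toy operand fetch for the four addressing modes discussed above.
regs = {"R1": 42}
mem = {10: 99, 20: 10}  # address 20 holds a pointer to address 10

def fetch_operand(mode, value):
    if mode == "immediate":   # operand is the constant itself
        return value
    if mode == "register":    # operand sits in a CPU register
        return regs[value]
    if mode == "direct":      # instruction carries the operand's address
        return mem[value]
    if mode == "indirect":    # instruction carries the address of an address
        return mem[mem[value]]
    raise ValueError(mode)

print(fetch_operand("immediate", 7))    # 7  (no memory access)
print(fetch_operand("register", "R1"))  # 42 (no memory access)
print(fetch_operand("direct", 10))      # 99 (one memory access)
print(fetch_operand("indirect", 20))    # mem[mem[20]] = mem[10] = 99 (two)
```

Note how the memory-access count grows from immediate (zero) to indirect (two), which is exactly the speed trade-off the explanations describe.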
ISA is best described as
A Keyboard layout standard
B Monitor display protocol
C CPU instruction interface
D Disk file system type
Instruction Set Architecture defines the instructions, registers, data formats, and addressing modes visible to software. It acts as the interface allowing programs to run on CPUs that implement that ISA.
A microprocessor is generally
A CPU on a chip
B Full computer case
C Hard disk drive
D Network router chip
A microprocessor integrates CPU logic on one chip. It typically needs external memory and peripherals for a complete system and is designed for general-purpose computation.
A microcontroller usually contains
A Only cache memory
B CPU plus peripherals
C Only control bus
D Only ALU block
Microcontrollers integrate CPU, memory, and I/O peripherals on one chip. They are common in embedded systems where compact size, low power use, and device control are required.
Multiprocessor basics refer to
A Multiple keyboards used
B Multiple hard disks only
C Multiple monitors attached
D Multiple CPUs working
Multiprocessor systems have more than one processor sharing workload. They improve throughput with parallel processing and are common in servers and high-performance systems running many tasks.
Cache mapping is mainly about
A CPU fan positioning
B RAM module slots
C Cache placement rules
D Disk partition tables
Cache mapping determines where a memory block can be placed in cache. Methods like direct-mapped and set-associative balance speed, hardware complexity, and the chance of cache misses.
Direct-mapped cache means a memory block can go to
A One fixed line
B Any cache line
C Any line in a set
D Any register
In direct mapping, each memory block maps to exactly one cache line based on address bits. This is simple and fast, but conflict misses can happen when blocks compete for that line.
Set-associative cache means a memory block can go into
A One fixed line
B Any cache line
C Only main memory
D Any line in one set
Set-associative cache divides cache into sets. A block maps to one set but can be stored in any line within that set, reducing conflicts compared to direct-mapped cache.
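The placement rules from the last two questions reduce to modular arithmetic; the cache sizes here (8 lines, 2-way so 4 sets) are assumed for the sketch.

```python
# Where may a memory block land? Assumed sizes: 8 cache lines total,
# compared as direct-mapped vs 2-way set-associative (4 sets).
NUM_LINES = 8
NUM_SETS = 4  # 2-way associative: 8 lines / 2 ways per set

def direct_mapped_line(block):
    return block % NUM_LINES   # exactly one legal line per block

def set_assoc_set(block):
    return block % NUM_SETS    # one set; either line inside it is legal

# Blocks 3 and 11 collide in direct mapping (both forced to line 3)...
print(direct_mapped_line(3), direct_mapped_line(11))  # 3 3
# ...but in the 2-way cache they share set 3 and can occupy its 2 ways.
print(set_assoc_set(3), set_assoc_set(11))            # 3 3
```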
The main purpose of an I/O interface is to
A Increase ALU operations
B Connect CPU and device
C Store instructions permanently
D Reduce clock cycles
An I/O interface links external devices with the system bus and CPU. It manages control signals, buffering, and timing so slower devices can transfer data reliably without corrupting communication.
The “stored program” idea enabled computers to
A Use only ROM
B Remove the CPU
C Run different programs easily
D Stop using memory
Stored-program concept stores instructions in memory like data, so changing memory contents changes the program. This makes computers flexible, allowing many different programs to run on the same hardware.