Operating Systems: Internals and Design Principles, Fifth Edition
Answers to Review Questions and Problems
Chapter 2
Operating System Overview
Review Questions
2.1 Convenience: An operating system makes a computer more convenient to use. Efficiency: An operating system allows the computer system resources to be used in an efficient manner. Ability to evolve: An operating system should be constructed in such a way as to permit the effective development, testing, and introduction of new system functions without interfering with service.
2.5 The execution context, or process state, is the internal data by which the operating system is able to supervise and control the process. This internal information is kept separate from the process, because the operating system maintains information that the process is not permitted to access. The context includes all of the information that the operating system needs to manage the process and that the processor needs to execute the process properly. The context includes the contents of the various processor registers, such as the program counter and data registers. It also includes information of use to the operating system, such as the priority of the process and whether the process is waiting for the completion of a particular I/O event.
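Purely as an illustration (not from the text), the execution context can be pictured as a per-process record; the Python sketch below uses hypothetical field names to show the kind of information the OS saves and restores on a process switch.

    # Illustrative sketch of an execution context; field names are hypothetical
    # and real systems keep many more fields than this.
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class ExecutionContext:
        pid: int                  # process identifier
        program_counter: int      # address of the next instruction to execute
        registers: List[int]      # saved contents of the data/address registers
        priority: int             # scheduling priority used by the OS
        state: str                # e.g. "Ready", "Running", "Blocked"
        pending_io: str = ""      # I/O event the process is waiting on, if any

    # On a process switch the OS saves the running process's registers and
    # program counter here, and later restores them to resume the process.
    ctx = ExecutionContext(pid=42, program_counter=0x4000, registers=[0] * 8,
                           priority=3, state="Ready")
    print(ctx)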
Problems
2.1 The answers are the same for (a) and (b). Assume that although processor operations cannot overlap, I/O operations can.
1 Job: TAT = NT Processor utilization = 50%
2 Jobs: TAT = NT Processor utilization = 100%
4 Jobs: TAT = (2N – 1)NT Processor utilization = 100%
2.4 A system call is used by an application program to invoke a function provided by the operating system. Typically, the system call results in transfer to a system program that runs in kernel mode.
Chapter 3
Process Description and Control
Review Questions
3.5 Swapping involves moving part or all of a process from main memory to disk. When none of the processes in main memory is in the Ready state, the operating system swaps one of the blocked processes out onto disk into a suspend queue, so that another process may be brought into main memory to execute.
3.10 The user mode has restrictions on the instructions that can be executed and the memory areas that can be accessed. This is to protect the operating system from damage or alteration. In kernel mode, the operating system does not have these restrictions, so that it can perform its tasks.
Problems
3.1 • Creation and deletion of both user and system processes. The processes in the system can execute concurrently for information sharing, computation speedup, modularity, and convenience. Concurrent execution requires a mechanism for process creation and deletion. The required resources are given to the process when it is created, or allocated to it while it is running. When the process terminates, the OS needs to reclaim any reusable resources.
• Suspension and resumption of processes. In process scheduling, the OS needs to change a process's state to waiting or ready when it is waiting for some resource. When the required resource becomes available, the OS needs to move the process back to the ready state so that it can be scheduled to resume execution.
• Provision of mechanisms for process synchronization. Cooperating processes may share data. Concurrent access to shared data may result in data inconsistency. The OS has to provide mechanisms for process synchronization to ensure the orderly execution of cooperating processes, so that data consistency is maintained.
• Provision of mechanisms for process communication. The processes executing under the OS may be either independent processes or cooperating processes. Cooperating processes must have the means to communicate with each other.
• Provision of mechanisms for deadlock handling. In a multiprogramming environment, several processes may compete for a finite number of resources. If a deadlock occurs, the waiting processes will never leave their waiting state, resources are wasted, and the jobs will never be completed.
3.3 Figure 9.3 shows the result for a single blocked queue. The figure readily generalizes to multiple blocked queues.
Chapter 4
Threads, SMP, and Microkernels
Review Questions
4.2 Less state information is involved.
4.5 Address space, file resources, execution privileges are examples.
4.6 1. Thread switching does not require kernel mode privileges because all of the thread management data structures are within the user address space of a single process. Therefore, the process does not switch to the kernel mode to do thread management. This saves the overhead of two mode switches (user to kernel; kernel back to user). 2. Scheduling can be application specific. One application may benefit most from a simple round-robin scheduling algorithm, while another might benefit from a priority-based scheduling algorithm. The scheduling algorithm can be tailored to the application without disturbing the underlying OS scheduler. 3. ULTs can run on any operating system. No changes are required to the underlying kernel to support ULTs. The threads library is a set of application-level utilities shared by all applications.
4.7 1. In a typical operating system, many system calls are blocking. Thus, when a ULT executes a system call, not only is that thread blocked, but also all of the threads within the process are blocked. 2. In a pure ULT strategy, a multithreaded application cannot take advantage of multiprocessing. A kernel assigns one process to only one processor at a time. Therefore, only a single thread within a process can execute at a time.
Problems
4.2 Because, with ULTs, the thread structure of a process is not visible to the operating system, which only schedules on the basis of processes.
Chapter 5
Concurrency: Mutual Exclusion and Synchronization
Review Questions
5.1 Communication among processes, sharing of and competing for resources, synchronization of the activities of multiple processes, and allocation of processor time to processes.
5.9 A binary semaphore may only take on the values 0 and 1. A general semaphore may take on any integer value.
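As a hedged illustration of the difference (using Python's threading module rather than anything from the text): a semaphore initialized to 1 behaves as a binary semaphore guarding a single resource, while a semaphore initialized to 3 is a general (counting) semaphore admitting up to three holders at once. The resource names are made up.

    # Binary vs. general (counting) semaphore, illustrated with threading.Semaphore.
    import threading
    import time

    printer = threading.Semaphore(1)   # binary: effectively only the values 0 and 1
    buffers = threading.Semaphore(3)   # general: counts the three free buffers

    def use_printer(i):
        with printer:                  # semWait ... semSignal around the critical section
            time.sleep(0.01)
            print(f"job {i} used the printer")

    def use_buffer(i):
        with buffers:                  # up to three threads may hold a buffer at once
            time.sleep(0.01)
            print(f"job {i} used a buffer")

    threads = [threading.Thread(target=use_printer, args=(i,)) for i in range(3)]
    threads += [threading.Thread(target=use_buffer, args=(i,)) for i in range(5)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()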
Problems
5.2 ABCDE; ABDCE; ABDEC; ADBCE; ADBEC; ADEBC;
DEABC; DAEBC; DABEC; DABCE
5.5 Consider the case in which turn equals 0 and P(1) sets blocked[1] to true and then finds blocked[0] set to false. P(0) will then set blocked[0] to true, find turn = 0, and enter its critical section. P(1) will then assign 1 to turn and will also enter its critical section.
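The interleaving described above can be replayed step by step. The sketch below is only a deterministic replay of that schedule (the blocked[]/turn protocol itself is paraphrased from the problem statement, which is not reproduced here); it ends with both processes inside their critical sections.

    # Replay of the failure scenario for the flawed blocked[]/turn protocol.
    blocked = [False, False]   # blocked[i]: process i wants to enter
    turn = 0                   # whose turn it is
    in_cs = [False, False]     # which processes are inside the critical section

    # P1: sets blocked[1], then observes blocked[0] == False, so it leaves the
    # inner busy-wait and is about to execute "turn := 1".
    blocked[1] = True
    assert blocked[0] is False

    # P0: sets blocked[0], observes turn == 0, and enters its critical section.
    blocked[0] = True
    assert turn == 0
    in_cs[0] = True

    # P1: now assigns turn := 1, re-tests its loop condition (turn == 1 holds),
    # and also enters its critical section.
    turn = 1
    in_cs[1] = True

    print("both processes in their critical sections:", all(in_cs))   # True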
Chapter 6
Concurrency: Deadlock and Starvation
Review Questions
6.2 Mutual exclusion. Only one process may use a resource at a time. Hold and wait. A process may hold allocated resources while awaiting assignment of others. No preemption. No resource can be forcibly removed from a process holding it.
6.3 The above three conditions, plus: Circular wait. A closed chain of processes exists, such that each process holds at least one resource needed by the next process in the chain.
Problems
6.4 a. The "still needs" matrix (claim minus allocation) is:
     0 0 0 0
     0 7 5 0
     6 6 2 2
     2 0 0 2
     0 3 2 0
b. to d. Running the banker's algorithm, we see that the processes can finish in the order p1, p4, p5, p2, p3.
e. Change available to (2,0,0,0) and p3's row of "still needs" to (6,5,2,2). Now p1, p4, p5 can finish, but with available now at (4,6,9,8) neither p2's nor p3's "still needs" can be satisfied. So it is not safe to grant p3's request.
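The safety test behind parts b through e can be sketched in a few lines. The "still needs" matrix below is the one from part (a); the allocation and available vectors are an assumption, reconstructed from the arithmetic quoted in part (e) (releasing p1, p4, and p5 on top of available (2,0,0,0) yields (4,6,9,8)), so treat them as illustrative rather than as the problem's official table.

    # Sketch of the banker's algorithm safety test used in parts b-e.
    def safe_sequence(available, allocation, still_needs):
        work, n = list(available), len(allocation)
        done, order = [False] * n, []
        changed = True
        while changed:
            changed = False
            for i in range(n):
                if not done[i] and all(r <= w for r, w in zip(still_needs[i], work)):
                    # process i can run to completion and return its allocation
                    work = [w + a for w, a in zip(work, allocation[i])]
                    done[i] = True
                    order.append("p%d" % (i + 1))
                    changed = True
        return order if all(done) else None    # None means the state is not safe

    still_needs = [[0,0,0,0], [0,7,5,0], [6,6,2,2], [2,0,0,2], [0,3,2,0]]  # part (a)
    allocation  = [[0,0,1,2], [2,0,0,0], [0,0,3,4], [2,3,5,4], [0,3,3,2]]  # assumed
    available   = [2, 1, 0, 0]                                             # assumed

    print(safe_sequence(available, allocation, still_needs))
    # -> ['p1', 'p4', 'p5', 'p2', 'p3'], the order given in parts b to d

    # Part e: tentatively grant p3 one unit of the second resource and re-test.
    alloc_e = [row[:] for row in allocation]; alloc_e[2][1] += 1
    needs_e = [row[:] for row in still_needs]; needs_e[2][1] -= 1
    print(safe_sequence([2, 0, 0, 0], alloc_e, needs_e))   # -> None, so deny it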
6.5 1. W = (2 1 0 0)
2. Mark P3; W = (2 1 0 0) + (0 1 2 0) = (2 2 2 0)
3. Mark P2; W = (2 2 2 0) + (2 0 0 1) = (4 2 2 1)
4. Mark P1; no deadlock detected
Chapter 7
Memory Management
Review Questions
7.1 Relocation, protection, sharing, logical organization, physical organization.
7.7 A logical address is a reference to a memory location independent of the current assignment of data to memory; a translation must be made to a physical address before the memory access can be achieved. A relative address is a particular example of logical address, in which the address is expressed as a location relative to some known point, usually the beginning of the program. A physical address, or absolute address, is an actual location in main memory.
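A minimal sketch of the translation step for relative addresses, assuming simple base-register (relocation) hardware with a bounds check; the base and limit values are invented for the example.

    # Relative address -> physical (absolute) address with a base register.
    def to_physical(relative_addr, base, limit):
        if not (0 <= relative_addr < limit):
            raise MemoryError("address falls outside the process's partition")
        return base + relative_addr

    base, limit = 0x20000, 0x8000          # program loaded at 128 KiB, 32 KiB long
    print(hex(to_physical(0x0044, base, limit)))   # -> 0x20044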
Problems
7.6 a. The 40M block fits into the second hole, with a starting address of 80M. The 20M block fits into the first hole, with a starting address of 20M. The 10M block is placed at location 120M.
b. The three starting addresses are 230M, 20M, and 160M, for the 40M, 20M, and 10M blocks, respectively.
c. The three starting addresses are 80M, 120M, and 160M, for the 40M, 20M, and 10M blocks, respectively.
d. The three starting addresses are 80M, 230M, and 360M, for the 40M, 20M, and 10M blocks, respectively.
7.12 a. The number of bytes in the logical address space is (2^16 pages) × (2^10 bytes/page) = 2^26 bytes. Therefore, 26 bits are required for the logical address.
b. A frame is the same size as a page: 2^10 bytes.
c. The number of frames in main memory is (2^32 bytes of main memory)/(2^10 bytes/frame) = 2^22 frames. So 22 bits are needed to specify the frame.
d. There is one entry for each page in the logical address space. Therefore there are 2^16 entries.
e. In addition to the valid/invalid bit, 22 bits are needed to specify the frame location in main memory, for a total of 23 bits.
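The bit counts in 7.12 follow directly from the stated sizes; a short check (assuming the same 2^16 pages, 2^10-byte pages, and 2^32 bytes of main memory) is below.

    # Check of the arithmetic in 7.12.
    page_bytes = 2 ** 10        # bytes per page (and per frame)
    page_count = 2 ** 16        # pages in the logical address space
    phys_bytes = 2 ** 32        # bytes of main memory

    logical_bits = (page_count * page_bytes).bit_length() - 1   # 26
    frame_count  = phys_bytes // page_bytes                     # 2**22
    frame_bits   = frame_count.bit_length() - 1                 # 22
    print(logical_bits, frame_bits, frame_bits + 1)             # 26 22 23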
Chapter 8
Virtual Memory
Review Questions
8.1 Simple paging: all of the pages of a process must be in main memory for the process to run, unless overlays are used. Virtual memory paging: not all pages of a process need be in main memory frames for the process to run; pages may be read in as needed.
8.2 A phenomenon in virtual memory schemes, in which the processor spends most of its time swapping pieces rather than executing instructions.
Problems
8.1 a. Split binary address into virtual page number and offset; use VPN as index into page table; extract page frame number; concatenate offset to get physical memory address
b. (i) 1052 = 1 × 1024 + 28 maps to VPN 1 in PFN 7 (7 × 1024 + 28 = 7196)
(ii) 2221 = 2 × 1024 + 173 maps to VPN 2: page fault
(iii) 5499 = 5 × 1024 + 379 maps to VPN 5 in PFN 0 (0 × 1024 + 379 = 379)
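The translation in part (a) can be sketched directly. Only the page-table entries implied by part (b) are filled in (VPN 1 in frame 7, VPN 5 in frame 0, VPN 2 not resident); treating every other page as not resident is an assumption for the example.

    # Split a virtual address into VPN and offset, look up the frame, and
    # rebuild the physical address (1 KB pages).
    PAGE_SIZE = 1024
    page_table = {1: 7, 5: 0}              # VPN -> PFN for the resident pages

    def translate(vaddr):
        vpn, offset = divmod(vaddr, PAGE_SIZE)
        if vpn not in page_table:
            return "VPN %d: page fault" % vpn
        return page_table[vpn] * PAGE_SIZE + offset

    for addr in (1052, 2221, 5499):
        print(addr, "->", translate(addr))
    # 1052 -> 7196, 2221 -> VPN 2: page fault, 5499 -> 379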
8.4 a. PFN 3 since loaded longest ago at time 20
b. PFN 1 since referenced longest ago at time 160
c. Clear R in PFN 3 (oldest loaded), clear R in PFN 2 (next oldest loaded), victim PFN is 0 since R=0
d. Replace the page in PFN 3 since VPN 3 (in PFN 3) is used furthest in the future
e. There are 6 faults, indicated by *
Reference string (page faults marked with *):
         4*   0    0    0    2*   4*   2    1*   0*   3*   2
VPN of pages in memory in LRU order, most recently used first (the leftmost
column is the state before the first reference):
    3    4    0    0    0    2    4    2    1    0    3    2
    0    3    4    4    4    0    2    4    2    1    0    3
    2    0    3    3              0    0    4    2    1    0
    1    2                                  4    2    1
Chapter 9
Uniprocessor Scheduling
Review Questions
9.1 Long-term scheduling: The decision to add to the pool of processes to be executed. Medium-term scheduling: The decision to add to the number of processes that are partially or fully in main memory. Short-term scheduling: The decision as to which available process will be executed by the processor.
9.3 Turnaround time is the total time that a request spends in the system (waiting time plus service time). Response time is the elapsed time between the submission of a request and the time the response begins to appear as output.
Problems
9.1 In each schedule below, each position represents one time unit and the letter is the process running during that time unit.
FCFS               A A A B B B B B C C D D D D D E E E E E
RR, q = 1          A B A B C A B C B D B D E D E D E D E E
RR, q = 4          A A A B B B B C C B D D D D E E E E D E
SPN                A A A C C B B B B B D D D D D E E E E E
SRT                A A A C C B B B B B D D D D D E E E E E
HRRN               A A A B B B B B C C D D D D D E E E E E
Feedback, q = 1    A B A C B C A B B D B D E D E D E D E E
Feedback, q = 2^i  A B A A C B B C B B D D E D D E E D E E
Process                   A      B      C      D      E      Mean
Arrival time (Ta)         0      1      3      9      12
Service time (Ts)         3      5      2      5      5

FCFS            Tf        3      8      10     15     20
                Tr        3.00   7.00   7.00   6.00   8.00   6.20
                Tr/Ts     1.00   1.40   3.50   1.20   1.60   1.74

RR q = 1        Tf        6      11     8      18     20
                Tr        6.00   10.00  5.00   9.00   8.00   7.60
                Tr/Ts     2.00   2.00   2.50   1.80   1.60   1.98

RR q = 4        Tf        3      10     9      19     20
                Tr        3.00   9.00   6.00   10.00  8.00   7.20
                Tr/Ts     1.00   1.80   3.00   2.00   1.60   1.88

SPN             Tf        3      10     5      15     20
                Tr        3.00   9.00   2.00   6.00   8.00   5.60
                Tr/Ts     1.00   1.80   1.00   1.20   1.60   1.32

SRT             Tf        3      10     5      15     20
                Tr        3.00   9.00   2.00   6.00   8.00   5.60
                Tr/Ts     1.00   1.80   1.00   1.20   1.60   1.32

HRRN            Tf        3      8      10     15     20
                Tr        3.00   7.00   7.00   6.00   8.00   6.20
                Tr/Ts     1.00   1.40   3.50   1.20   1.60   1.74

FB q = 1        Tf        7      11     6      18     20
                Tr        7.00   10.00  3.00   9.00   8.00   7.40
                Tr/Ts     2.33   2.00   1.50   1.80   1.60   1.85

FB q = 2^i      Tf        4      10     8      18     20
                Tr        4.00   9.00   5.00   9.00   8.00   7.00
                Tr/Ts     1.33   1.80   2.50   1.80   1.60   1.81

(Ta = arrival time, Ts = service time, Tf = finish time, Tr = turnaround time, Tr/Ts = normalized turnaround time.)
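The RR, q = 1 row of the trace above can be reproduced with a short simulation. The sketch below uses the arrival and service times from the table (Ta = 0, 1, 3, 9, 12; Ts = 3, 5, 2, 5, 5) and assumes, as a tie-breaking rule, that a process arriving exactly when a quantum expires is placed in the ready queue ahead of the preempted process.

    # Round-robin, q = 1, for the workload in the tables above.
    from collections import deque

    arrival = {"A": 0, "B": 1, "C": 3, "D": 9, "E": 12}
    service = {"A": 3, "B": 5, "C": 2, "D": 5, "E": 5}

    remaining = dict(service)
    ready, queued = deque(), set()
    trace, finish = [], {}
    t = 0

    def admit(now):
        # enqueue processes whose arrival time is exactly 'now'
        for p in "ABCDE":
            if arrival[p] == now and p not in queued:
                ready.append(p)
                queued.add(p)

    admit(0)
    while remaining:
        if not ready:            # CPU idle (does not happen with this workload)
            t += 1
            admit(t)
            continue
        p = ready.popleft()
        trace.append(p)
        t += 1
        admit(t)                 # new arrivals go in ahead of the preempted process
        remaining[p] -= 1
        if remaining[p] == 0:
            del remaining[p]
            finish[p] = t
        else:
            ready.append(p)

    print("".join(trace))        # ABABCABCBDBDEDEDEDEE
    print({p: finish[p] - arrival[p] for p in sorted(finish)})
    # {'A': 6, 'B': 10, 'C': 5, 'D': 9, 'E': 8}, matching the Tr row for RR, q = 1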
9.16 a. Sequence with which processes will get 1 min of processor time:
    1    2    3    4    5    Elapsed time
    A    B    C    D    E    5
    A    B    C    D    E    10
    A    B    C    D    E    15
    A    B         D    E    19
    A    B         D    E    23
    A    B         D    E    27
    A    B              E    30
    A    B              E    33
    A    B              E    36
    A                   E    38
    A                   E    40
    A                   E    42
    A                        43
    A                        44
    A                        45
The turnaround time for each process:
A = 45 min, B = 35 min, C = 13 min, D = 26 min, E = 42 min
The average turnaround time is (45+35+13+26+42) / 5 = 32.2 min
b.
    Priority    Job    Turnaround Time
    3           B      9
    4           E      9 + 12 = 21
    6           A      21 + 15 = 36
    7           C      36 + 3 = 39
    9           D      39 + 6 = 45
The average turnaround time is: (9+21+36+39+45) / 5 = 30 min
c.
    Job    Turnaround Time
    A      15
    B      15 + 9 = 24
    C      24 + 3 = 27
    D      27 + 6 = 33
    E      33 + 12 = 45
The average turnaround time is: (15+24+27+33+45) / 5 = 28.8 min
d.
    Running Time    Job    Turnaround Time
    3               C      3
    6               D      3 + 6 = 9
    9               B      9 + 9 = 18
    12              E      18 + 12 = 30
    15              A      30 + 15 = 45
The average turnaround time is: (3+9+18+30+45) / 5 = 21 min
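The averages in parts b through d are easy to check: with every job available at time 0 and run to completion one at a time, each job's turnaround time is the running sum of service times up to and including it. The sketch below uses the service times from the tables above (A = 15, B = 9, C = 3, D = 6, E = 12 minutes).

    # Check of parts b-d: turnaround = cumulative service time for a given order.
    from itertools import accumulate

    service = {"A": 15, "B": 9, "C": 3, "D": 6, "E": 12}

    def avg_turnaround(order):
        finishes = list(accumulate(service[j] for j in order))
        return sum(finishes) / len(finishes)

    print(avg_turnaround("BEACD"))   # part b, highest priority first: 30.0
    print(avg_turnaround("ABCDE"))   # part c, FCFS arrival order:     28.8
    print(avg_turnaround("CDBEA"))   # part d, shortest job first:     21.0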
Chapter 10
Multiprocessor and Real-Time Scheduling
Review Questions
10.1 Fine: Parallelism inherent in a single instruction stream. Medium: Parallel processing or multitasking within a single application. Coarse: Multiprocessing of concurrent processes in a multiprogramming environment. Very Coarse: Distributed processing across network nodes to form a single computing environment. Independent: Multiple unrelated processes.
10.4 A hard real-time task is one that must meet its deadline; otherwise it will cause undesirable damage or a fatal error to the system. A soft real-time task has an associated deadline that is desirable but not mandatory; it still makes sense to schedule and complete the task even if it has passed its deadline.
Problems
10.1 For fixed priority, we consider the case in which the priority order is A, B, C (A highest). Each position below represents five time units, and the letter is the process running during that interval. The first row is fixed-priority scheduling; the second row is earliest-deadline scheduling using completion deadlines.
Fixed priority:       A A B B A A C C A A B B A A C C A A A A
Earliest deadline:    B B A C C A C A A B B A A C C C A A
For fixed priority scheduling, process C always misses its deadline.
10.4
Once T3 enters its critical section, it is assigned a priority higher than T1. When T3 leaves its critical section, it is preempted by T1.
Chapter 11
I/O Management and Disk Scheduling
Review Questions
11.1 Programmed I/O: The processor issues an I/O command, on behalf of a process, to an I/O module; that process then busy-waits for the operation to be completed before proceeding. Interrupt-driven I/O: The processor issues an I/O command on behalf of a process, continues to execute subsequent instructions, and is interrupted by the I/O module when the latter has completed its work. The subsequent instructions may be in the same process, if it is not necessary for that process to wait for the completion of the I/O. Otherwise, the process is suspended pending the interrupt and other work is performed. Direct memory access (DMA): A DMA module controls the exchange of data between main memory and an I/O module. The processor sends a request for the transfer of a block of data to the DMA module and is interrupted only after the entire block has been transferred.
11.5 Seek time, rotational delay, and transfer time; the seek time and the rotational delay together make up the access time.
Problems
11.1 If the calculation time exactly equals the I/O time (which is the most favorable situation), both the processor and the peripheral device running simultaneously will take half as long as if they ran separately. Formally, let C be the calculation time for the entire program and let T be the total I/O time required. Then the best possible running time with buffering is max(C, T), while the running time without buffering is C + T; and of course (C + T)/2 ≤ max(C, T) ≤ C + T. Source: [KNUT97].
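As a small numeric illustration of the bound (the values of C and T here are invented): with C = 90 and T = 60, the buffered run takes max(C, T) = 90 while the unbuffered run takes C + T = 150, and 75 ≤ 90 ≤ 150 as the inequality requires.

    # Numeric check of (C + T)/2 <= max(C, T) <= C + T for one made-up case.
    C, T = 90, 60                       # computation time and total I/O time
    with_buffering = max(C, T)          # 90
    without_buffering = C + T           # 150
    assert (C + T) / 2 <= with_buffering <= without_buffering
    print(with_buffering, without_buffering)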
11.3 Disk head is initially moving in the direction of decreasing track number:
                FIFO                    SSTF                    SCAN                    C-SCAN
    Next track  Tracks      Next track  Tracks      Next track  Tracks      Next track  Tracks
    accessed    traversed   accessed    traversed   accessed    traversed   accessed    traversed
    27          73          110         10          64          36          64          36
    129         102         120         10          41          23          41          23
    110         19          129         9           27          14          27          14
    186         76          147         18          10          17          10          17
    147         39          186         39          110         100         186         176
    41          106         64          122         120         10          147         39
    10          31          41          23          129         9           129         18
    64          54          27          14          147         18          120         9
    120         56          10          17          186         39          110         10
    Average     61.8        Average     29.1        Average     29.6        Average     38.0
If the disk head is initially moving in the direction of increasing track number, only the SCAN and C-SCAN results change:
                SCAN                    C-SCAN
    Next track  Tracks      Next track  Tracks
    accessed    traversed   accessed    traversed
    110         10          110         10
    120         10          120         10
    129         9           129         9
    147         18          147         18
    186         39          186         39
    64          122         10          176
    41          23          27          17
    27          14          41          14
    10          17          64          23
    Average     29.1        Average     35.1
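The averages in both tables can be recomputed with a short sketch. It assumes the request queue, in arrival order, is the FIFO column above (27, 129, 110, 186, 147, 41, 10, 64, 120) with the head starting at track 100, and that SCAN/C-SCAN first sweep in the stated initial direction.

    # Recompute the average seek lengths in the two tables above.
    START = 100
    REQS = [27, 129, 110, 186, 147, 41, 10, 64, 120]   # arrival (FIFO) order

    def average(order, start=START):
        moves = [abs(b - a) for a, b in zip([start] + order, order)]
        return sum(moves) / len(moves)

    def sstf(reqs, start=START):
        pending, pos, order = list(reqs), start, []
        while pending:
            pos = min(pending, key=lambda t: abs(t - pos))
            order.append(pos)
            pending.remove(pos)
        return order

    def scan(reqs, start=START, increasing=False):
        lower = sorted((t for t in reqs if t <= start), reverse=True)
        upper = sorted(t for t in reqs if t > start)
        return upper + lower if increasing else lower + upper

    def cscan(reqs, start=START, increasing=False):
        # after reaching one end, jump to the far end and keep the same direction
        lower = sorted((t for t in reqs if t <= start), reverse=True)
        upper = sorted(t for t in reqs if t > start)
        return upper + sorted(lower) if increasing else lower + sorted(upper, reverse=True)

    for name, order in [("FIFO", REQS), ("SSTF", sstf(REQS)),
                        ("SCAN down", scan(REQS)), ("C-SCAN down", cscan(REQS)),
                        ("SCAN up", scan(REQS, increasing=True)),
                        ("C-SCAN up", cscan(REQS, increasing=True))]:
        print("%-12s average = %.1f" % (name, average(order)))
    # FIFO 61.8, SSTF 29.1, SCAN down 29.6, C-SCAN down 38.0,
    # SCAN up 29.1, C-SCAN up 35.1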
Chapter 12
File Management
Review Questions
12.1 A field is the basic element of data containing a single value. A record is a collection of related fields that can be treated as a unit by some application program.
12.5 Pile: Data are collected in the order in which they arrive. Each record consists of one burst of data. Sequential file: A fixed format is used for records. All records are of the same length, consisting of the same number of fixed-length fields in a particular order. Because the length and position of each field is known, only the values of fields need to be stored; the field name and length for each field are attributes of the file structure. Indexed sequential file: The indexed sequential file maintains the key characteristic of the sequential file: records are