Thursday, December 13, 2007

Assignment No.3

Page 104
Question #4
a.) What is the cause of thrashing?
-Thrashing is caused by under-allocation of the minimum number of pages required by a process, forcing it to continuously page fault.

b.) How does the system detect thrashing?
- The system can detect thrashing by evaluating the level of CPU utilization as compared to the level of multiprogramming.

c.) Once it detects thrashing, what can the system do to eliminate this problem?
-It can be eliminated by reducing the level of multiprogramming.
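The detect-and-respond idea above can be sketched in a few lines. This is a toy illustration, not a real OS algorithm; the threshold values and function names are made up for the example.

```python
# Toy sketch of thrashing detection: low CPU utilization combined with a
# high degree of multiprogramming is the classic thrashing signature.
# The thresholds (30% CPU, 10 processes) are illustrative assumptions.

def detect_thrashing(cpu_utilization, degree_of_multiprogramming,
                     low_cpu=0.30, high_mp=10):
    """True when the CPU is starved while many processes are resident."""
    return cpu_utilization < low_cpu and degree_of_multiprogramming >= high_mp

def respond(degree_of_multiprogramming):
    """Eliminate thrashing by suspending a process, i.e. reducing
    the level of multiprogramming."""
    return max(1, degree_of_multiprogramming - 1)

print(detect_thrashing(0.15, 12))  # True: CPU idle, many processes paging
print(detect_thrashing(0.90, 12))  # False: CPU busy, no thrashing
```

If detection fires, the system keeps calling something like `respond` until CPU utilization recovers.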

Page 56
Question 1-3

a.) Multiprogramming. Why is it used?
-Multiprogramming is the technique of running several programs at a time using timesharing. It allows a computer to do several things at the same time. Multiprogramming creates logical parallelism.

b.) Internal Fragmentation. How does it occur?
-Internal fragmentation is the space wasted inside allocated memory blocks because of restrictions on the allowed sizes of allocated blocks. Allocated memory may be slightly larger than requested memory; this size difference is memory internal to a partition, but not being used.
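A small sketch of how that size difference arises, assuming a hypothetical allocator with a fixed 512-byte block size (the block size is an illustrative assumption):

```python
# Internal fragmentation: the allocator rounds each request up to a
# whole number of fixed-size blocks; the rounded-up slack is wasted.

BLOCK_SIZE = 512

def allocated_size(request):
    """Round a request up to the next multiple of BLOCK_SIZE."""
    blocks = -(-request // BLOCK_SIZE)  # ceiling division
    return blocks * BLOCK_SIZE

request = 600
granted = allocated_size(request)  # 1024 bytes (two blocks)
wasted = granted - request         # 424 bytes internal to the partition
print(granted, wasted)             # 1024 424
```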

c.) External Fragmentation. How does it occur?
-External Fragmentation happens when a dynamic memory allocation algorithm allocates some memory and a small piece is left over that cannot be effectively used. If too much external fragmentation occurs, the amount of usable memory is drastically reduced. Total memory space exists to satisfy a request, but it is not contiguous.
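The "total space exists but is not contiguous" situation can be shown with a made-up free list (the hole sizes below are invented for the example):

```python
# External fragmentation: 300 KB is free in total, enough for a 200 KB
# request, yet no single hole is 200 KB, so a contiguous allocation fails.

free_holes = [100, 80, 70, 50]   # sizes of free holes, in KB
request = 200

total_free = sum(free_holes)           # 300 KB free in total
largest_hole = max(free_holes)         # 100 KB is the biggest hole
can_allocate = largest_hole >= request

print(total_free >= request)  # True: enough memory in total
print(can_allocate)           # False: but it is not contiguous
```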

d.) Compaction. Why is it needed?
e.) Relocation. How often should it be performed?
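Compaction addresses the external fragmentation described above: sliding the allocated segments together merges the scattered holes into one contiguous free region. A toy sketch (the segment layout and `compact` helper are invented for illustration):

```python
# Memory modeled as a list of (name, size) segments; None marks a hole.
# Compaction moves allocated segments to the front and coalesces all
# holes into a single free region at the end.

def compact(memory):
    """Return a compacted copy of the memory layout."""
    allocated = [(name, size) for name, size in memory if name is not None]
    hole_total = sum(size for name, size in memory if name is None)
    return allocated + [(None, hole_total)]

memory = [("A", 100), (None, 50), ("B", 30), (None, 70), ("C", 20)]
print(compact(memory))
# [('A', 100), ('B', 30), ('C', 20), (None, 120)]
```

Note that moving segments like this only works if processes can be relocated, which is why compaction and relocation are usually discussed together.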

2.) The advantages of fixed-partition memory allocation are as follows:
- It avoids wasted CPU idle time
- The operating system is easy to implement

3.) The disadvantages of fixed-partition memory allocation are as follows:
- The degree of multiprogramming is fixed: only one job per partition
- Main storage is wasted: some partitions may go unused

Thursday, November 29, 2007

Assignment No.2


The Windows NT Virtual Memory Manager
In Windows NT, responsibility for managing the relationship between the virtual organization of memory (as seen by applications) and the physical organization of memory (as seen by the operating system) falls on a component of the Windows NT executive called the virtual memory (VM) manager.
Memory Management Goals
Windows NT is a portable, secure, multithreaded, multiprocessing operating system. As a result, its virtual memory manager must:
• Be compatible with multiple processor types
• Protect the NT Executive from applications
• Protect applications from each other
• Provide mechanisms for programs to efficiently share physical memory (RAM)
• Be efficient
An Application's View of Memory
In Windows NT, applications access memory using a 32-bit linear addressing scheme. This scheme is sometimes referred to as the flat memory model because applications view memory as one linear (or flat) array of memory locations. Applications address memory using simple 32-bit offsets from address zero (0). Since a 32-bit offset can specify 2^32 memory addresses, each application can access up to 4 GB of (virtual) memory. The range of addresses an application can access is called the application's address space (Figure 7).
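The 4 GB figure follows directly from the width of the offset, as a quick calculation shows:

```python
# A 32-bit offset can name 2**32 distinct byte addresses,
# which works out to exactly 4 GB of virtual address space.
addresses = 2 ** 32
print(addresses)                 # 4294967296 addressable bytes
print(addresses // (1024 ** 3))  # 4 (GB)
```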
The 32-bit flat memory model makes Windows NT portable because it is compatible with the memory addressing of processors such as the MIPS R4000 and DEC Alpha. It also simplifies porting of applications originally written for flat memory model environments such as Unix and the Apple Macintosh.
The flat memory model used in Windows NT contrasts sharply with the segmented model used in Windows for MS-DOS. In Windows for MS-DOS, memory is broken into many differently sized segments, each with a maximum length of 64K. This has long been a major difficulty for developers of Windows applications, because switching segments is slow and cumbersome. The difficulty has led to many 64K limits in software, including the Windows 3.1 resource heap. The 32-bit flat memory model does away with all of these constraints.


Most corporations have UNIX systems for handling heavy-duty applications. Microsoft Windows 2000 has been rapidly gaining ground because it provides better performance at lower cost. But companies aren't going to replace UNIX with Windows 2000; they've invested too much in their UNIX systems over the years. So many companies are choosing to add Windows 2000 to support departmental functions. It's expensive and inefficient to run two separate systems side by side, so network and IT managers need to learn how to integrate Windows 2000 with their existing UNIX systems. This book shows them how to do just that and much more. The expert authors of this book approach Windows 2000 from a UNIX systems administrator's point of view.

Tuesday, November 20, 2007

Assignment No.1

The operating system (OS) can be considered as the most important program that runs on a computer. Every
general-purpose computer must have an operating system to provide a software platform on top of which other
programs (the application software) can run. It is also the main control program of a computer that schedules
tasks, manages storage, and handles communication with peripherals. The central module of an operating
system is the 'kernel'. It is the part of the operating system that loads first, and it remains in main memory.
Because it stays in memory, it is important for the kernel to be as small as possible while still providing all the
essential services required by other parts of the operating system and applications. Typically, the kernel is
responsible for memory management, process and task management, and disk management.
In general, application software must be written to run on top of a particular operating system. Your choice of operating system, therefore, determines to a great extent the applications you can run. For PCs, the most popular operating systems are Windows 95/98, MS-DOS (Microsoft Disk Operating System), and OS/2, but others are available, such as Linux and BeOS. This article explains the operating system clearly, in a way that is easy for the reader to understand.

2.) Two reasons why a regional bank would buy six server computers rather than one supercomputer:
  • Six server computers can store more data than one supercomputer.
  • Six server computers are easier to manage and use.