Tuesday, January 6, 2009

System Resource Management

Post your comments or questions below this subject.

ENDTERM STUFF
follow the link below-->>

http://itsrm.blogspot.com/

24 comments:

Lavz said...

Group 3:
Member:
Rosita Matandag
Lovina Pahuriray
Elizabeth Garido
Evangeline Garrido
John Sevilla

Topic: How Hard Disks Work
Sub Topic:
Introduction to How Hard Disks Work
Hard Disk Basics
Cassette Tape vs. Hard Disk
Capacity and Performance


danely said...

Group 1:
Member:
Jimmy Baling
Rhea Damiray
Danely Baja
Cris Bautista


Topic: How ROM Works?
Sub Topic:
Introduction to How ROM Works
ROM Types
ROM at Work
PROM
EPROM
EEPROMs and Flash

HOW ROM WORKS
Read-only memory (ROM), also known as firmware, is an integrated circuit programmed with specific data when it is manufactured. ROM chips are used not only in computers, but in most other electronic items as well.

In this article, you will learn about the different types of ROM and how each works. This article is one in a series dealing with computer memory.
ROM Types
There are five basic ROM types:
• ROM
• PROM
• EPROM
• EEPROM
• Flash memory
Each type has unique characteristics, which you'll learn about in this article, but they are all types of memory with two things in common:
• Data stored in these chips is nonvolatile -- it is not lost when power is removed.
• Data stored in these chips is either unchangeable or requires a special operation to change (unlike RAM, which can be changed as easily as it is read).
This means that removing the power source from the chip will not cause it to lose any data.
ROM at Work
Similar to RAM, ROM chips (Figure 1) contain a grid of columns and rows. But where the columns and rows intersect, ROM chips are fundamentally different from RAM chips. While RAM uses transistors to turn on or off access to a capacitor at each intersection, ROM uses a diode to connect the lines if the value is 1. If the value is 0, then the lines are not connected at all.
A diode normally allows current to flow in only one direction and has a certain threshold, known as the forward breakover, that determines how much current is required before the diode will pass it on. In silicon-based items such as processors and memory chips, the forward breakover voltage is approximately 0.6 volts. By taking advantage of the unique properties of a diode, a ROM chip can send a charge that is above the forward breakover down the appropriate column with the selected row grounded to connect at a specific cell. If a diode is present at that cell, the charge will be conducted through to the ground, and, under the binary system, the cell will be read as being "on" (a value of 1). The neat part of ROM is that if the cell's value is 0, there is no diode at that intersection to connect the column and row, so the charge on the column does not get transferred to the row.
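To make that concrete, here is a minimal Python sketch of the read operation just described; the grid contents and function name are ours, purely for illustration:

    # A ROM is a grid of rows and columns; a diode at an intersection
    # stores a 1, the absence of a diode stores a 0.
    diodes = {(0, 0), (0, 2), (1, 1)}   # fixed when the chip is manufactured

    def read_bit(row, column):
        """Charge the column, ground the row: conduction means 1."""
        return 1 if (row, column) in diodes else 0

    print(read_bit(0, 0), read_bit(0, 1))   # 1 0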
As you can see, the way a ROM chip works necessitates the programming of perfect and complete data when the chip is created. You cannot reprogram or rewrite a standard ROM chip. If it is incorrect, or the data needs to be updated, you have to throw it away and start over. Creating the original template for a ROM chip is often a laborious process full of trial and error. But the benefits of ROM chips outweigh the drawbacks. Once the template is completed, the actual chips can cost as little as a few cents each. They use very little power, are extremely reliable and, in the case of most small electronic devices, contain all the necessary programming to control the device. A great example is the small chip in the singing fish toy. This chip, about the size of your fingernail, contains the 30-second song clips in ROM and the control codes to synchronize the motors to the music.
PROM
Creating ROM chips totally from scratch is time-consuming and very expensive in small quantities. For this reason, mainly, developers created a type of ROM known as programmable read-only memory (PROM). Blank PROM chips can be bought inexpensively and coded by anyone with a special tool called a programmer.
PROM chips (Figure 2) have a grid of columns and rows just as ordinary ROMs do. The difference is that every intersection of a column and row in a PROM chip has a fuse connecting them. A charge sent through a column will pass through the fuse in a cell to a grounded row indicating a value of 1. Since all the cells have a fuse, the initial (blank) state of a PROM chip is all 1s. To change the value of a cell to 0, you use a programmer to send a specific amount of current to the cell. The higher voltage breaks the connection between the column and row by burning out the fuse. This process is known as burning the PROM.

Figure 2
PROMs can only be programmed once. They are more fragile than ROMs. A jolt of static electricity can easily cause fuses in the PROM to burn out, changing essential bits from 1 to 0. But blank PROMs are inexpensive and are great for prototyping the data for a ROM before committing to the costly ROM.
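A blank-all-1s, burn-to-0 PROM can be sketched the same way. Again this is our own toy model, not any real programmer's interface:

    class PROM:
        def __init__(self, rows, columns):
            # Blank PROM: every intersection still has its fuse, so all 1s.
            self.fuses = {(r, c) for r in range(rows) for c in range(columns)}

        def burn(self, row, column):
            """Send the higher programming current: the fuse blows, the bit becomes 0."""
            self.fuses.discard((row, column))   # one-way: a blown fuse never comes back

        def read_bit(self, row, column):
            return 1 if (row, column) in self.fuses else 0

    chip = PROM(4, 4)
    chip.burn(0, 1)
    print(chip.read_bit(0, 0), chip.read_bit(0, 1))   # 1 0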
EPROM
Working with ROMs and PROMs can be a wasteful business. Even though they are inexpensive per chip, the cost can add up over time. Erasable programmable read-only memory (EPROM) addresses this issue. EPROM chips can be rewritten many times. Erasing an EPROM requires a special tool that emits a certain frequency of ultraviolet (UV) light. EPROMs are configured using an EPROM programmer that provides voltage at specified levels depending on the type of EPROM used.
Once again we have a grid of columns and rows. In an EPROM, the cell at each intersection has two transistors. The two transistors are separated from each other by a thin oxide layer. One of the transistors is known as the floating gate and the other as the control gate. The floating gate's only link to the row (wordline) is through the control gate. As long as this link is in place, the cell has a value of 1. To change the value to 0 requires a curious process called Fowler-Nordheim tunneling. Tunneling is used to alter the placement of electrons in the floating gate. An electrical charge, usually 10 to 13 volts, is applied to the floating gate. The charge comes from the column (bitline), enters the floating gate and drains to a ground.
This charge causes the floating-gate transistor to act like an electron gun. The excited electrons are pushed through and trapped on the other side of the thin oxide layer, giving it a negative charge. These negatively charged electrons act as a barrier between the control gate and the floating gate. A device called a cell sensor monitors the level of the charge passing through the floating gate. If the flow through the gate is greater than 50 percent of the charge, it has a value of 1. When the charge passing through drops below the 50-percent threshold, the value changes to 0. A blank EPROM has all of the gates fully open, giving each cell a value of 1.
To rewrite an EPROM, you must erase it first. To erase it, you must supply a level of energy strong enough to break through the negative electrons blocking the floating gate. In a standard EPROM, this is best accomplished with UV light at a wavelength of 253.7 nanometers. Because this particular wavelength will not penetrate most plastics or glasses, each EPROM chip has a quartz window on top of it. The EPROM must be very close to the eraser's light source, within an inch or two, to work properly.
An EPROM eraser is not selective; it will erase the entire EPROM. The EPROM must be removed from the device it is in and placed under the UV light of the EPROM eraser for several minutes. An EPROM that is left under the light too long can become over-erased. In such a case, the EPROM's floating gates are charged to the point that they are unable to hold the electrons at all.
EEPROMs and Flash Memory
Though EPROMs are a big step up from PROMs in terms of reusability, they still require dedicated equipment and a labor-intensive process to remove and reinstall them each time a change is necessary. Also, changes cannot be made incrementally to an EPROM; the whole chip must be erased. Electrically erasable programmable read-only memory (EEPROM) chips remove the biggest drawbacks of EPROMs.
In EEPROMs:
• The chip does not have to be removed to be rewritten.
• The entire chip does not have to be completely erased to change a specific portion of it.
• Changing the contents does not require additional dedicated equipment.
Instead of using UV light, you can return the electrons in the cells of an EEPROM to normal with the localized application of an electric field to each cell. This erases the targeted cells of the EEPROM, which can then be rewritten. EEPROMs are changed 1 byte at a time, which makes them versatile but slow. In fact, EEPROM chips are too slow to use in many products that make quick changes to the data stored on the chip.
Manufacturers responded to this limitation with Flash memory, a type of EEPROM that uses in-circuit wiring to erase by applying an electrical field to the entire chip or to predetermined sections of the chip called blocks. Flash memory works much faster than traditional EEPROMs because it writes data in chunks, usually 512 bytes in size, instead of 1 byte at a time. See How Flash Memory Works to learn more about this type of ROM and its applications.
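As a rough sketch of the granularity difference described above (our illustration; real parts add timing, status polling and wear management):

    BLOCK_SIZE = 512   # flash erases in blocks; 512 bytes is used as the example size

    def eeprom_update(chip, address, data):
        """EEPROM: rewrite one byte at a time -- flexible but slow."""
        for i, byte in enumerate(data):
            chip[address + i] = byte            # each byte is its own erase/write cycle

    def flash_update(chip, block_index, data):
        """Flash: erase the whole block in one action, then rewrite it."""
        start = block_index * BLOCK_SIZE
        chip[start:start + BLOCK_SIZE] = [0xFF] * BLOCK_SIZE   # one "flash" erase
        chip[start:start + len(data)] = data

    chip = [0] * 1024                    # a 1 KB part, for illustration
    flash_update(chip, 1, [7, 7, 7])     # one erase covers bytes 512-1023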

Unknown said...

Group Members:
Arnel Arangco
Stevy Franz Saldavia
Murphy Europeo
Rahf Jason Vallena

Topic: How Microprocessors Work
Sub Topic:
History of the Microprocessor
What is a Chip?
Inside a Microprocessor
RAM and ROM
Performance


A microprocessor - also known as a CPU or Central Processing Unit - is a complete computation engine that is fabricated on a single chip. The first microprocessor was the Intel 4004, introduced in 1971. The 4004 was not very powerful - all it could do was add and subtract, and it could only do that four bits at a time. But it was amazing that everything was on one chip. Prior to the 4004, engineers built computers either from collections of chips or from discrete components (transistors wired one at a time). The 4004 powered one of the first portable electronic calculators.

The first microprocessor to make it into a home computer was the Intel 8080, a complete 8-bit computer on one chip introduced in 1974. The first microprocessor to make a real splash in the market was the Intel 8088, introduced in 1979 and incorporated into the IBM PC (which first appeared in 1981). If you are familiar with the PC market and its history, you know that the PC market moved from the 8088 to the 80286 to the 80386 to the 80486 to the Pentium to the Pentium II to the Pentium III. All of these microprocessors are made by Intel and all of them are improvements on the basic design of the 8088. The Pentium III can execute any piece of code that ran on the original 8088, but it runs about 3,000 times faster!

The following table helps you to understand the differences between the different processors that Intel has introduced over the years.
Name | Date | Transistors | Microns | Clock speed | Data width | MIPS | Notes
8080 | 1974 | 6,000 | 6 | 2 MHz | 8 bits | 0.64 | First home computers
8088 | 1979 | 29,000 | 3 | 5 MHz | 16 bits, 8-bit bus | 0.33 | First IBM PC
80286 | 1982 | 134,000 | 1.5 | 6 MHz | 16 bits | 1 | IBM ATs; up to 2.66 MIPS at 12 MHz
80386 | 1985 | 275,000 | 1.5 | 16 MHz | 32 bits | 5 | Eventually 33 MHz, 11.4 MIPS
80486 | 1989 | 1,200,000 | 1 | 25 MHz | 32 bits | 20 | Eventually 50 MHz, 41 MIPS
Pentium | 1993 | 3,100,000 | 0.8 | 60 MHz | 32 bits, 64-bit bus | 100 | Eventually 200 MHz
Pentium II | 1997 | 7,500,000 | 0.35 | 233 MHz | 32 bits, 64-bit bus | ~400 | Eventually 450 MHz, ~800 MIPS
Pentium III | 1999 | 9,500,000 | 0.25 | 450 MHz | 32 bits, 64-bit bus | ~1,000 |

Information about this table:

* The date is the year that the processor was first introduced. Many processors are re-introduced at higher clock speeds for many years after the original release date.
* Transistors is the number of transistors on the chip. You can see that the number of transistors on a single chip has risen steadily over the years.
* Microns is the width, in microns, of the smallest wire on the chip. For comparison, a human hair is 100 microns thick. As the feature size on the chip goes down, the number of transistors rises.
* Clock speed is the maximum rate that the chip can be clocked. Clock speed will make more sense in the next section.
* Data Width is the width of the ALU. An 8-bit ALU can add/subtract/multiply/etc. two 8-bit numbers, while a 32-bit ALU can manipulate 32-bit numbers. An 8-bit ALU would have to execute 4 instructions to add two 32-bit numbers, while a 32-bit ALU can do it in one instruction (see the sketch after this list). In many cases the external data bus is the same width as the ALU, but not always. The 8088 had a 16-bit ALU and an 8-bit bus, while the modern Pentiums fetch data 64 bits at a time for their 32-bit ALUs.
* MIPS stands for Millions of Instructions Per Second, and is a rough measure of the performance of a CPU. Modern CPUs can do so many different things that MIPS ratings lose a lot of their meaning, but you can get a general sense of the relative power of the CPUs from this column.
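Here is the sketch referred to in the Data Width note: a minimal Python model (ours, for illustration) of an 8-bit ALU adding two 32-bit numbers in four carry-linked steps, where a 32-bit ALU needs only one:

    def add32_with_8bit_alu(a, b):
        """Add two 32-bit numbers using only 8-bit additions.

        Mimics an 8-bit ALU: four add steps, each handling one byte
        and passing its carry into the next, least significant first.
        """
        result = 0
        carry = 0
        for byte in range(4):
            a_byte = (a >> (8 * byte)) & 0xFF
            b_byte = (b >> (8 * byte)) & 0xFF
            total = a_byte + b_byte + carry   # one 8-bit ALU operation
            carry = total >> 8                # carry out of this byte
            result |= (total & 0xFF) << (8 * byte)
        return result & 0xFFFFFFFF

    # A 32-bit ALU does the same work in a single instruction:
    assert add32_with_8bit_alu(0x12345678, 0x0F0F0F0F) == 0x21436587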

From this table you can see that, in general, there is a relationship between clock speed and MIPS. The maximum clock speed is a function of the manufacturing process and delays within the chip. There is also a relationship between the number of transistors and MIPS. For example, the 8088 clocked at 5 MHz but only executed at 0.33 MIPS (about 1 instruction per 15 clock cycles). Modern processors can often execute at a rate of 2 instructions per clock cycle. That improvement is directly related to the number of transistors on the chip and will make more sense in the next section.

What is a Chip?
A chip is also called an integrated circuit. Generally it is a small, thin piece of silicon onto which the transistors making up the microprocessor have been etched. A chip might be as large as an inch on a side and can contain as many as 10 million transistors. Simpler processors might consist of a few thousand transistors etched onto a chip just a few millimeters square. See How Silicon Chips Are Made for details on how transistors are fabricated on silicon.


Inside a Microprocessor

To understand how a microprocessor works, it is helpful to look inside and learn about the logic used to create one. In the process you can also learn about assembly language - the native language of a microprocessor - and many of the things that engineers can do to boost the speed of a processor.

A microprocessor executes a collection of machine instructions that tell the processor what to do. Based on the instructions, a microprocessor does three basic things:

* Using its ALU (Arithmetic/Logic Unit), a microprocessor can perform mathematical operations like addition, subtraction, multiplication and division. Modern microprocessors contain complete floating point processors that can perform extremely sophisticated operations on large floating point numbers.
* A microprocessor can move data from one memory location to another.
* A microprocessor can make decisions and jump to a new set of instructions based on those decisions.

There may be very sophisticated things that a microprocessor does, but those are its three basic activities. The following diagram shows an extremely simple microprocessor capable of doing those three things:

This is about as simple as a microprocessor gets. This microprocessor has:

* an address bus (that may be 8, 16 or 32 bits wide) that sends an address to memory
* a data bus (that may be 8, 16 or 32 bits wide) that can send data to memory or receive data from memory
* an RD (Read) and WR (Write) line to tell the memory whether it wants to set or get the addressed location
* a clock line that lets a clock pulse sequence the processor
* a reset line that resets the program counter to zero and restarts execution

Let's assume that both the address and data buses are 8 bits wide in this example.

Here are the components of this simple microprocessor:

* Registers A, B and C are simply latches made out of flip-flops (See the section on "edge-triggered latches" in How Boolean Logic Works for details).
* The address latch is just like registers A, B and C.
* The program counter is a latch with the extra ability to increment by 1 when told to do so, and also to reset to zero when told to do so.
* The ALU could be as simple as an 8-bit adder (See the section on adders in How Boolean Logic Works for details), or it might be able to add, subtract, multiply and divide 8-bit values. Let's assume the latter here.
* The test register is a special latch that can hold values from comparisons performed in the ALU. An ALU can normally compare two numbers and determine if they are equal, if one is greater than the other, etc. The test register can also normally hold a carry bit from the last stage of the adder. It stores these values in flip-flops and then the instruction decoder can use the values to make decisions.
* There are 6 boxes marked "3-State" in the diagram. These are tri-state buffers. A tri-state buffer can pass a 1, a 0 or it can essentially disconnect its output (imagine a switch that totally disconnects the output line from the wire the output is heading toward). A tri-state buffer allows multiple outputs to connect to a wire, but only one of them to actually drive a 1 or a 0 onto the line.
* The instruction register and instruction decoder are responsible for controlling all of the other components.

Although they are not shown in this diagram, there would be control lines from the instruction decoder that would:

* Tell the A register to latch the value currently on the data bus.
* Tell the B register to latch the value currently on the data bus.
* Tell the C register to latch the value currently on the data bus.
* Tell the program counter register to latch the value currently on the data bus.
* Tell the address register to latch the value currently on the data bus.
* Tell the instruction register to latch the value currently on the data bus.
* Tell the program counter to increment.
* Tell the program counter to reset to zero.
* Activate any of the 6 tri-state buffers (6 separate lines).
* Tell the ALU what operation to perform.
* Tell the test register to latch the ALU's test bits.
* Activate the RD line.
* Activate the WR line.

Coming into the instruction decoder are the bits from the test register and the clock line, as well as the bits from the instruction register.
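To tie the pieces together, here is a toy fetch-decode-execute loop in Python. The instruction set is invented for illustration; no real chip works exactly this way, but the roles of the program counter, instruction register and registers A, B and C match the description above:

    # Toy CPU: 8-bit address/data buses, 256 bytes of memory,
    # registers A, B, C, a program counter and an instruction register.
    memory = [0] * 256

    # Made-up opcodes, for illustration only.
    LOADA, LOADB, ADD_AB, STOREC, HALT = 1, 2, 3, 4, 0

    # A tiny program: C = mem[200] + mem[201]; store at mem[202]; halt.
    memory[0:8] = [LOADA, 200, LOADB, 201, ADD_AB, STOREC, 202, HALT]
    memory[200], memory[201] = 30, 12

    def run():
        a = b = c = 0
        pc = 0                            # the reset line sets the program counter to 0
        while True:
            ir = memory[pc]               # fetch: instruction register latches the opcode
            if ir == LOADA:               # decode, then execute
                a = memory[memory[pc + 1]]; pc += 2
            elif ir == LOADB:
                b = memory[memory[pc + 1]]; pc += 2
            elif ir == ADD_AB:
                c = (a + b) & 0xFF; pc += 1     # ALU adds; result latched into C
            elif ir == STOREC:
                memory[memory[pc + 1]] = c; pc += 2
            elif ir == HALT:
                return c

    print(run())   # prints 42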

RAM and ROM

The previous section talked about the address and data buses, as well as the RD and WR lines. These buses and lines connect either to RAM or ROM - generally both. In our sample microprocessor we have an address bus 8 bits wide and a data bus 8 bits wide. That means that the microprocessor can address 2^8 = 256 bytes of memory, and it can read or write 8 bits of the memory at a time. Let's assume that this simple microprocessor has 128 bytes of ROM starting at address 0 and 128 bytes of RAM starting at address 128.
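A sketch of the address decoding that memory map implies (our own illustration, assuming the 128-byte ROM / 128-byte RAM split above):

    # 8-bit address space: 2^8 = 256 bytes total.
    ROM = [0] * 128            # addresses 0-127, contents fixed at manufacture
    RAM = [0] * 128            # addresses 128-255, readable and writable

    def read(address):
        """What the RD line triggers: ROM below 128, RAM at 128 and above."""
        return ROM[address] if address < 128 else RAM[address - 128]

    def write(address, value):
        """What the WR line triggers: writes to ROM are simply ignored."""
        if address >= 128:
            RAM[address - 128] = value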

ROM stands for Read-Only Memory. A ROM chip is programmed with a permanent collection of pre-set bytes. The address bus tells the ROM chip which byte to get and place on the data bus. When the RD line changes state, the ROM chip presents the selected byte onto the data bus.

RAM stands for Random Access Memory. RAM contains bytes of information, and the microprocessor can read or write to those bytes depending on whether the RD or WR line is signaled. One problem with today's RAM chips is that they forget everything once the power goes off. That is why the computer needs ROM.

By the way, nearly all computers contain some amount of ROM. It is possible to create a simple computer that contains no RAM (many microcontrollers do this by placing a handful of RAM bytes on the processor chip itself), but it is generally impossible to create one that contains no ROM. On a PC, the ROM is called the BIOS (Basic Input/Output System). When the microprocessor starts, it begins executing instructions it finds in the BIOS. The BIOS instructions do things like testing the hardware in the machine, and then the BIOS goes to the hard disk to fetch the boot sector (see How Hard Disks Work for details). This boot sector is another small program, and the BIOS stores it in RAM after reading it off the disk. The microprocessor then begins executing the boot sector's instructions from RAM. The boot sector program will tell the microprocessor to fetch something else from the hard disk into RAM, which the microprocessor then executes, and so on. This is how the microprocessor loads and executes the entire operating system.

Performance

The number of transistors available has a huge effect on the performance of a processor. As seen earlier, a typical instruction in a processor like an 8088 took 15 clock cycles to execute. Because of the design of the multiplier, it took approximately 80 cycles just to do one 16-bit multiplication on the 8088. With more transistors, much more powerful multipliers capable of single-cycle speeds become possible.

More transistors also allow a technology called pipelining. In a pipelined architecture, instruction execution overlaps. So even though it might take 5 clock cycles to execute each instruction, there can be 5 instructions in various stages of execution simultaneously. That way it looks like one instruction completes every clock cycle.
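The arithmetic behind that claim is easy to check. Assuming a 5-stage pipeline and no stalls (a simplification):

    def cycles(n_instructions, stages=5, pipelined=True):
        """Clock cycles to run a program on a toy 5-stage machine.

        Without pipelining, every instruction takes all 5 cycles by itself.
        With pipelining, the first instruction takes 5 cycles, then one
        instruction completes on every following cycle (no stalls assumed).
        """
        if pipelined:
            return stages + (n_instructions - 1)
        return stages * n_instructions

    print(cycles(1000, pipelined=False))  # 5000 cycles
    print(cycles(1000, pipelined=True))   # 1004 cycles -- nearly 1 per clock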

Many modern processors have multiple instruction decoders, each with its own pipeline. This allows multiple instruction streams, which means more than one instruction can complete during each clock cycle. This technique can be quite complex to implement, so it takes lots of transistors.

The trend in processor design has been toward full 32-bit ALUs with fast floating point processors built in and pipelined execution with multiple instruction streams. There has also been a tendency toward special instructions (like the MMX instructions) that make certain operations particularly efficient. There has also been the addition of hardware virtual memory support and L1 caching on the processor chip. All of these trends push up the transistor count, leading to the multi-million transistor powerhouses available today. These processors can execute about one billion instructions per second!

Bacaron Sharolyn said...

Group 4 - BSIT IV-A
Member:
Sharolyn Bacaron
Lovely Jarina
Jennisa Lomocso
Myline Muyco

Topic: How Hard Disks Work (continuation of Group 3's report)

Sub Topic:
Inside: Electronics Board
Inside: Beneath the Board
Inside: Platters and Heads
Storing the Data


INSIDE: ELECTRONICS BOARD
The best way to understand how a hard disk works is to take a look inside. (Note that OPENING A HARD DISK RUINS IT, so this is not something to try at home unless you have a defunct drive.)

The drive itself is a sealed aluminum box with controller electronics attached to one side. The electronics control the read/write mechanism and the motor that spins the platters. The electronics also assemble the magnetic domains on the drive into bytes (reading) and turn bytes into magnetic domains (writing). The electronics are all contained on a small board that detaches from the rest of the drive.


INSIDE: BENEATH THE BOARD

Underneath the board are the connections for the motor that spins the platters, as well as a highly-filtered vent hole that lets internal and external air pressures equalize.
* The platters - These typically spin at 3,600 or 7,200 rpm when the drive is operating. They are manufactured to amazing tolerances and are mirror-smooth.

* The arm - This holds the read/write heads and is controlled by the mechanism in the upper-left corner. The arm is able to move the heads from the hub to the edge of the drive. The arm and its movement mechanism are extremely light and fast. The arm on a typical hard-disk drive can move from hub to edge and back up to 50 times per second -- it is an amazing thing to watch!


INSIDE: PLATTERS AND HEADS

In order to increase the amount of information the drive can store, most hard disks have multiple platters. The mechanism that moves the arms on a hard disk has to be incredibly fast and precise. It can be constructed using a high-speed linear motor. Many drives use a "voice coil" approach -- the same technique used to move the cone of a speaker on your stereo is used to move the arm.

* HARD DISK PLATTER - The magnetic surface of each platter is divided into small sub-micrometer-sized magnetic regions, each of which is used to represent a single binary unit of information. A typical magnetic region on a hard disk platter (in 2006) is about 200-250 nanometers wide (in the radial direction of the platter) and extends about 25-30 nanometers in the down-track direction (the circumferential direction on the platter), corresponding to about 100 billion bits (100 gigabits) per square inch of disk area. The material of the main magnetic medium layer is usually a cobalt-based alloy. In today's hard drives each of these magnetic regions is composed of a few hundred magnetic grains, which are the base material that gets magnetized. However, future hard drives may use different systems to create the magnetic regions. As a whole, each magnetic region will have a magnetization.

* HARD DISK HEAD - The read/write heads of the hard disk are the interface between the magnetic physical media on which the data is stored and the electronic components that make up the rest of the hard disk (and the PC). The heads do the work of converting bits to magnetic pulses and storing them on the platters, and then reversing the process when the data needs to be read back.

Read/write heads are an extremely critical component in determining the overall performance of the hard disk, since they play such an important role in the storage and retrieval of data. They are usually one of the more expensive parts of the hard disk, and to enable areal densities and disk spin speeds to increase, they have had to evolve from rather humble, clumsy beginnings to being extremely advanced and complicated technology. New head technologies are often the triggering point to increasing the speed and size of modern hard disks.


STORING THE DATA
Data is stored on the surface of a platter in sectors and tracks. Tracks are concentric circles, and sectors are pie-shaped wedges on a track.
A sector contains a fixed number of bytes -- for example, 256 or 512. Either at the drive or the operating system level, sectors are often grouped together into clusters.

The process of low-level formatting a drive establishes the tracks and sectors on the platter. The starting and ending points of each sector are written onto the platter. This process prepares the drive to hold blocks of bytes. High-level formatting then writes the file-storage structures, like the file-allocation table, into the sectors. This process prepares the drive to hold files.
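Once tracks, sectors and sector sizes are fixed, capacity is just multiplication. A small sketch with made-up geometry numbers (illustrative only; real drives hide their true geometry):

    def disk_capacity(platters, tracks_per_surface, sectors_per_track,
                      bytes_per_sector=512, surfaces_per_platter=2):
        """Total bytes = surfaces x tracks x sectors x sector size."""
        surfaces = platters * surfaces_per_platter
        return surfaces * tracks_per_surface * sectors_per_track * bytes_per_sector

    # e.g. 4 platters, 10,000 tracks per surface, 300 sectors per track:
    print(disk_capacity(4, 10_000, 300))   # 12,288,000,000 bytes (~12.3 GB)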

iamme said...

Frequently Asked Questions during Job Interview:

1. Tell me about yourself. Use "Picture Frame Approach"
2. Did you bring your resume?
3. What do you know about our organization?
4. What experience do you have?
5. According to your definition of success, how successful have you been so far?
(Is this person mature and self-aware?)
6. In your current or last position, what were your most significant accomplishments? In your career so far?
7. Had you thought of leaving your present position before? If yes, what do you think held you there?
8. Would you describe a few situations in which your work was criticized?
9. If I spoke with your previous boss, what would he or she say are your greatest strengths and weaknesses?
10. How would you describe your personality?
11. What are your strong points?
12. What are your weak points?
13. In your current or last position, what features did you like most? Least?
14. How did you do in school?
15. What do you look for in a job?
16. How long would it take you to make a meaningful contribution to our firm?
17. How long would you stay with us?
18. If you have never supervised, how do you feel about assuming those responsibilities?


Ma. Cristina M. Lopez
BSIT IV-C

iamme said...

Group 3:
Members:

Alastra, Christian
Leones, Arnold
Lopez, Ma. Cristina
Pergoni, Cristito
Pormiento, Mars Michtam

Topic: How Flash Memory Works

Sub Topic:
Introduction to Flash Memory
Flash Memory: Tunneling and Erasing
Removable Flash Memory Cards
Flash Memory Standards


Flash memory (sometimes called "flash RAM") is a type of constantly-powered nonvolatile memory that can be erased and reprogrammed in units of memory called blocks. It is a variation of electrically erasable programmable read-only memory (EEPROM) which, unlike flash memory, is erased and rewritten at the byte level, which is slower than flash memory updating. Flash memory is often used to hold control code such as the basic input/output system (BIOS) in a personal computer. When BIOS needs to be changed (rewritten), the flash memory can be written to in block (rather than byte) sizes, making it easy to update. On the other hand, flash memory is not useful as random access memory (RAM) because RAM needs to be addressable at the byte (not the block) level.

Flash memory gets its name because the microchip is organized so that a section of memory cells are erased in a single action or "flash." The erasure is caused by Fowler-Nordheim tunneling in which electrons pierce through a thin dielectric material to remove an electronic charge from a floating gate associated with each memory cell. Intel offers a form of flash memory that holds two bits (rather than one) in each memory cell, thus doubling the capacity of memory without a corresponding increase in price.

Flash memory is used in digital cellular phones, digital cameras, LAN switches, PC Cards for notebook computers, digital set-top boxes, embedded controllers, and other devices.

Here are a few examples of flash memory:

* Your computer's BIOS chip
* CompactFlash (most often found in digital cameras)
* SmartMedia (most often found in digital cameras)
* Memory Stick (most often found in digital cameras)
* PCMCIA Type I and Type II memory cards (used as solid-state disks in laptops)
* Memory cards for video game consoles

Flash memory is a type of EEPROM chip, which stands for Electrically Erasable Programmable Read-Only Memory. It has a grid of columns and rows with a cell that has two transistors at each intersection.

Flash Memory: Tunneling and Erasing

Tunneling is used to alter the placement of electrons in the floating gate. An electrical charge, usually 10 to 13 volts, is applied to the floating gate. The charge comes from the column, or bitline, enters the floating gate and drains to a ground.

This charge causes the floating-gate transistor to act like an electron gun. The excited electrons are pushed through and trapped on the other side of the thin oxide layer, giving it a negative charge. These negatively charged electrons act as a barrier between the control gate and the floating gate. A special device called a cell sensor monitors the level of the charge passing through the floating gate. If the flow through the gate is above the 50 percent threshold, it has a value of 1. When the charge passing through drops below the 50-percent threshold, the value changes to 0. A blank EEPROM has all of the gates fully open, giving each cell a value of 1.
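The cell sensor's decision reduces to a threshold comparison; a one-line model (ours), with the charge expressed as a fraction of a fully open gate:

    def read_cell(flow_fraction):
        """Cell sensor: flow above 50% of an open gate reads as 1, below as 0."""
        return 1 if flow_fraction > 0.5 else 0

    print(read_cell(0.9), read_cell(0.2))   # 1 0  (blank cell vs. programmed cell)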

The electrons in the cells of a flash-memory chip can be returned to normal ("1") by the application of an electric field, a higher-voltage charge. Flash memory uses in-circuit wiring to apply the electric field either to the entire chip or to predetermined sections known as blocks. This erases the targeted area of the chip, which can then be rewritten. Flash memory works much faster than traditional EEPROMs because instead of erasing one byte at a time, it erases a block or the entire chip, and then rewrites it.

Removable Flash Memory Cards

While your computer's BIOS chip is the most common form of Flash memory, removable solid-state storage devices are also popular. SmartMedia and CompactFlash cards are both well-known, especially as "electronic film" for digital cameras. Other removable flash-memory products include Sony's Memory Stick, PCMCIA memory cards, and memory cards for video game systems. We'll focus on SmartMedia and CompactFlash, but the essential idea is the same for all of these products -- every one of them is simply a form of flash memory.

There are a few reasons to use flash memory instead of a hard disk:

* It has no moving parts, so it's noiseless.
* It allows faster access.
* It's smaller in size and lighter.

The solid-state floppy-disk card (SSFDC), better known as SmartMedia, was originally developed by Toshiba. SmartMedia cards are available in capacities ranging from 2 MB to 128 MB. The card itself is quite small, approximately 45 mm long, 37 mm wide and less than 1 mm thick.

As shown below, SmartMedia cards are extremely simple. A plane electrode is connected to the flash-memory chip by bonding wires. The flash-memory chip, plane electrode and bonding wires are embedded in a resin using a technique called over-molded thin package (OMTP). This allows everything to be integrated into a single package without the need for soldering.

The OMTP module is glued to a base card to create the actual card. Power and data are carried by the electrode to the flash-memory chip when the card is inserted into a device. A notched corner indicates the power requirements of the SmartMedia card. Looking at the card with the electrode facing up, if the notch is on the left side, the card needs 5 volts. If the notch is on the right side, it requires 3.3 volts.
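That notch convention is simple enough to encode directly; a small helper (ours, following the rule just stated):

    def smartmedia_voltage(notch_side):
        """Electrode facing up: left notch means a 5 V card, right notch means 3.3 V."""
        return 5.0 if notch_side == "left" else 3.3

    print(smartmedia_voltage("left"))    # 5.0
    print(smartmedia_voltage("right"))   # 3.3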

SmartMedia cards erase, write and read memory in small blocks (256- or 512-byte increments). This approach means that they are capable of fast, reliable performance while allowing you to specify which data you wish to keep. They are less rugged than other forms of removable solid-state storage, so you should be very careful when handling and storing them. Because of newer, smaller cards with bigger storage capacities, such as xD-Picture Cards and Secure Digital cards, Toshiba has essentially discontinued the production of SmartMedia cards, so they're now difficult to find.


CompactFlash cards were developed by SanDisk in 1994, and they're different from SmartMedia cards in two important ways:

* They're thicker.
* They utilize a controller chip.

CompactFlash consists of a small circuit board with flash-memory chips and a dedicated controller chip, all encased in a rugged shell that is thicker than a SmartMedia card. CompactFlash cards are 43 mm wide and 36 mm long, and come in two thicknesses: Type I cards are 3.3 mm thick, and Type II cards are 5.5 mm thick.

CompactFlash cards support dual voltage and will operate at either 3.3 volts or 5 volts.

The increased thickness of the card allows for greater storage capacity than SmartMedia cards. CompactFlash sizes range from 8 MB to as much as 100 GB. The onboard controller can increase performance, particularly in devices that have slow processors. The case and controller chip add size, weight and complexity to the CompactFlash card when compared to the SmartMedia card.

Flash Memory Standards

Both SmartMedia and CompactFlash, as well as PCMCIA Type I and Type II memory cards, adhere to standards developed by the Personal Computer Memory Card International Association (PCMCIA). Because of these standards, it is easy to use CompactFlash and SmartMedia products in a variety of devices. You can also buy adapters that allow you to access these cards through a standard floppy drive, USB port or PCMCIA card slot (available in some laptop computers). For example, games for Sony's original PlayStation and the PlayStation 2 are backwards-compatible with the latest console, PlayStation 3, but there is no slot for the memory cards used by the older systems. Gamers who want to import their saved game data on the newest system have to buy an adapter. Sony's Memory Stick is available in a large array of products offered by Sony, and is now showing up in products from other manufacturers as well.

Although standards are flourishing, there are many flash-memory products that are completely proprietary in nature, such as the memory cards in some video game systems. But it is good to know that as electronic components become increasingly interchangeable and are able to communicate with each other (by way of technologies such as Bluetooth), standardized removable memory will allow you to keep your world close at hand.

In September 2006, Samsung announced the development of PRAM -- Phase-change Random Access Memory. This new type of memory combines the fast processing speed of RAM with the non-volatile features of flash memory, leading some to nickname it "Perfect RAM." PRAM is supposed to be 30 times faster than conventional flash memory and have 10 times the lifespan. Samsung plans to make the first PRAM chips commercially available in 2010, with a capacity of 512 MB [source: Numonyx]. They'll probably be used in cell phones and other mobile devices, and may even replace flash memory altogether.


(",)

welvie_kretz said...

Group 6:
Member:
Welvie Tupas
Gerardo Gurtones
Joyce Ann Ello
Gretchen Magbanua

Topic: How Operating Systems Work

How Operating Systems Work

The purpose of an operating system is to organize and control hardware and software so that the device it lives in behaves in a flexible but predictable way.
The Bare Bones

Not every computerized device needs an operating system. Consider the computer that controls a microwave oven: it has one set of tasks to perform, very straightforward input to expect (a numbered keypad and a few pre-set buttons) and simple, never-changing hardware to control. For a computer like this, an operating system would be unnecessary baggage, driving up the development and manufacturing costs significantly and adding complexity where none is required.

For other devices, an operating system creates the ability to:
• serve a variety of purposes
• interact with users in more complicated ways
• keep up with needs that change over time
What Does It Do?
At the simplest level, an operating system does two things:
• It manages the hardware and software resources of the system. In a desktop computer, these resources include such things as the processor, memory, disk space, etc. (On a cell phone, they include the keypad, the screen, the address book, the phone dialer, the battery and the network connection.)
• It provides a stable, consistent way for applications to deal with the hardware without having to know all the details of the hardware.
The first task, managing the hardware and software resources, is very important, as various programs and input methods compete for the attention of the central processing unit (CPU) and demand memory, storage and input/output (I/O) bandwidth for their own purposes. In this capacity, the operating system plays the role of the good parent, making sure that each application gets the necessary resources while playing nicely with all the other applications, as well as husbanding the limited capacity of the system to the greatest good of all the users and applications.
The second task, providing a consistent application interface, is especially important if there is to be more than one of a particular type of computer using the operating system, or if the hardware making up the computer is ever open to change. A consistent application program interface (API) allows a software developer to write an application on one computer and have a high level of confidence that it will run on another computer of the same type, even if the amount of memory or the quantity of storage is different on the two machines.
Even if a particular computer is unique, an operating system can ensure that applications continue to run when hardware upgrades and updates occur. This is because the operating system and not the application is charged with managing the hardware and the distribution of its resources. One of the challenges facing developers is keeping their operating systems flexible enough to run hardware from the thousands of vendors manufacturing computer equipment. Today's systems can accommodate thousands of different printers, disk drives and special peripherals in any possible combination.
What Kinds Are There?
Within the broad family of operating systems, there are generally four types, categorized based on the types of computers they control and the sort of applications they support. The broad categories are:
Real-time operating system (RTOS) - Real-time operating systems are used to control machinery, scientific instruments and industrial systems. An RTOS typically has very little user-interface capability, and no end-user utilities, since the system will be a "sealed box" when delivered for use. A very important part of an RTOS is managing the resources of the computer so that a particular operation executes in precisely the same amount of time every time it occurs. In a complex machine, having a part move more quickly just because system resources are available may be just as catastrophic as having it not move at all because the system is busy.
Single-user, single task - As the name implies, this operating system is designed to manage the computer so that one user can effectively do one thing at a time. The Palm OS for Palm handheld computers is a good example of a modern single-user, single-task operating system.
Single-user, multi-tasking - This is the type of operating system most people use on their desktop and laptop computers today. Microsoft's Windows and Apple's MacOS platforms are both examples of operating systems that will let a single user have several programs in operation at the same time. For example, it's entirely possible for a Windows user to be writing a note in a word processor while downloading a file from the Internet while printing the text of an e-mail message.
Multi-user - A multi-user operating system allows many different users to take advantage of the computer's resources simultaneously. The operating system must make sure that the requirements of the various users are balanced, and that each of the programs they are using has sufficient and separate resources so that a problem with one user doesn't affect the entire community of users. Unix, VMS and mainframe operating systems, such as MVS, are examples of multi-user operating systems.

It's important to differentiate here between multi-user operating systems and single-user operating systems that support networking. Windows 2000 and Novell Netware can each support hundreds or thousands of networked users, but the operating systems themselves aren't true multi-user operating systems. The system administrator is the only "user" for Windows 2000 or Netware. The network support and all of the remote user logins the network enables are, in the overall plan of the operating system, a program being run by the administrative user.
Wake-Up Call
When you turn on the power to a computer, the first program that runs is usually a set of instructions kept in the computer's read-only memory (ROM). This code examines the system hardware to make sure everything is functioning properly. This power-on self test (POST) checks the CPU, memory, and basic input-output systems (BIOS) for errors and stores the result in a special memory location. Once the POST has successfully completed, the software loaded in ROM (sometimes called the BIOS or firmware) will begin to activate the computer's disk drives. In most modern computers, when the computer activates the hard disk drive, it finds the first piece of the operating system: the bootstrap loader.
The bootstrap loader is a small program that has a single function: It loads the operating system into memory and allows it to begin operation. In the most basic form, the bootstrap loader sets up the small driver programs that interface with and control the various hardware subsystems of the computer. It sets up the divisions of memory that hold the operating system, user information and applications. It establishes the data structures that will hold the myriad signals, flags and semaphores that are used to communicate within and between the subsystems and applications of the computer. Then it turns control of the computer over to the operating system.
The operating system's tasks, in the most general sense, fall into six categories:
• Processor management
• Memory management
• Device management
• Storage management
• Application interface
• User interface
Processor Management
The heart of managing the processor comes down to two related issues:
• Ensuring that each process and application receives enough of the processor's time to function properly.
• Using as many processor cycles for real work as is possible.
The basic unit of software that the operating system deals with in scheduling the work done by the processor is either a process or a thread, depending on the operating system.
A process, then, is software that performs some action and can be controlled -- by a user, by other applications or by the operating system.
It is processes, rather than applications, that the operating system controls and schedules for execution by the CPU. In a single-tasking system, the schedule is straightforward. The operating system allows the application to begin running, suspending the execution only long enough to deal with interrupts and user input.
Interrupts are special signals sent by hardware or software to the CPU. It's as if some part of the computer suddenly raised its hand to ask for the CPU's attention in a lively meeting. Sometimes the operating system will schedule the priority of processes so that interrupts are masked -- that is, the operating system will ignore the interrupts from some sources so that a particular job can be finished as quickly as possible. There are some interrupts (such as those from error conditions or problems with memory) that are so important that they can't be ignored. These non-maskable interrupts (NMIs) must be dealt with immediately, regardless of the other tasks at hand.
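A small model may help picture masking. The sketch below is purely illustrative, not a real CPU interface; the names deliver() and masked are made up. It just shows that a maskable interrupt can be deferred while a non-maskable one is always handled.

#include <stdbool.h>
#include <stdio.h>

/* A software caricature of interrupt delivery, not a real CPU interface.
   While ordinary interrupts are masked, only a non-maskable interrupt
   (NMI) gets immediate attention. */
struct interrupt { const char *source; bool maskable; };

static bool masked = true;   /* the OS is finishing a high-priority job */

static void deliver(struct interrupt irq)
{
    if (masked && irq.maskable)
        printf("deferred: %s\n", irq.source);       /* ignored for now */
    else
        printf("handled now: %s\n", irq.source);    /* NMIs can't wait */
}

int main(void)
{
    deliver((struct interrupt){ "keyboard", true });           /* maskable */
    deliver((struct interrupt){ "memory error (NMI)", false });
    return 0;
}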
While interrupts add some complication to the execution of processes in a single-tasking system, the job of the operating system becomes much more complicated in a multi-tasking system. Now, the operating system must arrange the execution of applications so that you believe that there are several things happening at once. This is complicated because the CPU can only do one thing at a time. In order to give the appearance of lots of things happening at the same time, the operating system has to switch between different processes thousands of times a second. Here's how it happens:
1. A process occupies a certain amount of RAM. It also makes use of registers, stacks and queues within the CPU and operating-system memory space.
2. When two processes are multi-tasking, the operating system allots a certain number of CPU execution cycles to one program.
3. After that number of cycles, the operating system makes copies of all the registers, stacks and queues used by that process, and notes the point at which the process paused in its execution.
4. It then loads all the registers, stacks and queues used by the second process and allows it a certain number of CPU cycles.
5. When those are complete, it makes copies of all the registers, stacks and queues used by the second process, and loads the first program again.
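The five steps above can be played out in a toy C program. This is only a sketch: the "registers" are a couple of struct fields, and a real operating system saves far more state and switches thousands of times a second.

#include <stdio.h>

/* A toy model of the switching loop above. Each "process" is just a saved
   program counter and one register; a real OS saves far more state. */
struct task { const char *name; int pc; int reg; };

static struct task cpu;          /* the state currently "inside the CPU" */

static void run_slice(int cycles)
{
    cpu.pc  += cycles;           /* pretend to execute some instructions */
    cpu.reg += cycles;
    printf("%s ran to pc=%d\n", cpu.name, cpu.pc);
}

int main(void)
{
    struct task a = { "process A", 0, 0 }, b = { "process B", 0, 0 };

    for (int i = 0; i < 3; i++) {
        cpu = a; run_slice(100); a = cpu;   /* load A, run it, save a copy */
        cpu = b; run_slice(100); b = cpu;   /* load B, run it, save a copy */
    }
    return 0;
}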
All of the information needed to keep track of a process when switching is kept in a data package called a process control block. The process control block typically contains:
-An ID number that identifies the process
-Pointers to the locations in the program and its data where processing last occurred
-Register contents
-States of various flags and switches
-Pointers to the upper and lower bounds of the memory required for the process
-A list of files opened by the process
-The priority of the process
-The status of all I/O devices needed by the process
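One way to picture a process control block is as a C struct with one field per item in the list above. The field names and sizes below are illustrative guesses, not the layout of any particular operating system.

#include <stdio.h>
#include <stdint.h>

#define MAX_OPEN_FILES 16

/* One field per item in the list above. Names and sizes are illustrative,
   not the layout used by any particular operating system. */
struct process_control_block {
    uint32_t  pid;                        /* ID number for the process      */
    void     *program_counter;            /* where processing last occurred */
    void     *data_pointer;               /* matching position in the data  */
    uint32_t  registers[16];              /* saved register contents        */
    uint32_t  flags;                      /* states of flags and switches   */
    void     *mem_lower, *mem_upper;      /* bounds of the process's memory */
    int       open_files[MAX_OPEN_FILES]; /* files opened by the process    */
    int       priority;                   /* scheduling priority            */
    uint32_t  io_status;                  /* status of needed I/O devices   */
};

int main(void)
{
    printf("one PCB takes %zu bytes here\n",
           sizeof(struct process_control_block));
    return 0;
}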
Memory Storage and Management
When an operating system manages the computer's memory, there are two broad tasks to be accomplished:
1. Each process must have enough memory in which to execute, and it can neither run into the memory space of another process nor be run into by another process.
2. The different types of memory in the system must be used properly so that each process can run most effectively.
The first task requires the operating system to set up memory boundaries for types of software and for individual applications. When physical RAM runs short, the operating system can also swap blocks of data between RAM and the hard disk, using disk space to provide additional RAM space at no cost. This technique is called virtual memory management.
Disk storage is only one of the memory types that must be managed by the operating system, and it's also the slowest. Ranked in order of speed, the types of memory in a computer system are:
High-speed cache - This is fast, relatively small amounts of memory that are available to the CPU through the fastest connections. Cache controllers predict which pieces of data the CPU will need next and pull them from main memory into high-speed cache to speed up system performance.
Main memory - This is the RAM that you see measured in megabytes when you buy a computer.
Secondary memory - This is most often some sort of rotating magnetic storage that keeps applications and data available to be used, and serves as virtual RAM under the control of the operating system.
The operating system must balance the needs of the various processes with the availability of the different types of memory, moving data in blocks (called pages) between available memory as the schedule of processes dictates.
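A tiny sketch shows the page idea in practice: a page table maps each page of a process's address space to a frame of RAM, or marks it as swapped out to disk. The sizes and table contents below are arbitrary illustrations.

#include <stdio.h>

#define PAGE_SIZE 4096
#define NUM_PAGES 8

/* Which RAM frame holds each page; -1 means the page is out on disk.
   The contents are arbitrary, made-up values. */
static int page_to_frame[NUM_PAGES] = { 3, 7, -1, 0, -1, 2, -1, 5 };

int main(void)
{
    unsigned address = 5 * PAGE_SIZE + 123;   /* a virtual address */
    unsigned page    = address / PAGE_SIZE;
    unsigned offset  = address % PAGE_SIZE;

    if (page_to_frame[page] < 0)
        puts("page fault: the OS must bring this page in from disk");
    else
        printf("physical address = %u\n",
               (unsigned)page_to_frame[page] * PAGE_SIZE + offset);
    return 0;
}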
Device Management
The path between the operating system and virtually all hardware not on the computer's motherboard goes through a special program called a driver. Much of a driver's function is to be the translator between the electrical signals of the hardware subsystems and the high-level programming languages of the operating system and application programs. Drivers take data that the operating system has defined as a file and translate them into streams of bits placed in specific locations on storage devices, or a series of laser pulses in a printer.
User Interface
Just as the API provides a consistent way for applications to use the resources of the computer system, a user interface (UI) brings structure to the interaction between a user and the computer. In the last decade, almost all development in user interfaces has been in the area of the graphical user interface (GUI), with two models, Apple's Macintosh and Microsoft's Windows, receiving most of the attention and gaining most of the market share. The popular, open-source Linux operating system also supports a graphical user interface.

marj said...

Group Members:
Christine Pearl Espada
Eduardo Jarina
Margie Piolo
Kim Martin Rocero
Topic:
How OS Works? (continuation)
Sub Topic:
• Process Control Block
• Memory Storage and Management
• Device Management
• Application Program Interfaces
• User Interface
• Operating System Development
Processor Management
The basic unit of software that the operating system deals with in scheduling the work done by the processor is either a process or a thread, depending on the operating system.
It's tempting to think of a process as an application, but that gives an incomplete picture of how processes relate to the operating system and hardware. The application you see (word processor, spreadsheet or game) is, indeed, a process, but that application may cause several other processes to begin, for tasks like communications with other devices or other computers. There are also numerous processes that run without giving you direct evidence that they ever exist. For example, Windows XP and UNIX can have dozens of background processes running to handle the network, memory management, disk management, virus checks and so on. A process, then, is software that performs some action and can be controlled -- by a user, by other applications or by the operating system.
Process Control Block
All of the information needed to keep track of a process when switching is kept in a data package called a process control block. The process control block typically contains:
• An ID number that identifies the process
• Pointers to the locations in the program and its data where processing last occurred
• Register contents
• States of various flags and switches
• Pointers to the upper and lower bounds of the memory required for the process
• A list of files opened by the process
• The priority of the process
• The status of all I/O devices needed by the process
Memory Storage and Management
When an operating system manages the computer's memory, there are two broad tasks to be accomplished:
1. Each process must have enough memory in which to execute, and it can neither run into the memory space of another process nor be run into by another process.
2. The different types of memory in the system must be used properly so that each process can run most effectively. Disk storage is only one of the memory types that must be managed by the operating system, and it's also the slowest. Ranked in order of speed, the types of memory in a computer system are:
• High-speed cache -- This is fast, relatively small amounts of memory that are available to the CPU through the fastest connections. Cache controllers predict which pieces of data the CPU will need next and pull them from main memory into high-speed cache to speed up system performance.
• Main memory -- This is the RAM that you see measured in megabytes when you buy a computer.
• Secondary memory -- This is most often some sort of rotating magnetic storage that keeps applications and data available to be used, and serves as virtual RAM under the control of the operating system.
The operating system must balance the needs of the various processes with the availability of the different types of memory, moving data in blocks (called pages) between available memory as the schedule of processes dictates.
Device Management
The path between the operating system and virtually all hardware not on the computer's motherboard goes through a special program called a driver. Much of a driver's function is to be the translator between the electrical signals of the hardware subsystems and the high-level programming languages of the operating system and application programs. Drivers take data that the operating system has defined as a file and translate them into streams of bits placed in specific locations on storage devices, or a series of laser pulses in a printer.
Application Program Interfaces
Just as drivers provide a way for applications to make use of hardware subsystems without having to know every detail of the hardware's operation, application program interfaces (APIs) let application programmers use functions of the computer and operating system without having to directly keep track of all the details in the CPU's operation. Let's look at the example of creating a hard disk file for holding data to see why this can be important.
A programmer writing an application to record data from a scientific instrument might want to allow the scientist to specify the name of the file created. The operating system might provide an API function named MakeFile for creating files.
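Continuing that example, the program might look something like the C sketch below. MakeFile is the hypothetical function named above, so a stub stands in for it here; the point is that the programmer hands the operating system a name and never touches the disk hardware directly.

#include <stdio.h>

/* Stand-in for the hypothetical MakeFile API function named above;
   a real OS would create the file and handle all the disk details. */
static int MakeFile(const char *name)
{
    printf("OS: creating file \"%s\"\n", name);
    return 0;
}

int main(void)
{
    char name[64];

    printf("Name for the instrument data file: ");
    if (scanf("%63s", name) == 1)
        MakeFile(name);   /* sectors, directories and drivers are the OS's problem */
    return 0;
}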
User Interface
Just as the API provides a consistent way for applications to use the resources of the computer system, a user interface (UI) brings structure to the interaction between a user and the computer. In the last decade, almost all development in user interfaces has been in the area of the graphical user interface (GUI), with two models, Apple's Macintosh and Microsoft's Windows, receiving most of the attention and gaining most of the market share. The popular open-source Linux operating system also supports a graphical user interface.
There are other user interfaces, some graphical and some not, for other operating systems.
Operating System Development
A process called NetBooting has streamlined the capability to move the working operating system of a standard consumer desktop computer -- kernel, user interface and all -- off of the machine it controls. This was previously possible only for experienced power-users on multi-user platforms like UNIX, and required a suite of specialized applications. NetBooting allows the operating system of one computer to be served over a network connection by a remote computer located anywhere on the network. One NetBoot server can serve operating systems to several dozen client computers simultaneously, and to the user sitting in front of each client computer the experience is just as if they were using their familiar desktop operating system, such as Windows or Mac OS.

IT Logo said...

Group 1:
Cezar Miranda
Michael Belgira
Grace Entes
Susie Espina

Topic: "How Removable Storage works"
Sub Topic: 1. Introduction to How Removable Storage Works
2. Portable Memory
3. Magnetic Storage
4. Magnetic: Direct Access
5. Magnetic: Zip
6. Magnetic: Cartridges
7. Magnetic: Portable Drives
8. Optical Storage
9. Optical: CD-R/CD-RW
10. Solid-State Storage
11. Solid-State: Cards

Introduction to How Removable Storage Works

Removable storage has been around almost as long as the computer itself. Early removable storage was based on magnetic tape like that used by an audio cassette. Before that, some computers even used paper punch cards to store information!
We've come a long way since the days of punch cards. New removable storage devices can store hundreds of megabytes (and even gigabytes) of data on a single disk, cassette, card or cartridge. In this article, you will learn about the three major storage technologies. We'll also talk about which devices use each technology and what the future holds for this medium. But first, let's see why you would want removable storage.

Portable Memory

There are several reasons why removable storage is useful:
• Commercial software
• Making back-up copies of important information
• Transporting data between two computers
• Storing software and information that you don't need to access constantly
• Copying information to give to someone else
• Securing information that you don't want anyone else to access
Modern removable storage devices offer an incredible number of options, with storage capacities ranging from the 1.44 megabytes (MB) of a standard floppy to the upwards of 20-gigabyte (GB) capacity of some portable drives. All of these devices fall into one of three categories:
• Magnetic storage
• Optical storage
• Solid-state storage


Magnetic Storage
The most common and enduring form of removable-storage technology is magnetic storage. For example, 1.44-MB floppy-disk drives using 3.5-inch diskettes have been around for about 15 years, and they are still found on almost every computer sold today. In most cases, removable magnetic storage uses a drive, which is a mechanical device that connects to the computer. You insert the media, which is the part that actually stores the information, into the drive.
Just like a hard drive, the media used in removable magnetic-storage devices is coated with iron oxide. This oxide is a ferromagnetic material, meaning that if you expose it to a magnetic field it is permanently magnetized. The media is typically called a disk or a cartridge. The drive uses a motor to rotate the media at a high speed, and it accesses (reads) the stored information using small devices called heads.
Each head has a tiny electromagnet, which consists of an iron core wrapped with wire. The electromagnet applies a magnetic flux to the oxide on the media, and the oxide permanently "remembers" the flux it sees. During writing, the data signal is sent through the coil of wire to create a magnetic field in the core. At the gap, the magnetic flux forms a fringe pattern. This pattern bridges the gap, and the flux magnetizes the oxide on the media. When the data is read by the drive, the read head pulls a varying magnetic field across the gap, creating a varying magnetic field in the core and therefore a signal in the coil. This signal is then sent to the computer as binary data.


Magnetic: Direct Access

Magnetic disks or cartridges have a few things in common:
• They use a thin plastic or metal base material coated with iron oxide.
• They can record information instantly.
• They can be erased and reused many times.
• They are reasonably inexpensive and easy to use.
If you have ever used an audio cassette, you know that it has one big disadvantage -- it is a sequential device. The tape has a beginning and an end, and to move the tape to a later song you have to use the fast-forward and rewind buttons to find the start of the song. This is because the tape heads are stationary.
A disk or cartridge, like a cassette tape, is made from a thin piece of plastic coated with magnetic material on both sides. However, it is shaped like a disk rather than a long, thin ribbon. The tracks are arranged in concentric rings so the software can jump from "file 1" to "file 19" without having to fast forward through files 2 through 18. The disk or cartridge spins like a record and the heads move to the correct track, providing what is known as direct-access storage. Some removable devices actually have a platter of magnetic disks, similar to the set-up in a hard drive. Tape is still used for some long-term storage, such as backing up a server's hard drive, in which quick access to the data is not essential.

Magnetic: Zip

Over the years, magnetic technology has improved greatly. Because of the immense popularity and low cost of floppy disks, higher-capacity removable storage has not been able to completely replace the floppy drive. But there are a number of alternatives that have become very popular in their own right. One such example is the Zip from Iomega.

The main thing that separates a Zip disk from a floppy disk is the magnetic coating used. On a Zip disk, the coating is of a much higher quality. The higher-quality coating means that the read/write head on a Zip disk can be significantly smaller than on a floppy disk (by a factor of 10 or so). The smaller head, in conjunction with a head-positioning mechanism that is similar to the one used in a hard disk, means that a Zip drive can pack thousands of tracks per inch on the disk surface. Zip drives also use a variable number of sectors per track to make the best use of disk space. All of these features combine to create a floppy disk that holds a huge amount of data -- up to 750 MB at the moment.

Magnetic: Cartridges

Another method of using magnetic technology for removable storage is essentially taking a hard disk and putting it in a self-contained case. One of the more successful products using this method is the Iomega Jaz. Each Jaz cartridge is basically a hard disk, with several platters, contained in a hard, plastic case. The cartridge contains neither the heads nor the motor for spinning the disk; both of these items are in the drive unit.

Magnetic: Portable Drives

Completely external, portable hard drives are quickly becoming popular, due in great part to USB technology. These units, like the ones inside a typical PC, have the drive mechanism and the media all in one sealed case. The drive connects to the PC via a USB cable and, after the driver software is installed the first time, is automatically listed by Windows as an available drive.
Another type of portable hard drive is called a microdrive. These tiny hard drives are built into PCMCIA cards that can be plugged into any device with a PCMCIA slot, such as a laptop computer.

Optical Storage
The optical storage device that most of us are familiar with is the compact disc (CD). A CD can store huge amounts of digital information (783 MB) on a very small surface that is incredibly inexpensive to manufacture. The design that makes this possible is a simple one: The CD surface is a mirror covered with billions of tiny bumps that are arranged in a long, tightly wound spiral. The CD player reads the bumps with a precise laser and interprets the information as bits of data.
The spiral of bumps on a CD starts in the center. CD tracks are so small that they have to be measured in microns (millionths of a meter). The CD track is approximately 0.5 microns wide, with 1.6 microns separating one track from the next. The elongated bumps are each 0.5 microns wide, a minimum of 0.83 microns long and 125 nanometers (billionths of a meter) high.
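Those dimensions let you estimate the length of the spiral. Assuming the data area runs from roughly 25 mm to 58 mm in radius, with 1.6 microns between turns, the track works out to more than 5 kilometers (over 3 miles) long. The quick C calculation below shows the arithmetic; the radii are assumptions, not figures from the text.

#include <stdio.h>

int main(void)
{
    const double PI = 3.14159265358979;
    double r_inner = 0.025, r_outer = 0.058;  /* assumed data-area radii, meters */
    double pitch   = 1.6e-6;                  /* 1.6 microns between turns       */

    double turns  = (r_outer - r_inner) / pitch;
    double length = turns * PI * (r_inner + r_outer);  /* turns x avg circumference */

    printf("about %.0f turns, roughly %.1f km of track\n", turns, length / 1000.0);
    return 0;
}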
Most of the mass of a CD is an injection-molded piece of clear polycarbonate plastic that is about 1.2 millimeters thick. During manufacturing, this plastic is impressed with the microscopic bumps that make up the long, spiral track. A thin, reflective aluminum layer is then coated on the top of the disc, covering the bumps. The tricky part of CD technology is reading all the tiny bumps correctly, in the right order and at the right speed. To do all of this, the CD player has to be exceptionally precise when it focuses the laser on the track of bumps.
When you play a CD, the laser beam passes through the CD's polycarbonate layer, reflects off the aluminum layer and hits an optoelectronic device that detects changes in light. The bumps reflect light differently than the flat parts of the aluminum layer, which are called lands. The optoelectronic sensor detects these changes in reflectivity, and the electronics in the CD-player drive interpret the changes as data bits.

Optical: CD-R/CD-RW

That is how a normal CD works, which is great for prepackaged software, but no help at all as removable storage for your own files. That's where CD-recordable (CD-R) and CD-rewritable (CD-RW) come in.
CD-R works by replacing the aluminum layer in a normal CD with an organic dye compound. This compound is normally reflective, but when the laser focuses on a spot and heats it to a certain temperature, it "burns" the dye, causing it to darken. When you want to retrieve the data you wrote to the CD-R, the laser moves back over the disc and thinks that each burnt spot is a bump. The problem with this approach is that you can only write data to a CD-R once. After the dye has been burned in a spot, it cannot be changed back.
CD-RW fixes this problem by using phase change, which relies on a very special mixture of antimony, indium, silver and tellurium. This particular compound has an amazing property: When heated to one temperature, it crystallizes as it cools and becomes very reflective; when heated to another, higher temperature, the compound does not crystallize when it cools and so becomes dull in appearance.


CD-RW drives have three laser settings to make use of this property:
• Read - The normal setting that reflects light to the optoelectronic sensor
• Erase - The laser set to the temperature needed to crystallize the compound
• Write - The laser set to the temperature needed to de-crystallize the compound
Other optical devices that deviate from the CD standard, such as DVD, employ approaches comparable to CD-R and CD-RW. An older, hybrid technology called magneto-optical (MO) is seldom used anymore. MO uses a laser to heat the surface of the media. Once the surface reaches a particular temperature, a magnetic head moves across the media, changing the polarity of the particles as needed.


Solid-State Storage

A very popular type of removable storage for small devices, such as digital cameras and PDAs, is Flash memory. Flash memory is a type of solid-state technology, which basically means that there are no moving parts. Inside the chip is a grid of columns and rows, with a two-transistor cell at each intersecting point on the grid. The two transistors are separated by a thin oxide layer. One of the transistors is known as the floating gate, and the other one is the control gate. The floating gate's only link to the row, or wordline, is through the control gate. As long as this link is in place, the cell has a value of "1."
To change the cell value to a "0" requires a curious process called Fowler-Nordheim tunneling. Tunneling is used to alter the placement of electrons in the floating gate. An electrical charge, usually between 10 and 13 volts, is applied to the floating gate. The charge comes from the column, or bitline, enters the floating gate and drains to a ground.
This charge causes the floating-gate transistor to act like an electron gun. The excited, negatively charged electrons are pushed through and trapped on the other side of the oxide layer, which acquires a negative charge. The electrons act as a barrier between the control gate and the floating gate. A device called a cell sensor monitors the level of the charge passing through the floating gate. If the flow through the gate is greater than fifty percent of the charge, it has a value of "1." If the charge passing through drops below the fifty-percent threshold, the value changes to "0."
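The fifty-percent rule is easy to express in code. The sketch below is a software caricature of the cell sensor, with made-up numbers standing in for actual charge measurements.

#include <stdio.h>

/* A software caricature of the cell sensor: compare the charge flowing
   through the gate to the 50-percent threshold. Numbers are made up. */
static int read_cell(double charge_fraction)   /* 0.0 .. 1.0 of full flow */
{
    return (charge_fraction > 0.5) ? 1 : 0;
}

int main(void)
{
    printf("erased cell (full flow):      %d\n", read_cell(0.95));
    printf("programmed cell (choked off): %d\n", read_cell(0.20));
    return 0;
}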


Solid-State: Cards

Flash-memory storage devices such as CompactFlash or SmartMedia cards are today's most common form of electronic nonvolatile memory. CompactFlash cards were developed by SanDisk in 1994, and they are different from SmartMedia cards in two important ways: they are thicker, and they utilize a controller chip.
CompactFlash consists of a small circuit board with Flash-memory chips and a dedicated controller chip, all encased in a rugged shell that is several times thicker than a SmartMedia card. The increased thickness of the card allows for greater storage capacity.
CompactFlash sizes range from 8 MB to an incredible 4 GB. The onboard controller can increase performance, particularly on devices that have slow processors. However, the case and controller chip add size, weight and complexity to the CompactFlash card when compared to the SmartMedia card.
The solid-state floppy-disk card (SSFDC), better known as SmartMedia, was originally developed by Toshiba. SmartMedia cards are available in capacities ranging from 2 MB to 128 MB. The card itself is quite small.

Magnetic

Magnetic storage is moving in two parallel directions. There are products coming out that use small cartridges with capacity measured in megabytes, and there are portable hard drives that range in the gigabytes.

Optical
A company named DataPlay has introduced a micro-optical drive. This tiny drive, about the size of a matchbox, uses tiny optical discs that are encased in a plastic shell. Each disc is capable of holding 500 MB of information. The drive actually reads both sides of the disc, meaning that the disc stores 250 MB per side.

Solid State
SmartMedia and CompactFlash cards continue to increase in capacity while maintaining their tiny size. Other solid-state memory devices, such as Sony's Memory Stick, are even smaller.

Lavz said...

Summary Reports:
Introduction to How Hard Disk Work
Nearly every desktop computer and server in use today contains one or more hard-disk drives. Every mainframe and supercomputer is normally connected to hundreds of them. You can even find VCR-type devices and camcorders that use hard disks instead of tape. These billions of hard disks do one thing well -- they store changing digital information in a relatively permanent form. They give computers the ability to remember things when the power goes out.

Hard Disk Basics

Hard disks were invented in the 1950s. They started as large disks up to 20 inches in diameter holding just a few megabytes. They were originally called "fixed disks" or "Winchesters" (a code name used for a popular IBM product). They later became known as "hard disks" to distinguish them from "floppy disks." Hard disks have a hard platter that holds the magnetic medium, as opposed to the flexible plastic film found in tapes and floppies.

At the simplest level, a hard disk is not that different from a cassette tape. Both hard disks and cassette tapes use the same magnetic recording techniques described in How Tape Recorders Work. Hard disks and cassette tapes also share the major benefits of magnetic storage -- the magnetic medium can be easily erased and rewritten, and it will "remember" the magnetic flux patterns stored onto the medium for many years.

Cassette Tape vs. Hard Disk

Let's look at the big differences between cassette tapes and hard disks:

* The magnetic recording material on a cassette tape is coated onto a thin plastic strip. In a hard disk, the magnetic recording material is layered onto a high-precision aluminum or glass disk. The hard-disk platter is then polished to mirror-type smoothness.

* With a tape, you have to fast-forward or reverse to get to any particular point on the tape. This can take several minutes with a long tape. On a hard disk, you can move to any point on the surface of the disk almost instantly.

* In a cassette-tape deck, the read/write head touches the tape directly. In a hard disk, the read/write head "flies" over the disk, never actually touching it.

* The tape in a cassette-tape deck moves over the head at about 2 inches (about 5.08 cm) per second. A hard-disk platter can spin underneath its head at speeds up to 3,000 inches per second (about 170 mph or 272 kph)!

* The information on a hard disk is stored in extremely small magnetic domains compared to a cassette tape's. The size of these domains is made possible by the precision of the platter and the speed of the medium.

Because of these differences, a modern hard disk is able to store an amazing amount of information in a small space. A hard disk can also access any of its information in a fraction of a second.

Capacity and Performance

A typical desktop machine will have a hard disk with a capacity of between 10 and 40 gigabytes. Data is stored on the disk in the form of files. A file is simply a named collection of bytes. The bytes might be the ASCII codes for the characters of a text file, or they could be the instructions of a software application for the computer to execute, or they could be the records of a database, or they could be the pixel colors for a GIF image. No matter what it contains, however, a file is simply a string of bytes. When a program running on the computer requests a file, the hard disk retrieves its bytes and sends them to the CPU one at a time.

There are two ways to measure the performance of a hard disk:

* Data rate - The data rate is the number of bytes per second that the drive can deliver to the CPU. Rates between 5 and 40 megabytes per second are common.

* Seek time - The seek time is the amount of time between when the CPU requests a file and when the first byte of the file is sent to the CPU. Times between 10 and 20 milliseconds are common.

The other important parameter is the capacity of the drive, which is the number of bytes it can hold.
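Put together, these measures give a back-of-the-envelope estimate of how long a file takes to arrive: roughly one seek plus the file size divided by the data rate. The figures in the C sketch below are assumptions picked from the ranges quoted above.

#include <stdio.h>

int main(void)
{
    double seek_s   = 0.012;   /* assumed 12 ms seek time         */
    double rate_bps = 20e6;    /* assumed 20 megabytes per second */
    double file_b   = 5e6;     /* a 5-megabyte file               */

    double total_s = seek_s + file_b / rate_bps;   /* seek, then transfer */
    printf("roughly %.0f ms to deliver the file\n", total_s * 1000.0);
    return 0;
}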

JUPITER said...

Group 5:
Member:

Zeus B. Fernandez
Susan Escatin
Rian Villaceran
Joebeth Buenavista

Topic: How Hard Disk Works?

Sub Topic:

I. Brief History of the Hard Disk Drive
II IDE Hard Disk Drive
III. HARD DISK ASSEMBLY

I. Brief History of the HARD DISK DRIVE

The hard disk drive has a short and fascinating history. In 24 years it evolved from a monstrosity with fifty disks, each two feet in diameter, holding five MBytes (5,000,000 bytes) of data, to today's drives measuring 3 1/2 inches wide and an inch high (and smaller) while holding 400 GBytes (400,000,000,000 bytes/characters). Here, then, is the short history of this marvelous device.

Before the disk drive there were drums... In 1950 Engineering Research Associates of Minneapolis built the first commercial magnetic drum storage unit for the U.S. Navy, the ERA 110. It could store one million bits of data and retrieve a word in 5 thousandths of a second.

In 1956 IBM invented the first computer disk storage system, the 305 RAMAC (Random Access Method of Accounting and Control). This system could store five MBytes. It had fifty 24-inch diameter disks!

By 1961 IBM had invented the first disk drive with air bearing heads and in 1963 they introduced the removable disk pack drive.

In 1970 the eight-inch floppy disk drive was introduced by IBM. My first floppy drives were made by Shugart, one of the "dirty dozen" who left IBM to start their own companies. In 1981, two Shugart 8-inch floppy drives with enclosure and power supply cost me about $350.00. They were for my second computer. My first computer had no drives at all.

In 1973 IBM shipped the model 3340 Winchester sealed hard disk drive, the predecessor of all current hard disk drives. The 3340 had two spindles each with a capacity of 30 MBytes, and the term "30/30 Winchester" was thus coined.

[Photo: Seagate ST4053 40-MByte, 5 1/4-inch, full-height "clunker" with ST506 interface and voice coil, circa 1987. My cost was $435.00.]

In 1980, Seagate Technology introduced the first hard disk drive for microcomputers, the ST506. It was a full-height (twice as high as most current 5 1/4" drives) 5 1/4" drive, with a stepper motor, and held 5 Mbytes. My first hard disk drive was an ST506. I cannot remember exactly how much it cost, but it plus its enclosure, etc. was well over a thousand dollars. It took me three years to fill the drive. Also in 1980, Philips introduced the first optical laser drive. In the early '80s, the first 5 1/4" hard disks with voice coil actuators (more on this later) started shipping in volume, but stepper motor drives continued in production into the early 1990s. In 1981, Sony shipped the first 3 1/2" floppy drives.

In 1983 Rodime made the first 3.5-inch rigid disk drive. The first CD-ROM drives were shipped in 1984, and "Grolier's Electronic Encyclopedia" followed in 1985. The 3 1/2" IDE drive started its existence as a drive on a plug-in expansion board, or "hard card." The hard card combined the drive and its controller on a single board, and this, in turn, evolved into the Integrated Device Electronics (IDE) hard disk drive, where the controller became incorporated into the printed circuit board on the bottom of the hard disk drive. Quantum made the first hard card in 1985.

In 1986 the first 3 1/2" hard disks with voice coil actuators were introduced by Conner in volume, but half-height (1.6") and full-height 5 1/4" drives persisted for several years. In 1988 Conner introduced the first one-inch-high 3 1/2" hard disk drives. In the same year PrairieTek shipped the first 2 1/2" hard disks.

In 1997 Seagate introduced the first 7,200 RPM, Ultra ATA hard disk drive for desktop computers, and in February of 2000 they introduced the first 15,000 RPM hard disk drive, the Cheetah X15. Milestones for IDE DMA, ATA/33, and ATA/66 drives follow:

* 1994 DMA, Mode 2 at 16.6 MB/s
* 1997 Ultra ATA/33 at 33.3 MB/s
* 1999 Ultra ATA/66 at 66.6 MB/s

6/20/00 IBM triples the capacity of the world's smallest hard disk drive. This drive holds one gigabyte on a disk which is the size of an American quarter. The world's first gigabyte-capacity disk drive, the IBM 3380, introduced in 1980, was the size of a refrigerator, weighed 550 pounds (about 250 kg), and had a price tag of $40,000.

II IDE Hard Disk Drive

Integrated Drive Electronics (IDE) hard disks have been around for quite a few years. Prior to these drives, hard disks were interfaced to a PC motherboard via an expansion board known as a hard disk controller. The drive did most of the mechanical work and performed basic electronic/servo functions; the controller told it in detail what to do. The development of the IDE hard disk drive moved most of the electronics and firmware (low-level software on a chip) from the controller to a printed circuit board on the drive itself. In the process, a buffer/cache memory was added to the electronics to speed up the process of reading and writing hard disk drive data. The drive got "smarter." Overall costs went down and performance went up.


III. HARD DISK ASSEMBLY
A hard disk drive consists of a motor, spindle, platters, read/write heads, actuator, frame, air filter, and electronics. The frame mounts the mechanical parts of the drive and is sealed with a cover. The sealed part of the drive is known as the Hard Disk Assembly or HDA. The drive electronics usually consists of one or more printed circuit boards mounted on the bottom of the HDA.

A head and platter can be visualized as being similar to a record and playback head on an old phonograph, except the data structure of a hard disk is arranged into concentric circles instead of in a spiral as it is on a phonograph record (and CD-ROM). A hard disk has one or more platters, and each platter usually has a head on each of its sides. The platters in modern drives are made from glass or ceramic to avoid the unfavorable thermal characteristics of the aluminum platters found in older drives. A layer of magnetic material is deposited/sputtered on the surface of the platters; those in most of the drives I've dissected have shiny, chrome-like surfaces. The platters are mounted on the spindle, which is turned by the drive motor. Most current IDE hard disk drives spin at 5,400, 7,200, or 10,000 RPM, and 15,000 RPM drives are emerging.
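Drive capacity follows from this geometry: cylinders (concentric track positions) times heads times sectors per track times 512 bytes per sector. The numbers in the sketch below are an assumed example, not any particular drive's specification.

#include <stdio.h>

int main(void)
{
    /* assumed geometry, not a specific drive's spec sheet */
    long long cylinders = 16383;   /* concentric track positions */
    long long heads     = 16;      /* one per platter surface    */
    long long sectors   = 63;      /* sectors per track          */
    long long bytes     = cylinders * heads * sectors * 512;

    printf("capacity: %lld bytes (about %.1f GB)\n", bytes, bytes / 1e9);
    return 0;
}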

Thesis Groups said...

Group
Gizelle Hillana
Meliza Gallego
Marivee Candar
Analyn Gallarde

Topic: How OS works?
Subtopic:
An OPERATING SYSTEM is software that controls the computer and how it uses its resources. This software manages and controls what happens in the computer. When you buy a computer, the operating system comes pre-installed on the hard disk and is ready for you to use. It is what you see when the computer is turned on; without it, the computer would not know how to work.

"It's All in the Operation"

The operating system works as a go-between for the computer hardware and the application software. The operating system of your computer is determined by the hardware requirements of your software. For example, if software is designed for WINDOWS 95, then a WINDOWS 95 operating system is needed. On the other hand, if an application requires only the Disk Operating System (DOS), then the hardware capabilities of the computer do not need to be as complex. Which operating system is on your computer plays an important part in how you will access information, how you will start programs, and how you, the user, will interact with the overall hardware on the system. You can find out which operating system you have by turning on the computer: the screen will display the current version.

The operating system provides two main functions.

* The first function is managing the basic hardware operations. The control of input and output, storage space, detecting equipment failure, and management of storage are just some of the responsibilities of the O/S, or Operating System.
* The second function is managing and interacting with the applications software. It takes over the tasks of printing and saving data.

How Do These Functions Work?

Manage Storage Space: The operating system stores data at some location on disk. It knows where to go to retrieve data when it is needed. It uses a filing-cabinet system to keep track of the data stored on disks, tape drives, CD-ROMs, and external drives.

Detects Problems and Equipment Failure: The operating system also is the maintenance mechanic of the system. It checks the system for failures that will cause problems in processing. Messages will appear on the screen when there is a problem. Sometimes operating systems will have built-in messages for quick fixes to the problem, or will refer you to a resource to get more information. A typical message that one would see is "System Failure" or "Your computer has performed an illegal operation".

When the computer is turned on, the computer checks all of the storage devices. You can see the system being checked by the lights going off and on at the various drive locations. All of the electronic parts are checked also. If the computer cannot fix a problem itself, it will not let you continue working.

Traffic Controller: The operating system is also in charge of the data that is coming into the computer (input by way of the keyboard or mouse) and going out of the computer (output by way of printer or screen display). It directs the flow of data to and from the external devices, and also takes care of routing control information along the bus to be processed by the processor.

System Resource Manager: What is a system resource? Any hardware or part of the computer used by a computer program is considered a system resource. The memory, disk drives, external devices, etc. are all "mothered" by the operating system.

The O/S "allocates" or makes sure that enough space is there for the computer program to operate,allocates,time for each program to work, and also keeps the processor going after each instruction. Almost like a teacher standing over you to make sure you finish a problem.

Multitasking: Multitasking is the ability to run more than one program at a time. (You will find this feature in the Windows Operating System.) Multitasking will allow either the individual to work on more than one program at a time, or allow more than one user to share information and processing of the information. The O/S manages this operation.

Security Cop: Security on the system is also managed by the operating system. The O/S can give you the option to set up passwords or ID logons in order to use the computer. This provides security of data, and for those of us that don't want our parents to read what we've written.
DOS (Disk Operating System)
It was developed in 1981 for IBM's first personal computer. (Other names for this system are PC-DOS and MS-DOS.) There have been six versions of DOS (meaning the first DOS system has been improved six times). At first DOS had only a command line into which the user could type instructions. It was difficult to use, since the user had to memorize many commands. The DOS shell was later developed, which provided a menu for users to choose from so that the system was easier to use. Deleting files could be accomplished, but took several steps to complete.

Most computers come bundled with the latest version of DOS already installed on the hard drive. In WINDOWS 95, DOS is installed as an option on the WINDOWS 95 exit menu. DOS is still used, especially by those of us who like DOS computer games. The games seem to run faster since the WINDOWS 95 graphical interface does not have to run. Applications that do not need a graphical interface also run faster in DOS. Powerful business programs like Lotus 1-2-3 for DOS are an example of DOS still being used. Basic word processing programs also do not need the power of the graphical interface if you are going to work on text or text-based applications.
WINDOWS 3.1

WINDOWS 3.1 was Microsoft's answer to a graphical user interface (you can use on-screen pictures to activate the computer to start programs). WINDOWS 3.1 provides icons that you can manipulate by way of a pointing device such as a mouse or keyboard. There are pull-down menus, which allowed a much easier way to learn and use new software. The best feature was letting the user work on more than one program at a time on the screen (multitasking). The data transfer rate was a lot faster, too.

Other Features:

* File manager included a copy utility.
* Deleting and highlighting the file in the File Manager allowed for easier file maintenance.
* You could double click on a program icon to start the program.
* You could have an 8 character filename with a 3 character extension. A period (.) was used to separate the filename from the extension.
* Software was downward WINDOWS compatible (you could use software designed for earlier versions of the operating system, such as WINDOWS 3.0).

WINDOWS for WORKGROUPS 3.11

In order for two or more people to work together on the same data file on a local area network, software called groupware was required. WINDOWS for WORKGROUPS was the answer to that problem. (Two or more individuals working together on a file or project is referred to as a workgroup.) This is similar to interactive games where a person can play against another person in the same game at different computers. (QUAKE is a good example of this type of file sharing.)
Features of Workgroups:

* Document Routing: This is the electronic forwarding of a document so that all the people who need to see and approve the document can do so electronically through the computer, without having to physically walk it through the channels. It provides multi-level approval.
* Desktop Videoconferencing: This is live video of a meeting in real-time through the computer.
* Group scheduling of meetings and confirmation of meetings.
* Group editing or revising of the same document from different locations.
* E-mail options

How Has WINDOWS Changed?

* A copy utility is included in the WINDOWS EXPLORER and the File menu that allows the program to copy one or more files.
* There is a confirmation dialog box for file deletion which reduces user error.
* Document centricity: This allows you to select (mouse-click) the data file you want to work on, and it automatically starts up the application program associated with it so you can work on the document.
* Spaces are allowed in filenames and the maximum character filename is 255 characters (plus a 3 character file extension).
* There is a user recycle bin for drag and drop to delete items. (You drag the document or item to the recycle bin to get rid of it.) There is also drag and drop file management. You can drag a file to a drive to save or move it around into another folder.
* Plug and Play: This feature automatically detects new devices and installs them. Programs can be installed and removed automatically through the operating system.

OS/2

This is an operating system designed jointly by IBM Corporation and Microsoft Corporation. It is designed for more powerful, newer computers. This system has a graphical interface, but can also run DOS programs. The most important features of this software are the ability to use objects on the screen, to run tasks, and to select options from menus while running DOS. Data transfer between WINDOWS and DOS applications is enhanced. This system will run DOS software, WINDOWS software, and also software designed for OS/2.
UNIX

UNIX was originally developed for minicomputers (computers used in business and large industries that are programmed for specific tasks). These computers are much larger than the personal computers we use today. UNIX was developed by AT&T at Bell Laboratories in 1969. It is a multi-user operating system: many people are able to run many applications from one central computer. It is a very powerful system. Not many individuals run UNIX on their personal computers as an operating system. Many versions exist, but they are all basically the same in terms of operation. UNIX uses a command-line user interface. A graphical interface with pull-down menus can be added, but most die-hard UNIX users prefer the command-line prompt. UNIX is used as the standard operating system on the INTERNET so that networked computers and servers can manage and speak a universal language.

dario said...

Group Member:
Dario J. Bucabal
Adde Salimbot
Romel Cornillo
Lourdes Sevilleno
Joy Carbaquil
BSIT IV-A

Read-only memory

Read-Only Memory (ROM)

One major type of memory that is used in PCs is called read-only memory, or ROM for short. ROM is a type of memory that normally can only be read, as opposed to RAM which can be both read and written. There are two main reasons that read-only memory is used for certain functions within the PC:
• Permanence: The values stored in ROM are always there, whether the power is on or not. A ROM can be removed from the PC, stored for an indefinite period of time, and then replaced, and the data it contains will still be there. For this reason, it is called non-volatile storage. A hard disk is also non-volatile, for the same reason, but regular RAM is not.
• Security: The fact that ROM cannot easily be modified provides a measure of security against accidental (or malicious) changes to its contents. You are not going to find viruses infecting true ROMs, for example; it's just not possible. (It's technically possible with erasable EPROMs, though in practice never seen.)
Read-only memory (usually known by its acronym, ROM) is a class of storage media used in computers and other electronic devices. Because data stored in ROM cannot be modified (at least not very quickly or easily), it is mainly used to distribute firmware (software that is very closely tied to specific hardware, and unlikely to require frequent updates).

In its strictest sense, ROM refers only to mask ROM (the oldest type of solid-state ROM), which is fabricated with the desired data permanently stored in it, and thus can never be modified. However, more modern types such as EPROM and flash EEPROM can be erased and re-programmed multiple times; they are still described as "read-only memory" (ROM) because the reprogramming process is generally infrequent, comparatively slow, and often does not permit random access writes to individual memory locations. Despite the simplicity of mask ROM, economies of scale and field-programmability often make reprogrammable technologies more flexible and inexpensive, so that mask ROM is rarely used in new products as of 2007.

Contents
1. History
   1.1 Use of ROM for program storage
   1.2 ROM for data storage
2. Types of ROMs
   2.1 Semiconductor based
   2.2 Other technologies
       2.2.1 Historical examples
3. Speed of ROMs
   3.1 Reading speed
   3.2 Writing speed
4. Endurance and data retention
5. ROM images

History
The simplest type of solid state ROM is as old as semiconductor technology itself. Combinational logic gates can be joined manually to map n-bit address input onto arbitrary values of m-bit data output (a look-up table). With the invention of the integrated circuit came mask ROM. Mask ROM consists of a grid of word lines (the address input) and bit lines (the data output), selectively joined together with transistor switches, and can represent an arbitrary look-up table with a regular physical layout and predictable propagation delay.
In mask ROM, the data is physically encoded in the circuit, so it can only be programmed during fabrication. This leads to a number of serious disadvantages:

1. It is only economical to buy mask ROM in large quantities, since users must contract with a foundry to produce a custom design.

2. The turnaround time between completing the design for a mask ROM and receiving the finished product is long, for the same reason.

3. Mask ROM is impractical for R&D work since designers frequently need to modify the contents of memory as they refine a design.

4. If a product is shipped with faulty mask ROM, the only way to fix it is to recall the product and physically replace the ROM.
Subsequent developments have addressed these shortcomings.

PROM, invented in 1956, allowed users to program its contents exactly once by physically altering its structure with the application of high-voltage pulses. This addresses problems 1 and 2 above, since a company can simply order a large batch of fresh PROM chips and program them with the desired contents at its designers' convenience. The 1971 invention of EPROM essentially solved problem 3, since EPROM (unlike PROM) can be repeatedly reset to its unprogrammed state by exposure to strong ultraviolet light. EEPROM, invented in 1983, went a long way to solving problem 4, since an EEPROM can be programmed in-place if the containing device provides a means to receive the program contents from an external source (e.g. a personal computer via a serial cable). Flash memory, invented at Toshiba in the mid-1980s, and commercialized in the early 1990s, is a form of EEPROM that makes very efficient use of chip area and can be erased and reprogrammed thousands of times without damage.
All of these technologies improved the flexibility of ROM, but at a significant cost-per-chip, so that in large quantities mask ROM would remain an economical choice for many years. (Decreasing cost of reprogrammable devices had almost eliminated the market for mask ROM by the year 2000.) Furthermore, despite the fact that newer technologies were increasingly less "read-only," most were envisioned only as replacements for the traditional use of mask ROM.

The most recent development is NAND flash, also invented by Toshiba. Its designers explicitly broke from past practice, stating plainly that "the aim of NAND Flash is to replace hard disks," rather than the traditional use of ROM as a form of non-volatile primary storage. As of 2007, NAND has partially achieved this goal by offering throughput comparable to hard disks, higher tolerance of physical shock, extreme miniaturization (in the form of USB flash drives and tiny microSD memory cards, for example), and much lower power consumption.
Use of ROM for program storage
Every stored-program computer requires some form of non-volatile storage to store the initial program that runs when the computer is powered on or otherwise begins execution (a process known as bootstrapping, often abbreviated to "booting" or "booting up"). Likewise, every non-trivial computer requires some form of mutable memory to record changes in its state as it executes.

Forms of read-only memory were employed as non-volatile storage for programs in most early stored-program computers, such as ENIAC after 1948 (until then it was not a stored-program computer as every program had to be manually wired into the machine, which could take days to weeks). Read-only memory was simpler to implement since it required only a mechanism to read stored values, and not to change them in-place, and thus could be implemented with very crude electromechanical devices (see historical examples below). With the advent of integrated circuits in the 1960s, both ROM and its mutable counterpart static RAM were implemented as arrays of transistors in silicon chips; however, a ROM memory cell could be implemented using fewer transistors than an SRAM memory cell, since the latter requires a latch (comprising 5-20 transistors) to retain its contents, while a ROM cell might consist of the absence (logical 0) or presence (logical 1) of a single transistor connecting a bit line to a word line. Consequently, ROM could be implemented at a lower cost-per-bit than RAM for many years.

Most home computers of the 1980s stored a BASIC interpreter or operating system in ROM, as other forms of non-volatile storage such as magnetic disk drives were too expensive. For example, the Commodore 64 included 64 KiB of RAM and 20 KiB of ROM containing a BASIC interpreter and the "KERNAL" (sic) of its operating system. Later home or office computers such as the IBM PC XT often included magnetic disk drives and larger amounts of RAM, allowing them to load their operating systems from disk into RAM, with only a minimal hardware initialization core and bootloader remaining in ROM (known as the BIOS in IBM-compatible computers). This arrangement allowed for a more complex and easily upgradeable operating system.
In modern PCs, "ROM" (or Flash) is used to store the basic bootstrapping firmware for the main processor, as well as the various firmware needed to internally control self contained devices such as graphic cards, hard disks, DVD drives, TFT screens, etc, in the system. Today, many of these "read-only" memories – especially the BIOS – are often replaced with Flash memory (see below), to permit in-place reprogramming should the need for a firmware upgrade arise. However, simple and mature sub-systems (such as the keyboard or some communication controllers in the ICs on the main board, for example) may employ mask ROM or OTP (one time programmable).

ROM and successor technologies such as Flash are prevalent in embedded systems. These govern everything from industrial robots to appliances and consumer electronics (MP3 players, set-top boxes, etc.), all of which are designed for specific functions but are nonetheless based on general-purpose microprocessors in most cases. With software usually tightly coupled to hardware, program changes are rarely needed in such devices (which typically lack hard disks for reasons of cost, size, and/or power consumption). As of 2008, most products use Flash rather than mask ROM, and many provide some means for connection to a PC for firmware updates; a digital audio player's firmware might be updated to support a new file format, for instance. Some hobbyists have taken advantage of this flexibility to reprogram consumer products for new purposes; for example, the iPodLinux and OpenWRT projects have enabled users to run full-featured Linux distributions on their MP3 players and wireless routers, respectively.

ROM is also useful for binary storage of cryptographic data, as it makes them difficult to replace, which may be desirable in order to enhance information security.

ROM for data storage
Since ROM (at least in hard-wired mask form) cannot be modified, it is really only suitable for storing data which is not expected to need modification for the life of the device. To that end, ROM has been used in many computers to store look-up tables for the evaluation of mathematical and logical functions (for example, a floating-point unit might tabulate the sine function in order to facilitate faster computation). This was especially effective when CPUs were slow and ROM was cheap compared to RAM.
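The look-up-table idea is easy to sketch in C: the const array below plays the role of a small ROM holding precomputed sine values at 10-degree steps, so no runtime math is needed. A real table would be encoded into the chip at fabrication and would be far finer-grained.

#include <stdio.h>

/* A small "ROM": sin(x) precomputed for 0-90 degrees in 10-degree steps.
   A real table would be encoded into the chip at fabrication and be far
   finer-grained. */
static const double sine_rom[10] = {
    0.000, 0.174, 0.342, 0.500, 0.643,
    0.766, 0.866, 0.940, 0.985, 1.000
};

int main(void)
{
    int degrees = 30;
    printf("sin(%d deg) is about %.3f, read straight from the table\n",
           degrees, sine_rom[degrees / 10]);
    return 0;
}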
Notably, the display adapters of early personal computers stored tables of bitmapped font characters in ROM. This usually meant that the text display font could not be changed interactively. This was the case for both the CGA and MDA adapters available with the IBM PC XT.
The use of ROM to store such small amounts of data has disappeared almost completely in modern general-purpose computers. However, Flash ROM has taken over a new role as a medium for mass storage or secondary storage of files.

Types of ROMs

The first EPROM, an Intel 1702, with the die and wire bonds clearly visible through the erase window.

Semiconductor based
Classic mask-programmed ROM chips are integrated circuits that physically encode the data to be stored, and thus it is impossible to change their contents after fabrication. Other types of non-volatile solid-state memory permit some degree of modification:
• Programmable read-only memory (PROM), or one-time programmable ROM (OTP), can be written to or programmed via a special device called a PROM programmer. Typically, this device uses high voltages to permanently destroy or create internal links (fuses or antifuses) within the chip. Consequently, a PROM can only be programmed once.
• Erasable programmable read-only memory (EPROM) can be erased by exposure to strong ultraviolet light (typically for 10 minutes or longer), then rewritten with a process that again requires application of higher than usual voltage. Repeated exposure to UV light will eventually wear out an EPROM, but the endurance of most EPROM chips exceeds 1000 cycles of erasing and reprogramming. EPROM chip packages can often be identified by the prominent quartz "window" which allows UV light to enter. After programming, the window is typically covered with a label to prevent accidental erasure. Some EPROM chips are factory-erased before they are packaged, and include no window; these are effectively PROM.
• Electrically erasable programmable read-only memory (EEPROM) is based on a similar semiconductor structure to EPROM, but allows its entire contents (or selected banks) to be electrically erased, then rewritten electrically, so that they need not be removed from the computer (or camera, MP3 player, etc.). Writing or flashing an EEPROM is much slower (milliseconds per bit) than reading from a ROM or writing to a RAM (nanoseconds in both cases).

o Electrically alterable read-only memory (EAROM) is a type of EEPROM that can be modified one bit at a time. Writing is a very slow process and again requires a higher voltage (usually around 12 V) than is used for read access. EAROMs are intended for applications that require infrequent and only partial rewriting. EAROM may be used as non-volatile storage for critical system setup information; in many applications, EAROM has been supplanted by CMOS RAM supplied by mains power and backed up by a lithium battery.
o Flash memory (or simply flash) is a modern type of EEPROM invented in 1984. Flash memory can be erased and rewritten faster than ordinary EEPROM, and newer designs feature very high endurance (exceeding 1,000,000 cycles). Modern NAND flash makes efficient use of silicon chip area, resulting in individual ICs with a capacity as high as 16 GB as of 2007; this feature, along with its endurance and physical durability, has allowed NAND flash to replace magnetic media in some applications (such as USB flash drives). Flash memory is sometimes called flash ROM or flash EEPROM when used as a replacement for older ROM types, but not in applications that take advantage of its ability to be modified quickly and frequently.
By applying write protection, some types of reprogrammable ROMs may temporarily become read-only memory.

Other technologies

There are other types of non-volatile memory which are not based on solid-state IC technology, including:
• Optical storage media, such as CD-ROM, which is read-only (analogous to masked ROM). CD-R is Write Once Read Many (analogous to PROM), while CD-RW supports erase-rewrite cycles (analogous to EEPROM); both are designed for backwards-compatibility with CD-ROM.

Historical examples

Transformer matrix ROM (TROS), from the IBM System 360/20
• Diode matrix ROM, used in small amounts in many computers in the 1960s as well as electronic desk calculators and keyboard encoders for terminals. This ROM was programmed by installing discrete semiconductor diodes at selected locations between a matrix of word line traces and bit line traces on a printed circuit board.
• Resistor, capacitor, or transformer matrix ROM, used in many computers until the 1970s. Like diode matrix ROM, it was programmed by placing components at selected locations between a matrix of word lines and bit lines. ENIAC's Function Tables were resistor matrix ROM, programmed by manually setting rotary switches. Various models of the IBM System/360 and complex peripheral devices stored their microcode in either capacitor (called BCROS for Balanced Capacitor Read Only Storage on the 360/50 & 360/65, or CCROS for Card Capacitor Read Only Storage on the 360/30) or transformer (called TROS for Transformer Read Only Storage on the 360/20, 360/40 and others) matrix ROM.
• Core rope, a form of transformer matrix ROM technology used where size and/or weight were critical. This was used in NASA/MIT's Apollo Spacecraft Computers, DEC's PDP-8 computers, and other places. This type of ROM was programmed by hand by weaving "word line wires" inside or outside of ferrite transformer cores.
• The perforated metal character mask ("stencil") in Charactron cathode ray tubes, which was used as ROM to shape a wide electron beam into a selected character shape, either directly on the screen for display or as an overlay on a video signal formed by a scanned electron beam.
• Various mechanical devices used in early computing equipment. A machined metal plate served as ROM in the dot matrix printers on the IBM 026 and IBM 029 key punches.

Speed of ROMs

Reading speed
Although the relative speed of RAM vs. ROM has varied over time, as of 2007 large RAM chips can be read faster than most ROMs. For this reason (and to make for uniform access), ROM content is sometimes copied to RAM or shadowed before its first use, and subsequently read from RAM.
Writing speed
For those types of ROM that can be electrically modified, writing speed is always much slower than reading speed, and it may require unusually high voltage, the movement of jumper plugs to apply write-enable signals, and special lock/unlock command codes. Modern NAND Flash achieves the highest write speeds of any rewritable ROM technology, with speeds as high as 15 MiB/s (or 70 ns/bit), by allowing (indeed requiring) large blocks of memory cells to be written simultaneously.
Endurance and data retention
Because they are written by forcing electrons through a layer of electrical insulation onto a floating transistor gate, rewriteable ROMs can withstand only a limited number of write and erase cycles before the insulation is permanently damaged. In the earliest EAROMs, this might occur after as few as 1,000 write cycles, while in modern Flash EEPROM the endurance may exceed 1,000,000, but it is by no means infinite. This limited endurance, as well as the higher cost per bit, means that Flash-based storage is unlikely to completely supplant magnetic disk drives in the near future.
The timespan over which a ROM remains accurately readable is not limited by write cycling. The data retention of EPROM, EAROM, EEPROM, and Flash may be limited by charge leaking from the floating gates of the memory cell transistors. Leakage is exacerbated at high temperatures or in high-radiation environments. Masked ROM and fuse/antifuse PROM do not suffer from this effect, as their data retention depends on physical rather than electrical permanence of the integrated circuit (although fuse re-growth was once a problem in some systems).

ROM images
The contents of ROM chips in video game console cartridges can be extracted with special software or hardware devices. The resultant memory dump files are known as ROM images, and can be used to produce duplicate cartridges, or in console emulators. The term originated when most console games were distributed on cartridges containing ROM chips, but achieved such widespread usage that it is still applied to images of newer games distributed on CD-ROMs or other optical media.
ROM images of commercial games usually contain copyrighted software. The unauthorized copying and distribution of copyrighted software is usually a violation of copyright laws (in some jurisdictions, duplication of ROM cartridges for backup purposes may be considered fair use). Nevertheless, there is a thriving community engaged in the illegal distribution and trading of such software. In such circles, the term "ROM images" is sometimes shortened simply to "ROMs", or sometimes changed to "romz" to highlight the connection with "warez".

LYANN, LICE, LEONA, REZA said...

Group 1:
Member:
Lyann-Grace Reyes
Lice Dadivas
Reza Cuison
Leona Canillo

Topic: How Flash Memory works?
Function of flash memory
Types of Flash Memory

hard said...

group 6
sheena hortenila
jhelane minglanilla
charlyn hubahib
juanemee hubahib
BSIT IV-B
HARD DISK

A hard disk drive (HDD), commonly referred to as a hard drive, hard disk, or fixed disk drive,[1] is a non-volatile storage device which stores digitally encoded data on rapidly rotating platters with magnetic surfaces. Strictly speaking, "drive" refers to a device distinct from its medium, such as a tape drive and its tape, or a floppy disk drive and its floppy disk. Early HDDs had removable media; however, an HDD today is typically a sealed unit (except for a filtered vent hole to equalize air pressure) with fixed media.[2]
HDDs (introduced in 1956 as data storage for an IBM accounting computer[3]) were originally developed for use with general purpose computers. In the 21st century, applications for HDDs have expanded to include digital video recorders, digital audio players, personal digital assistants, digital cameras and video game consoles. In 2005 the first mobile phones to include HDDs were introduced by Samsung and Nokia.[4] The need for large-scale, reliable storage, independent of a particular device, led to the introduction of systems such as RAID arrays, network attached storage (NAS) systems and storage area network (SAN) systems that provide efficient and reliable access to large volumes of data.
Form factor               Width     Largest capacity     Platters (max)
5.25″ FH                  146 mm    47 GB[14] (1998)     14
5.25″ HH                  146 mm    19.3 GB[15] (1998)   4[16]
3.5″                      102 mm    1.5 TB[6] (2009)     5
2.5″                      69.9 mm   500 GB[17] (2008)    3
1.8″ (CE-ATA/ZIF)         54 mm     250 GB[18] (2008)    3
1.3″                      43 mm     40 GB[19] (2007)     1
1″ (CFII/ZIF/IDE-Flex)    42 mm     20 GB (2006)         1
0.85″                     24 mm     8 GB[20] (2004)      1
How a hard drive works
Hard Drive: a storage device that rapidly records data as magnetic pulses on spinning metal platters.
If a computer's CPU is the brain of the PC, the hard drive serves as the heart, pumping vital data to the rest of the system. As the workhorse component of virtually every computer, the hard drive is also the most mysterious. Most people never see the inside of a hard drive, shrouded in its aluminum housing, though they might be intimately familiar with the files and programs it stores, copies, moves, and deletes for them.
• Hard drives provide long-term storage for data on your PC.
• Storage capacities for new drives grow every year (the largest has reached 80GB this year), but the physical size of drives remains relatively constant.
• The faster a drive spins, the faster you can access and transfer data.
• As ever-larger hard drives reach the market, the cost of hard drives (measured as dollars per megabyte of storage) drops.
Hard drives provide the data storage on which all modern computers depend. A hard drive stores information by applying a magnetic field to the moving surface of a disk coated with a magnetic material.
Rules for Working With Hard Disk Drives and Safeguarding Data
Last updated: 2/25/02
I have learned a few lessons the hard way over the many years I have been working on computers and a few of them have been very expensive. Here are my rules for working with hard disk drives and safeguarding data:
Don't work on disk drives when you are tired. In one case I lost thirteen years of work due to multiple, dumb errors made because of fatigue--wiped-out all three copies, wrote stuff on top of them, and could not recover them. Fortunately, most of the real valuable stuff was still on paper. Know when to quit.
Don't be in a hurry, lax, or take unnecessary chances. Know what you are doing, think before doing it, and do it in a logical sequence of steps. Try to avoid a distracting environment. Know when to stop.
Observe Antistatic Procedures. Manufacturers pack hard disk drives in anti-static bags for a reason... Many people don't realize that computer components can be damaged by static electricity, and a problem may not appear until months later when a power surge completes the damage. Ideally, you should wear a grounded anti-static wrist strap when working on computer equipment, especially when handling memory and CPUs. Also, the use of grounded anti-static mats on the floor and on the workbench is a good practice. However, these items can be too expensive if you are building or upgrading just one computer. As a minimum, my advice is to make sure your body is touching the metal on the computer case when handling the CPU and memory. It would also be a good idea to work with bare feet during this critical time. Try to avoid touching drives, boards, memory, etc. with your clothes. Clothing can quite often be charged with static electricity, especially during cold, dry winter days. When handling a drive, try to avoid touching the printed circuit board. If a computer can't find a drive after being in use for a few months, it may be because the printed circuit board got zapped when the drive was installed.
Scan for viruses before working on a customer's or your friend's computer. Take the time to make sure the virus database is up-to-date before you do it. Scan the computer and any unknown floppies that will be used on the computer. This comes from someone who has had a whole shop full of infected computers... a big shop... more than once. Disinfect before it spreads.
Back up your data before doing anything major to a hard disk drive. Back up critical data twice and to different media/drives. Murphy's Law applies here; the minute you do not have a good backup, something will go wrong and you will lose it. In one case, one of my technicians made a couple of errors and wiped out the only copies of a module in a customized accounting program. His biggest mistake was to assume that regular backups included the programs. It took three expert man-days and many dollars to recreate the module. Besides backing up My Documents on a Windows 9X/Me computer, be sure to back up such things as:
• user data located elsewhere such as Microsoft Works documents and Intuit Quicken Pro company files
• C:\Windows\Cookies
• C:\Windows\Favorites
• C:\Windows\*.pwl (password files)
• *.dbx and *.pst files (Outlook and Outlook Express personal folders -- search for them with the Windows Explorer)
• Netscape directory/folder if you use it.
Backup now or cry later.
Don't trust tape or tape drives. Don't assume you have a good backup just because a customer regularly backs up his or her data to tape. I have found that tape drives and tapes are notoriously unreliable, especially if they have been in use for about the same amount of time as it takes to have the first disk drive problem. Many times the tape of the last backup, and often the last two backups, was no good. And, you know, backups of garbage are exactly that... Also, I have seen many cases where a customer or batch program was not actually backing up the data, or was not backing up all of the data. Take the time to back it up yourself and make two copies, one of them preferably on something besides tape. I usually back up to a scratch drive(s) temporarily connected to the customer's computer and/or to a network server. These methods are much faster than tape and more reliable. When it comes to backups, trust no one except yourself.
When backing up, check to see if the drive is compressed. Be sure to back up the drivers and compressed volume(s).
Record configuration data before wiping that drive. You can save a lot of time by copying or writing down data on that hard disk drive before wiping it clean. ISP configuration, the Windows Product Key, MODEM info, local network info, passwords, phone numbers, product serial numbers and activation keys, etc. Record now or work more later.
Don't wipe a drive until you have to. Don't be in a hurry to clean-up data after or during a job. If you don't have to erase backup files or old disk drives right away, keep that data around until you need the space. If you move data from an old drive to a new one and put the old drive back in the computer with the new one, there is no need to erase the data on the old drive until the customer needs the space. If something goes wrong, the data will still be there. The minute you erase old data/format a drive when you don't have to, you will need it.
Don't prep a new drive while an old drive with data is still attached. It is far too easy to make a mistake and remove a partition with data with fdisk or format the wrong drive/partition. It only takes a few seconds to disconnect both cables to the drive with data.
Do not put a drive where someone can bump into it and knock it off a workbench, etc. They will.
Do not set a drive that is connected to power on anything with the printed circuit board down. The minute you do, something will turn it on and short it out. It is OK to run a drive out of the case upside-down. All modern drives that I know of will fly upside-down; some real old ones (less than 100 MB, as I recall) will not.
Do not over-torque a drive. Do not over-torque the screws securing a drive to a chassis. Never over-torque any screws securing any kind of drive. You can warp the frame and ruin the drive. Always use the correct screws for a given drive. They may vary with the type and manufacturer of a drive. Those supplied with various chassis vary. A screw that is too long can also ruin some drives.
Keep magnetic tools out of your shop. They shouldn't damage a hard disk drive, but they will wipe floppy disks and tapes with data, etc.
Hard Disk Basics
Hard disks were invented in the 1950s. They started as large disks up to 20 inches in diameter holding just a few megabytes. They were originally called "fixed disks" or "Winchesters" (a code name used for a popular IBM product). They later became known as "hard disks" to distinguish them from "floppy disks." Hard disks have a hard platter that holds the magnetic medium, as opposed to the flexible plastic film found in tapes and floppies.
At the simplest level, a hard disk is not that different from a cassette tape. Both hard disks and cassette tapes use the same magnetic recording techniques described in How Tape Recorders Work. Hard disks and cassette tapes also share the major benefits of magnetic storage -- the magnetic medium can be easily erased and rewritten, and it will "remember" the magnetic flux patterns stored onto the medium for many years.

Anonymous said...

group member
Aloro, June Irine
Baranggan, Realyn
Bausing Mary MAe
Medalla,Rezel
Minglanilla, Jhelanie

How does a Microprocessor work?


How does a Microprocessor Work?
A microprocessor, also called a CPU or Central Processing Unit, is the heart of a computer. It is a complete computational engine that is fabricated on a single chip. The first microprocessor was the Intel 4004, introduced in 1971; all it could do was add and subtract, and only 4 bits at a time. The first microprocessor to make it into a home computer was the Intel 8080, a complete 8-bit computer on one chip introduced in 1974. The first microprocessor to make a real splash in the market was the Intel 8088, introduced in 1979 and incorporated into the IBM PC (which first appeared in 1982 or so).

INSIDE A MICROPROCESSOR
A microprocessor executes a collection of machine instructions that tell the processor what to do. Based on the instructions, a microprocessor does three basic things:

* Using its ALU (Arithmetic/Logic Unit), a microprocessor can perform mathematical operations like addition, subtraction, multiplication and division. Modern microprocessors contain complete floating point processors that can perform extremely sophisticated operations on large floating point numbers.
* A microprocessor can move data from one memory location to another
* A microprocessor can make decisions and jump to a new set of instructions based on those decisions.

The following diagram shows an extremely simple microprocessor capable of doing those three things:

This microprocessor has:

* an address bus (that may be 8, 16 or 32 bits wide) that sends an address to memory
* a data bus (that may be 8, 16 or 32 bits wide) that can send data to memory or receive data from memory
* a RD (Read) and WR (Write) line to tell the memory whether it wants to set or get the addressed location
* a clock line that lets a clock pulse sequence the processor
* a reset line that resets the program counter to zero (or whatever) and restarts execution
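As a rough illustration of how these pieces cooperate, here is a minimal Python sketch of the fetch step such a processor repeats on every clock. The memory size and the helper functions are invented for the example; a real processor does all of this in wiring, not software.

# Toy model: an 8-bit address bus selects one of 256 memory locations,
# the data bus carries the value, and the RD/WR lines pick the direction.
memory = [0] * 256

def bus_read(address):         # RD line asserted
    return memory[address]

def bus_write(address, data):  # WR line asserted
    memory[address] = data

program_counter = 0            # the reset line would force this back to zero

def clock_tick():
    # One cycle: place the program counter on the address bus, read the
    # instruction off the data bus, and advance to the next location.
    global program_counter
    instruction = bus_read(program_counter)
    program_counter += 1
    return instruction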

rEyCh'Z said...

Group 2
Members:
Rhea S. Paglinawan
Reychell C. dela Torre
Jenny T. Sagal
Linlie R. Libo-on
Analyn L. Calaunan


FLASH MEMORY

Flash memory is non-volatile computer memory that can be electrically erased and reprogrammed. It is a technology that is primarily used in memory cards and USB flash drives for general storage and transfer of data between computers and other digital products. It is a specific type of EEPROM (Electrically Erasable Programmable Read-Only Memory) that is erased and programmed in large blocks; in early flash the entire chip had to be erased at once. Flash memory costs far less than byte-programmable EEPROM and therefore has become the dominant technology wherever a significant amount of non-volatile, solid state storage is needed. Example applications include PDAs (personal digital assistants), laptop computers, digital audio players, digital cameras and mobile phones. It has also gained popularity in the game console market, where it is often used instead of EEPROMs or battery-powered SRAM for game save data.

Flash memory is non-volatile, which means that no power is needed to maintain the information stored in the chip. In addition, flash memory offers fast read access times (although not as fast as volatile DRAM memory used for main memory in PCs) and better kinetic shock resistance than hard disks. These characteristics explain the popularity of flash memory in portable devices. Another feature of flash memory is that when packaged in a "memory card," it is enormously durable, being able to withstand intense pressure, extremes of temperature, and even immersion in water.
Although technically a type of EEPROM, the term "EEPROM" is generally used to refer specifically to non-flash EEPROM which is erasable in small blocks, typically bytes. Because erase cycles are slow, the large block sizes used in flash memory erasing give it a significant speed advantage over old-style EEPROM when writing large amounts of data.

Flash memory is a type of EEPROM chip, which stands for Electrically Erasable Programmable Read-Only Memory. It has a grid of columns and rows with a cell that has two transistors at each intersection.

Here are a few examples of flash memory:

* Your computer's BIOS chip
* CompactFlash (most often found in digital cameras)
* SmartMedia (most often found in digital cameras)
* Memory Stick (most often found in digital cameras)
* PCMCIA Type I and Type II memory cards (used as solid-state disks in laptops)
* Memory cards for video game consoles

HOW FLASH MEMORY WORKS?

We store and transfer all kinds of files on our computers -- digital photographs, music files, word processing documents, PDFs and countless other forms of media. But sometimes your computer's hard drive isn't exactly where you want your information. Whether you want to make backup copies of files that live off of your systems or if you worry about your security, portable storage devices that use a type of electronic memory called flash memory may be the right solution.
Electronic memory comes in a variety of forms to serve a variety of purposes. Flash memory is used for easy and fast information storage in computers, digital cameras and home video game consoles. It is used more like a hard drive than as RAM. In fact, flash memory is known as a solid state storage device, meaning there are no moving parts -- everything is electronic instead of mechanical.

Flash memory stick
Flash memories are solid state electronic devices with random access memory capabilities used for fast digital information storage. They are used in a wide range of applications, such as storing BIOS routines in typical digital computers, as medium capacity hard drives for digital cameras or as memory cards for laptop computers and video consoles.

The technology used to manufacture flash memories is based on EEPROM (electrically erasable programmable read-only memory) chips, which consist of memory banks formed of storage cells disposed in a grid of columns and rows. A basic storage cell has two MOS-FET transistors at each intersection, separated by an oxide layer. The two transistors are known as the floating gate and the control gate.

When the floating gate is connected to the row, the cell stores a '1' logic bit. The value changes to '0' logic through a process known as Fowler-Nordheim tunneling, which alters the distribution of electrons inside the floating gate. When charge is applied to the floating gate from the column, it passes through the transistor and then drains to the ground, forming a negative charge build-up on the other side of the oxide layer.

Once the electric charge forms, no other electrons can penetrate through the layer and on the other side due to electrostatic forces, thus establishing a charge distribution slightly higher than 50 percent between the floating gate and the control gate, which registers as '1' logic. However, when the charge distribution between the two transistors drops below 50 percent, then the cell is evaluated as storing a '0' logic bit.

In order to erase the flash memory and reset the electron distribution inside it, a high-voltage charge is used to generate an electric field that resets all the cells to '1' logic. This can be done on certain blocks of the memory or on the entire chip. Unlike volatile RAM, which retains information only as long as it is powered, flash memory can maintain the information it is storing without requiring any supplemental power source.
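As the text notes, erasing works on whole blocks (or the whole chip) while writing targets individual cells. A few lines of Python can sketch that asymmetry; the block size and method names here are made up for the example.

class FlashBlock:
    # Toy model of one erase block: erasing resets every cell to logic '1'
    # at once, while programming can only pull individual cells from 1 to 0.
    def __init__(self, size=4096):
        self.cells = [1] * size

    def erase(self):
        # The high-voltage erase operation works on the whole block.
        self.cells = [1] * len(self.cells)

    def program(self, offset, bit):
        # To turn a 0 back into a 1, the entire block must be erased first.
        if bit == 1 and self.cells[offset] == 0:
            raise ValueError("cannot set a bit without erasing the block")
        self.cells[offset] = bit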

There are several types of Flash memories, each with different specifications regarding size and storage capacity, but they all share roughly the same properties. They are solid state devices, thus having no moving parts and therefore are noiseless, have fast access speeds and are relatively small in size. They consume low amounts of power while in use and can provide a storing capacity ranging between several kilobytes to a few tens of gigabytes.

TYPES OF FLASH MEMORY

Several different types of memory cards are available in the market today. You can find USB Flash Memory Disks, USB Flash Memory Sticks, and others to give you the leading edge. These different cards function more or less in similar fashion: simply press a card into a given slot on the device, and data gets encoded onto the card.

1.) USB Flash Memory Disks

A USB flash drive consists of a NAND-type flash memory data storage device integrated with a USB (universal serial bus) interface. USB flash drives are typically removable and rewritable, much smaller than a floppy disk (1 to 4 inches or 2.5 to 10 cm), and weigh less than 2 ounces (56 g). Storage capacities typically range from 64 MB to 64 GB,[1] with steady improvements in size and price per gigabyte. Some allow 1 million write or erase cycles[2][3] and have 10-year data retention,[4] connected by USB 1.1 or USB 2.0.

jen-jen said...

Group 8:
Member:
* Genalyn Celleros
* Erlyn Galapago
* Roldan Sullano
* Christian Laper

Topic: How Microprocessors Work?
Sub Topic:
**Introduction to How Microprocessors Work
**Microprocessor Progression: Intel
**Microprocessor Logic
**Microprocessor Memory
**Microprocessor Instructions
**Microprocessor Performance and Trends
**64-bit Microprocessors
**Lots More Information
___________________________________

"Introduction to how Microprocessors Work"

The microprocessor is the heart of any normal computer, whether it is a desktop machine, a server or a laptop. The microprocessor you are using might be a Pentium, a K6, a PowerPC, a SPARC or any of the many other brands and types of microprocessors, but they all do approximately the same thing in approximately the same way.

A microprocessor -- also known as a CPU or central processing unit -- is a complete computation engine that is fabricated on a single chip. The first microprocessor was the Intel 4004, introduced in 1971. The 4004 was not very powerful -- all it could do was add and subtract, and it could only do that 4 bits at a time. But it was amazing that everything was on one chip. Prior to the 4004, engineers built computers either from collections of chips or from discrete components (transistors wired one at a time). The 4004 powered one of the first portable electronic calculators.


"Microprocessor Progression: Intel"

The first microprocessor to make it into a home computer was the Intel 8080, a complete 8-bit computer on one chip, introduced in 1974. The first microprocessor to make a real splash in the market was the Intel 8088, introduced in 1979 and incorporated into the IBM PC (which first appeared around 1982). If you are familiar with the PC market and its history, you know that the PC market moved from the 8088 to the 80286 to the 80386 to the 80486 to the Pentium to the Pentium II to the Pentium III to the Pentium 4. All of these microprocessors are made by Intel and all of them are improvements on the basic design of the 8088. The Pentium 4 can execute any piece of code that ran on the original 8088, but it does it about 5,000 times faster!

What's a Chip?

A chip is also called an integrated circuit. Generally it is a small, thin piece of silicon onto which the transistors making up the microprocessor have been etched. A chip might be as large as an inch on a side and can contain tens of millions of transistors. Simpler processors might consist of a few thousand transistors etched onto a chip just a few millimeters square.

* The date is the year that the processor was first introduced. Many processors are re-introduced at higher clock speeds for many years after the original release date.
* Transistors is the number of transistors on the chip. You can see that the number of transistors on a single chip has risen steadily over the years.
* Microns is the width, in microns, of the smallest wire on the chip. For comparison, a human hair is 100 microns thick. As the feature size on the chip goes down, the number of transistors rises.
* Clock speed is the maximum rate that the chip can be clocked at. Clock speed will make more sense in the next section.
* Data Width is the width of the ALU. An 8-bit ALU can add/subtract/multiply/etc. two 8-bit numbers, while a 32-bit ALU can manipulate 32-bit numbers. An 8-bit ALU would have to execute four instructions to add two 32-bit numbers, while a 32-bit ALU can do it in one instruction. In many cases, the external data bus is the same width as the ALU, but not always. The 8088 had a 16-bit ALU and an 8-bit bus, while the modern Pentiums fetch data 64 bits at a time for their 32-bit ALUs.
* MIPS stands for "millions of instructions per second" and is a rough measure of the performance of a CPU. Modern CPUs can do so many different things that MIPS ratings lose a lot of their meaning, but you can get a general sense of the relative power of the CPUs from this column.

"Microprocessor Logic"

To understand how a microprocessor works, it is helpful to look inside and learn about the logic used to create one. In the process you can also learn about assembly language -- the native language of a microprocessor -- and many of the things that engineers can do to boost the speed of a processor.

A microprocessor executes a collection of machine instructions that tell the processor what to do. Based on the instructions, a microprocessor does three basic things:

* Using its ALU (Arithmetic/Logic Unit), a microprocessor can perform mathematical operations like addition, subtraction, multiplication and division. Modern microprocessors contain complete floating point processors that can perform extremely sophisticated operations on large floating point numbers.
* A microprocessor can move data from one memory location to another.
* A microprocessor can make decisions and jump to a new set of instructions based on those decisions.


"Microprocessor Memory"

ROM stands for read-only memory. A ROM chip is programmed with a permanent collection of pre-set bytes. The address bus tells the ROM chip which byte to get and place on the data bus. When the RD line changes state, the ROM chip presents the selected byte onto the data bus.

RAM stands for random-access memory. RAM contains bytes of information, and the microprocessor can read or write to those bytes depending on whether the RD or WR line is signaled. One problem with today's RAM chips is that they forget everything once the power goes off. That is why the computer needs ROM.


"Microprocessor Instructions"

* LOADA mem - Load register A from memory address
* LOADB mem - Load register B from memory address
* CONB con - Load a constant value into register B
* SAVEB mem - Save register B to memory address
* SAVEC mem - Save register C to memory address
* ADD - Add A and B and store the result in C
* SUB - Subtract A and B and store the result in C
* MUL - Multiply A and B and store the result in C
* DIV - Divide A and B and store the result in C
* COM - Compare A and B and store the result in test
* JUMP addr - Jump to an address
* JEQ addr - Jump, if equal, to address
* JNEQ addr - Jump, if not equal, to address
* JG addr - Jump, if greater than, to address
* JGE addr - Jump, if greater than or equal, to address
* JL addr - Jump, if less than, to address
* JLE addr - Jump, if less than or equal, to address
* STOP - Stop execution
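To see how a program built from these instructions actually runs, here is a tiny interpreter sketch in Python for a handful of them. The tuple encoding, the memory map, and the sample program are assumptions made for the illustration.

# Interprets a small subset of the instruction set listed above.
def run(program, memory):
    a = b = c = 0           # registers A, B, C
    pc = 0                  # program counter
    while True:
        op, *args = program[pc]
        pc += 1
        if op == "LOADA":   a = memory[args[0]]
        elif op == "LOADB": b = memory[args[0]]
        elif op == "CONB":  b = args[0]
        elif op == "SAVEC": memory[args[0]] = c
        elif op == "ADD":   c = a + b
        elif op == "MUL":   c = a * b
        elif op == "STOP":  return memory

memory = {0: 6, 1: 7, 2: 0}
program = [
    ("LOADA", 0),  # load register A from address 0
    ("LOADB", 1),  # load register B from address 1
    ("MUL",),      # C = A * B
    ("SAVEC", 2),  # save register C to address 2
    ("STOP",),
]
print(run(program, memory)[2])  # prints 42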


"Microprocessor Performance and Trends"

The number of transistors available has a huge effect on the performance of a processor. As seen earlier, a typical instruction in a processor like an 8088 took 15 clock cycles to execute. Because of the design of the multiplier, it took approximately 80 cycles just to do one 16-bit multiplication on the 8088. With more transistors, much more powerful multipliers capable of single-cycle speeds become possible.

More transistors also allow for a technology called pipelining. In a pipelined architecture, instruction execution overlaps. So even though it might take five clock cycles to execute each instruction, there can be five instructions in various stages of execution simultaneously. That way it looks like one instruction completes every clock cycle.
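A back-of-the-envelope calculation shows why this looks like one instruction per clock. The five-stage figure comes from the paragraph above; the instruction count is arbitrary.

STAGES = 5

def total_cycles(n_instructions, pipelined):
    if pipelined:
        # Fill the pipeline once, then one instruction completes per cycle.
        return STAGES + (n_instructions - 1)
    # Without pipelining, each instruction waits for the previous one.
    return STAGES * n_instructions

print(total_cycles(1000, pipelined=False))  # 5000 cycles
print(total_cycles(1000, pipelined=True))   # 1004 cycles -- nearly 1 per clock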

Many modern processors have multiple instruction decoders, each with its own pipeline. This allows for multiple instruction streams, which means that more than one instruction can complete during each clock cycle. This technique can be quite complex to implement, so it takes lots of transistors.

Trends
The trend in processor design has primarily been toward full 32-bit ALUs with fast floating point processors built in and pipelined execution with multiple instruction streams. The newest thing in processor design is 64-bit ALUs, and people are expected to have these processors in their home PCs in the next decade. There has also been a tendency toward special instructions (like the MMX instructions) that make certain operations particularly efficient, and the addition of hardware virtual memory support and L1 caching on the processor chip. All of these trends push up the transistor count, leading to the multi-million transistor powerhouses available today. These processors can execute about one billion instructions per second.

"64-bit Microprocessors"

Sixty-four-bit processors have been with us since 1992, and in the 21st century they have started to become mainstream. Both Intel and AMD have introduced 64-bit chips, and the Mac G5 sports a 64-bit processor. Sixty-four-bit processors have 64-bit ALUs, 64-bit registers, 64-bit buses and so on.

"Lots More Information"

Related How Stuff Works Articles

* CPU Quiz
* How Semiconductors Work
* How PCs Work
* How C Programming Works
* How Java Works
* How Operating Systems Work
* How Computer Memory Works
* How Quantum Computers Will Work
* How DNA Computers Will Work

More Great Links

* Webopedia: microprocessor
* Click On CPU
* Processor Upgrades
* 6th Generation CPU Comparisons
* 7th Generation CPU Comparisons
* TSCP Benchmark Scores

Anonymous said...

group members;
Aloro, June Irine
Baranggan, Realyn
Bausing, Mary Mae
Medalla, Rezel
Minglanilla, Jhelanie

http://junei-systemresouirce.blogspot.com/

luvrock said...

Group7:
Member:
Ronald Solania
Jorry De La Pinia
Felix Russiana
Lino Burgos

Topic: How OS Works
Sub Topic:
1. Introduction to How Operating Systems Work
2. What is an Operating System?
3. Operating System Functions
4. Types of Operating Systems
5. Computer Operating Systems
6. Processor Management
7. Process Control Block
8. Memory Storage and Management
9. Device Management
10. Application Program Interfaces
11. User Interface
12. Operating System Development

Introduction to How Operating Systems Work:
The purpose of an operating system is to organize and control hardware and software so that the device it lives in behaves in a flexible but predictable way. In this article, we'll tell you what a piece of software must do to be called an operating system, show you how the operating system in your desktop computer works and give you some examples of how to take control of the other operating systems around you.

What is an Operating System?

Not all computers have operating systems. The computer that controls the microwave oven in your kitchen, for example, doesn't need an operating system. It has one set of tasks to perform, very straightforward input to expect (a numbered keypad and a few pre-set buttons) and simple, never-changing hardware to control. For a computer like this, an operating system would be unnecessary baggage, driving up the development and manufacturing costs significantly and adding complexity where none is required. Instead, the computer in a microwave oven simply runs a single hard-wired program all the time.
A model displays Japanese mobile phone operator Willcom's smart phone, 'D4', which comes equipped with the Windows Vista operating system. (Yoshikazu Tsuno/AFP/Getty Images)

For other devices, an operating system creates the ability to:

* serve a variety of purposes
* interact with users in more complicated ways
* keep up with needs that change over time

All desktop computers have operating systems. The most common are the Windows family of operating systems developed by Microsoft, the Macintosh operating systems developed by Apple and the UNIX family of operating systems (which have been developed by a whole history of individuals, corporations and collaborators). There are hundreds of other operating systems available for special-purpose applications, including specializations for mainframes, robotics, manufacturing, real-time control systems and so on.

In any device that has an operating system, there's usually a way to make changes to how the device works. This is far from a happy accident; one of the reasons operating systems are made out of portable code rather than permanent physical circuits is so that they can be changed or modified without having to scrap the whole device.

For a desktop computer user, this means you can add a new security update, system patch, new application or even an entirely new operating system rather than junk your computer and start again with a new one when you need to make a change. As long as you understand how an operating system works and how to get at it, in many cases you can change some of the ways it behaves. The same thing goes for your phone, too.

Regardless of what device an operating system runs on, what exactly can it do?

Operating System Functions:

At the simplest level, an operating system does two things:

1. It manages the hardware and software resources of the system. In a desktop computer, these resources include such things as the processor, memory, disk space and more (On a cell phone, they include the keypad, the screen, the address book, the phone dialer, the battery and the network connection).

2. It provides a stable, consistent way for applications to deal with the hardware without having to know all the details of the hardware.

The first task, managing the hardware and software resources, is very important, as various programs and input methods compete for the attention of the central processing unit (CPU) and demand memory, storage and input/output (I/O) bandwidth for their own purposes. In this capacity, the operating system plays the role of the good parent, making sure that each application gets the necessary resources while playing nicely with all the other applications, as well as husbanding the limited capacity of the system to the greatest good of all the users and applications.
Operating system architecture (©2008 HowStuffWorks). The operating system controls every task your computer carries out and manages system resources.

The second task, providing a consistent application interface, is especially important if there is to be more than one of a particular type of computer using the operating system, or if the hardware making up the computer is ever open to change. A consistent application program interface (API) allows a software developer to write an application on one computer and have a high level of confidence that it will run on another computer of the same type, even if the amount of memory or the quantity of storage is different on the two machines.

Even if a particular computer is unique, an operating system can ensure that applications continue to run when hardware upgrades and updates occur. This is because the operating system -- not the application -- is charged with managing the hardware and the distribution of its resources. One of the challenges facing developers is keeping their operating systems flexible enough to run hardware from the thousands of vendors manufacturing computer equipment. Today's systems can accommodate thousands of different printers, disk drives and special peripherals in any possible combination.

Types of Operating Systems:

Within the broad family of operating systems, there are generally four types, categorized based on the types of computers they control and the sort of applications they support. The categories are:

* Real-time operating system (RTOS) - Real-time operating systems are used to control machinery, scientific instruments and industrial systems. An RTOS typically has very little user-interface capability, and no end-user utilities, since the system will be a "sealed box" when delivered for use. A very important part of an RTOS is managing the resources of the computer so that a particular operation executes in precisely the same amount of time, every time it occurs. In a complex machine, having a part move more quickly just because system resources are available may be just as catastrophic as having it not move at all because the system is busy.

* Single-user, single task - As the name implies, this operating system is designed to manage the computer so that one user can effectively do one thing at a time. The Palm OS for Palm handheld computers is a good example of a modern single-user, single-task operating system.

* Single-user, multi-tasking - This is the type of operating system most people use on their desktop and laptop computers today. Microsoft's Windows and Apple's MacOS platforms are both examples of operating systems that will let a single user have several programs in operation at the same time. For example, it's entirely possible for a Windows user to be writing a note in a word processor while downloading a file from the Internet while printing the text of an e-mail message.

* Multi-user - A multi-user operating system allows many different users to take advantage of the computer's resources simultaneously. The operating system must make sure that the requirements of the various users are balanced, and that each of the programs they are using has sufficient and separate resources so that a problem with one user doesn't affect the entire community of users. Unix, VMS and mainframe operating systems, such as MVS, are examples of multi-user operating systems.


Photo courtesy Apple
Mac OS X Panther screen shot


It's important to differentiate between multi-user operating systems and single-user operating systems that support networking. Windows 2000 and Novell Netware can each support hundreds or thousands of networked users, but the operating systems themselves aren't true multi-user operating systems. The system administrator is the only "user" for Windows 2000 or Netware. The network support and all of the remote user logins the network enables are, in the overall plan of the operating system, a program being run by the administrative user.

With the different types of operating systems in mind, it's time to look at the basic functions provided by an operating system.

Computer Operating Systems:

When you turn on the power to a computer, the first program that runs is usually a set of instructions kept in the computer's read-only memory (ROM). This code examines the system hardware to make sure everything is functioning properly. This power-on self test (POST) checks the CPU, memory, and basic input-output systems (BIOS) for errors and stores the result in a special memory location. Once the POST has successfully completed, the software loaded in ROM (sometimes called the BIOS or firmware) will begin to activate the computer's disk drives. In most modern computers, when the computer activates the hard disk drive, it finds the first piece of the operating system: the bootstrap loader.
Khulud Dwaibess sits at her computer in her office in the West Bank city of Bethlehem. Several things happen when she boots up her computer, but eventually the operating system takes over. (Awad Awad/AFP/Getty Images)

The bootstrap loader is a small program that has a single function: It loads the operating system into memory and allows it to begin operation. In the most basic form, the bootstrap loader sets up the small driver programs that interface with and control the various hardware subsystems of the computer. It sets up the divisions of memory that hold the operating system, user information and applications. It establishes the data structures that will hold the myriad signals, flags and semaphores that are used to communicate within and between the subsystems and applications of the computer. Then it turns control of the computer over to the operating system.

The operating system's tasks, in the most general sense, fall into six categories:

* Processor management
* Memory management
* Device management
* Storage management
* Application interface
* User interface

While there are some who argue that an operating system should do more than these six tasks, and some operating-system vendors do build many more utility programs and auxiliary functions into their operating systems, these six tasks define the core of nearly all operating systems. Next, let's look at the tools the operating system uses to perform each of these functions.

Processor Management:

The heart of managing the processor comes down to two related issues:

* Ensuring that each process and application receives enough of the processor's time to function properly
* Using as many processor cycles as possible for real work

The basic unit of software that the operating system deals with in scheduling the work done by the processor is either a process or a thread, depending on the operating system.

It's tempting to think of a process as an application, but that gives an incomplete picture of how processes relate to the operating system and hardware. The application you see (word processor, spreadsheet or game) is, indeed, a process, but that application may cause several other processes to begin, for tasks like communications with other devices or other computers. There are also numerous processes that run without giving you direct evidence that they ever exist. For example, Windows XP and UNIX can have dozens of background processes running to handle the network, memory management, disk management, virus checks and so on.

A process, then, is software that performs some action and can be controlled -- by a user, by other applications or by the operating system.

It is processes, rather than applications, that the operating system controls and schedules for execution by the CPU. In a single-tasking system, the schedule is straightforward. The operating system allows the application to begin running, suspending the execution only long enough to deal with interrupts and user input.

Interrupts are special signals sent by hardware or software to the CPU. It's as if some part of the computer suddenly raised its hand to ask for the CPU's attention in a lively meeting. Sometimes the operating system will schedule the priority of processes so that interrupts are masked -- that is, the operating system will ignore the interrupts from some sources so that a particular job can be finished as quickly as possible. There are some interrupts (such as those from error conditions or problems with memory) that are so important that they can't be ignored. These non-maskable interrupts (NMIs) must be dealt with immediately, regardless of the other tasks at hand.

While interrupts add some complication to the execution of processes in a single-tasking system, the job of the operating system becomes much more complicated in a multi-tasking system. Now, the operating system must arrange the execution of applications so that you believe that there are several things happening at once. This is complicated because the CPU can only do one thing at a time. Today's multi-core processors and multi-processor machines can handle more work, but each processor core is still capable of managing one task at a time.

In order to give the appearance of lots of things happening at the same time, the operating system has to switch between different processes thousands of times a second. Here's how it happens:

* A process occupies a certain amount of RAM. It also makes use of registers, stacks and queues within the CPU and operating-system memory space.
* When two processes are multi-tasking, the operating system allots a certain number of CPU execution cycles to one program.
* After that number of cycles, the operating system makes copies of all the registers, stacks and queues used by the processes, and notes the point at which the process paused in its execution.
* It then loads all the registers, stacks and queues used by the second process and allows it a certain number of CPU cycles.
* When those are complete, it makes copies of all the registers, stacks and queues used by the second program, and loads the first program.
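Here is that save-and-restore cycle as a minimal Python sketch. The register set, the two processes, and the fixed alternation stand in for the real scheduler's bookkeeping; everything here is invented for the example.

import copy

def context_switch(current, next_proc, cpu):
    # Snapshot the registers/stacks of the running process...
    current["registers"] = copy.deepcopy(cpu)
    # ...then reload the CPU with the other process's saved state.
    cpu.clear()
    cpu.update(next_proc["registers"])
    return next_proc

cpu = {"pc": 0, "sp": 0}
p1 = {"name": "editor",  "registers": {"pc": 100, "sp": 500}}
p2 = {"name": "browser", "registers": {"pc": 900, "sp": 700}}

running = p1
cpu.update(p1["registers"])
for _ in range(4):  # one iteration per expired time slice
    other = p2 if running is p1 else p1
    running = context_switch(running, other, cpu)
    print("now running:", running["name"], cpu)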

Process Control Block:

All of the information needed to keep track of a process when switching is kept in a data package called a process control block. The process control block typically contains:

* An ID number that identifies the process
* Pointers to the locations in the program and its data where processing last occurred
* Register contents
* States of various flags and switches
* Pointers to the upper and lower bounds of the memory required for the process
* A list of files opened by the process
* The priority of the process
* The status of all I/O devices needed by the process
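Reduced to a record type, a process control block might look something like the following sketch; the field names and types are assumptions mirroring the list above, not any particular kernel's layout.

from dataclasses import dataclass, field

@dataclass
class ProcessControlBlock:
    pid: int                     # ID number that identifies the process
    program_counter: int         # where processing last occurred
    registers: dict              # saved register contents
    flags: dict                  # states of various flags and switches
    memory_bounds: tuple         # upper and lower bounds of process memory
    open_files: list = field(default_factory=list)
    priority: int = 0
    io_status: dict = field(default_factory=dict)
    status: str = "suspended"    # e.g. suspended, pending, running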

Each process has a status associated with it. Many processes consume no CPU time until they get some sort of input. For example, a process might be waiting for a keystroke from the user. While it is waiting for the keystroke, it uses no CPU time. While it's waiting, it is "suspended". When the keystroke arrives, the OS changes its status. When the status of the process changes, from pending to active, for example, or from suspended to running, the information in the process control block must be used like the data in any other program to direct execution of the task-switching portion of the operating system.

This process swapping happens without direct user interference, and each process gets enough CPU cycles to accomplish its task in a reasonable amount of time. Trouble can begin if the user tries to have too many processes functioning at the same time. The operating system itself requires some CPU cycles to perform the saving and swapping of all the registers, queues and stacks of the application processes. If enough processes are started, and if the operating system hasn't been carefully designed, the system can begin to use the vast majority of its available CPU cycles to swap between processes rather than run processes. When this happens, it's called thrashing, and it usually requires some sort of direct user intervention to stop processes and bring order back to the system.

One way that operating-system designers reduce the chance of thrashing is by reducing the need for new processes to perform various tasks. Some operating systems allow for a "process-lite," called a thread, that can deal with all the CPU-intensive work of a normal process, but generally does not deal with the various types of I/O and does not establish structures requiring the extensive process control block of a regular process. A process may start many threads or other processes, but a thread cannot start a process.

So far, all the scheduling we've discussed has concerned a single CPU. In a system with two or more CPUs, the operating system must divide the workload among the CPUs, trying to balance the demands of the required processes with the available cycles on the different CPUs. Asymmetric operating systems use one CPU for their own needs and divide application processes among the remaining CPUs. Symmetric operating systems divide themselves among the various CPUs, balancing demand versus CPU availability even when the operating system itself is all that's running.

Even if the operating system is the only software with execution needs, the CPU is not the only resource to be scheduled. Memory management is the next crucial step in making sure that all processes run smoothly.

Memory Storage and Management:

When an operating system manages the computer's memory, there are two broad tasks to be accomplished:

1. Each process must have enough memory in which to execute, and it can neither run into the memory space of another process nor be run into by another process.
2. The different types of memory in the system must be used properly so that each process can run most effectively.

The first task requires the operating system to set up memory boundaries for types of software and for individual applications.


As an example, let's look at an imaginary small system with 1 megabyte (1,000 kilobytes) of RAM. During the boot process, the operating system of our imaginary computer is designed to go to the top of available memory and then "back up" far enough to meet the needs of the operating system itself. Let's say that the operating system needs 300 kilobytes to run. Now, the operating system goes to the bottom of the pool of RAM and starts building up with the various driver software required to control the hardware subsystems of the computer. In our imaginary computer, the drivers take up 200 kilobytes. So after getting the operating system completely loaded, there are 500 kilobytes remaining for application processes.

When applications begin to be loaded into memory, they are loaded in block sizes determined by the operating system. If the block size is 2 kilobytes, then every process that's loaded will be given a chunk of memory that's a multiple of 2 kilobytes in size. Applications will be loaded in these fixed block sizes, with the blocks starting and ending on boundaries established by words of 4 or 8 bytes. These blocks and boundaries help to ensure that applications won't be loaded on top of one another's space by a poorly calculated bit or two. With that ensured, the larger question is what to do when the 500-kilobyte application space is filled.
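
A quick sketch of that rounding rule in C, assuming the 2-kilobyte block size from the example (the helper name is hypothetical):

#include <stdio.h>

#define BLOCK_SIZE 2048   /* 2-kilobyte allocation blocks */

/* Round a process's memory request up to a whole number of blocks. */
unsigned long blocks_for(unsigned long bytes) {
    return (bytes + BLOCK_SIZE - 1) / BLOCK_SIZE;
}

int main(void) {
    /* A 5,000-byte request gets 3 blocks (6,144 bytes), so every process
       starts and ends on a block boundary, clear of its neighbors. */
    printf("%lu blocks\n", blocks_for(5000));   /* prints 3 */
    return 0;
}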

In most computers, it's possible to add memory beyond the original capacity. For example, you might expand RAM from 1 to 2 gigabytes. This works fine, but it can be relatively expensive. It also ignores a fundamental fact of computing: most of the information that an application stores in memory is not being used at any given moment. A processor can only access memory one location at a time, so the vast majority of RAM is unused at any moment. Since disk space is cheap compared to RAM, moving information from RAM to the hard disk can greatly expand RAM space at little cost. This technique is called virtual memory management.

Disk storage is only one of the memory types that must be managed by the operating system, and it's also the slowest. Ranked in order of speed, the types of memory in a computer system are:

* High-speed cache -- This is a small amount of fast memory made available to the CPU through the fastest connections. Cache controllers predict which pieces of data the CPU will need next and pull them from main memory into high-speed cache to speed up system performance.
* Main memory -- This is the RAM that you see measured in megabytes when you buy a computer.
* Secondary memory -- This is most often some sort of rotating magnetic storage that keeps applications and data available to be used, and serves as virtual RAM under the control of the operating system.

The operating system must balance the needs of the various processes with the availability of the different types of memory, moving data in blocks (called pages) between available memory as the schedule of processes dictates.
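
Here is a minimal sketch of that page bookkeeping in C, assuming 4-kilobyte pages; the structure and names are illustrative, not any real operating system's implementation:

#include <stdbool.h>
#include <stdint.h>

#define PAGE_SIZE 4096
#define NUM_PAGES 256

/* Each page of a process is either in a RAM frame or out on disk. */
typedef struct {
    bool     in_ram;
    uint32_t frame;      /* RAM frame number when resident */
    uint32_t disk_slot;  /* swap location when not */
} page_entry;

static page_entry page_table[NUM_PAGES];

/* Translate a virtual address to a physical one; returning false is a
   page fault, and the OS must fetch the page back from secondary memory. */
bool translate(uint32_t vaddr, uint32_t *paddr) {
    page_entry *e = &page_table[vaddr / PAGE_SIZE];
    if (!e->in_ram)
        return false;
    *paddr = e->frame * PAGE_SIZE + vaddr % PAGE_SIZE;
    return true;
}

int main(void) {
    page_table[3] = (page_entry){ .in_ram = true, .frame = 42 };
    uint32_t paddr;
    return translate(3 * PAGE_SIZE + 100, &paddr) ? 0 : 1;
}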


Device Management

The path between the operating system and virtually all hardware not on the computer's motherboard goes through a special program called a driver. Much of a driver's function is to be the translator between the electrical signals of the hardware subsystems and the high-level programming languages of the operating system and application programs. Drivers take data that the operating system has defined as a file and translate them into streams of bits placed in specific locations on storage devices, or a series of laser pulses in a printer.
A driver helps the operating system communicate with the electrical signals from computer hardware. (Photo: Nael Nabil/iStockphoto.com)

Because there are such wide differences in the hardware, there are differences in the way that the driver programs function. Most run when the device is required, and function much the same as any other process. The operating system will frequently assign high-priority blocks to drivers so that the hardware resource can be released and readied for further use as quickly as possible.

One reason that drivers are separate from the operating system is so that new functions can be added to the driver -- and thus to the hardware subsystems -- without requiring the operating system itself to be modified, recompiled and redistributed. Through the development of new hardware device drivers, development often performed or paid for by the manufacturer of the subsystems rather than the publisher of the operating system, input/output capabilities of the overall system can be greatly enhanced.

Managing input and output is largely a matter of managing queues and buffers, special storage facilities that take a stream of bits from a device, perhaps a keyboard or a serial port, hold those bits, and release them to the CPU at a rate with which the CPU can cope. This function is especially important when a number of processes are running and taking up processor time. The operating system will instruct a buffer to continue taking input from the device, but to stop sending data to the CPU while the process using the input is suspended. Then, when the process requiring input is made active once again, the operating system will command the buffer to send data. This process allows a keyboard or a modem to deal with external users or computers at a high speed even though there are times when the CPU can't use input from those sources.
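
A minimal sketch of such a buffer in C, written as a ring of bytes: the device side deposits input at any time, while the CPU side drains it only when the consuming process is active. All names are illustrative.

#include <stdint.h>

#define BUF_SIZE 256

typedef struct {
    uint8_t  data[BUF_SIZE];
    unsigned head, tail;   /* device writes at head, CPU reads at tail */
} ring_buffer;

/* Device side: keep taking input even while the consumer is suspended. */
void buffer_put(ring_buffer *b, uint8_t byte) {
    unsigned next = (b->head + 1) % BUF_SIZE;
    if (next != b->tail) {         /* drop input if the ring is full */
        b->data[b->head] = byte;
        b->head = next;
    }
}

/* CPU side: called only once the consuming process is active again. */
int buffer_get(ring_buffer *b, uint8_t *byte) {
    if (b->head == b->tail)
        return 0;                  /* empty: nothing for the CPU yet */
    *byte = b->data[b->tail];
    b->tail = (b->tail + 1) % BUF_SIZE;
    return 1;
}

int main(void) {
    ring_buffer rb = { {0}, 0, 0 };
    buffer_put(&rb, 'k');          /* device deposits a keystroke */
    uint8_t byte;
    return buffer_get(&rb, &byte) && byte == 'k' ? 0 : 1;
}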

Managing all the resources of the computer system is a large part of the operating system's function and, in the case of real-time operating systems, may be virtually all the functionality required. For other operating systems, though, providing a relatively simple, consistent way for applications and humans to use the power of the hardware is a crucial part of their reason for existing.

Application Program Interfaces

Just as drivers provide a way for applications to make use of hardware subsystems without having to know every detail of the hardware's operation, application program interfaces (APIs) let application programmers use functions of the computer and operating system without having to directly keep track of all the details in the CPU's operation. Let's look at the example of creating a hard disk file for holding data to see why this can be important.

A programmer writing an application to record data from a scientific instrument might want to allow the scientist to specify the name of the file created. The operating system might provide an API function named MakeFile for creating files. When writing the program, the programmer would insert a line that looks like this:

MakeFile [1, %Name, 2]

In this example, the instruction tells the operating system to create a file that will allow random access to its data (signified by the 1 -- the other option might be 0 for a serial file), will have a name typed in by the user (%Name) and will be a size that varies depending on how much data is stored in the file (signified by the 2 -- other options might be zero for a fixed size, and 1 for a file that grows as data is added but does not shrink when data is removed). Now, let's look at what the operating system does to turn the instruction into action.

1. The operating system sends a query to the disk drive to get the location of the first available free storage location.

2. With that information, the operating system creates an entry in the file system showing the beginning and ending locations of the file, the name of the file, the file type, whether the file has been archived, which users have permission to look at or modify the file, and the date and time of the file's creation.

3. The operating system writes information at the beginning of the file that identifies the file, sets up the type of access possible and includes other information that ties the file to the application.

In all of this, the queries to the disk drive and the addresses of the beginning and ending points of the file are in formats heavily dependent on the manufacturer and model of the disk drive.

Because the programmer has written the program to use the API for disk storage, the programmer doesn't have to keep up with the instruction codes, data types and response codes for every possible hard disk and tape drive. The operating system, connected to drivers for the various hardware subsystems, deals with the changing details of the hardware. The programmer must simply write code for the API and trust the operating system to do the rest.
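
MakeFile is an invented call, but real APIs work exactly this way. For comparison, here is roughly the same request made through the C standard library, where fopen's mode string plays the role of the numeric option codes and hides every drive-specific step described above:

#include <stdio.h>

int main(void) {
    char name[256];

    printf("File name: ");              /* the user supplies the name, as %Name did */
    if (scanf("%255s", name) != 1)
        return 1;

    /* "wb+" asks for a new file opened for reading and writing that grows
       as data is written -- roughly the options encoded by the 1 and 2
       in the MakeFile example. */
    FILE *f = fopen(name, "wb+");
    if (f == NULL)
        return 1;

    fputs("instrument data\n", f);      /* stored without knowing the hardware */
    fclose(f);
    return 0;
}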

APIs have become one of the most hotly contested areas of the computer industry in recent years. Companies realize that getting programmers to use their API ultimately translates into the ability to control and profit from a particular part of the industry. This is one of the reasons that so many companies have been willing to provide applications like readers or viewers to the public at no charge. They know consumers will request that programs take advantage of the free readers, and application companies will be ready to pay royalties to allow their software to provide the functions requested by the consumers.

User Interface

Just as the API provides a consistent way for applications to use the resources of the computer system, a user interface (UI) brings structure to the interaction between a user and the computer. In the last decade, almost all development in user interfaces has been in the area of the graphical user interface (GUI), with two models, Apple's Macintosh and Microsoft's Windows, receiving most of the attention and gaining most of the market share. The popular open-source Linux operating system also supports a graphical user interface.


Screen shot of Red Hat's Linux operating system (copyright © 2003 Red Hat, Inc.; reused with permission)


There are other user interfaces, some graphical and some not, for other operating systems.

Unix, for example, has user interfaces called shells that present a more flexible and powerful interface than the operating system's standard text-based command line. Programs such as the Korn shell and the C shell are text-based interfaces that add important utilities, but their main purpose is to make it easier for the user to manipulate the functions of the operating system. There are also graphical user interfaces, such as the X Window System and GNOME, that make Unix and Linux more like Windows and Macintosh computers from the user's point of view.

It's important to remember that in all of these examples, the user interface is a program or set of programs that sits as a layer above the operating system itself. The same thing is true, with somewhat different mechanisms, of both Windows and Macintosh operating systems. The core operating-system functions -- the management of the computer system -- lie in the kernel of the operating system. The display manager is separate, though it may be tied tightly to the kernel beneath. The ties between the operating-system kernel and the user interface, utilities and other software define many of the differences in operating systems today, and will further define them in the future.

Operating System Development

For desktop systems, access to a LAN or the Internet has become such an expected feature that in many ways it's hard to discuss an operating system without making reference to its connections to other computers and servers. Operating system developers have made the Internet the standard method for delivering crucial operating system updates and bug fixes. Although it's possible to receive these updates via CD or DVD, it's becoming increasingly uncommon. In fact, some operating systems are only available through distribution over the Internet.

Further, a process called NetBooting has streamlined the capability to move the working operating system of a standard consumer desktop computer -- kernel, user interface and all -- off of the machine it controls. This was previously only possible for experienced power users on multi-user platforms like UNIX, and only with a suite of specialized applications. NetBooting allows the operating system for one computer to be served over a network connection by a remote computer connected anywhere in the network. One NetBoot server can serve operating systems to several dozen client computers simultaneously, and to the user sitting in front of each client computer the experience is just as if they were using their familiar desktop operating system, such as Windows or Mac OS.

One question about the future of operating systems concerns the ability of a particular philosophy of software distribution to create an operating system usable by corporations and consumers alike.



Linux, the operating system created and distributed according to the principles of open source, has had a significant impact on operating systems in general. Most operating systems, drivers and utility programs are written by commercial organizations that distribute executable versions of their software -- versions that can't be studied or altered. Open source requires the distribution of original source materials that can be studied, altered and built upon, with the results once again freely distributed. In the desktop computer realm, this has led to the development and distribution of countless useful and cost-free applications like the image manipulation program GIMP and the popular Web server Apache. In the consumer device realm, the use of Linux has paved the way for individual users to have greater control over how their devices behave.

Many consumer devices like cell phones and routers deliberately hide access to the operating system from the user, mostly to make sure that it's not inadvertently broken or removed. In many cases, they leave a "developer's mode" or "programmer's mode" open to allow changes to be made; however, that's only if you know how to find it. Often these systems are programmed so that only a limited range of changes can be made. Some devices leave both a mode of access and the means of making powerful changes open to users, especially those that use Linux. Here are a couple of examples:

* The TiVo DVR runs on a modified version of Linux. All of the modifications are public knowledge and can be downloaded along with some special tools for manipulating the code. Many enterprising TiVo users have added functionality to their systems, from increasing the storage capacity to getting access to UNIX shells to changing the mode from NTSC to PAL.



* Many home routers also run on Linux.

apple said...

Microprocessor Performance and Trends
The number of transistors available has a huge effect on the performance of a processor. As seen earlier, a typical instruction in a processor like an 8088 took 15 clock cycles to execute. Because of the design of the multiplier, it took approximately 80 cycles just to do one 16-bit multiplication on the 8088. With more transistors, much more powerful multipliers capable of single-cycle speeds become possible.

More transistors also allow for a technology called pipelining. In a pipelined architecture, instruction execution overlaps. So even though it might take five clock cycles to execute each instruction, there can be five instructions in various stages of execution simultaneously. That way it looks like one instruction completes every clock cycle.
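
The arithmetic behind that claim, worked through in a short C program (the stage and instruction counts are arbitrary example numbers):

#include <stdio.h>

int main(void) {
    const long stages = 5, n = 1000;        /* 5-stage pipeline, 1,000 instructions */
    long unpipelined = stages * n;          /* 5,000 cycles, one instruction at a time */
    long pipelined   = stages + (n - 1);    /* 1,004 cycles: fill once, then 1 per cycle */
    printf("speedup: %.2fx\n", (double)unpipelined / pipelined);  /* about 4.98x */
    return 0;
}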

Many modern processors have multiple instruction decoders, each with its own pipeline. This allows for multiple instruction streams, which means that more than one instruction can complete during each clock cycle. This technique can be quite complex to implement, so it takes lots of transistors.

Trends
The trend in processor design has primarily been toward full 32-bit ALUs with fast floating point processors built in and pipelined execution with multiple instruction streams. The newest thing in processor design is 64-bit ALUs, and people are expected to have these processors in their home PCs in the next decade. There has also been a tendency toward special instructions (like the MMX instructions) that make certain operations particularly efficient, and the addition of hardware virtual memory support and L1 caching on the processor chip. All of these trends push up the transistor count, leading to the multi-million transistor powerhouses available today. These processors can execute about one billion instructions per second!
Microprocessor Instructions
Even the incredibly simple microprocessor shown in the previous example will have a fairly large set of instructions that it can perform. The collection of instructions is implemented as bit patterns, each one of which has a different meaning when loaded into the instruction register. Humans are not particularly good at remembering bit patterns, so a set of short words are defined to represent the different bit patterns. This collection of words is called the assembly language of the processor. An assembler can translate the words into their bit patterns very easily, and then the output of the assembler is placed in memory for the microprocessor to execute.
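
As a minimal sketch of what an assembler does, the C fragment below maps a few mnemonics to the opcode numbers this article assigns later in the ROM section; a real assembler would also encode operands and resolve labels:

#include <stdio.h>
#include <string.h>

/* Mnemonic-to-opcode table (a few entries; the full set appears below). */
static const struct { const char *mnemonic; int opcode; } table[] = {
    { "LOADA", 1 }, { "LOADB", 2 }, { "CONB", 3 },
    { "SAVEB", 4 }, { "SAVEC", 5 }, { "ADD", 6 }, { "STOP", 18 },
};

/* Translate one mnemonic into the bit pattern stored in memory. */
int assemble(const char *word) {
    for (size_t i = 0; i < sizeof table / sizeof table[0]; i++)
        if (strcmp(word, table[i].mnemonic) == 0)
            return table[i].opcode;
    return -1;   /* unknown mnemonic */
}

int main(void) {
    printf("CONB -> %d\n", assemble("CONB"));  /* prints 3 */
    return 0;
}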

Here's the set of assembly language instructions that the designer might create for the simple microprocessor in our example:

* LOADA mem - Load register A from memory address
* LOADB mem - Load register B from memory address
* CONB con - Load a constant value into register B
* SAVEB mem - Save register B to memory address
* SAVEC mem - Save register C to memory address
* ADD - Add A and B and store the result in C
* SUB - Subtract A and B and store the result in C
* MUL - Multiply A and B and store the result in C
* DIV - Divide A and B and store the result in C
* COM - Compare A and B and store the result in test
* JUMP addr - Jump to an address
* JEQ addr - Jump, if equal, to address
* JNEQ addr - Jump, if not equal, to address
* JG addr - Jump, if greater than, to address
* JGE addr - Jump, if greater than or equal, to address
* JL addr - Jump, if less than, to address
* JLE addr - Jump, if less than or equal, to address
* STOP - Stop execution

If you have read How C Programming Works, then you know that this simple piece of C code will calculate the factorial of 5 (where the factorial of 5 = 5! = 5 * 4 * 3 * 2 * 1 = 120):

a=1;
f=1;
while (a <= 5)
{
f = f * a;
a = a + 1;
}

At the end of the program's execution, the variable f contains the factorial of 5.

Assembly Language
A C compiler translates this C code into assembly language. Assuming that RAM starts at address 128 in this processor, and ROM (which contains the assembly language program) starts at address 0, then for our simple microprocessor the assembly language might look like this:

// Assume a is at address 128
// Assume f is at address 129
0 CONB 1 // a=1;
1 SAVEB 128
2 CONB 1 // f=1;
3 SAVEB 129
4 LOADA 128 // if a > 5 then jump to 17
5 CONB 5
6 COM
7 JG 17
8 LOADA 129 // f=f*a;
9 LOADB 128
10 MUL
11 SAVEC 129
12 LOADA 128 // a=a+1;
13 CONB 1
14 ADD
15 SAVEC 128
16 JUMP 4 // loop back to if
17 STOP

ROM
So now the question is, "How do all of these instructions look in ROM?" Each of these assembly language instructions must be represented by a binary number. For the sake of simplicity, let's assume each assembly language instruction is given a unique number, like this:

* LOADA mem - 1
* LOADB mem - 2
* CONB con - 3
* SAVEB mem - 4
* SAVEC mem - 5
* ADD - 6
* SUB - 7
* MUL - 8
* DIV - 9
* COM - 10
* JUMP addr - 11
* JEQ addr - 12
* JNEQ addr - 13
* JG addr - 14
* JGE addr - 15
* JL addr - 16
* JLE addr - 17
* STOP - 18

The numbers are known as opcodes. In ROM, our little program would look like this:

// Assume a is at address 128
// Assume f is at address 129
Addr opcode/value
0 3 // CONB 1
1 1
2 4 // SAVEB 128
3 128
4 3 // CONB 1
5 1
6 4 // SAVEB 129
7 129
8 1 // LOADA 128
9 128
10 3 // CONB 5
11 5
12 10 // COM
13 14 // JG 17
14 31
15 1 // LOADA 129
16 129
17 2 // LOADB 128
18 128
19 8 // MUL
20 5 // SAVEC 129
21 129
22 1 // LOADA 128
23 128
24 3 // CONB 1
25 1
26 6 // ADD
27 5 // SAVEC 128
28 128
29 11 // JUMP 4
30 8
31 18 // STOP

You can see that seven lines of C code became 18 lines of assembly language, and that became 32 bytes in ROM.

Decoding
The instruction decoder needs to turn each of the opcodes into a set of signals that drive the different components inside the microprocessor. Let's take the ADD instruction as an example and look at what it needs to do:

1. During the first clock cycle, we need to actually load the instruction. Therefore the instruction decoder needs to:
* activate the tri-state buffer for the program counter
* activate the RD line
* activate the data-in tri-state buffer
* latch the instruction into the instruction register
2. During the second clock cycle, the ADD instruction is decoded. It needs to do very little:
* set the operation of the ALU to addition
* latch the output of the ALU into the C register
3. During the third clock cycle, the program counter is incremented (in theory this could be overlapped into the second clock cycle).

Every instruction can be broken down as a set of sequenced operations like these that manipulate the components of the microprocessor in the proper order. Some instructions, like this ADD instruction, might take two or three clock cycles. Others might take five or six clock cycles.
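
In hardware the decoder is wiring, but its effect can be sketched in software. The following C program is a small interpreter for the ROM image built in the previous section; it implements only the opcodes that program uses, and running it prints f = 120, matching the factorial example:

#include <stdio.h>

enum { LOADA = 1, LOADB = 2, CONB = 3, SAVEB = 4, SAVEC = 5,
       ADD = 6, MUL = 8, COM = 10, JUMP = 11, JG = 14, STOP = 18 };

int main(void) {
    /* 256-byte memory map: the 32-byte ROM image at address 0,
       RAM (a at 128, f at 129) above it. */
    unsigned char mem[256] = {
        3,1, 4,128, 3,1, 4,129, 1,128, 3,5, 10, 14,31,
        1,129, 2,128, 8, 5,129, 1,128, 3,1, 6, 5,128, 11,8, 18 };
    int a = 0, b = 0, c = 0, test = 0, pc = 0;

    for (;;) {
        switch (mem[pc++]) {                /* fetch and decode one opcode */
        case LOADA: a = mem[mem[pc++]]; break;
        case LOADB: b = mem[mem[pc++]]; break;
        case CONB:  b = mem[pc++];      break;
        case SAVEB: mem[mem[pc++]] = (unsigned char)b; break;
        case SAVEC: mem[mem[pc++]] = (unsigned char)c; break;
        case ADD:   c = a + b;          break;
        case MUL:   c = a * b;          break;
        case COM:   test = (a > b) - (a < b); break;  /* -1, 0 or +1 */
        case JUMP:  pc = mem[pc];       break;
        case JG:    pc = (test > 0) ? mem[pc] : pc + 1; break;
        case STOP:  printf("f = %d\n", mem[129]);      /* prints 120 */
                    return 0;
        }
    }
}
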
Microprocessor Memory
The previous section talked about the address and data buses, as well as the RD and WR lines. These buses and lines connect either to RAM or ROM -- generally both. In our sample microprocessor, we have an address bus 8 bits wide and a data bus 8 bits wide. That means that the microprocessor can address 2^8 = 256 bytes of memory, and it can read or write 8 bits of the memory at a time. Let's assume that this simple microprocessor has 128 bytes of ROM starting at address 0 and 128 bytes of RAM starting at address 128.


ROM chip

ROM stands for read-only memory. A ROM chip is programmed with a permanent collection of pre-set bytes. The address bus tells the ROM chip which byte to get and place on the data bus. When the RD line changes state, the ROM chip presents the selected byte onto the data bus.


RAM chip
RAM stands for random-access memory. RAM contains bytes of information, and the microprocessor can read or write to those bytes depending on whether the RD or WR line is signaled. One problem with today's RAM chips is that they forget everything once the power goes off. That is why the computer needs ROM.

By the way, nearly all computers contain some amount of ROM (it is possible to create a simple computer that contains no RAM -- many microcontrollers do this by placing a handful of RAM bytes on the processor chip itself -- but generally impossible to create one that contains no ROM). On a PC, the ROM is called the BIOS (Basic Input/Output System). When the microprocessor starts, it begins executing instructions it finds in the BIOS. The BIOS instructions do things like test the hardware in the machine, and then it goes to the hard disk to fetch the boot sector (see How Hard Disks Work for details). This boot sector is another small program, and the BIOS stores it in RAM after reading it off the disk. The microprocessor then begins executing the boot sector's instructions from RAM. The boot sector program will tell the microprocessor to fetch something else from the hard disk into RAM, which the microprocessor then executes, and so on. This is how the microprocessor loads and executes the entire operating system.
Microprocessor Logic
To understand how a microprocessor works, it is helpful to look inside and learn about the logic used to create one. In the process you can also learn about assembly language -- the native language of a microprocessor -- and many of the things that engineers can do to boost the speed of a processor.

A microprocessor executes a collection of machine instructions that tell the processor what to do. Based on the instructions, a microprocessor does three basic things:

* Using its ALU (Arithmetic/Logic Unit), a microprocessor can perform mathematical operations like addition, subtraction, multiplication and division. Modern microprocessors contain complete floating point processors that can perform extremely sophisticated operations on large floating point numbers.
* A microprocessor can move data from one memory location to another.
* A microprocessor can make decisions and jump to a new set of instructions based on those decisions.

There may be very sophisticated things that a microprocessor does, but those are its three basic activities. The following diagram shows an extremely simple microprocessor capable of doing those three things:


This is about as simple as a microprocessor gets. This microprocessor has:

* An address bus (that may be 8, 16 or 32 bits wide) that sends an address to memory
* A data bus (that may be 8, 16 or 32 bits wide) that can send data to memory or receive data from memory
* An RD (read) and WR (write) line to tell the memory whether it wants to set or get the addressed location
* A clock line that lets a clock pulse sequence the processor
* A reset line that resets the program counter to zero (or whatever) and restarts execution

Let's assume that both the address and data buses are 8 bits wide in this example.

Here are the components of this simple microprocessor:

* Registers A, B and C are simply latches made out of flip-flops. (See the section on "edge-triggered latches" in How Boolean Logic Works for details.)
* The address latch is just like registers A, B and C.
* The program counter is a latch with the extra ability to increment by 1 when told to do so, and also to reset to zero when told to do so.
* The ALU could be as simple as an 8-bit adder (see the section on adders in How Boolean Logic Works for details), or it might be able to add, subtract, multiply and divide 8-bit values. Let's assume the latter here.
* The test register is a special latch that can hold values from comparisons performed in the ALU. An ALU can normally compare two numbers and determine if they are equal, if one is greater than the other, etc. The test register can also normally hold a carry bit from the last stage of the adder. It stores these values in flip-flops and then the instruction decoder can use the values to make decisions.
* There are six boxes marked "3-State" in the diagram. These are tri-state buffers. A tri-state buffer can pass a 1, a 0 or it can essentially disconnect its output (imagine a switch that totally disconnects the output line from the wire that the output is heading toward). A tri-state buffer allows multiple outputs to connect to a wire, but only one of them to actually drive a 1 or a 0 onto the line.
* The instruction register and instruction decoder are responsible for controlling all of the other components.



Although they are not shown in this diagram, there would be control lines from the instruction decoder that would:

* Tell the A register to latch the value currently on the data bus
* Tell the B register to latch the value currently on the data bus
* Tell the C register to latch the value currently output by the ALU
* Tell the program counter register to latch the value currently on the data bus
* Tell the address register to latch the value currently on the data bus
* Tell the instruction register to latch the value currently on the data bus
* Tell the program counter to increment
* Tell the program counter to reset to zero
* Activate any of the six tri-state buffers (six separate lines)
* Tell the ALU what operation to perform
* Tell the test register to latch the ALU's test bits
* Activate the RD line
* Activate the WR line

Microprocessor Progression: Intel
The Intel 8080 was the first microprocessor in a home computer.
The first microprocessor to make it into a home computer was the Intel 8080, a complete 8-bit computer on one chip, introduced in 1974. The first microprocessor to make a real splash in the market was the Intel 8088, introduced in 1979 and incorporated into the IBM PC (which first appeared in 1981). If you are familiar with the PC market and its history, you know that the PC market moved from the 8088 to the 80286 to the 80386 to the 80486 to the Pentium to the Pentium II to the Pentium III to the Pentium 4. All of these microprocessors are made by Intel and all of them are improvements on the basic design of the 8088. The Pentium 4 can execute any piece of code that ran on the original 8088, but it does it about 5,000 times faster!

The following table helps you to understand the differences between the different processors that Intel has introduced over the years.

Name Date Transistors Microns Clock speed Data width MIPS
8080 1974 6,000 6 2 MHz 8 bits 0.64
8088 1979 29,000 3 5 MHz 16 bits, 8-bit bus 0.33
80286 1982 134,000 1.5 6 MHz 16 bits 1
80386 1985 275,000 1.5 16 MHz 32 bits 5
80486 1989 1,200,000 1 25 MHz 32 bits 20
Pentium 1993 3,100,000 0.8 60 MHz 32 bits, 64-bit bus 100
Pentium II 1997 7,500,000 0.35 233 MHz 32 bits, 64-bit bus ~300
Pentium III 1999 9,500,000 0.25 450 MHz 32 bits, 64-bit bus ~510
Pentium 4 2000 42,000,000 0.18 1.5 GHz 32 bits, 64-bit bus ~1,700
Pentium 4 "Prescott" 2004 125,000,000 0.09 3.6 GHz 32 bits, 64-bit bus ~7,000

Compiled from The Intel Microprocessor Quick Reference Guide and TSCP Benchmark Scores

What's a Chip?
A chip is also called an integrated circuit. Generally it is a small, thin piece of silicon onto which the transistors making up the microprocessor have been etched. A chip might be as large as an inch on a side and can contain tens of millions of transistors. Simpler processors might consist of a few thousand transistors etched onto a chip just a few millimeters square.

Information about this table:

* The date is the year that the processor was first introduced. Many processors are re-introduced at higher clock speeds for many years after the original release date.
* Transistors is the number of transistors on the chip. You can see that the number of transistors on a single chip has risen steadily over the years.
* Microns is the width, in microns, of the smallest wire on the chip. For comparison, a human hair is 100 microns thick. As the feature size on the chip goes down, the number of transistors rises.
* Clock speed is the maximum rate that the chip can be clocked at. Clock speed will make more sense in the next section.
* Data Width is the width of the ALU. An 8-bit ALU can add/subtract/multiply/etc. two 8-bit numbers, while a 32-bit ALU can manipulate 32-bit numbers. An 8-bit ALU would have to execute four instructions to add two 32-bit numbers, while a 32-bit ALU can do it in one instruction (see the sketch after this list). In many cases, the external data bus is the same width as the ALU, but not always. The 8088 had a 16-bit ALU and an 8-bit bus, while the modern Pentiums fetch data 64 bits at a time for their 32-bit ALUs.
* MIPS stands for "millions of instructions per second" and is a rough measure of the performance of a CPU. Modern CPUs can do so many different things that MIPS ratings lose a lot of their meaning, but you can get a general sense of the relative power of the CPUs from this column.
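
As promised in the data-width note above, here is a minimal C sketch of why ALU width matters: an 8-bit ALU must add two 32-bit numbers one byte at a time, passing a carry along, while a 32-bit ALU does the same work in a single instruction.

#include <stdint.h>
#include <stdio.h>

/* Add two 32-bit values the way an 8-bit ALU must: one byte at a time,
   carrying into the next byte. A 32-bit ALU does this in one instruction. */
uint32_t add32_via_8bit(uint32_t x, uint32_t y) {
    uint32_t result = 0;
    unsigned carry = 0;
    for (int i = 0; i < 4; i++) {              /* four narrow additions */
        unsigned sum = ((x >> (8 * i)) & 0xFF) + ((y >> (8 * i)) & 0xFF) + carry;
        result |= (uint32_t)(sum & 0xFF) << (8 * i);
        carry = sum >> 8;                      /* carry into the next byte */
    }
    return result;
}

int main(void) {
    printf("%u\n", (unsigned)add32_via_8bit(300000u, 2000000u));  /* prints 2300000 */
    return 0;
}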

From this table you can see that, in general, there is a relationship between clock speed and MIPS. The maximum clock speed is a function of the manufacturing process and delays within the chip. There is also a relationship between the number of transistors and MIPS. For example, the 8088 clocked at 5 MHz but only executed at 0.33 MIPS (about one instruction per 15 clock cycles). Modern processors can often execute at a rate of two instructions per clock cycle. That improvement is directly related to the number of transistors on the chip and will make more sense in the next section.
Wednesday, January 14, 2009
How Microprocessors Work

Introduction to How Microprocessors Work
Microprocessors are at the heart of all computers.

The computer you are using to read this page uses a microprocessor to do its work. The microprocessor is the heart of any normal computer, whether it is a desktop machine, a server or a laptop. The microprocessor you are using might be a Pentium, a K6, a PowerPC, a Sparc or any of the many other brands and types of microprocessors, but they all do approximately the same thing in approximately the same way.

A microprocessor -- also known as a CPU or central processing unit -- is a complete computation engine that is fabricated on a single chip. The first microprocessor was the Intel 4004, introduced in 1971. The 4004 was not very powerful -- all it could do was add and subtract, and it could only do that 4 bits at a time. But it was amazing that everything was on one chip. Prior to the 4004, engineers built computers either from collections of chips or from discrete components (transistors wired one at a time). The 4004 powered one of the first portable electronic calculators.

If you have ever wondered what the microprocessor in your computer is doing, or if you have ever wondered about the differences between types of microprocessors, then read on. In this article, you will learn how fairly simple digital logic techniques allow a computer to do its job, whether it's playing a game or spell checking a document!


Members:
Irene Barbas
Rachel Jabaybay
Gretchel Songcuya
Elna Menoso

grace said...

Ma'am, good afternoon. This is my blog for SRM, and our report is there...
http://grace-systemresource.blogspot.com
BSIT IV-C
Cezar miranda
Grace Entes
Susie ESpina
Michael belgira

apple said...

Ma'am Dy, this is my blog URL!
http://gretchel-songcuya-interview.blogspot.com/
