Tuesday, November 25, 2008
Tuesday, September 23, 2008
1. Explain the circumstances under which a token-ring network is more effective than an Ethernet network.
Token Ring Network
Token Ring technology was introduced by IBM in 1984 and defined in standard IEEE 802.5 by the Institute of Electrical and Electronics Engineers. A token ring network has a logical ring topology and may be set up as a physical ring, but it is usually implemented in a physical star topology.
The central device of a token ring, called a Media Access Unit (MAU) or Multistation Access Unit (MSAU), can be thought of as a "Ring in a Box". It allows multiple network stations in a logical ring to connect as a physical star. The loop that used to make up the ring is integrated into a chip.
In a physical Token Ring topology, when a cable is open or a station is not operating, the entire network goes down. With a MAU, however, the broken circuit is shorted out, closing the loop so that the network can continue to operate, and nonoperating stations may be unplugged without bringing down the entire network.
Token ring protocol operates at the data link layer of the OSI model. In a token ring network, the first computer to come online creates a three-byte data frame called a token. The token is sent on the cable to the next node in the ring. The token continues around the ring until it arrives at a node that wants to transmit data. The node that wants to transmit data takes control of the token.
A node can only transmit data on the network cable when it takes control of the token. Since only one token exists, only one node can transmit at a time. This prevents the collisions that might occur with the Ethernet CSMA/CD access method.
After a node takes control of the token, it transmits a data packet. A Token Ring packet contains four main parts: the data, the MAC address of the packet's source, the MAC address of the packet's destination, and a Frame Check Sequence (FCS) error-checking code.
The data packet continues around the ring until it reaches the node with the destination address. The receiving node accepts the data and marks the packet to indicate that the data was received. The packet then continues around the ring until it reaches the source node again. The source node removes the packet from the cable and releases the token so that another node may transmit.
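The token-passing and frame-circulation behavior described above can be sketched in a short Python simulation. This is an illustrative model only: the frame fields and node names are hypothetical, not the exact IEEE 802.5 frame format.

```python
from dataclasses import dataclass

# Hypothetical frame loosely modeling the parts described above;
# field names are illustrative, not the exact IEEE 802.5 layout.
@dataclass
class Frame:
    src: str                # source MAC address
    dst: str                # destination MAC address
    data: str               # payload
    received: bool = False  # set by the destination node

def ring_transmit(nodes, src, dst, data):
    """Pass a frame around the ring from src until it returns to src."""
    start = nodes.index(src)
    frame = Frame(src, dst, data)
    # Walk the ring once, starting at the node after the sender.
    for i in range(1, len(nodes) + 1):
        node = nodes[(start + i) % len(nodes)]
        if node == frame.dst:
            frame.received = True  # destination copies the data, marks the frame
        if node == frame.src:
            return frame           # source removes the frame, releases the token
    return frame

result = ring_transmit(["A", "B", "C", "D"], "A", "C", "hello")
print(result.received)  # True: the frame passed the destination before returning
```

Because the sender only releases the token after its own frame comes back around, only one station transmits at a time, which is the collision-free property described above.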
Token Ring initially ran at 4 Mbit/s. In 1989, IBM introduced 16 Mbit/s Token Ring, and other companies introduced proprietary 10 Mbit/s and 12 Mbit/s versions. Speeds of 4 Mbit/s, 16 Mbit/s, 100 Mbit/s, and 1 Gbit/s have been standardized by IEEE 802.5.
2. Although security issues were not mentioned in this chapter, every network owner must consider them. Knowing that open networks allow all data to pass to every node, describe the possible security concerns of open network architectures. Include the implications of passing logon procedures, user IDs, and passwords openly on the network.
In today's computer networks, there is growing concern that the nation's networks are becoming more vulnerable to serious disruptions.
A National Research Council report reached several conclusions and listed numerous recommendations for reducing network vulnerabilities. Several of those conclusions are listed below:
-The evolution of switching technology is resulting in fewer switches, a concentration of control, and thus greater vulnerability of the public switched networks.
-The public switched networks are increasingly controlled by and dependent on software that will increase access to executable code and databases for user configuration of features, a situation that creates vulnerability to damage by "hackers," "viruses," "worms," and "time bombs."
-The power of optical fiber technology is diminishing the number of geographic transmission routes, increasing the concentration of traffic within those routes, reducing the use of other transmission technologies, and restricting spatial diversity. All these changes are resulting in an increase in network vulnerability.
-There is a progressive concentration of various kinds of traffic in and through single buildings, resulting in increasing vulnerability. This trend increases the potential for catastrophic disruption caused by damage to even a single location.
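Because an open, shared-medium architecture delivers every frame to every node, any station can passively read logon exchanges sent in the clear. The following Python sketch shows how trivially credentials can be harvested; the captured payloads and the USER/PASS keywords are hypothetical, loosely modeled on legacy FTP/Telnet-style cleartext logons.

```python
import re

# Hypothetical payloads captured on a shared medium; on an open
# architecture every station's interface can read these bytes,
# not just the addressee.
captured_frames = [
    b"GET /index.html HTTP/1.0",
    b"USER alice",   # cleartext logon procedure
    b"PASS s3cret",  # password passed openly on the network
]

def harvest_credentials(frames):
    """Pull user IDs and passwords out of cleartext logon traffic."""
    creds = {}
    for payload in frames:
        text = payload.decode("ascii", errors="ignore")
        if m := re.match(r"USER (\S+)", text):
            creds["user"] = m.group(1)
        elif m := re.match(r"PASS (\S+)", text):
            creds["password"] = m.group(1)
    return creds

print(harvest_credentials(captured_frames))
# {'user': 'alice', 'password': 's3cret'}
```

The point of the sketch is that no "attack" is needed at all: a node simply keeps the frames it was already given, which is why logon procedures, user IDs, and passwords must never travel in the clear on such a network.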
3. Remembering the discussion of deadlocks, if you were designing a networked system, how would you manage the threat of deadlocks in your network? Consider all of the following: prevention, detection, avoidance, and recovery.
One basic strategy for handling deadlocks is to ensure violation of at least one of the three conditions necessary for deadlock (exclusive control, hold-wait, and no preemption). This method is usually referred to as deadlock prevention, unless its primary aim is to avoid deadlock by using information about the processes' future intentions regarding resource requirements. A totally different strategy interrogates the process/resource relationships from time to time in order to identify the existence of a deadlock. This latter method presumes that the system can subsequently do something about the problem.
Detection techniques. These techniques assume that all resource requests will be granted eventually. A periodically invoked algorithm examines current resource allocations and outstanding requests to determine whether any processes or resources are deadlocked. If a deadlock is discovered, the system must recover as gracefully as possible by preempting resources from affected processes until the deadlock is broken. Detection-scheme overhead includes not only the run-time cost of the algorithm but also the potential losses inherent in preempting resources. Since no action takes place until a deadlock actually occurs, resources may be held idle by blocked processes for long periods of time. Sometimes, using detection principles effectively is difficult; for example, preemption of resources such as tape drives might incur unacceptable overhead. Nevertheless, detection techniques have some advantages, since the schemes are invoked intermittently and only essential preemptions need be performed. In the database context, detection methods rely on the management system to abort, roll back to a previous checkpoint, and restart at least one process to break the deadlock. Here, the problem of rollback and recovery assumes great importance from the viewpoint of maintaining database consistency.
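A common way to implement the periodic check described above is to search the wait-for graph for a cycle. Here is a minimal Python sketch, assuming the simplified single-resource model in which each blocked process waits on exactly one other process:

```python
def find_deadlock(wait_for):
    """Detect a cycle in a wait-for graph.

    wait_for[p] names the process that blocked process p is waiting on.
    Returns a list of processes forming a cycle, or None if no deadlock.
    """
    for start in wait_for:
        seen, node = [], start
        while node in wait_for:  # follow the chain of blocked processes
            if node in seen:
                return seen[seen.index(node):]  # the cycle itself
            seen.append(node)
            node = wait_for[node]
    return None

# P1 waits on P2, P2 waits on P3, P3 waits on P1: a circular wait.
print(find_deadlock({"P1": "P2", "P2": "P3", "P3": "P1"}))  # ['P1', 'P2', 'P3']
print(find_deadlock({"P1": "P2", "P2": "P3"}))              # None
```

A recovery policy then picks a victim from the returned cycle to abort or roll back, which is exactly the preemption cost the paragraph above warns about.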
Prevention mechanisms. Prevention is the process of constraining system users so that requests leading to deadlock never occur. Most proposals for prevention require each process to specify all needed resources before transactions begin. Deadlocks can be prevented in several ways, including requesting all resources at once, preempting resources held, and ordering resources. The simplest way of preventing deadlock is to outlaw concurrency, but this leads to very poor resource utilization and is not consistent with current system design philosophies. Another method requires that all resources be acquired before processing starts. Such a scheme is inefficient, since resources held may be idle for prolonged periods, but it works well for processes which perform a single burst of activity, such as input/output drivers, since the resource can be released immediately after each use. For processes with fluctuating requirements, the method can be impractical. In a database environment, it may be impossible for a data-driven process to specify and acquire all needed resources before beginning execution. In any case, the scheme discriminates heavily against data-driven processes where relationships in the data indicate what future resources are required for processing. Certain other prevention methods require a blocked process to release resources requested by an active process. For example, when a process needs more main memory than is currently available, it becomes blocked. Subsequently the process is swapped to secondary storage, its memory preempted for use by an active process. The blocked process is swapped back only when the entire, larger quantity of memory is available. In some peculiar situations in database systems, this use of preemption to prevent deadlocks is subject to cyclic restart, in which two or more processes loop by continually blocking, aborting, and restarting each other.
Avoidance schemes. In avoidance schemes, a resource request is granted only if at least one way remains for all processes to complete execution. One basic scheme, referred to as the "banker's algorithm" [6], manages multiple units of a single resource by requiring that processes specify their total resource needs at initiation time. Furthermore, each process acquires or returns resource units one at a time. The algorithm denies a request by any process whose remaining needs are in excess of the available resources. Effectively, the scheme projects detection into the future to keep the system from committing itself to an allocation which eventually leads to deadlock. Habermann developed a "maximum claims strategy" [7] to control the future resource requirements of each process. This generalization of the banker's algorithm is a practical example of avoidance but requires quantity information, in the form of upper bounds, on every resource the process needs. If there is a process which can run to completion using only its allocated resources and those that are immediately available, then the current state of the system is said to be safe, or "deadlock-free." Every successor state obtained this way is safe. Deadlock avoidance is achieved by testing each possible allocation and making only those which lead to safe states. If the process originating this allocation can run to completion and release the resources it holds, then all other processes in the system can be completed, since the state prior to the allocation was safe.
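The safety test described above can be sketched for a single resource type with multiple units. This is a simplified illustration of the banker's algorithm, not the full maximum-claims scheme; the numbers in the example are made up.

```python
def is_safe(available, allocation, max_claim):
    """Banker's safety test for a single resource type.

    allocation[i] = units process i currently holds;
    max_claim[i]  = its declared maximum need. The state is safe if some
    ordering lets every process finish with the units on hand.
    """
    need = [m - a for m, a in zip(max_claim, allocation)]
    finished = [False] * len(allocation)
    while True:
        # Find a process whose remaining need fits in what is available.
        for i, done in enumerate(finished):
            if not done and need[i] <= available:
                available += allocation[i]  # it finishes, returns its units
                finished[i] = True
                break
        else:
            return all(finished)            # no runnable process left

def grant(available, allocation, max_claim, proc, units):
    """Grant a request only if the resulting state is still safe."""
    trial = allocation[:]
    trial[proc] += units
    return is_safe(available - units, trial, max_claim)

# 10 units total: 7 allocated, 3 free; maxima declared at initiation time.
print(grant(3, [3, 2, 2], [7, 4, 5], 1, 1))  # True: a safe sequence exists
print(grant(3, [3, 2, 2], [7, 4, 5], 0, 3))  # False: could lead to deadlock
```

Note how the second request is refused even though 3 free units exist; that is the "projecting detection into the future" described above, at the cost of requiring every process to declare its maximum claim up front.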
4. Assuming you had sufficient funds to upgrade only one component of a system with which you are familiar, explain which component you would choose to upgrade to improve overall performance, and why.
Memory Upgrade
The system memory is the place where the computer holds current programs and data that are in use. The term "memory" is somewhat ambiguous; it can refer to many different parts of the PC because there are so many different kinds of memory that a PC uses. However, when used by itself, "memory" usually refers to the main system memory, which holds the instructions that the processor executes and the data that those instructions work with. Your system memory is an important part of the main processing subsystem of the PC, tied in with the processor, cache, motherboard and chipset. Memory plays a significant role in the following important aspects of your computer system:
Performance:
The amount and type of system memory you have is an important contributing factor to overall performance. In many ways it is more important than the processor, because insufficient memory can cause a processor to perform at 50% or more below its potential. This is an important point that is often overlooked.
Software Support:
Newer programs require more memory than older ones. More memory gives you access to programs that you cannot run with less.
Reliability and Stability:
Bad memory is a leading cause of mysterious system problems. Ensuring you have high-quality memory will result in a PC that runs smoothly and exhibits fewer problems. Also, even high-quality memory will not work well if you use the wrong kind.
Upgradability:
There are many different types of memory available, and some are more universal than others. Making a wise choice can allow you to migrate your memory to a future system or continue to use it after you upgrade your motherboard.
Memory is perhaps the easiest upgrade to perform, and possibly the one that will give the most immediate benefit. The main points to note are:
-Make sure that any new memory you add is at least as fast as the bus speed
-Memory which is faster than the bus speed is OK
-If possible, memory modules should be as similar to one another as possible; in an upgrade, however, this may not always be achievable.
-The minimum recommended memory for most applications is tending towards 128 MB, and 256 MB as standard is not unusual.
Potential problems that you may have with regard to memory upgrades are:
-The motherboard may require a type of memory that is no longer readily available; influential factors include voltage and speed.
-Even if memory for an old computer is still available, it may be very expensive.