Nulling
The condition known as nulling occurs when one or more reflected waves arrive at the receiver out-of-phase with the main wave and with sufficient amplitude to cancel it. As illustrated in Figure 9.4, when reflected waves arrive out-of-phase with the main wave at the receiver, the condition can cancel or “null” the entire set of RF waves, including the main wave.
When nulling occurs, retransmission of the data will not solve the problem. The transmitter, receiver, or reflective objects must be moved. Sometimes more than one of these must be relocated to compensate for the nulling effects on the RF wave.
Increased Signal Amplitude
Multipath conditions can also cause a signal’s amplitude to be greater than it would have been without reflected waves present. Upfade is the term used to describe multipath that causes an RF signal to gain strength. Upfade, as illustrated in Figure 9.5, occurs when reflected signals arrive at the receiver in-phase with the main signal; just as with a decreased signal, the reflected waves add to the main signal. When multipath is additive in this way, the total signal that reaches the receiver is stronger than it would otherwise have been without multipath present. Even so, under no circumstance can multipath cause the received signal to be stronger than the signal was when it left the transmitting device.
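The phase relationships behind upfade, downfade, and nulling can be sketched with simple phasor (complex-number) addition. This is an illustrative model only; the amplitudes and phases below are made-up values, not measurements:

```python
import math

def combined_amplitude(waves):
    """Sum sinusoids of a single frequency, given as (amplitude, phase_radians)
    pairs, using phasor (complex) addition; return the resulting amplitude."""
    total = sum(a * complex(math.cos(p), math.sin(p)) for a, p in waves)
    return abs(total)

main = (1.0, 0.0)                                      # main wave, reference phase

upfade = combined_amplitude([main, (0.4, 0.0)])        # reflection arrives in-phase
downfade = combined_amplitude([main, (0.4, math.pi)])  # 180 degrees out-of-phase
null = combined_amplitude([main, (1.0, math.pi)])      # equal amplitude, opposite phase

print(upfade)    # 1.4  -> stronger than the main wave alone (upfade)
print(downfade)  # ~0.6 -> weaker (downfade)
print(null)      # ~0.0 -> cancelled (nulling), up to floating-point noise
```

The same addition explains all three conditions: only the phase and amplitude of the reflected wave differ.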
It is important to understand that a received RF signal can never be as strong as the signal that was transmitted, due to the significance of free space path loss (usually called simply path loss). Path loss is the loss of signal amplitude that occurs as the wave front expands while the signal travels through open space.
Think of path loss as someone blowing a bubble with bubble gum. As the gum expands, the gum at any point becomes thinner. If someone were to reach out and grab a 1-inch square piece of this bubble, the amount of gum they would actually get would be less and less as the bubble expanded. If a person grabbed a piece of the bubble while it was still small (close to the person's mouth, which represents the transmitter), the person would get a significant amount of gum. If the person waited to grab that same size piece until the bubble was large (farther from the transmitter), the piece would contain only a very small amount of gum. This illustration shows that path loss is affected by two factors: first, the distance between transmitter and receiver, and second, the size of the receiving aperture (the size of the piece of gum that was grabbed).
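The bubble-gum analogy maps onto the standard free space path loss approximation, FSPL(dB) = 20·log10(d) + 20·log10(f) + 32.44, with d in kilometers and f in MHz. The formula is standard RF engineering rather than something given in this text; the distances and channel below are examples:

```python
import math

def fspl_db(distance_km, freq_mhz):
    """Free space path loss in dB (standard Friis-derived approximation)."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

# Loss grows with distance: doubling the distance adds about 6 dB of loss.
loss_100m = fspl_db(0.1, 2437)          # 100 m on 2.4 GHz channel 6
loss_200m = fspl_db(0.2, 2437)
print(round(loss_100m, 1))              # ~80.2 dB
print(round(loss_200m - loss_100m, 1))  # ~6.0 dB extra for doubling the distance
```

Note that frequency appears in the formula only because a fixed receiving aperture captures a smaller fraction of the expanding wave front at higher frequencies.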
Wednesday, December 2, 2009
Friday, November 13, 2009
Troubleshooting Wireless LAN Installations
Just as traditional wired networks have challenges during implementation, wireless LANs have their own set of challenges, mainly dealing with the behavior of RF signals. In this chapter, we will discuss the more common obstacles to successful implementation of a wireless LAN, and how to troubleshoot them. There are different methods of discovering when these challenges exist, and each of the challenges discussed has its remedies and workarounds.
The challenges discussed herein are considered by many to be “textbook” problems that can occur within any wireless LAN installation; they can therefore be avoided through careful planning and simple awareness that these problems can and will occur.
Multipath
If you will recall from Chapter 2, RF Fundamentals, there are two types of line of sight (LOS). First, there is visual LOS, which is what the human eye sees. Visual LOS is your first and most basic LOS test. If you can see the RF receiver from the installation point of the RF transmitter, then you have visual line of sight. Second, and different from visual LOS, is RF line of sight. RF LOS is what your RF device can “see”.
The general behavior of an RF signal is to grow wider as it is transmitted farther. Because of this type of behavior, the RF signal will encounter objects in its path that will reflect, diffract, or otherwise interfere with the signal. When an RF wave is reflected off an object (water, tin roof, other metal object, etc.) while moving towards its receiver, multiple wave fronts are created (one for each reflection point). There are now waves moving in many directions, and many of these reflected waves are still headed toward the receiver. This behavior is where we get the term multipath, as shown in Figure 9.1. Multipath is defined as the composition of a primary signal plus duplicate or echoed wave fronts caused by reflections of waves off objects between the transmitter and receiver. The delay between the instant that the main signal arrives and the instant that the last reflected signal arrives is known as delay spread.
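Delay spread follows directly from path-length differences, since each reflected wave travels farther than the direct wave before reaching the receiver. A quick sketch (the 100 m direct path and 160 m reflected path are hypothetical values):

```python
# Speed of light, expressed in meters per microsecond
C = 299.792458

def delay_us(path_m):
    """Propagation delay in microseconds for a given path length in meters."""
    return path_m / C

direct = delay_us(100.0)      # direct path from transmitter to receiver
reflected = delay_us(160.0)   # same signal bounced off a reflective surface

delay_spread = reflected - direct
print(round(delay_spread * 1000, 1))  # ~200.1 -> about 200 ns of delay spread
```

With several reflection points, the delay spread is measured from the main signal's arrival to the arrival of the last (longest-path) reflection.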
Effects of Multipath
Multipath can cause several different conditions, all of which can affect the transmission of the RF signal differently. These conditions include:
- Decreased Signal Amplitude (downfade)
- Corruption
- Nulling
- Increased Signal Amplitude (upfade)
Decreased Signal Amplitude
When an RF wave arrives at the receiver, many reflected waves may arrive at the same time from different directions. The amplitudes of these waves add to that of the main RF wave. Reflected waves that arrive out-of-phase with the main wave can therefore decrease signal amplitude at the receiver, as illustrated in Figure 9.2. This occurrence is commonly referred to as downfade and should be taken into consideration when conducting a site survey and selecting appropriate antennas.
Corruption
Corrupted signals due to multipath result from the same phenomena that cause decreased amplitude, but to a greater degree. When reflected waves arrive at the receiver out-of-phase with the main wave, as illustrated in Figure 9.3, they can reduce the wave's amplitude greatly instead of only slightly. The amplitude is reduced to the point that the receiver can detect most, but not all, of the information carried on the wave.
In such cases, the signal-to-noise ratio (SNR) is generally very low; the signal itself is very close to the noise floor. The receiver cannot clearly distinguish the information signal from the noise, so the data that is received is only part (if any) of the transmitted data. This corruption of data requires the transmitter to resend the data, increasing overhead and decreasing throughput in the wireless LAN.
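Because SNR compares two power levels, it reduces to a simple subtraction when both levels are expressed in dBm. A minimal sketch; the signal and noise-floor figures below are hypothetical examples, not values from this text:

```python
def snr_db(signal_dbm, noise_floor_dbm):
    """SNR in dB: the received signal level relative to the noise floor."""
    return signal_dbm - noise_floor_dbm

# A healthy link versus a multipath-degraded one (illustrative figures).
print(snr_db(-65, -95))  # 30 -> comfortable margin above the noise floor
print(snr_db(-92, -95))  # 3  -> signal near the noise floor; corruption likely
```

A downfaded signal does not have to drop below the noise floor to cause trouble; it only has to get close enough that the receiver can no longer separate data from noise.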
Thursday, October 29, 2009
Modulation
Modulation, a Physical Layer function, is the process by which the radio transceiver prepares the digital signal within the NIC for transmission over the airwaves. Modulation adds data to a carrier by altering the amplitude, frequency, or phase of the carrier in a controlled manner. Knowing the different kinds of modulation used with wireless LANs is helpful when trying to build a compatible network piece-by-piece.
Figure 8.9 shows the details of modulation and spreading code types used with Frequency Hopping and Direct Sequence wireless LANs in the 2.4 GHz ISM band. Differential Binary Phase Shift Keying (DBPSK), Differential Quadrature Phase Shift Keying (DQPSK), and Gaussian Frequency Shift Keying (GFSK) are the types of modulation used by 802.11 and 802.11b products on the market today. Barker Code and Complementary Code Keying (CCK) are the types of spreading codes used in 802.11 and 802.11b wireless LANs.
As higher transmission speeds are specified (such as when a system is using dynamic rate shifting, or DRS), modulation techniques change in order to provide more data throughput. For example, 802.11g and 802.11a compliant wireless LAN equipment specify the use of orthogonal frequency division multiplexing (OFDM), allowing speeds of up to 54 Mbps, a significant improvement over the 11 Mbps specified by 802.11b. Figure 8.10 shows the modulation types used for 802.11a networks. The 802.11g standard provides backwards compatibility by supporting CCK coding and even supports packet binary convolutional coding (PBCC) as an option. Bluetooth and HomeRF are both FHSS technologies that use GFSK modulation in the 2.4 GHz ISM band.
Orthogonal frequency division multiplexing (OFDM) is a communications technique that divides a communications channel into a number of equally spaced frequency bands. A subcarrier carrying a portion of the user information is transmitted in each band. Each subcarrier is orthogonal to (independent of) every other subcarrier, which differentiates OFDM from the commonly used frequency division multiplexing (FDM).
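Orthogonality has a concrete numerical meaning here: subcarriers spaced so that each completes a whole number of cycles per symbol period have zero cross-correlation, so they can overlap in frequency without interfering. A small sketch verifying this with a discrete inner product (the subcarrier indices are arbitrary):

```python
import math

def correlate(k, m, samples=1024):
    """Discrete inner product of two subcarriers over one symbol period.
    Subcarriers k and m complete k and m whole cycles in the period,
    i.e. they sit at multiples of the 1/T subcarrier spacing."""
    acc = 0.0
    for n in range(samples):
        t = n / samples
        acc += math.cos(2 * math.pi * k * t) * math.cos(2 * math.pi * m * t)
    return acc / samples

print(round(correlate(3, 3), 3))       # 0.5 -> a subcarrier correlates with itself
print(round(abs(correlate(3, 5)), 3))  # 0.0 -> distinct subcarriers are orthogonal
```

This zero cross-correlation is what lets an OFDM receiver recover each subcarrier independently, whereas classic FDM must leave guard bands between channels.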
Saturday, October 17, 2009
How Wireless LANs Communicate
Request to Send/Clear to Send (RTS/CTS)
There are two carrier sense mechanisms used on wireless networks. The first is physical carrier sense. Physical carrier sense functions by checking the signal strength, called the Received Signal Strength Indicator (RSSI), on the RF carrier signal to see if there is a station currently transmitting. The second is virtual carrier sense. Virtual carrier sense works by using a field called the Network Allocation Vector (NAV), which acts as a timer on the station. If a station wishes to broadcast its intention to use the network, the station sends a frame to the destination station, which will set the NAV field on all stations hearing the frame to the time necessary for the station to complete its transmission, plus the returning ACK frame. In this way, any station can reserve use of the network for specified periods of time. Virtual carrier sense is implemented with the RTS/CTS protocol.
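The NAV behavior described above can be sketched as a simple timer. This is a toy model with an invented API, not an implementation of the 802.11 state machine; the durations are arbitrary example values:

```python
class VirtualCarrierSense:
    """Toy model of the NAV timer used by virtual carrier sense."""

    def __init__(self):
        self.nav_expires_at = 0.0   # microseconds

    def hear_frame(self, now_us, duration_field_us):
        # The duration value in an overheard frame sets the NAV if it
        # extends the current reservation; shorter values are ignored.
        self.nav_expires_at = max(self.nav_expires_at, now_us + duration_field_us)

    def medium_idle(self, now_us):
        # Virtual carrier sense reports busy until the NAV counts down.
        return now_us >= self.nav_expires_at

station = VirtualCarrierSense()
station.hear_frame(now_us=0, duration_field_us=300)  # e.g. an RTS reserving 300 us
print(station.medium_idle(100))  # False -> NAV still running, medium reserved
print(station.medium_idle(350))  # True  -> reservation expired
```

Every station that hears the reserving frame applies the same rule, which is how one station's transmission silences the rest of the network for a known interval.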
The RTS/CTS protocol is an extension of the CSMA/CA protocol. As the wireless LAN administrator, you can take advantage of using this protocol to solve problems like Hidden Node (discussed in Chapter 9, Troubleshooting). Using RTS/CTS allows stations to broadcast their intent to send data across the network.
As you can imagine from the brief description above, RTS/CTS causes significant network overhead. For this reason RTS/CTS is turned OFF by default on a wireless LAN. If you are experiencing an unusual number of collisions on your wireless LAN (evidenced by high latency and low throughput), using RTS/CTS can actually increase the traffic flow on the network by decreasing the number of collisions. RTS/CTS should not be enabled haphazardly, but only after careful study of the network's collisions, throughput, latency, etc.
Figure 8.7 illustrates the 4-way handshake process used for RTS/CTS. In short, the transmitting station broadcasts the RTS, followed by the CTS reply from the receiving station, both of which go through the access point. Next, the transmitting station sends its data payload through the access point to the receiving station, which immediately replies with an acknowledgement frame, or ACK. This process is used for every frame that is sent across the wireless network.
Configuring RTS/CTS
There are three settings on most access points and nodes for RTS/CTS:
- Off
- On
- On with Threshold
When RTS/CTS is turned on, every packet that goes through the wireless network is announced and cleared between the transmitting and receiving nodes prior to transmission, creating a significant amount of overhead and significantly less throughput. Generally, RTS/CTS should only be used in diagnosing network problems and when only very large packets are flowing across a congested wireless network, which is rare.
However, the “on with threshold” setting allows the administrator to control which packets (those over a certain size, called the threshold) are announced and cleared to send by the stations. Since collisions affect larger packets more than smaller ones, you can set the RTS/CTS threshold so that the exchange occurs only when a node wishes to send packets over a certain size. This setting allows you to tailor RTS/CTS to your network's data traffic and optimize the throughput of your wireless LAN while preventing problems like Hidden Node.
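The three settings reduce to a per-frame decision rule. A sketch under assumptions: the mode strings and function are hypothetical, and 2347 is used here as a commonly seen vendor default threshold that effectively leaves RTS/CTS off for standard frame sizes:

```python
def use_rts_cts(frame_bytes, mode, threshold=2347):
    """Decide whether a frame triggers the RTS/CTS exchange.
    Modes mirror the three settings above: "off", "on", "on with threshold".
    The default threshold of 2347 bytes exceeds normal frame sizes, so the
    exchange is effectively disabled until the administrator lowers it."""
    if mode == "off":
        return False
    if mode == "on":
        return True
    return frame_bytes > threshold   # "on with threshold"

print(use_rts_cts(1500, "on with threshold", threshold=1000))  # True
print(use_rts_cts(200, "on with threshold", threshold=1000))   # False
```

With the threshold at 1000 bytes, only the large frames (the ones a collision would cost the most to retransmit) pay the RTS/CTS overhead.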
Figure 8.8 depicts a DCF network using the RTS/CTS protocol to transmit data. Notice that the RTS and CTS transmissions are spaced by SIFS. The NAV is set with RTS on all nodes, and then reset on all nodes by the immediately following CTS.
Tuesday, October 6, 2009
Interframe Spacing
Interframe spacing doesn’t sound like something an administrator would need to know; however, if you don’t understand the types of interframe spacing, you cannot effectively grasp RTS/CTS, which helps you solve problems, or DCF and PCF, which are manually configured in the access point. Both of these functions are integral in the ongoing communications process of a wireless LAN. First, we will define each type of interframe space (IFS), and then we will explain how each type works on the wireless LAN.
As we learned when we discussed beacons, all stations on a wireless LAN are time-synchronized; they are effectively ‘ticking’ time in sync with one another. Interframe spacing is the term used for the standardized time spaces used on all 802.11 wireless LANs.
Three Types of Spacing
There are three main spacing intervals (interframe spaces): SIFS, DIFS, and PIFS. Each type of interframe space is used by a wireless LAN either to send certain types of messages across the network or to manage the intervals during which the stations contend for the transmission medium. Figure 8.3 illustrates the actual times that each interframe space takes for each type of 802.11 technology.
Interframe spaces are measured in microseconds and are used to defer a station's access to the medium and to provide various levels of priority. On a wireless network, everything is synchronized and all stations and access points use standard amounts of time (spaces) to perform various tasks. Each node knows these spaces and uses them appropriately. A set of standard spaces is specified for DSSS, FHSS, and Infrared as you can see from Figure 8.3. By using these spaces, each node knows when and if it is supposed to perform a certain action on the network.
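Although Figure 8.3 is not reproduced here, the fixed interframe spaces are related through the slot time: PIFS is one slot longer than SIFS, and DIFS is two slots longer. A sketch using the SIFS and slot-time values from the original 802.11 DSSS and FHSS physical layers:

```python
def ifs_times(sifs_us, slot_us):
    """Derive PIFS and DIFS from SIFS and the slot time:
    PIFS = SIFS + 1 slot, DIFS = SIFS + 2 slots."""
    return {"SIFS": sifs_us,
            "PIFS": sifs_us + slot_us,
            "DIFS": sifs_us + 2 * slot_us}

# 802.11 DSSS physical layer timings (microseconds)
print(ifs_times(sifs_us=10, slot_us=20))  # {'SIFS': 10, 'PIFS': 30, 'DIFS': 50}
# 802.11 FHSS physical layer timings (microseconds)
print(ifs_times(sifs_us=28, slot_us=50))  # {'SIFS': 28, 'PIFS': 78, 'DIFS': 128}
```

The ordering SIFS < PIFS < DIFS is what creates the priority scheme: whoever is allowed to transmit after the shorter space always seizes the medium first.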
Short Interframe Space (SIFS)
SIFS is the shortest fixed interframe space. SIFS is the spacing used before and after the following types of messages (the list below is not exhaustive):
- RTS - Request-to-Send frame, used for reserving the medium by stations
- CTS - Clear-to-Send frame, used as a response by access points to the RTS frame generated by a station in order to ensure all stations have stopped transmitting
- ACK - Acknowledgement frame, used for notifying sending stations that data arrived in readable format at the receiving station
Point Coordination Function Interframe Space (PIFS)
A PIFS is neither the shortest nor the longest fixed interframe space, so it carries more priority than DIFS and less than SIFS. Access points use PIFS only when the network is in point coordination function (PCF) mode, which is manually configured by the administrator. PIFS is shorter in duration than DIFS (see Figure 8.3), so the access point always wins control of the medium over stations contending in distributed coordination function (DCF) mode. PCF works only alongside DCF, not as a stand-alone operational mode: once the access point is finished polling, other stations can again contend for the transmission medium using DCF.
Distributed Coordination Function Interframe Space (DIFS)
DIFS is the longest fixed interframe space and is used by default on all 802.11-compliant stations that are using the distributed coordination function. Each station on the network using DCF mode is required to wait until DIFS has expired before any station can contend for the network. All stations operating according to DCF use DIFS for transmitting data frames and management frames. This spacing makes the transmission of these frames lower priority than PCF-based transmissions. Instead of all stations assuming the medium is clear and arbitrarily beginning transmissions simultaneously after DIFS (which would cause collisions), each station uses a random back off algorithm to determine how long to wait before sending its data.
The period of time directly following DIFS is referred to as the contention period (CP). All stations in DCF mode use the random back off algorithm during the contention period. During the random back off process, a station chooses a random number and multiplies it by the slot time to get the length of time to wait. The stations count down these slot times one by one, performing a clear channel assessment (CCA) after each slot time to see if the medium is busy. Whichever station's random back off time expires first, that station does a CCA, and provided the medium is clear, it then begins transmission.
Once the first station has begun transmissions all other stations sense that the medium is busy, and remember the remaining amount of their random back off time from the previous CP. This remaining amount of time is used in lieu of picking another random number during the next CP. This process assures fair access to the medium among all stations.
Once the random back off period is over, the transmitting station sends its data and receives the ACK back from the receiving station. This entire process then repeats. It stands to reason that most stations will choose different random numbers, eliminating most collisions. It is important to remember, however, that collisions do happen on wireless LANs, but they cannot be detected directly. A collision is inferred when the ACK is not received back from the destination station.
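The random back off process described above can be simulated. This sketch is illustrative only: it models the slot-count draw, the freezing of leftover slots for losing stations, and winner selection, not real clear channel assessments or propagation. The contention window of 0-31 matches the initial 802.11 window; the station names are invented:

```python
import random

SLOT_US = 20   # DSSS slot time in microseconds

def contention_round(stations):
    """One contention period: stations with no leftover back off time draw a
    fresh random slot count; the smallest count wins the medium. Losers keep
    their remaining slots for the next round, which keeps access fair."""
    for s in stations:
        if s["remaining"] is None:
            s["remaining"] = random.randint(0, 31)   # initial contention window
    winner = min(stations, key=lambda s: s["remaining"])
    elapsed = winner["remaining"]
    for s in stations:
        s["remaining"] = None if s is winner else s["remaining"] - elapsed
    return winner["name"], elapsed * SLOT_US

random.seed(9)
stations = [{"name": "A", "remaining": None},
            {"name": "B", "remaining": None},
            {"name": "C", "remaining": None}]
for _ in range(3):
    name, waited_us = contention_round(stations)
    print(name, "won the medium after", waited_us, "us of back off")
```

Because a losing station resumes from its leftover count rather than redrawing, a station that has already waited a long time tends to win soon, which is the fairness property the text describes.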
The Communications Process
When you consider the PIFS process described above, it may seem as though the access point would always have control over the medium, since the access point does not have to wait for DIFS, but the stations do. This would be true, except for the existence of what is called a superframe. A superframe is a period of time, and it consists of three parts:
1. Beacon
2. Contention Free Period (CFP)
3. Contention Period (CP)
A diagram of the superframe is shown in Figure 8.4. The purpose of the superframe is to allow peaceful, fair co-existence between PCF and DCF mode clients on the network, allowing QoS for some, but not for others.
Again, remember that PIFS, and hence the superframe, only occurs when
1. The network is in point coordination function mode
2. The access point has been configured to do polling
3. The wireless clients have been configured to announce to the access point that they are pollable
Therefore, if we start from a hypothetical beginning point on a network that has the access point configured for PCF mode, and some of the clients are configured for polling, the process is as follows.
1. The access point broadcasts a beacon.
2. During the contention free period, the access point polls stations to see if any station needs to send data.
3. If a station needs to send data, it sends one frame to the access point in response to the access point’s poll.
4. If a station does not need to send data, it returns a null frame to the access point in response to the access point’s poll.
5. Polling continues throughout the contention free period.
6. Once the contention free period ends and the contention period begins, the access point can no longer poll stations. During the contention period, stations using DCF mode contend for the medium and the access point uses DCF mode.
7. The superframe ends with the end of the CP, and a new one begins with the following CFP.
Think of the CFP as using a "controlled access policy" and the CP as using a "random access policy." During the CFP, the access point is in complete control of all functions on the wireless network, whereas during the CP, stations arbitrate and randomly gain control over the medium. The access point, in PCF mode, does not have to wait for the DIFS to expire, but rather uses the PIFS, which is shorter than the DIFS, in order to capture the medium before any client using DCF mode does. Since the access point captures the medium and begins polling transmissions during the CFP, the DCF clients sense the medium as being busy and wait to transmit. After the CFP the CP begins, during which all stations using DCF mode may contend for the medium and the access point switches to DCF mode.
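The "controlled access" CFP and "random access" CP can be sketched as a polling loop. The station records and log strings below are invented structures for illustration, following the numbered steps above:

```python
def superframe(stations):
    """One superframe: beacon, then CFP polling, then the contention period.
    Each station record holds a pollable flag and a queue of pending frames."""
    log = ["beacon"]
    for s in stations:                        # CFP: the access point polls
        if not s["pollable"]:
            continue                          # non-pollable stations are skipped
        if s["queue"]:
            log.append(s["name"] + ":data")   # one frame sent per poll
            s["queue"].pop(0)
        else:
            log.append(s["name"] + ":null")   # null frame: nothing to send
    log.append("CP (DCF contention)")         # polling stops; DCF takes over
    return log

stations = [
    {"name": "A", "pollable": True, "queue": ["frame1"]},
    {"name": "B", "pollable": True, "queue": []},
    {"name": "C", "pollable": False, "queue": ["frame2"]},  # never polled
]
print(superframe(stations))
# ['beacon', 'A:data', 'B:null', 'CP (DCF contention)']
```

Note that station C still holds its frame after the CFP; a station that never announced itself as pollable must wait for the contention period and use DCF.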
Figure 8.5 illustrates a short timeline for a wireless LAN using DCF and PCF modes.
The process is somewhat simpler when a wireless LAN is only in DCF mode, because there is no polling and, hence, no superframe. This process is as follows:
1. Stations wait for DIFS to expire.
2. During the CP, which immediately follows DIFS, stations calculate their random back off time based on a random number multiplied by a slot time.
3. Stations tick down their random time with each passing slot time, checking the medium (CCA) at the end of each slot time. The station with the shortest time gains control of the medium first.
4. A station sends its data.
5. The receiving station receives the data and waits a SIFS before returning an ACK back to the station that transmitted the data.
6. The transmitting station receives the ACK and the process starts over from the beginning with a new DIFS.
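The six steps above can be summed into a rough air-time budget for one DCF frame exchange. The DSSS interframe times are standard values; the data and ACK durations here are illustrative placeholders, not per-rate calculations:

```python
# 802.11 DSSS timings in microseconds
SIFS, DIFS, SLOT = 10, 50, 20

def frame_exchange_us(backoff_slots, data_us, ack_us=112):
    """Total air time for one DCF exchange following the steps above:
    DIFS + random back off + data frame + SIFS + ACK.
    data_us and ack_us are assumed example durations, not computed per rate."""
    return DIFS + backoff_slots * SLOT + data_us + SIFS + ack_us

print(frame_exchange_us(backoff_slots=7, data_us=1100))
# 50 + 140 + 1100 + 10 + 112 = 1412 us for one frame and its ACK
```

Even in this rough budget, the fixed spaces and back off consume a meaningful fraction of the air time, which is why per-frame overhead matters so much to wireless throughput.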
As we learned when we discussed beacons, all stations on a wireless LAN are timesynchronized. All the stations on a wireless LAN are effectively ‘ticking’ time in sync with one another. Interframe spacing is the term we use to refer to standardized time spaces that are used on all 802.11 wireless LANs.
Three Types of Spacing
There are three main spacing intervals (interframe spaces): SIFS, DIFS, and PIFS. Each type of interframe space is used by a wireless LAN either to send certain types of messages across the network or to manage the intervals during which the stations contend for the transmission medium. Figure 8.3 illustrates the actual times that each interframe space takes for each type of 802.11 technology.
Interframe spaces are measured in microseconds and are used to defer a station's access to the medium and to provide various levels of priority. On a wireless network, everything is synchronized and all stations and access points use standard amounts of time (spaces) to perform various tasks. Each node knows these spaces and uses them appropriately. A set of standard spaces is specified for DSSS, FHSS, and Infrared as you can see from Figure 8.3. By using these spaces, each node knows when and if it is supposed to perform a certain action on the network.
Short Interframe Space (SIFS)
SIFS is the shortest fixed interframe space. SIFS are time spaces before and after which the following types of messages are sent. The list below is not an exhaustive list.
- RTS - Request-to-Send frame, used for reserving the medium by stations
- CTS - Clear-to-Send frame, used as a response by access points to the RTS frame generated by a station in order to ensure all stations have stopped transmitting
- ACK - Acknowledgement frame used for notifying sending stations that data arrived in readable format at the receiving station
Point Coordination Function Interframe Space (PIFS)
A PIFS interframe space is neither the shortest nor longest fixed interframe space, so it gets more priority than DIFS and less than SIFS. Access points use a PIFS interframe space only when the network is in point coordination function mode, which is manually configured by the administrator. PIFS are shorter in duration than DIFS (see Figure 8.3), so the access point will always win control of the medium before other contending stations in distributed coordination function (DCF) mode. PCF only works with DCF, not as a stand-alone operational mode so that, once the access point is finished polling, other stations can continue to contend for the transmission medium using DCF mode.
Distributed Coordination Function Interframe Space (DIFS)
DIFS is the longest fixed interframe space and is used by default on all 802.11-compliant stations that are using the distributed coordination function. Each station on the network using DCF mode is required to wait until DIFS has expired before any station can contend for the network. All stations operating according to DCF use DIFS for transmitting data frames and management frames. This spacing makes the transmission of these frames lower priority than PCF-based transmissions. Instead of all stations assuming the medium is clear and arbitrarily beginning transmissions simultaneously after DIFS (which would cause collisions), each station uses a random back off algorithm to determine how long to wait before sending its data.
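The priority relationships among the three interframe spaces follow from how each is built up from two PHY constants: PIFS is one slot time after SIFS, and DIFS is two slot times after SIFS, which is why SIFS < PIFS < DIFS. A minimal sketch, using the SIFS and slot-time values commonly quoted for the 802.11 DSSS PHY (assumed here, since Figure 8.3 is not reproduced):

```python
# Interframe spaces are derived from two PHY constants: SIFS and the slot time.
# PIFS = SIFS + 1 slot, DIFS = SIFS + 2 slots, hence SIFS < PIFS < DIFS.

def pifs(sifs_us: int, slot_us: int) -> int:
    """PIFS: one slot time after SIFS."""
    return sifs_us + slot_us

def difs(sifs_us: int, slot_us: int) -> int:
    """DIFS: two slot times after SIFS."""
    return sifs_us + 2 * slot_us

DSSS_SIFS, DSSS_SLOT = 10, 20   # microseconds, values commonly quoted for DSSS

print(pifs(DSSS_SIFS, DSSS_SLOT))  # 30 us
print(difs(DSSS_SIFS, DSSS_SLOT))  # 50 us
```

Because every PHY defines its own SIFS and slot time, the absolute durations differ between DSSS, FHSS, and Infrared, but the ordering is always preserved.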
The period of time directly following DIFS is referred to as the contention period (CP). All stations in DCF mode use the random back off algorithm during the contention period. During the random back off process, a station chooses a random number and multiplies it by the slot time to get the length of time to wait. The stations count down these slot times one by one, performing a clear channel assessment (CCA) after each slot time to see if the medium is busy. Whichever station's random back off time expires first, that station does a CCA, and provided the medium is clear, it then begins transmission.
Once the first station has begun transmitting, all other stations sense that the medium is busy and remember the remaining amount of their random back off time from the previous CP. This remaining amount of time is used in lieu of picking another random number during the next CP. This process assures fair access to the medium among all stations.
Once the random back off period is over, the transmitting station sends its data and receives back the ACK from the receiving station. This entire process then repeats. It stands to reason that most stations will choose different random numbers, eliminating most collisions. However, it is important to remember that collisions do happen on wireless LANs, but they cannot be detected directly. A collision is assumed whenever the ACK is not received back from the destination station.
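The back off countdown described above can be modeled in a few lines. This is a toy sketch, not the standard's exact procedure: the station names and contention window size are invented, and a tie between two stations would represent a collision (assumed not to occur here).

```python
import random

# Toy model of DCF random back off: each station draws a random slot count;
# after DIFS they count down together, performing a CCA each slot. The station
# that reaches zero first transmits; the others freeze their remaining slots
# and resume from that value in the next contention period.

def contend(backoffs: dict) -> tuple:
    """Return (winner, remaining back off slots for the losers)."""
    winner = min(backoffs, key=backoffs.get)
    elapsed = backoffs[winner]
    remaining = {s: b - elapsed for s, b in backoffs.items() if s != winner}
    return winner, remaining

CW = 31          # contention window in slots, an assumed initial value
stations = {s: random.randint(0, CW) for s in ("A", "B", "C")}
winner, leftover = contend(stations)
print(winner, leftover)
```

Note how the losers' leftover counts carry over instead of being redrawn; that carry-over is what makes the access fair, since a station that has already waited keeps its head start.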
The Communications Process
When you consider the PIFS process described above, it may seem as though the access point would always have control over the medium, since the access point does not have to wait for DIFS, but the stations do. This would be true, except for the existence of what is called a superframe. A superframe is a period of time, and it consists of three parts:
1. Beacon
2. Contention Free Period (CFP)
3. Contention Period (CP)
A diagram of the superframe is shown in Figure 8.4. The purpose of the superframe is to allow peaceful, fair co-existence between PCF and DCF mode clients on the network, allowing QoS for some, but not for others.
Again, remember that PIFS, and hence the superframe, only occurs when
1. The network is in point coordination function mode
2. The access point has been configured to do polling
3. The wireless clients have been configured to announce to the access point that they are pollable
Therefore, if we start from a hypothetical beginning point on a network where the access point is configured for PCF mode and some of the clients are configured for polling, the process is as follows.
1. The access point broadcasts a beacon.
2. During the contention free period, the access point polls stations to see if any station needs to send data.
3. If a station needs to send data, it sends one frame to the access point in response to the access point’s poll.
4. If a station does not need to send data, it returns a null frame to the access point in response to the access point’s poll.
5. Polling continues throughout the contention free period.
6. Once the contention free period ends and the contention period begins, the access point can no longer poll stations. During the contention period, stations using DCF mode contend for the medium and the access point uses DCF mode.
7. The superframe ends with the end of the CP, and a new one begins with the following CFP.
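The seven steps above can be sketched as a single superframe pass. The station names and queue structure are hypothetical; the point is the shape of the CFP: beacon, poll each pollable station for one frame (data or null), then hand the medium over to DCF contention.

```python
# Sketch of one superframe: a beacon, a contention free period in which the AP
# polls each pollable station once, then the contention period. The tx_queues
# dict is a hypothetical stand-in for each station's pending traffic.

def run_cfp(ap_pollable, tx_queues):
    """Poll each pollable station once; collect data or null responses."""
    log = ["beacon"]
    for station in ap_pollable:
        if tx_queues.get(station):
            # Station answers the poll with one data frame.
            log.append(f"{station}: data {tx_queues[station].pop(0)}")
        else:
            # Nothing to send: station answers with a null frame.
            log.append(f"{station}: null")
    log.append("CFP ends -> CP (DCF contention)")
    return log

queues = {"A": ["frame1"], "B": []}
for line in run_cfp(["A", "B"], queues):
    print(line)
```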
Think of the CFP as using a "controlled access policy" and the CP as using a "random access policy." During the CFP, the access point is in complete control of all functions on the wireless network, whereas during the CP, stations arbitrate and randomly gain control over the medium. The access point, in PCF mode, does not have to wait for the DIFS to expire, but rather uses the PIFS, which is shorter than the DIFS, in order to capture the medium before any client using DCF mode does. Since the access point captures the medium and begins polling transmissions during the CFP, the DCF clients sense the medium as being busy and wait to transmit. After the CFP the CP begins, during which all stations using DCF mode may contend for the medium and the access point switches to DCF mode.
Figure 8.5 illustrates a short timeline for a wireless LAN using DCF and PCF modes.
The process is somewhat simpler when a wireless LAN is only in DCF mode, because there is no polling and, hence, no superframe. This process is as follows:
1. Stations wait for DIFS to expire.
2. During the CP, which immediately follows DIFS, stations calculate their random back off time based on a random number multiplied by a slot time.
3. Stations tick down their random time with each passing slot time, checking the medium (CCA) at the end of each slot time. The station with the shortest time gains control of the medium first.
4. A station sends its data.
5. The receiving station receives the data and waits a SIFS before returning an ACK back to the station that transmitted the data.
6. The transmitting station receives the ACK and the process starts over from the beginning with a new DIFS.
Saturday, September 26, 2009
How Wireless LANs Communicate
In order to understand how to configure and manage a wireless LAN, the administrator must understand the communication parameters that are configurable on the equipment and how to implement them. In order to estimate throughput across wireless LANs, one must understand the effects of these parameters and of collision handling on system throughput. This section conveys a basic understanding of many configurable parameters and their effects on network performance.
Wireless LAN Frames vs. Ethernet Frames
Once a wireless client has joined a network, the client and the rest of the network communicate by passing frames across the network, in almost the same manner as any other IEEE 802 network. To clear up a common misconception, wireless LANs do NOT use 802.3 Ethernet frames. The term wireless Ethernet is somewhat of a misnomer. Wireless LAN frames contain more information than common Ethernet frames do. The actual structure of a wireless LAN frame versus that of an Ethernet frame is beyond the scope of both the CWNA exam and a wireless LAN administrator’s job.
Something to consider is that there are many types of IEEE 802 frames, but only one type of wireless frame. With 802.3 Ethernet, once the network administrator has chosen a frame type, that same type is used to send all data across the wire, just as with wireless. All wireless frames share the same overall frame format. One similarity to 802.3 Ethernet is that the maximum payload of both is 1500 bytes. Ethernet's maximum frame size is 1514 bytes, whereas 802.11 wireless LANs have a maximum frame size of 1518 bytes.
Collision Handling
Since radio frequency is a shared medium, wireless LANs have to deal with the possibility of collisions just the same as traditional wired LANs do. The difference is that, on a wireless LAN, there is no means through which the sending station can determine that there has actually been a collision. It is impossible to detect a collision on a wireless LAN. For this reason, wireless LANs utilize the Carrier Sense Multiple Access / Collision Avoidance protocol, also known as CSMA/CA. CSMA/CA is somewhat similar to the protocol CSMA/CD, which is common on Ethernet networks.
The biggest difference between CSMA/CA and CSMA/CD is that CSMA/CA avoids collisions and uses positive acknowledgements (ACKs) instead of arbitrating use of the medium when collisions occur. The use of acknowledgements, or ACKs, works in a very simple manner. When a wireless station sends a packet, the receiving station sends back an ACK once that station actually receives the packet. If the sending station does not receive an ACK, the sending station assumes there was a collision and resends the data.
CSMA/CA, added to the large amount of control data used in wireless LANs, causes overhead that uses approximately 50% of the available bandwidth on a wireless LAN. This overhead, plus the additional overhead of protocols such as RTS/CTS that enhance collision avoidance, is responsible for the actual throughput of approximately 5.0 - 5.5 Mbps on a typical 802.11b wireless LAN rated at 11 Mbps. CSMA/CD also generates overhead, but only about 30% on an average use network. When an Ethernet network becomes congested, CSMA/CD can cause overhead of up to 70%, while a congested wireless network remains somewhat constant at around 50 - 55% throughput.
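A quick back-of-envelope check of the figures quoted above (these are the chapter's rough overhead estimates, not measured values):

```python
# Sanity check of the overhead figures quoted in the text: effective
# throughput is the rated speed minus the protocol overhead fraction.

def effective_throughput(rated_mbps: float, overhead_fraction: float) -> float:
    return rated_mbps * (1.0 - overhead_fraction)

# 802.11b rated at 11 Mbps with ~50% CSMA/CA + control overhead:
print(effective_throughput(11.0, 0.50))   # 5.5 Mbps, matching the 5.0-5.5 range
# Average-use Ethernet with ~30% CSMA/CD overhead on 10 Mbps:
print(effective_throughput(10.0, 0.30))   # 7.0 Mbps
```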
The CSMA/CA protocol avoids the probability of collisions among stations sharing the medium by using a random back off time if the station's physical or logical sensing mechanism indicates a busy medium. The period of time immediately following a busy medium is when the highest probability of collisions occurs, especially under high utilization. At this point in time, many stations may be waiting for the medium to become idle and will attempt to transmit at the same time. Once the medium is idle, a random back off time defers a station from transmitting a frame, minimizing the chance that stations will collide.
Fragmentation
Fragmentation of packets into shorter fragments adds protocol overhead and reduces protocol efficiency (decreases network throughput) when no errors are observed, but reduces the time spent on re-transmissions if errors occur. Larger packets have a higher probability of collisions on the network; hence, a method of varying packet fragment size is needed. The IEEE 802.11 standard provides support for fragmentation.
By decreasing the length of each packet, the probability of interference during packet transmission can be reduced, as illustrated in Figure 8.1. There is a tradeoff that must be made between the lower packet error rate that can be achieved by using shorter packets, and the increased overhead of more frames on the network due to fragmentation. Each fragment requires its own headers and ACK, so the adjustment of the fragmentation level is also an adjustment of the amount of overhead associated with each packet transmitted. Stations never fragment multicast and broadcast frames, but rather only unicast frames in order not to introduce unnecessary overhead into the network. Finding the optimal fragmentation setting to maximize the network throughput on an 802.11 network is an important part of administering a wireless LAN. Keep in mind that a 1518 byte frame is the largest frame that can traverse a wireless LAN segment without fragmentation.
One way to use fragmentation to improve network throughput in times of heavy packet errors is to monitor the packet error rate on the network and adjust the fragmentation level manually. As a recommended practice, you should monitor the network at multiple times throughout a typical day to see what impact fragmentation adjustment will have at various times. Another method of adjustment is to configure the fragmentation threshold.
If your network is experiencing a high packet error rate (faulty packets), lower the fragmentation threshold on the client stations and/or the access point (depending on which units allow these settings on your particular equipment). Start with the maximum value and gradually decrease the threshold until an improvement shows. If fragmentation is used, the network will experience a performance hit due to the overhead incurred with fragmentation. Sometimes this hit is acceptable in order to gain more throughput through a decrease in packet errors and subsequent retransmissions.
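The tuning tradeoff described above can be made concrete with a rough model: a fragment only arrives intact if every bit survives, so longer fragments are retransmitted more often, while shorter ones pay more header and ACK overhead per fragment. The bit error rate and per-fragment overhead below are illustrative assumptions, not values from the standard.

```python
# Rough model of the fragmentation tradeoff. A fragment is delivered only when
# all of its bits survive; each failed attempt costs a full retransmission.
# BER and the per-fragment overhead figure are illustrative assumptions.

def expected_bytes_on_air(payload=1500, frag_size=1500, overhead=50, ber=1e-4):
    """Expected bytes transmitted to deliver `payload`, retransmitting each
    fragment until it arrives error-free (geometric retransmission model)."""
    frags = -(-payload // frag_size)              # ceiling division
    per_frag = min(frag_size, payload) + overhead
    p_ok = (1 - ber) ** (8 * per_frag)            # whole fragment error-free
    return frags * per_frag / p_ok

# On this noisy channel, smaller fragments cost fewer total bytes on air:
print(round(expected_bytes_on_air(frag_size=1500)))
print(round(expected_bytes_on_air(frag_size=500)))
```

With a clean channel (ber near zero) the ordering flips: the unfragmented frame wins because it avoids the repeated headers, which is exactly why the threshold is worth monitoring and re-tuning over time.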
Dynamic Rate Shifting (DRS)
Adaptive (or Automatic) Rate Selection (ARS) and Dynamic Rate Shifting (DRS) are both terms used to describe the method of dynamic speed adjustment on wireless LAN clients. This speed adjustment occurs as distance increases between the client and the access point or as interference increases. It is imperative that a network administrator understands how this function works in order to plan for network throughput, cell sizes, power outputs of access points and stations, and security.
Modern spread spectrum systems are designed to make discrete jumps only to specified data rates, such as 1, 2, 5.5, and 11 Mbps. As distance increases between the access point and a station, the signal strength will decrease to a point where the current data rate cannot be maintained. When this signal strength decrease occurs, the transmitting unit will drop its data rate to the next lower specified data rate, say from 11 Mbps to 5.5 Mbps or from 2 Mbps to 1 Mbps. Figure 8.2 illustrates that, as the distance from the access point increases, the data rate decreases.
A wireless LAN system will never drop from 11 Mbps to 10 Mbps, for example, since 10 Mbps is not a specified data rate. The method of making such discrete jumps is typically called either ARS or DRS, depending on the manufacturer. Both FHSS and DSSS implement DRS, and the IEEE 802.11, IEEE 802.11b, HomeRF, and OpenAir standards require it.
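The discrete jumps can be pictured as a lookup against the specified rates: pick the fastest rate whose signal-quality requirement is still met. The signal thresholds below are hypothetical (real thresholds are vendor-specific); only the rate set itself comes from the text.

```python
# Sketch of discrete rate shifting: the radio never picks an in-between speed,
# it selects the highest *specified* rate the current signal can sustain.

RATES_MBPS = (11.0, 5.5, 2.0, 1.0)      # 802.11b specified data rates
SNR_THRESHOLDS = (16, 11, 7, 4)         # illustrative dB values, assumed

def select_rate(snr_db: float) -> float:
    for rate, needed in zip(RATES_MBPS, SNR_THRESHOLDS):
        if snr_db >= needed:
            return rate
    return 0.0                          # signal too weak: no link

print(select_rate(20))   # 11.0 -> close to the access point
print(select_rate(12))   # 5.5  -> first downshift as distance grows
print(select_rate(2))    # 0.0  -> out of range
```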
Distributed Coordination Function
Distributed Coordination Function (DCF) is an access method specified in the 802.11 standard that allows all stations on a wireless LAN to contend for access to the shared transmission medium (RF) using the CSMA/CA protocol. In this case, the transmission medium is the portion of the radio frequency band that the wireless LAN is using to send data. Basic service sets (BSS), extended service sets (ESS), and independent basic service sets (IBSS) can all use DCF mode. In a BSS or ESS, the access point relays data in much the same manner as an IEEE 802.3 based wired hub, and DCF is the mode in which the access point sends that data.
Point Coordination Function
Point Coordination Function (PCF) is a transmission mode allowing for contention-free frame transfers on a wireless LAN by making use of a polling mechanism. PCF has the advantage of guaranteeing a known amount of latency so that applications requiring QoS (voice or video for example) can be used. When using PCF, the access point on a wireless LAN performs the polling. For this reason, an ad hoc network cannot utilize PCF, because an ad hoc network has no access point to do the polling.
The PCF Process
First, a wireless station must tell the access point that the station is capable of answering a poll. Then the access point asks, or polls, each wireless station to see if that station needs to send a data frame across the network. PCF, through polling, generates a significant amount of overhead on a wireless LAN.
Thursday, September 17, 2009
Power Management Features
Wireless clients operate in one of two power management modes specified by the IEEE 802.11 standard. These are active mode, which is commonly called continuous aware mode (CAM), and power save mode, which is commonly called power save polling (PSP). Conserving power using a power-saving mode is especially important to mobile users whose laptops or PDAs run on batteries. Extending the life of these batteries allows the user to stay up and running longer without a recharge. Wireless LAN cards can draw a significant amount of power from the battery while in CAM, which is why power saving features are included in the 802.11 standard.
Continuous Aware Mode
Continuous aware mode is the setting in which the wireless client uses full power, does not “sleep,” and is constantly in regular communication with the access point. Any computer that stays plugged into an AC power outlet continuously, such as a desktop or server, should be set for CAM. Under these circumstances, there is no reason to have the PC card conserve power.
Power Save Polling
Using power save polling (PSP) mode allows a wireless client to “sleep.” By sleep, we mean that the client actually powers down for a very short amount of time, perhaps a small fraction of a second. This sleep is enough time to save a significant amount of power on the wireless client. In turn, the power saved by the wireless client enables a laptop computer user, for example, to work for a longer period of time on batteries, making that user more productive.
When using PSP, the wireless client behaves differently within basic service sets and independent basic service sets. The one similarity in behavior from a BSS to an IBSS is the sending and receiving of beacons.
The processes that operate during PSP mode, in both BSS and IBSS, are described below. Keep in mind that these processes occur many times per second. That fact allows your wireless LAN to maintain its connectivity, but also causes a certain amount of additional overhead. An administrator should consider this overhead when planning for the needs of the users on the wireless LAN.
PSP Mode in a Basic Service Set
When using PSP mode in a BSS, stations first send a frame to the access point to inform it that they are going to sleep (temporarily powering down). The access point then marks those stations as asleep and buffers any frames that are intended for them. Traffic for sleeping clients continues arriving at the access point, but the access point cannot send traffic to a sleeping client.
Therefore, packets get queued in a buffer marked for the sleeping client. The access point sends beacons at a regular interval, and clients, since they are time-synchronized with the access point, know exactly when to listen for them. Clients that are sleeping power up their receivers to listen for beacons, which contain the traffic indication map (TIM). If a station sees itself listed in the TIM, it powers up and sends a frame to the access point notifying the access point that it is now awake and ready to receive the buffered data packets. Once the client has received its packets from the access point, the client sends a message to the access point informing it that the client is going back to “sleep.” Then the process repeats over and over again. This process creates some overhead that would not be present if PSP mode were not being utilized. The steps of this process are shown in Figure 7.18.
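The buffer-and-TIM cycle described above can be sketched as follows. The class and method names are invented for illustration; only the behavior (buffer while asleep, advertise in the TIM, drain on wake-up) follows the text.

```python
# Toy model of the BSS power save cycle: the AP buffers frames for sleeping
# stations and advertises them in the beacon's TIM; a listed station wakes,
# drains its buffer, then goes back to sleep. Names here are hypothetical.

class AccessPoint:
    def __init__(self):
        self.buffers = {}                 # station -> queued frames

    def deliver(self, station, frame):
        """A frame arrives for a (possibly sleeping) station: buffer it."""
        self.buffers.setdefault(station, []).append(frame)

    def tim(self):
        """Stations with buffered traffic, advertised in each beacon."""
        return {s for s, q in self.buffers.items() if q}

    def drain(self, station):
        """Station announced it is awake: hand over its buffered frames."""
        return self.buffers.pop(station, [])

ap = AccessPoint()
ap.deliver("laptop", "frame1")            # arrives while "laptop" sleeps
if "laptop" in ap.tim():                  # laptop hears the beacon's TIM
    frames = ap.drain("laptop")           # wakes, receives, sleeps again
print(frames)                             # ['frame1']
```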
PSP in an Independent Basic Service Set
The power saving communication process in an IBSS is very different from power saving in a BSS. An IBSS does not contain an access point, so there is no central device to buffer packets. Therefore, every station must buffer the packets it has pending for every other station in the ad hoc network. Stations alternate the sending of beacons on an IBSS network using varied methods, each dependent on the manufacturer. When stations are using power saving mode, there is a period of time called an ATIM window, during which each station is fully awake and ready to receive data frames. Ad hoc traffic indication messages (ATIM) are unicast frames used by stations to notify other stations that there is data destined to them and that they should stay awake long enough to receive it. ATIMs and beacons are both sent during the ATIM window. The process followed by stations in order to pass traffic between peers is:
- Stations are synchronized through the beacons so they wake up before the ATIM window begins.
- The ATIM window begins, the stations send beacons, and then stations send ATIM frames notifying other stations of buffered traffic destined for them.
- Stations receiving ATIM frames during the ATIM window stay awake to receive data frames. If no ATIM frames are received, stations go back to sleep.
- The ATIM window closes, and stations begin transmitting data frames. After receiving data frames, stations go back to sleep awaiting the next ATIM window.
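The steps above can be sketched as a single function that decides, at the close of an ATIM window, which stations must stay awake. This is a toy model under the assumptions stated in the comments, not an implementation of the 802.11 state machine.

```python
# Sketch of the IBSS ATIM-window exchange (illustrative only).
def atim_window(stations, pending):
    """pending maps each sender to the set of peers it holds buffered
    frames for. Returns the set of stations that stay awake after the
    ATIM window closes; everyone else goes back to sleep."""
    stay_awake = set()
    for sender, receivers in pending.items():
        if receivers:
            stay_awake.add(sender)        # sender stays up to transmit
            stay_awake.update(receivers)  # ATIM recipients stay up to receive
    # Stations that neither sent nor received an ATIM doze until the
    # next window.
    return stay_awake

awake = atim_window({"a", "b", "c", "d"}, {"a": {"c"}})
# 'a' and 'c' stay awake to exchange data; 'b' and 'd' go back to sleep
```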
As a wireless LAN administrator, you need to know what effect power management features will have on performance, battery life, broadcast traffic on your LAN, and so on. In the example described above, the effects could be significant.
Monday, August 24, 2009
Roaming
Roaming is the process or ability of a wireless client to move seamlessly from one cell (or BSS) to another without losing network connectivity. Access points hand the client off from one to another in a way that is invisible to the client, ensuring unbroken connectivity. Figure 7.12 illustrates a client roaming from one BSS to another BSS. When any area in the building is within reception range of more than one access point, the cells’ coverage overlaps. Overlapping coverage areas are an important attribute of the wireless LAN setup, because it enables seamless roaming between overlapping cells. Roaming allows mobile users with portable stations to move freely between overlapping cells, constantly maintaining their network connection.
When roaming is seamless, a work session can be maintained while moving from one cell to another. Multiple access points can provide wireless roaming coverage for an entire building or campus.
When the coverage areas of two or more access points overlap, the stations in the overlapping area can establish the best possible connection with one of the access points while continuously searching for the best access point. In order to minimize packet loss during switchover, the "old" and "new" access points communicate to coordinate the roaming process. This function is similar to a cellular phone's handover, with two main differences:
- On a packet-based LAN system, the transition from cell to cell may be performed between packet transmissions, as opposed to telephony where the transition may occur during a phone conversation.
- On a voice system, a temporary disconnection may not affect the conversation, while in a packet-based environment it significantly reduces performance because the upper layer protocols then retransmit the data.
Standards
The 802.11 standard does not define how roaming should be performed, but does define the basic building blocks. These building blocks include active & passive scanning and a reassociation process. The reassociation process occurs when a wireless station roams from one access point to another, becoming associated with the new access point.
The 802.11 standard allows a client to roam among multiple access points operating on the same or separate channels. For example, every 100 ms, an access point might transmit a beacon signal that includes a time stamp for client synchronization, a traffic indication map, an indication of supported data rates, and other parameters. Roaming clients use the beacon to gauge the strength of their existing connection to the access point. If the connection is weak, the roaming station can attempt to associate itself with a new access point.
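The beacon contents listed above can be pictured as a small record. The field names and values below are assumptions for illustration, not the actual 802.11 management-frame layout.

```python
# Illustrative beacon record carrying the parameters described above.
beacon = {
    "timestamp_us": 1_024_000,            # clients synchronize their clocks to this
    "beacon_interval_ms": 100,            # e.g., one beacon every 100 ms
    "tim": {"sta3", "sta7"},              # stations with buffered traffic waiting
    "supported_rates_mbps": [1, 2, 5.5, 11],
    "ssid": "campus-wlan",                # invented network name
}

# A dozing client that finds itself in the TIM knows to stay awake:
print("sta3" in beacon["tim"])   # True
```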
To meet the needs of mobile radio communications, the 802.11b standard must be tolerant of connections being dropped and re-established. The standard attempts to ensure minimum disruption to data delivery, and provides some features for caching and forwarding messages between BSSs.
Particular implementations of some higher layer protocols such as TCP/IP may be less tolerant. For example, in a network where DHCP is used to assign IP addresses, a roaming node may lose its connection when it moves across cell boundaries. The node will then have to re-establish the connection when it enters the next BSS or cell. Software solutions are available to address this particular problem.
The 802.11b standard leaves much of the detailed functioning of what it calls the distribution system to manufacturers. This was a deliberate decision on the part of the standard's designers, who were most concerned with keeping the standard entirely independent of any other existing network standards. As a practical matter, the overwhelming majority of 802.11b wireless LANs using ESS topologies are connected to Ethernet LANs and make heavy use of TCP/IP. Wireless LAN vendors have stepped into the gap to offer proprietary methods of facilitating roaming between nodes in an ESS.
Connectivity
The 802.11 MAC layer is responsible for how a client associates with an access point. When an 802.11 client enters the range of one or more access points, the client chooses an access point to associate with (also called joining a BSS) based on signal strength and observed packet error rates.
Once associated with the access point, the station periodically surveys all 802.11 channels in order to assess whether a different access point would provide better performance characteristics. If the client determines that a different access point offers a stronger signal, the client re-associates with the new access point, tuning to the radio channel on which that access point operates. The station will not attempt to roam until its signal drops below a manufacturer-defined signal strength threshold.
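The survey-and-reassociate decision can be sketched as follows. The threshold value and the scan-result tuples are invented for illustration; real thresholds are manufacturer-defined.

```python
# Hedged sketch of the roaming decision described above.
ROAM_THRESHOLD_DBM = -75   # manufacturer-defined in practice

def pick_access_point(current_ap, current_rssi, scan_results):
    """scan_results: list of (ap_name, channel, rssi_dbm) tuples gathered
    by surveying the 802.11 channels."""
    if current_rssi >= ROAM_THRESHOLD_DBM:
        return current_ap            # signal still good: do not roam
    best = max(scan_results, key=lambda r: r[2])
    if best[2] > current_rssi:
        return best[0]               # re-associate and retune to best[1]
    return current_ap                # nothing better found: stay put

ap = pick_access_point("ap1", -82, [("ap1", 1, -82), ("ap2", 6, -55)])
print(ap)  # ap2
```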
Reassociation
Reassociation usually occurs because the wireless station has physically moved away from the original access point, causing the signal to weaken. In other cases, reassociation occurs due to a change in radio characteristics in the building, or due simply to high network traffic on the original access point. In the latter case, this function is known as load balancing, since its primary function is to distribute the total wireless LAN load most efficiently across the available wireless infrastructure.
Association and reassociation differ only slightly in their use. Association request frames are used when joining a network for the first time. Reassociation request frames are used when roaming between access points so that the new access point knows to negotiate transfer of buffered frames from the old access point and to let the distribution system know that the client has moved. Reassociation is illustrated in Figure 7.13.
This process of dynamically associating and re-associating with access points allows network managers to set up wireless LANs with very broad coverage by creating a series of overlapping 802.11 cells throughout a building or across a campus. To be successful, the IT manager ideally will employ channel reuse, taking care to configure each access point on an 802.11 DSSS channel that does not overlap with a channel used by a neighboring access point. While there are 14 partially overlapping channels specified in 802.11 DSSS (11 channels can be used within the U.S.), there are only 3 channels that do not overlap at all, and these are the best to use for multi-cell coverage. If two access points are in range of one another and are set to the same or partially overlapping channels, they may cause some interference for one another, thus lowering the total available bandwidth in the area of overlap.
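The channel-reuse rule above can be checked arithmetically: 2.4 GHz DSSS channel numbers sit 5 MHz apart, but each transmission is roughly 22 MHz wide, so channels whose numbers differ by fewer than 5 overlap. This simplified rule of thumb (not the exact spectral-mask math) is what makes 1, 6, and 11 the classic non-overlapping trio in the U.S.

```python
# Simplified overlap check for 2.4 GHz DSSS channels.
def channels_overlap(ch_a, ch_b):
    # Channel numbers closer than 5 apart share spectrum.
    return abs(ch_a - ch_b) < 5

print(channels_overlap(1, 6))    # False: safe for neighboring cells
print(channels_overlap(1, 3))    # True: partial overlap, mutual interference

# The recommended multi-cell set is mutually non-overlapping:
trio = [1, 6, 11]
assert all(not channels_overlap(a, b) for a in trio for b in trio if a != b)
```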
VPN Use
Wireless VPN solutions are typically implemented in two fashions. First, a centralized VPN server is implemented upstream from the access points. This VPN server could be a proprietary hardware solution or a server with a VPN application running on it. Both serve the same purpose and provide the same type of security and connectivity. Having this VPN server (also acting as a gateway and firewall) between the wireless user and the core network provides a level of security similar to wired VPNs.
The second approach is a distributed set of VPN servers: some manufacturers build a VPN server into their access points. This type of solution can secure small offices and medium-sized organizations without an external authentication mechanism such as RADIUS. For scalability, these same access point/VPN servers typically also support RADIUS.
Tunnels are built from the client station to the VPN server, as illustrated in Figure 7.14. When a user roams between access points across layer 2 boundaries, the process is seamless to layer 3 connectivity. However, if the tunnel terminates at the access point or at a centralized VPN server and a layer 3 boundary is crossed, some mechanism must be provided to keep the tunnel alive across that boundary.
Layer 2 & 3 Boundaries
A constraint of existing technology is that wired networks are often segmented for manageability. Enterprises with multiple buildings, such as hospitals or large businesses, often implement a LAN in each building and then connect these LANs with routers or switch-routers. This layer 3 segmentation has two major advantages: first, it contains broadcasts effectively, and second, it allows access control between segments of the network. The same kind of segmentation can also be done at layer 2 using VLANs on switches. VLANs are often implemented floor-by-floor in multi-floor office buildings, or for each remote building in a campus, for the same reasons. Segmenting at layer 2 in this fashion separates the networks as completely as if multiple networks had been implemented. When routers are used, as shown in Figure 7.15, users need a method of roaming across router boundaries without losing their layer 3 connection. The layer 2 connection is still maintained by the access points, but because the IP subnet changes while roaming, connections to servers, for example, will be broken. Without subnet-roaming capability (such as a Mobile IP solution or DHCP), wireless LAN access points must all be connected to a single subnet (a.k.a. "a flat network"). This work-around costs network management flexibility, but customers may be willing to incur that cost if they perceive the value of the end system to be high enough.
Many network environments (e.g., multi-building campuses, multi-floored high rises, or older or historical buildings) cannot embrace a single subnet solution as a practical option. This wired architecture is at odds with current wireless LAN technology. Access points can't hand off a session when a remote device moves across router boundaries because crossing routers changes the client device's IP address. The wired system no longer knows where to send the message. When a mobile device reattaches to the network, all application end points are lost and users are forced to log in again, reauthenticate, relocate themselves in their applications, and recreate lost data. The same type of problem is incurred when using VLANs. Switches see users as roaming across VLAN boundaries.
A hardware solution to this problem is to deploy all access points on a single VLAN, using a flat IP subnet for all access points, so that roaming users never change IP address and a Mobile IP solution isn't required. Users are then routed as a group back into the corporate network through a firewall, a router, a gateway device, etc. This solution can be difficult to implement in many instances, but is generally accepted as the "standard" methodology. In many other instances, an enterprise must forgo use of a wireless LAN altogether because such a solution simply isn't practical.
Even with all access points on a single subnet, mobile users can still encounter coverage problems. If a user moves out of range, into a coverage hole, or simply suspends the device to prolong battery life, all application end points are lost and users in these situations again are also forced to log in again and find their way back to where they left off.
Load Balancing
Congested areas with many users and a heavy traffic load per unit area may require a multi-cell structure, in which several co-located access points "illuminate" the same area, creating a common coverage area and increasing aggregate throughput. Stations inside the common coverage area automatically associate with the access point that is least loaded and provides the best signal quality.
As illustrated in Figure 7.17, the stations are divided equally among the access points in order to share the load across all of them. Efficiency is maximized because all access points operate at a similarly low load. Load balancing is also known as load sharing and, in most cases, is configured on both the stations and the access points.
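A station's association choice under load balancing can be sketched as a small scoring function. The tuple layout, RSSI cutoff, and tie-breaking rule are assumptions for illustration; real implementations are vendor-specific.

```python
# Illustrative load-balancing association choice: among access points with
# acceptable signal, join the least-loaded one.
def choose_ap(candidates, min_rssi_dbm=-75):
    """candidates: list of (name, rssi_dbm, associated_clients) tuples."""
    usable = [c for c in candidates if c[1] >= min_rssi_dbm]
    # Prefer fewer associated clients; break ties by stronger signal.
    return min(usable, key=lambda c: (c[2], -c[1]))[0]

best = choose_ap([("ap1", -60, 12), ("ap2", -62, 3), ("ap3", -90, 0)])
print(best)  # ap2: acceptable signal and the lightest load (ap3 is out of range)
```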
Sunday, August 16, 2009
Service Sets
A service set is the term used to describe the basic components of a fully operational wireless LAN. In other words, there are three ways to configure a wireless LAN, and each requires a different set of hardware. The three ways to configure a wireless LAN are:
- Basic service set
- Extended service set
- Independent basic service set
Basic Service Set (BSS)
When one access point is connected to a wired network and a set of wireless stations, the network configuration is referred to as a basic service set (BSS). A basic service set consists of only one access point and one or more wireless clients, as shown in Figure 7.9. A basic service set uses infrastructure mode - a mode that requires the use of an access point and in which all wireless traffic traverses the access point. No direct client-to-client transmissions are allowed.
Each wireless client must use the access point to communicate with any other wireless client or any wired host on the network. The BSS covers a single cell, or RF area, around the access point, with varying data rate zones (concentric circles) of differing data speeds, measured in Mbps. The speeds in these concentric circles depend on the technology in use: if the BSS were made up of 802.11b equipment, the circles would have data rates of 11, 5.5, 2, and 1 Mbps, decreasing as the circles extend farther from the access point. A BSS has one unique SSID.
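The concentric-zone idea maps naturally to a lookup table. The distance cutoffs below are invented for illustration; real zone boundaries depend heavily on the environment and hardware. The 802.11b rates themselves are from the text.

```python
# Sketch of 802.11b concentric data-rate zones around an access point.
# (max distance in meters, data rate in Mbps) - distances are assumptions.
ZONES = [(30, 11.0), (45, 5.5), (60, 2.0), (80, 1.0)]

def data_rate(distance_m):
    for max_dist, rate in ZONES:
        if distance_m <= max_dist:
            return rate
    return None  # outside the cell: no connectivity

print(data_rate(10))   # 11.0 Mbps close to the access point
print(data_rate(70))   # 1.0 Mbps near the cell edge
print(data_rate(120))  # None
```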
Extended Service Set (ESS)
An extended service set is defined as two or more basic service sets connected by a common distribution system, as shown in Figure 7.10. The distribution system can be wired or wireless - a LAN, a WAN, or any other method of network connectivity. An ESS must have at least two access points operating in infrastructure mode. As in a BSS, all packets in an ESS must pass through one of the access points.
Other characteristics of extended service sets, according to the 802.11 standard, are that an ESS covers multiple cells, allows - but does not require - roaming capabilities, and does not require that its basic service sets use the same SSID.
Independent Basic Service Set (IBSS)
An independent basic service set is also known as an ad hoc network. An IBSS has no access point or any other access to a distribution system; it covers a single cell and has one SSID, as shown in Figure 7.11. Because there is no access point to perform the task, the clients in an IBSS take turns sending beacons.
In order to transmit data outside an IBSS, one of its clients must act as a gateway or router, using a software solution for the purpose. Clients in an IBSS connect directly to each other when transmitting data, and for this reason an IBSS is often referred to as a peer-to-peer network.
Sunday, August 9, 2009
Authentication Security
Shared Key authentication is not considered secure because the access point transmits the challenge text in the clear and receives the same challenge text encrypted with the WEP key. This scenario allows a hacker using a sniffer to see both the plaintext challenge and the encrypted challenge. Having both of these values, a hacker could use a simple cracking program to derive the WEP key. Once the WEP key is obtained, the hacker could decrypt encrypted traffic. It is for this reason that Open System authentication is considered more secure than Shared Key authentication.
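One concrete way to see the weakness: because WEP encryption is an XOR of the plaintext with an RC4 keystream, XORing the sniffed plaintext challenge with its encrypted counterpart immediately recovers the keystream used for that exchange. The byte values below are toys, not a real WEP handshake.

```python
# Why Shared Key authentication leaks: plaintext XOR ciphertext = keystream.
def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

keystream = b"\x9a\x11\xf0\x42\x7c"    # stands in for RC4(IV || WEP key)
challenge = b"hello"                   # sent in the clear by the access point
response  = xor(challenge, keystream)  # sent back encrypted by the client

# The attacker sniffs both frames and needs nothing else:
recovered = xor(challenge, response)
assert recovered == keystream
# With the keystream in hand, the attacker can forge a valid authentication
# response to any future challenge of the same length under the same IV.
```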
Shared Secrets & Certificates
Shared secrets are strings of numbers or text that are commonly referred to as the WEP key. Certificates are another method of user identification used with wireless networks. Just as with WEP keys, certificates (which are authentication documents) are placed on the client machine ahead of time. This placement is done so that when the user wishes to authenticate to the wireless network, the authentication mechanism is already in place on the client station. Both of these methods have historically been implemented in a manual fashion, but there are applications available today that allow automation of this process.
Emerging Authentication Protocols
There are many new authentication security solutions and protocols on the market today, including VPN and 802.1x using the Extensible Authentication Protocol (EAP). Many of these solutions pass authentication through to authentication servers upstream from the access point while keeping the client waiting during the authentication phase. Windows XP has native support for 802.11, 802.1x, and EAP, and Cisco and other wireless LAN manufacturers also support these standards. For this reason, 802.1x with EAP could well become a common solution in the wireless LAN security market.
802.1x and EAP
The 802.1x (port-based network access control) standard is relatively new, and devices that support it can admit a connection into the network at layer 2 only if user authentication succeeds. The protocol works well for access points that need to keep unauthorized users off the network. EAP is a flexible layer 2 replacement for PAP or CHAP under PPP that works over local area networks. EAP allows plug-ins at either end of a link, through which many methods of authentication can be used. In the past, PAP and/or CHAP were used for user authentication, and both rely on passwords. The need for a stronger, more flexible alternative is clear for wireless networks, where implementations are far more varied than on wired networks.
Typically, user authentication is accomplished using a Remote Authentication Dial-In User Service (RADIUS) server and some type of user database (native RADIUS, NDS, Active Directory, LDAP, etc.). The process of authenticating using EAP is shown in Figure 7.6. The new 802.11i standard includes support for 802.1x, EAP, AAA, mutual authentication, and key generation, none of which were included in the original 802.11 standard. "AAA" is an acronym for authentication (identifying who you are), authorization (determining what you are allowed to do on the network), and accounting (recording what you have done and where you have been on the network).
In the 802.1x standard model, network authentication consists of three pieces: the supplicant, the authenticator, and the authentication server.
Because wireless LAN security is essential – and EAP authentication types provide the means of securing the wireless LAN connection – vendors are rapidly developing and adding EAP authentication types to their wireless LAN access points. Knowing the type of EAP being used is important in understanding the characteristics of the authentication method such as passwords, key generation, mutual authentication, and protocol. Some of the commonly deployed EAP authentication types include:
EAP-MD-5 Challenge. The earliest EAP authentication type, this essentially duplicates CHAP password protection on a wireless LAN. EAP-MD5 represents a kind of baselevel EAP support among 802.1x devices.
EAP-Cisco Wireless. Also called LEAP (Lightweight Extensible Authentication Protocol), this EAP authentication type is used primarily in Cisco wireless LAN access points. LEAP provides security during credential exchange, encrypts data transmission using dynamically generated WEP keys, and supports mutual authentication.
EAP-TLS (Transport Layer Security). EAP-TLS provides for certificate-based, mutual authentication of the client and the network. EAP-TLS relies on client-side and serverside certificates to perform authentication, using dynamically generated user- and session-based WEP keys distributed to secure the connection. Windows XP includes an EAP-TLS client, and EAP-TLS is also supported by Windows 2000.
EAP-TTLS. Funk Software and Certicom have jointly developed EAP-TTLS (Tunneled Transport Layer Security). EAP-TTLS is an extension of EAP-TLS, which provides for certificate-based, mutual authentication of the client and network. Unlike EAP-TLS, however, EAP-TTLS requires only server-side certificates, eliminating the need to configure certificates for each wireless LAN client.
In addition, EAP-TTLS supports legacy password protocols, so you can deploy it against your existing authentication system (such as Active Directory or NDS). EAP-TTLS securely tunnels client authentication within TLS records, ensuring that the user remains anonymous to eavesdroppers on the wireless link. Dynamically generated user- and session-based WEP keys are distributed to secure the connection.
EAP-SRP (Secure Remote Password). SRP is a secure, password-based authentication and key-exchange protocol. It solves the problem of authenticating clients to servers securely in cases where the user of the client software must memorize a small secret (like a password) and carries no other secret information. The server carries a verifier for each user, which allows the server to authenticate the client. However, if the verifier were compromised, the attacker would not be allowed to impersonate the client. In addition, SRP exchanges a cryptographically strong secret as a byproduct of successful authentication, which enables the two parties to communicate securely.
EAP-SIM (GSM). EAP-SIM is a mechanism for Mobile IP network access authentication and registration key generation using the GSM Subscriber Identity Module (SIM). The rationale for using the GSM SIM with Mobile IP is to leverage the existing GSM authorization infrastructure with the existing user base and the existing SIM card distribution channels. By using the SIM key exchange, no other preconfigured security association besides the SIM card is required on the mobile node. The idea is not to use the GSM radio access technology, but to use GSM SIM authorization with Mobile IP over any link layer, for example on Wireless LAN access networks.
It is likely that this list of EAP authentication types will grow as more and more vendors enter the wireless LAN security market, and until the market chooses a standard.
VPN Solutions
VPN technology provides the means to securely transmit data between two network devices over an unsecure data transport medium. It is commonly used to link remote computers or networks to a corporate server via the Internet. However, VPN is also a solution for protecting data on a wireless network. VPN works by creating a tunnel on top of a protocol such as IP. Traffic inside the tunnel is encrypted, and totally isolated as can be seen in Figures 7.7 and 7.8. VPN technology provides three levels of security: user authentication, encryption, and data authentication.
Applying VPN technology to secure a wireless network requires a different approach than when it is used on wired networks for the following reasons.
Shared Secrets & Certificates
Shared secrets are strings of numbers or text, commonly referred to as WEP keys. Certificates are another method of user identification used with wireless networks. Just as with WEP keys, certificates (authentication documents) are placed on the client machine ahead of time, so that when the user wishes to authenticate to the wireless network, the authentication mechanism is already in place on the client station. Both of these methods have historically been deployed manually, but applications are available today that automate the process.
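A pre-placed shared secret can be used in a challenge/response exchange: the access point sends a random challenge, and the client proves it holds the key by returning the challenge encrypted under it. The sketch below models this on WEP shared-key authentication with a toy RC4 implementation; the function and variable names are illustrative, and RC4/WEP are shown only for illustration, not as a secure design.

```python
import os

def rc4(key: bytes, data: bytes) -> bytes:
    """Toy RC4 stream cipher (insecure; for illustration only)."""
    S = list(range(256))
    j = 0
    for i in range(256):                           # key-scheduling algorithm
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out, i, j = bytearray(), 0, 0
    for b in data:                                 # keystream generation
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(b ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

# Shared secret placed on the client ahead of time (hypothetical 104-bit key).
wep_key = bytes.fromhex("0123456789abcdef0123456789")

# AP issues a random challenge; client returns it encrypted under IV || key.
iv = os.urandom(3)
challenge = os.urandom(128)
response = rc4(iv + wep_key, challenge)

# AP seeds RC4 the same way, decrypts the response, and compares.
assert rc4(iv + wep_key, response) == challenge
```

The pre-placement step in the text corresponds to `wep_key` existing on the client before the exchange begins: nothing secret is negotiated on the air.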
Emerging Authentication Protocols
There are many new authentication security solutions and protocols on the market today, including VPN and 802.1x using Extensible Authentication Protocol (EAP). Many of these solutions pass authentication through to authentication servers upstream from the access point while keeping the client waiting during the authentication phase. Windows XP has native support for 802.11, 802.1x, and EAP, and Cisco and other wireless LAN manufacturers also support these standards. For this reason, it is easy to see why 802.1x with EAP could become a common solution in the wireless LAN security market.
802.1x and EAP
The 802.1x (port-based network access control) standard is relatively new; devices that support it allow a connection into the network at layer 2 only if user authentication is successful. This protocol works well for access points that must keep unauthorized users off the network. EAP is a layer 2 protocol, originally a flexible replacement for PAP and CHAP under PPP, that also works over local area networks. EAP allows plug-ins at either end of a link, through which many authentication methods can be used. In the past, PAP and CHAP have been used for user authentication, and both rely on passwords. The need for a stronger, more flexible alternative is clear with wireless networks, since wireless implementations vary far more widely than wired ones.
Typically, user authentication is accomplished using a Remote Authentication Dial-In User Service (RADIUS) server and some type of user database (native RADIUS, NDS, Active Directory, LDAP, etc.). The process of authenticating using EAP is shown in Figure 7.6. The new 802.11i standard includes support for 802.1x, EAP, AAA, mutual authentication, and key generation, none of which were included in the original 802.11 standard. "AAA" stands for authentication (identifying who you are), authorization (the attributes that allow you to perform certain tasks on the network), and accounting (a record of what you have done and where you have been on the network).
In the 802.1x standard model, network authentication consists of three pieces: the supplicant (the client seeking access), the authenticator (typically the access point), and the authentication server (typically a RADIUS server).
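The three-party model can be sketched as a simple message flow: the supplicant presents credentials to the authenticator, which relays them to the authentication server and opens its controlled port only on success. The class and message names below are illustrative stand-ins, not any real 802.1x implementation.

```python
class AuthServer:
    """Stands in for a RADIUS server holding the user database."""
    def __init__(self, users):
        self.users = users                  # username -> password

    def check(self, username, password):
        return self.users.get(username) == password

class Authenticator:
    """Stands in for an access point: relays EAP, gates layer 2 access."""
    def __init__(self, server):
        self.server = server
        self.port_open = False              # controlled port starts closed

    def handle_eap_response(self, username, password):
        # Relay the supplicant's credentials upstream to the auth server.
        if self.server.check(username, password):
            self.port_open = True           # EAP-Success: admit the client
            return "EAP-Success"
        return "EAP-Failure"                # port stays closed

server = AuthServer({"alice": "s3cret"})
ap = Authenticator(server)

assert ap.handle_eap_response("mallory", "guess") == "EAP-Failure"
assert not ap.port_open                     # no layer 2 access yet
assert ap.handle_eap_response("alice", "s3cret") == "EAP-Success"
assert ap.port_open
```

Note that the authenticator never inspects the credentials itself; it only relays them and acts on the server's verdict, which is what lets one upstream server protect many access points.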
Because wireless LAN security is essential, and EAP authentication types provide the means of securing the wireless LAN connection, vendors are rapidly adding EAP authentication types to their wireless LAN access points. Knowing the EAP type in use is important to understanding the characteristics of the authentication method, such as password handling, key generation, mutual authentication, and the underlying protocol. Commonly deployed EAP authentication types include:
EAP-MD5 Challenge. The earliest EAP authentication type, EAP-MD5 essentially duplicates CHAP password protection on a wireless LAN. It represents a kind of base-level EAP support among 802.1x devices.
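The CHAP-style computation that EAP-MD5 duplicates is simple enough to show directly: the response is an MD5 hash over the message identifier, the shared secret, and a server-chosen challenge (per RFC 1994). The sketch below is a minimal illustration; the names are ours, not from any EAP library.

```python
import hashlib, os

def chap_response(identifier: int, secret: bytes, challenge: bytes) -> bytes:
    # CHAP (RFC 1994): response = MD5(identifier || secret || challenge).
    # EAP-MD5 carries the same computation inside EAP messages.
    return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

secret = b"password"            # shared password (never sent over the air)
identifier = 1
challenge = os.urandom(16)      # server-chosen random challenge

resp = chap_response(identifier, secret, challenge)

# The server, which also knows the secret, recomputes and compares.
assert resp == chap_response(identifier, secret, challenge)
```

The password itself never crosses the link, but the exchange provides no mutual authentication and is open to offline dictionary attack on captured challenge/response pairs, which is why the stronger EAP types below followed.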
EAP-Cisco Wireless. Also called LEAP (Lightweight Extensible Authentication Protocol), this EAP authentication type is used primarily in Cisco wireless LAN access points. LEAP provides security during credential exchange, encrypts data transmission using dynamically generated WEP keys, and supports mutual authentication.
EAP-TLS (Transport Layer Security). EAP-TLS provides certificate-based, mutual authentication of the client and the network. It relies on client-side and server-side certificates to perform authentication, with dynamically generated user- and session-based WEP keys distributed to secure the connection. Windows XP includes an EAP-TLS client, and EAP-TLS is also supported by Windows 2000.
EAP-TTLS. Funk Software and Certicom jointly developed EAP-TTLS (Tunneled Transport Layer Security), an extension of EAP-TLS that likewise provides certificate-based, mutual authentication of the client and network. Unlike EAP-TLS, however, EAP-TTLS requires only server-side certificates, eliminating the need to configure a certificate for each wireless LAN client.
In addition, EAP-TTLS supports legacy password protocols, so you can deploy it against your existing authentication system (such as Active Directory or NDS). EAP-TTLS securely tunnels client authentication within TLS records, ensuring that the user remains anonymous to eavesdroppers on the wireless link. Dynamically generated user- and session-based WEP keys are distributed to secure the connection.
EAP-SRP (Secure Remote Password). SRP is a secure, password-based authentication and key-exchange protocol. It solves the problem of securely authenticating clients to servers in cases where the user of the client software must memorize a small secret (like a password) and carries no other secret information. The server stores a verifier for each user, which allows it to authenticate the client; however, even if the verifier were compromised, the attacker would still not be able to impersonate the client. In addition, SRP exchanges a cryptographically strong secret as a byproduct of successful authentication, which enables the two parties to communicate securely.
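The verifier idea can be sketched with the SRP-6a arithmetic: the server stores only v = g^x mod N (never the password), yet both sides derive the same strong session secret. The group parameters below are toy values chosen purely for illustration; real SRP uses a large safe prime (e.g., the RFC 5054 groups), and the hash mapping here is a simplification of the standardized one.

```python
import hashlib, secrets

# Toy SRP-6a sketch. N should be a large safe prime with generator g;
# these tiny parameters are purely illustrative and offer no security.
N, g = 2267, 2

def H(*parts) -> int:
    h = hashlib.sha256("|".join(str(p) for p in parts).encode()).digest()
    return int.from_bytes(h, "big")

# --- Enrollment: server stores (salt, verifier), never the password. ---
password, salt = "correct horse", secrets.randbelow(N)
x = H(salt, password) % N
v = pow(g, x, N)                       # the verifier

# --- Authentication ---
a = secrets.randbelow(N - 2) + 1       # client ephemeral secret
b = secrets.randbelow(N - 2) + 1       # server ephemeral secret
k = H(N, g) % N
A = pow(g, a, N)                       # client -> server
B = (k * v + pow(g, b, N)) % N         # server -> client
u = H(A, B) % N

# Client combines its password-derived x; server combines the verifier.
S_client = pow((B - k * pow(g, x, N)) % N, a + u * x, N)
S_server = pow((A * pow(v, u, N)) % N, b, N)

assert S_client == S_server            # same secret on both sides
```

A stolen verifier lets an attacker run the server side of this exchange, but not the client side: impersonating the client still requires x, i.e., the password.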
EAP-SIM (GSM). EAP-SIM is a mechanism for Mobile IP network access authentication and registration key generation using the GSM Subscriber Identity Module (SIM). The rationale for using the GSM SIM with Mobile IP is to leverage the existing GSM authorization infrastructure with the existing user base and the existing SIM card distribution channels. By using the SIM key exchange, no other preconfigured security association besides the SIM card is required on the mobile node. The idea is not to use the GSM radio access technology, but to use GSM SIM authorization with Mobile IP over any link layer, for example on Wireless LAN access networks.
This list of EAP authentication types is likely to grow as more vendors enter the wireless LAN security market and until the market settles on a standard.
VPN Solutions
VPN technology provides the means to securely transmit data between two network devices over an insecure data transport medium. It is commonly used to link remote computers or networks to a corporate server via the Internet. However, VPN is also a solution for protecting data on a wireless network. A VPN works by creating a tunnel on top of a protocol such as IP. Traffic inside the tunnel is encrypted and totally isolated, as can be seen in Figures 7.7 and 7.8. VPN technology provides three levels of security: user authentication, encryption, and data authentication.
- User authentication ensures that only authorized users (over a specific device) are able to connect, send, and receive data over the wireless network.
- Encryption offers additional protection as it ensures that even if transmissions are intercepted, they cannot be decoded without significant time and effort.
- Data authentication ensures the integrity of data on the wireless network, guaranteeing that all traffic is from authenticated devices only.
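The encryption and data-authentication layers above can be sketched in miniature: encrypt the inner packet, append a MAC so tampering is detected, and carry the result as the payload of an outer packet. Everything below is a toy illustration, with a SHA-256 counter keystream standing in for a real cipher; real VPNs use protocols such as IPsec, and the function names are ours.

```python
import hashlib, hmac, os

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Toy SHA-256 counter keystream; a stand-in for a real cipher."""
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def tunnel_encapsulate(enc_key, mac_key, inner_packet):
    nonce = os.urandom(16)
    ks = keystream(enc_key, nonce, len(inner_packet))
    ct = bytes(p ^ k for p, k in zip(inner_packet, ks))        # encryption
    tag = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()  # data auth
    return nonce + ct + tag              # becomes the outer packet's payload

def tunnel_decapsulate(enc_key, mac_key, payload):
    nonce, ct, tag = payload[:16], payload[16:-32], payload[-32:]
    expected = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("data authentication failed")          # tampered
    ks = keystream(enc_key, nonce, len(ct))
    return bytes(c ^ k for c, k in zip(ct, ks))

enc_key, mac_key = os.urandom(32), os.urandom(32)
inner = b"inner IP packet: GET /payroll"
outer_payload = tunnel_encapsulate(enc_key, mac_key, inner)
assert tunnel_decapsulate(enc_key, mac_key, outer_payload) == inner
```

An eavesdropper on the wireless link sees only the opaque outer payload, and flipping any bit of it makes decapsulation fail the MAC check rather than yield altered plaintext.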
Applying VPN technology to secure a wireless network requires a different approach than when it is used on wired networks for the following reasons.
- The inherent repeater function of a wireless access point automatically forwards traffic between wireless LAN stations on the same wireless network that communicate with each other.
- The range of the wireless network will likely extend beyond the physical boundaries of an office or home, giving intruders the means to compromise the network.