Saturday, September 26, 2009

How Wireless LANs Communicate

To configure and manage a wireless LAN, the administrator must understand the communication parameters that are configurable on the equipment and how to implement them. To estimate throughput across wireless LANs, one must also understand the effects of these parameters, and of collision handling, on system throughput. This section conveys a basic understanding of many configurable parameters and their effects on network performance.


Wireless LAN Frames vs. Ethernet Frames

Once a wireless client has joined a network, the client and the rest of the network communicate by passing frames across the network, in almost the same manner as any other IEEE 802 network. To clear up a common misconception, wireless LANs do NOT use 802.3 Ethernet frames. The term wireless Ethernet is something of a misnomer. Wireless LAN frames contain more information than common Ethernet frames do. The actual structure of a wireless LAN frame versus that of an Ethernet frame is beyond the scope of both the CWNA exam and a wireless LAN administrator’s job.

Something to consider is that there are many types of IEEE 802 frames, but there is only one type of wireless frame. With 802.3 Ethernet, once a frame type is chosen by the network administrator, that same type is used to send all data across the wire, just as with wireless. Wireless frames all share the same overall frame format. One similarity to 802.3 Ethernet is that the payload of both is a maximum of 1500 bytes. Ethernet's maximum frame size is 1514 bytes, whereas 802.11 wireless LANs have a maximum frame size of 1518 bytes.


Collision Handling

Since radio frequency is a shared medium, wireless LANs have to deal with the possibility of collisions just the same as traditional wired LANs do. The difference is that, on a wireless LAN, there is no means through which the sending station can determine that there has actually been a collision. It is impossible to detect a collision on a wireless LAN. For this reason, wireless LANs utilize the Carrier Sense Multiple Access / Collision Avoidance protocol, also known as CSMA/CA. CSMA/CA is somewhat similar to the protocol CSMA/CD, which is common on Ethernet networks.

The biggest difference between CSMA/CA and CSMA/CD is that CSMA/CA avoids collisions and uses positive acknowledgements (ACKs) instead of arbitrating use of the medium when collisions occur. The acknowledgement mechanism works in a very simple manner. When a wireless station sends a frame, the receiving station sends back an ACK once it actually receives the frame. If the sending station does not receive an ACK, it assumes there was a collision and resends the data.

CSMA/CA, combined with the large amount of control data used in wireless LANs, creates overhead that consumes approximately 50% of the available bandwidth on a wireless LAN. This overhead, plus the additional overhead of protocols such as RTS/CTS that enhance collision avoidance, is responsible for the actual throughput of approximately 5.0 - 5.5 Mbps on a typical 802.11b wireless LAN rated at 11 Mbps. CSMA/CD also generates overhead, but only about 30% on an average-use network. When an Ethernet network becomes congested, CSMA/CD can cause overhead of up to 70%, while a congested wireless network remains fairly constant at around 50 - 55% of its rated throughput.
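
As a rough illustration of the arithmetic behind these figures, the short Python sketch below estimates usable throughput from a nominal data rate and an overhead percentage. The overhead fractions are simply the approximate figures quoted above, not values defined by the 802.11 standard.

# Rough throughput estimate: usable throughput = nominal rate x (1 - overhead).
# The overhead fractions are the approximate percentages quoted in this section.

def estimated_throughput(nominal_mbps, overhead_fraction):
    """Return the approximate usable throughput in Mbps."""
    return nominal_mbps * (1.0 - overhead_fraction)

# 802.11b at 11 Mbps with roughly 50% CSMA/CA and control-frame overhead
print(estimated_throughput(11.0, 0.50))   # -> 5.5, matching the 5.0 - 5.5 Mbps figure

# 10 Mbps Ethernet with roughly 30% CSMA/CD overhead on an average-use network
print(estimated_throughput(10.0, 0.30))   # -> 7.0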

The CSMA/CA protocol reduces the probability of collisions among stations sharing the medium by using a random backoff time whenever a station's physical or logical sensing mechanism indicates a busy medium. The period of time immediately following a busy medium is when the highest probability of collisions occurs, especially under high utilization, because many stations may be waiting for the medium to become idle and will attempt to transmit at the same time. Once the medium is idle, the random backoff time defers each station's transmission, minimizing the chance that stations will collide.
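
The sketch below is a deliberately simplified illustration, not the actual 802.11 state machine, of the carrier sense, random backoff, and ACK retry behavior just described. The medium_is_busy, send_frame, and wait_for_ack helpers are hypothetical placeholders standing in for the radio hardware, and the timing values are illustrative.

import random
import time

# Hypothetical helpers standing in for the radio hardware; not a real API.
def medium_is_busy():
    return random.random() < 0.3        # pretend the medium is busy 30% of the time

def send_frame(frame):
    pass                                # transmit the frame (placeholder)

def wait_for_ack():
    return random.random() < 0.9        # pretend 90% of frames are acknowledged

def csma_ca_send(frame, max_retries=7, slot_time_s=0.00002):
    """Simplified CSMA/CA: sense the medium, back off a random number of slots,
    transmit, then wait for a positive acknowledgement."""
    for attempt in range(max_retries):
        # 1. Carrier sense: defer while the medium is busy.
        while medium_is_busy():
            time.sleep(slot_time_s)

        # 2. Random backoff: the contention window grows after each failed attempt.
        contention_window = min(2 ** (4 + attempt) - 1, 1023)
        time.sleep(random.randint(0, contention_window) * slot_time_s)

        # 3. Transmit and wait for the ACK.
        send_frame(frame)
        if wait_for_ack():
            return True                 # ACK received: success
        # No ACK: assume a collision (or loss) and retry with a larger window.
    return False                        # give up after max_retries attempts

print(csma_ca_send(b"example payload"))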


Fragmentation

Fragmentation of packets into shorter fragments adds protocol overhead and reduces protocol efficiency (decreases network throughput) when no errors are observed, but reduces the time spent on retransmissions if errors occur. Larger packets have a higher probability of collisions on the network; hence, a method of varying packet fragment size is needed. The IEEE 802.11 standard provides support for fragmentation.

By decreasing the length of each packet, the probability of interference during packet transmission can be reduced, as illustrated in Figure 8.1. There is a tradeoff that must be made between the lower packet error rate that can be achieved by using shorter packets, and the increased overhead of more frames on the network due to fragmentation. Each fragment requires its own headers and ACK, so the adjustment of the fragmentation level is also an adjustment of the amount of overhead associated with each packet transmitted. Stations never fragment multicast and broadcast frames, but rather only unicast frames in order not to introduce unnecessary overhead into the network. Finding the optimal fragmentation setting to maximize the network throughput on an 802.11 network is an important part of administering a wireless LAN. Keep in mind that a 1518 byte frame is the largest frame that can traverse a wireless LAN segment without fragmentation.
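
To make the tradeoff concrete, the sketch below counts fragments and total bytes on the air for a single unicast payload at different fragmentation thresholds. The header and ACK sizes are assumed round numbers chosen only for illustration; real 802.11 header and ACK lengths vary with the frame type.

import math

# Illustrative sizes only; actual 802.11 header and ACK lengths vary by frame type.
MAC_HEADER_BYTES = 34    # assumed per-fragment MAC header plus FCS
ACK_BYTES = 14           # assumed ACK frame size

def fragmentation_overhead(payload_bytes, frag_threshold_bytes):
    """Return (number of fragments, total bytes on the air) for one unicast payload."""
    payload_per_fragment = frag_threshold_bytes - MAC_HEADER_BYTES
    fragments = max(1, math.ceil(payload_bytes / payload_per_fragment))
    on_air = payload_bytes + fragments * (MAC_HEADER_BYTES + ACK_BYTES)
    return fragments, on_air

# A 1500-byte payload sent whole versus fragmented at a 256-byte threshold
print(fragmentation_overhead(1500, 1600))   # (1, 1548)  -> one fragment, least overhead
print(fragmentation_overhead(1500, 256))    # (7, 1836)  -> more fragments, more header/ACK overhead

Each additional fragment costs another header and another ACK, which is exactly the overhead that the fragmentation setting trades against a lower per-fragment error rate.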


One way to use fragmentation to improve network throughput in times of heavy packet errors is to monitor the packet error rate on the network and adjust the fragmentation level manually. As a recommended practice, you should monitor the network at multiple times throughout a typical day to see what impact adjusting fragmentation will have at various times. Another method of adjustment is to configure the fragmentation threshold directly on the equipment.

If your network is experiencing a high packet error rate (faulty packets), lower the fragmentation threshold on the client stations and/or the access point (depending on which units allow these settings on your particular equipment). Start with the maximum value and gradually decrease the fragmentation threshold until an improvement shows. If fragmentation is used, the network will experience a performance hit due to the overhead incurred with fragmentation. Sometimes this hit is acceptable in order to gain more throughput through a decrease in packet errors and subsequent retransmissions.
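
A sketch of that tuning procedure is shown below. The measure_packet_error_rate and set_fragmentation_threshold helpers are hypothetical stand-ins; on real equipment these settings are made through the vendor's driver, CLI, or web interface. The default values, step size, and target error rate are arbitrary illustration values.

import random

# Hypothetical management hooks used only to make the procedure concrete.
def measure_packet_error_rate():
    return random.uniform(0.0, 0.2)          # stand-in measurement for illustration

def set_fragmentation_threshold(value_bytes):
    print("fragmentation threshold set to", value_bytes, "bytes")

def tune_fragmentation(max_threshold=2346, min_threshold=256,
                       step=256, acceptable_error_rate=0.05):
    """Start at the maximum threshold and step down until the error rate improves."""
    threshold = max_threshold
    set_fragmentation_threshold(threshold)
    while measure_packet_error_rate() > acceptable_error_rate and threshold - step >= min_threshold:
        threshold -= step
        set_fragmentation_threshold(threshold)
    return threshold

tune_fragmentation()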


Dynamic Rate Shifting (DRS)

Adaptive (or Automatic) Rate Selection (ARS) and Dynamic Rate Shifting (DRS) are both terms used to describe the method of dynamic speed adjustment on wireless LAN clients. This speed adjustment occurs as distance increases between the client and the access point or as interference increases. It is imperative that a network administrator understand how this function works in order to plan for network throughput, cell sizes, power outputs of access points and stations, and security.

Modern spread spectrum systems are designed to make discrete jumps only to specified data rates, such as 1, 2, 5.5, and 11 Mbps. As distance increases between the access point and a station, the signal strength will decrease to a point where the current data rate cannot be maintained. When this signal strength decrease occurs, the transmitting unit will drop its data rate to the next lower specified data rate, say from 11 Mbps to 5.5 Mbps or from 2 Mbps to 1 Mbps. Figure 8.2 illustrates that, as the distance from the access point increases, the data rate decreases.

A wireless LAN system will never drop from 11 Mbps to 10 Mbps, for example, since 10 Mbps is not a specified data rate. The method of making such discrete jumps is typically called either ARS or DRS, depending on the manufacturer. Both FHSS and DSSS implement DRS, and the IEEE 802.11, IEEE 802.11b, HomeRF, and OpenAir standards require it.
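
A minimal sketch of this kind of discrete rate selection is shown below. The signal-strength thresholds are hypothetical, since the actual shift points are vendor-specific; only the rate steps themselves (1, 2, 5.5, and 11 Mbps) come from the text above.

# 802.11b data rates in Mbps paired with hypothetical minimum signal levels (dBm).
# Real shift points are vendor-specific; these thresholds are for illustration only.
RATE_TABLE = [
    (11.0, -82),
    (5.5,  -87),
    (2.0,  -91),
    (1.0,  -94),
]

def select_rate(signal_dbm):
    """Return the highest specified data rate the current signal can sustain."""
    for rate_mbps, min_signal_dbm in RATE_TABLE:
        if signal_dbm >= min_signal_dbm:
            return rate_mbps
    return None     # signal too weak: the station is outside the usable cell

print(select_rate(-80))   # 11.0 -> close to the access point
print(select_rate(-89))   # 2.0  -> farther away, the rate shifts down to a lower step
print(select_rate(-97))   # None -> beyond the usable range

Because the table contains only the specified rates, the selection can never land on an intermediate value such as 10 Mbps.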


Distributed Coordination Function

Distributed Coordination Function (DCF) is an access method specified in the 802.11 standard that allows all stations on a wireless LAN to contend for access to the shared transmission medium (RF) using the CSMA/CA protocol. In this case, the transmission medium is the portion of the radio frequency band that the wireless LAN is using to send data. Basic service sets (BSS), extended service sets (ESS), and independent basic service sets (IBSS) can all use DCF mode. In a BSS or ESS, the access point acts in much the same manner as an IEEE 802.3 based wired hub when transmitting data, and DCF is the mode in which it sends that data; in an IBSS, which has no access point, the stations contend with one another directly.


Point Coordination Function

Point Coordination Function (PCF) is a transmission mode allowing for contention-free frame transfers on a wireless LAN by making use of a polling mechanism. PCF has the advantage of guaranteeing a known amount of latency so that applications requiring QoS (voice or video for example) can be used. When using PCF, the access point on a wireless LAN performs the polling. For this reason, an ad hoc network cannot utilize PCF, because an ad hoc network has no access point to do the polling.

The PCF Process

First, a wireless station must tell the access point that the station is capable of answering a poll. Then the access point asks, or polls, each wireless station to see if that station needs to send a data frame across the network. PCF, through polling, generates a significant amount of overhead on a wireless LAN.
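
The sketch below illustrates that two-step sequence: a station announces that it is pollable, and the access point then polls each such station during a contention-free period. The class and method names are illustrative only and do not correspond to any real driver or management API.

# A much-simplified sketch of the PCF polling sequence described above.
class Station:
    def __init__(self, name, cf_pollable):
        self.name = name
        self.cf_pollable = cf_pollable      # station told the AP it can answer a poll
        self.queue = []                     # frames waiting to be sent

    def answer_poll(self):
        return self.queue.pop(0) if self.queue else None

class AccessPoint:
    def __init__(self):
        self.polling_list = []

    def associate(self, station):
        # Step 1: a station announces whether it is capable of answering a poll.
        if station.cf_pollable:
            self.polling_list.append(station)

    def contention_free_period(self):
        # Step 2: the AP polls each pollable station for one pending frame.
        for station in self.polling_list:
            frame = station.answer_poll()
            if frame is not None:
                print("forwarding", frame, "from", station.name)

ap = AccessPoint()
sta = Station("sta1", cf_pollable=True)
sta.queue.append("data frame")
ap.associate(sta)
ap.contention_free_period()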

Thursday, September 17, 2009

Power Management Features

Wireless clients operate in one of two power management modes specified by the IEEE 802.11 standard: active mode, commonly called continuous aware mode (CAM), and power save mode, commonly called power save polling (PSP) mode. Conserving power using a power-saving mode is especially important to mobile users whose laptops or PDAs run on batteries. Extending the life of these batteries allows the user to stay up and running longer without a recharge. Wireless LAN cards can draw a significant amount of power from the battery while in CAM, which is why power-saving features are included in the 802.11 standard.


Continuous Aware Mode

Continuous aware mode is the setting in which the wireless client uses full power, does not “sleep,” and is constantly in regular communication with the access point. Any computer that stays plugged into an AC power outlet continuously, such as a desktop or server, should be set to CAM. Under these circumstances, there is no reason to have the PC card conserve power.


Power Save Polling

Using power save polling (PSP) mode allows a wireless client to “sleep.” By sleep, we mean that the client actually powers down for a very short amount of time, perhaps a small fraction of a second. This sleep is enough time to save a significant amount of power on the wireless client. In turn, the power saved by the wireless client enables a laptop computer user, for example, to work for a longer period of time on batteries, making that user more productive.

When using PSP, the wireless client behaves differently within basic service sets and independent basic service sets. The one similarity in behavior from a BSS to an IBSS is the sending and receiving of beacons.

The processes that operate during PSP mode, in both BSS and IBSS, are described below. Keep in mind that these processes occur many times per second. That fact allows your wireless LAN to maintain its connectivity, but also causes a certain amount of additional overhead. An administrator should consider this overhead when planning for the needs of the users on the wireless LAN.


PSP Mode in a Basic Service Set

When using PSP mode in a BSS, stations first send a frame to the access point to inform it that they are going to sleep (temporarily powering down). The access point then records those stations as asleep and buffers any frames intended for them. Traffic for sleeping clients continues arriving at the access point, but the access point cannot send traffic to a sleeping client.

Therefore, packets get queued in a buffer marked for the sleeping client. The access point sends beacons at a regular interval, and clients, since they are time-synchronized with the access point, know exactly when a beacon will arrive. Clients that are sleeping power up their receivers to listen for these beacons, which contain the traffic indication map (TIM). If a station sees itself listed in the TIM, it powers up and sends a frame to the access point notifying it that the station is now awake and ready to receive the buffered data packets. Once the client has received its packets from the access point, the client sends a message informing the access point that it is going back to sleep. Then the process repeats itself over and over again. This process creates some overhead that would not be present if PSP mode were not being utilized. The steps of this process are shown in Figure 7.18.
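
The sketch below walks through those same steps: buffering at the access point, a TIM carried in the beacon, and delivery once the client announces that it is awake. The class names and the beacon structure are illustrative only.

# Simplified sketch of the BSS power-save exchange described above.
class AccessPoint:
    def __init__(self):
        self.sleeping = set()
        self.buffers = {}                   # frames held for sleeping stations

    def station_sleeps(self, station_id):
        self.sleeping.add(station_id)       # station announced it is going to sleep
        self.buffers.setdefault(station_id, [])

    def deliver(self, station_id, frame):
        if station_id in self.sleeping:
            self.buffers[station_id].append(frame)   # buffer instead of transmitting
        else:
            print("sending", frame, "to", station_id)

    def beacon(self):
        # The traffic indication map (TIM) lists stations with buffered frames.
        return {"tim": [sid for sid in self.sleeping if self.buffers[sid]]}

    def station_wakes(self, station_id):
        self.sleeping.discard(station_id)
        for frame in self.buffers.pop(station_id, []):
            print("sending buffered", frame, "to", station_id)

ap = AccessPoint()
ap.station_sleeps("client-1")
ap.deliver("client-1", "email packet")      # queued while the client sleeps
if "client-1" in ap.beacon()["tim"]:        # client wakes for the beacon, sees itself in the TIM
    ap.station_wakes("client-1")            # announces it is awake and receives the buffer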



PSP in an Independent Basic Service Set


The power saving communication process in an IBSS is very different from the process used in a BSS. An IBSS does not contain an access point, so there is no central device to buffer packets. Therefore, every station must buffer the packets it has queued for other stations in the ad hoc network. Stations alternate the sending of beacons on an IBSS network using varied methods, each dependent on the manufacturer. When stations are using power saving mode, there is a period of time called an ATIM window, during which each station is fully awake and ready to receive data frames. Ad hoc traffic indication messages (ATIM) are unicast frames used by stations to notify other stations that there is data destined to them and that they should stay awake long enough to receive it. ATIMs and beacons are both sent during the ATIM window. The process followed by stations in order to pass traffic between peers is:
  • Stations are synchronized through the beacons so they wake up before the ATIM window begins.
  • The ATIM window begins, the stations send beacons, and then stations send ATIM frames notifying other stations of buffered traffic destined to them.
  • Stations receiving ATIM frames during the ATIM window stay awake to receive data frames. If no ATIM frames are received, stations go back to sleep.
  • The ATIM window closes, and stations begin transmitting data frames. After receiving data frames, stations go back to sleep awaiting the next ATIM window.
This PSP process for an IBSS is illustrated in Figure 7.19.
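
The sketch below condenses the steps listed above into code. Station names, the queueing structure, and the way ATIMs are represented are all illustrative; real implementations handle this inside the wireless adapter's firmware.

# Simplified sketch of the IBSS ATIM-window exchange listed above.
class AdHocStation:
    def __init__(self, name):
        self.name = name
        self.outbound = {}          # frames buffered for peers, keyed by peer name
        self.stay_awake = False

    def queue_frame(self, peer_name, frame):
        self.outbound.setdefault(peer_name, []).append(frame)

    def atim_window(self, peers):
        # During the ATIM window every station is awake. A station with buffered
        # traffic sends a unicast ATIM to each peer it has frames for.
        for peer in peers:
            if self.outbound.get(peer.name):
                peer.stay_awake = True      # the peer stays awake past the window

    def after_window(self, peers):
        # After the window closes, buffered data frames are transmitted.
        for peer in peers:
            for frame in self.outbound.pop(peer.name, []):
                print(self.name, "->", peer.name, ":", frame)
        # Stations that received no ATIM simply go back to sleep.
        self.stay_awake = False

a, b = AdHocStation("sta-a"), AdHocStation("sta-b")
a.queue_frame("sta-b", "file transfer frame")
a.atim_window([b])      # b is told to stay awake
a.after_window([b])     # a sends its buffered frame, then both can sleep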


As a wireless LAN administrator, you need to know what effect power management features will have on performance, battery life, broadcast traffic on your LAN, etc. In the example described above, the effects could be significant.