Saturday, September 26, 2009

How Wireless LANs Communicate

In order to understand how to configure and manage a wireless LAN, the administrator must understand the communication parameters that are configurable on the equipment and how to implement them. In order to estimate throughput across wireless LANs, one must understand the effects of these parameters and of collision handling on system throughput. This section conveys a basic understanding of many configurable parameters and their effects on network performance.


Wireless LAN Frames vs. Ethernet Frames

Once a wireless client has joined a network, the client and the rest of the network communicate by passing frames, in almost the same manner as on any other IEEE 802 network. To clear up a common misconception, wireless LANs do NOT use 802.3 Ethernet frames; the term "wireless Ethernet" is somewhat of a misnomer. Wireless LAN frames contain more information than common Ethernet frames do. The actual structure of a wireless LAN frame versus that of an Ethernet frame is beyond the scope of both the CWNA exam and a wireless LAN administrator's job.

Something to consider is that there are several wired IEEE 802 frame types, but only one overall wireless LAN frame format. With 802.3 Ethernet, once the network administrator has chosen a frame type, that same type is used to send all data across the wire; wireless frames likewise all share the same overall frame format. One similarity to 802.3 Ethernet is that the maximum payload of both is 1500 bytes. Ethernet's maximum frame size is 1514 bytes, whereas 802.11 wireless LANs have a maximum frame size of 1518 bytes.


Collision Handling

Since radio frequency is a shared medium, wireless LANs must deal with the possibility of collisions just as traditional wired LANs do. The difference is that, on a wireless LAN, the sending station has no means of determining that a collision has actually occurred: a radio cannot transmit and listen on the same channel at the same time, so detecting a collision on a wireless LAN is effectively impossible. For this reason, wireless LANs use the Carrier Sense Multiple Access with Collision Avoidance protocol, also known as CSMA/CA. CSMA/CA is somewhat similar to CSMA/CD, the protocol common on Ethernet networks.

The biggest difference between CSMA/CA and CSMA/CD is that CSMA/CA tries to avoid collisions and uses positive acknowledgements (ACKs), rather than detecting collisions and arbitrating use of the medium after they occur. The acknowledgement mechanism works in a very simple manner: when a wireless station sends a frame, the receiving station sends back an ACK once it receives the frame intact. If the sending station does not receive an ACK, it assumes there was a collision and retransmits the frame.
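In pseudocode terms, the send-and-acknowledge cycle looks roughly like the sketch below. The `radio` object and its `send_frame` / `wait_for_ack` methods are hypothetical stand-ins for a driver interface, and the retry limit and timeout are illustrative values, not figures taken from the standard.

```python
RETRY_LIMIT = 7  # illustrative retry limit, not a value cited in the text above

def transmit_with_ack(radio, frame):
    """Send a frame and retransmit until an ACK arrives or retries run out.

    `radio` is a hypothetical driver object exposing send_frame() and
    wait_for_ack(); this is a conceptual sketch of CSMA/CA's positive
    acknowledgement, not a real driver API.
    """
    for _ in range(RETRY_LIMIT + 1):
        radio.send_frame(frame)
        if radio.wait_for_ack(timeout_us=300):  # illustrative ACK timeout
            return True                         # receiver confirmed delivery
        # No ACK received: assume a collision or loss and send again
    return False                                # give up after the retry limit
```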

CSMA/CA, combined with the large amount of control traffic used on wireless LANs, creates overhead that consumes approximately 50% of the available bandwidth. This overhead, plus the additional overhead of protocols such as RTS/CTS that enhance collision avoidance, is responsible for the actual throughput of approximately 5.0 - 5.5 Mbps on a typical 802.11b wireless LAN rated at 11 Mbps. CSMA/CD also generates overhead, but only about 30% on a network under average load. When an Ethernet network becomes congested, CSMA/CD can cause overhead of up to 70%, while a congested wireless network remains fairly constant at around 50 - 55% throughput.
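The arithmetic behind those figures is simply the rated data rate multiplied by the share of bandwidth left after overhead; a quick sketch of that back-of-the-envelope estimate:

```python
def expected_throughput(rated_mbps, overhead_fraction):
    """Rough usable throughput once protocol overhead is subtracted."""
    return rated_mbps * (1 - overhead_fraction)

# 802.11b rated at 11 Mbps with roughly 50% protocol overhead
print(expected_throughput(11, 0.50))  # 5.5 Mbps
# With RTS/CTS pushing overhead a little higher
print(expected_throughput(11, 0.55))  # ~4.95 Mbps, roughly the low end of the 5.0 - 5.5 range
```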

The CSMA/CA protocol reduces the probability of collisions among stations sharing the medium by using a random backoff time whenever a station's physical or logical carrier-sensing mechanism indicates a busy medium. The period immediately following a busy medium is when the probability of collision is highest, especially under high utilization, because many stations may be waiting for the medium to become idle and will attempt to transmit at the same time. Once the medium is idle, the random backoff time defers each station's transmission, minimizing the chance that stations will collide.
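A simplified sketch of that backoff behavior is shown below. The slot time and contention-window limits are commonly cited 802.11b DSSS values, but treat them, and the hypothetical `radio` methods, as illustrative only.

```python
import random
import time

SLOT_TIME_US = 20   # DSSS slot time in microseconds (illustrative)
CW_MIN = 31         # initial contention window, in slots (illustrative)
CW_MAX = 1023       # maximum contention window, in slots (illustrative)

def backoff_and_send(radio, frame):
    """Defer by a random backoff, doubling the window after each failed attempt."""
    cw = CW_MIN
    while True:
        while radio.medium_busy():              # physical/virtual carrier sense (hypothetical API)
            time.sleep(SLOT_TIME_US / 1e6)
        slots = random.randint(0, cw)           # pick a random number of backoff slots
        time.sleep(slots * SLOT_TIME_US / 1e6)  # wait out the backoff before transmitting
        if radio.send_and_wait_for_ack(frame):  # hypothetical send-plus-ACK call
            return                              # acknowledged: transmission succeeded
        cw = min(2 * cw + 1, CW_MAX)            # no ACK: widen the window and try again
```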


Fragmentation

Fragmenting packets into shorter fragments adds protocol overhead and reduces protocol efficiency (decreasing network throughput) when no errors occur, but it reduces the time spent on retransmissions when errors do occur. Larger packets occupy the medium longer and so have a higher probability of colliding or being corrupted; hence, a method of varying the fragment size is needed. The IEEE 802.11 standard provides support for fragmentation.

By decreasing the length of each packet, the probability of interference during packet transmission can be reduced, as illustrated in Figure 8.1. There is a tradeoff between the lower packet error rate achieved by using shorter packets and the increased overhead of more frames on the network due to fragmentation. Each fragment requires its own headers and its own ACK, so adjusting the fragmentation level also adjusts the amount of overhead associated with each packet transmitted. Stations never fragment multicast and broadcast frames, only unicast frames, in order not to introduce unnecessary overhead into the network. Finding the fragmentation setting that maximizes throughput on an 802.11 network is an important part of administering a wireless LAN. Keep in mind that a 1518-byte frame is the largest frame that can traverse a wireless LAN segment without fragmentation.
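To put numbers on that tradeoff, the sketch below counts fragments and the extra header and ACK bytes they add for a given fragment size. The 34-byte header and 14-byte ACK figures are illustrative assumptions made for the sake of the arithmetic, not values taken from the text above.

```python
import math

MAC_HEADER_BYTES = 34  # assumed per-fragment header + FCS (illustrative)
ACK_BYTES = 14         # assumed ACK frame size (illustrative)

def fragmentation_overhead(payload_bytes, max_fragment_payload):
    """Return (fragment_count, total_overhead_bytes) for one payload."""
    fragments = math.ceil(payload_bytes / max_fragment_payload)
    overhead = fragments * (MAC_HEADER_BYTES + ACK_BYTES)  # one header and one ACK per fragment
    return fragments, overhead

# A 1500-byte payload sent whole versus split into ~500-byte fragments
print(fragmentation_overhead(1500, 1500))  # (1, 48)  -> one frame, one ACK
print(fragmentation_overhead(1500, 500))   # (3, 144) -> three frames, three ACKs
```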


One way to use fragmentation to improve network throughput during periods of heavy packet errors is to monitor the packet error rate on the network and adjust the fragmentation level manually. As a recommended practice, monitor the network at several times throughout a typical day to see what impact a fragmentation adjustment has at different times. The mechanism for making the adjustment is the fragmentation threshold: any frame larger than the configured threshold is fragmented before transmission.

If your network is experiencing a high packet error rate (corrupted frames), lower the fragmentation threshold on the client stations and/or the access point (depending on which units expose this setting on your particular equipment). Start with the maximum value and gradually decrease the fragmentation threshold until an improvement shows. When fragmentation is used, the network takes a performance hit from the overhead that fragmentation incurs; sometimes this hit is acceptable in order to gain more throughput from the reduction in packet errors and the retransmissions they cause.
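That adjust-and-observe cycle can be sketched as below, assuming a hypothetical management interface that exposes a packet error counter and the fragmentation threshold setting; the step size, observation interval, and error target are all illustrative.

```python
import time

MAX_THRESHOLD = 1518   # start with effectively no fragmentation (illustrative)
MIN_THRESHOLD = 256    # lowest threshold worth trying (illustrative)
STEP = 128             # how much to lower the threshold each pass (illustrative)
ERROR_TARGET = 0.05    # acceptable packet error rate (illustrative)

def tune_fragmentation(ap):
    """Gradually lower the fragmentation threshold until the error rate is acceptable.

    `ap` is a hypothetical access point management object exposing
    packet_error_rate() and set_frag_threshold().
    """
    threshold = MAX_THRESHOLD
    ap.set_frag_threshold(threshold)
    while threshold > MIN_THRESHOLD:
        time.sleep(60)                        # observe the network for a while
        if ap.packet_error_rate() <= ERROR_TARGET:
            break                             # errors under control: stop lowering
        threshold -= STEP
        ap.set_frag_threshold(threshold)      # smaller fragments: more overhead, fewer errors
    return threshold
```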


Dynamic Rate Shifting (DRS)

Adaptive (or Automatic) Rate Selection (ARS) and Dynamic Rate Shifting (DRS) are both terms used to describe the method by which wireless LAN clients dynamically adjust their speed. This adjustment occurs as the distance between the client and the access point increases or as interference increases. It is imperative that a network administrator understand how this function works in order to plan for network throughput, cell sizes, power output of access points and stations, and security.

Modern spread spectrum systems are designed to make discrete jumps only to specified data rates, such as 1, 2, 5.5, and 11 Mbps. As distance increases between the access point and a station, the signal strength will decrease to a point where the current data rate cannot be maintained. When this signal strength decrease occurs, the transmitting unit will drop its data rate to the next lower specified data rate, say from 11 Mbps to 5.5 Mbps or from 2 Mbps to 1 Mbps. Figure 8.2 illustrates that, as the distance from the access point increases, the data rate decreases.

A wireless LAN system will never drop from 11 Mbps to 10 Mbps, for example, since 10 Mbps is not a specified data rate. The method of making such discrete jumps is typically called either ARS or DRS, depending on the manufacturer. Both FHSS and DSSS implement DRS, and the IEEE 802.11, IEEE 802.11b, HomeRF, and OpenAir standards require it.
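The rate-shifting decision can be pictured as choosing the highest specified rate whose signal-strength requirement is still met; the sketch below does exactly that, with purely illustrative RSSI thresholds.

```python
# (data rate in Mbps, minimum signal strength in dBm); thresholds are illustrative
RATE_TABLE = [
    (11.0, -82),
    (5.5, -87),
    (2.0, -91),
    (1.0, -94),
]

def select_rate(signal_dbm):
    """Return the highest specified 802.11b rate the measured signal can sustain."""
    for rate, min_dbm in RATE_TABLE:   # table is ordered from highest rate to lowest
        if signal_dbm >= min_dbm:
            return rate
    return None                        # signal too weak for any specified rate

print(select_rate(-80))  # 11.0 -> close to the access point
print(select_rate(-89))  # 2.0  -> farther away, the rate shifts down
print(select_rate(-97))  # None -> outside the usable cell
```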


Distributed Coordination Function

Distributed Coordination Function (DCF) is an access method specified in the 802.11 standard that allows all stations on a wireless LAN to contend for access to the shared transmission medium (RF) using the CSMA/CA protocol. In this case, the transmission medium is the portion of the radio frequency band that the wireless LAN is using to send data. Basic service sets (BSS), extended service sets (ESS), and independent basic service sets (IBSS) can all use DCF mode. In a BSS or ESS, the access point relays data in much the same manner as an IEEE 802.3 wired hub, and DCF is the mode in which that data is sent.


Point Coordination Function

Point Coordination Function (PCF) is a transmission mode that allows contention-free frame transfers on a wireless LAN by making use of a polling mechanism. PCF has the advantage of guaranteeing a known amount of latency, so applications requiring QoS (voice or video, for example) can be used. When PCF is in use, the access point on the wireless LAN performs the polling. For this reason, an ad hoc network cannot use PCF, because an ad hoc network has no access point to do the polling.

The PCF Process

First, a wireless station must tell the access point that the station is capable of answering a poll. Then the access point asks, or polls, each wireless station to see if that station needs to send a data frame across the network. PCF, through polling, generates a significant amount of overhead on a wireless LAN.
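Conceptually, one polling round looks something like the sketch below; the `poll`, `deliver`, and `is_pollable` names are hypothetical stand-ins for the CF-Poll exchange, not a real API.

```python
def contention_free_period(access_point, stations):
    """One simplified PCF polling round driven by the access point.

    `access_point` and the station objects are hypothetical; a station sets
    is_pollable when it tells the access point it can answer a poll.
    """
    for station in stations:
        if not station.is_pollable:
            continue                        # station did not register for polling
        frame = access_point.poll(station)  # ask: "do you have a frame to send?"
        if frame is not None:
            access_point.deliver(frame)     # forward the station's data frame
        # Every poll/response exchange costs airtime even when there is no data,
        # which is the overhead mentioned above.
```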
