Based on the above analysis, we identify significant shortcomings in currently deployed wireless mesh networks. We believe that these deficiencies are characteristic of the first generation of wireless mesh networks, which focused on providing a proof of concept. However, they must be addressed in the second generation of wireless mesh networks. The remainder of this section highlights the challenges and points out possible solutions.
3.1. Quality
The quest for performance, reliability, and scalability in wireless mesh networks must be pursued concurrently at all layers. At the physical layer, improvements are under way with multiple-antenna systems, orthogonal frequency-division multiplexing (OFDM), and novel 802.11 flavors such as 802.11n. In addition, however, two alternative research paths must be pursued. The first is new wideband transmission schemes beyond OFDM and ultra-wideband (UWB); these schemes must achieve higher transmission rates and thereby push the capacity limits. The second is enhanced power-control schemes to address the increasing interference. With the rapid deployment of wireless technologies in homes and cities, the degree of interference is constantly mounting. In the city of Berlin, during our measurements with the MagNets testbed [10], we found up to 25 interfering networks in the neighborhood of a single access point, per channel! Moreover, we have learned over the past two years that interference, not multipath fading, is the main reason for performance degradation. Thus, it is vital to reduce interference by flexibly adjusting the transmission power of wireless senders.
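The idea of flexibly adjusting sender power has a classical formal counterpart: distributed power control that drives each link toward a target signal-to-interference-plus-noise ratio (SINR), in the spirit of the Foschini-Miljanic iteration. The following sketch is purely illustrative; the link gains, noise level, and SINR target are invented values, not measurements from the MagNets testbed.

```python
# Minimal sketch of distributed SINR-target power control
# (in the spirit of the classical Foschini-Miljanic iteration).
# Link gains, noise, and the SINR target are illustrative values.

def sinr(powers, gains, noise, i):
    """SINR at receiver i: own signal over interference plus noise."""
    signal = gains[i][i] * powers[i]
    interference = sum(gains[i][j] * powers[j]
                       for j in range(len(powers)) if j != i)
    return signal / (interference + noise)

def iterate_power_control(powers, gains, noise, target, rounds=50):
    """Each sender independently scales its power by target/SINR."""
    for _ in range(rounds):
        powers = [p * target / sinr(powers, gains, noise, i)
                  for i, p in enumerate(powers)]
    return powers

# Two mutually interfering links; direct gains dominate cross gains.
gains = [[1.0, 0.1],
         [0.2, 1.0]]
powers = iterate_power_control([1.0, 1.0], gains, noise=0.1, target=3.0)
# At convergence, each link meets the SINR target with minimal power,
# rather than everybody transmitting at maximum.
```

Each sender needs only its own measured SINR, which is what makes such schemes attractive for distributed mesh nodes; when the target is infeasible, however, the iteration diverges, which is exactly the regime that dense urban deployments risk entering.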
Tightly coupled with the physical-layer needs is a set of demands at the MAC layer. While advances at the physical layer provide the basic mechanisms, the MAC layer must determine how to use them: for example, under which conditions the power should be increased or decreased to trade off the probability of correctly receiving a packet against the interference caused to neighboring access points. A strategy in which every sender keeps its transmission power at the maximum is simply not going to work. Therefore, enhanced collaboration between the physical and MAC layers is required. A second line of work must deal with innovative MAC protocols. The current random-access protocol, carrier sense multiple access with collision avoidance (CSMA/CA), is far from efficient and fair. Is a time-division multiple access (TDMA) approach better, and in particular, is it feasible when the schedule must take multiple distributed nodes into account? On the other hand, a TDMA solution would resolve many issues. In particular, it would allow ISPs to offer service-level agreements and different service classes. Such guarantees are necessary to create the desired revenues from mesh networks. Moreover, TDMA systems are likely to allow for a simple solution to multihop unfairness and performance degradation.
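The TDMA feasibility question can be made concrete with a standard abstraction: build a conflict graph whose vertices are mesh links and whose edges connect links that interfere, then assign slots so that no two conflicting links share one. The sketch below is a minimal, centralized greedy assignment over a hypothetical three-hop chain; a real distributed scheduler would be considerably more involved.

```python
# Minimal sketch of TDMA slot assignment by greedy colouring of a
# conflict graph: vertices are mesh links, edges connect links that
# would interfere if active in the same slot. Topology is hypothetical.

def assign_slots(conflicts):
    """Give each link the smallest slot unused by any conflicting link.
    conflicts: dict mapping link -> set of conflicting links."""
    slots = {}
    for link in sorted(conflicts):          # deterministic order
        used = {slots[n] for n in conflicts[link] if n in slots}
        slot = 0
        while slot in used:
            slot += 1
        slots[link] = slot
    return slots

# A three-hop chain A-B-C-D: adjacent links share a node and conflict.
conflicts = {
    "AB": {"BC"},
    "BC": {"AB", "CD"},
    "CD": {"BC"},
}
schedule = assign_slots(conflicts)
# AB and CD can reuse the same slot; only BC needs a second one, so the
# chain runs on a 2-slot frame and spatial reuse is preserved.
```

Even this toy version shows why TDMA helps with multihop unfairness: every link is guaranteed a recurring slot, instead of competing for the channel under random access.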
At the network layer, the key challenge is to optimize the usage of the underlying capacity. This task is extremely challenging given the need to coordinate multiple distributed mesh nodes and the wide heterogeneity of mesh nodes and channels. Which routing metrics show the best performance and best match the application needs? Is multipath routing a way to optimize capacity usage? How can we integrate routing in a mesh with routing in the Internet? All of these questions require fundamental analysis and experimental evaluation before they can be answered. However, we note a recent interest in multipath routing or, more generally, in diversity. Even in the Internet, the assumption that only a single path is used is currently being questioned, because alternative paths are likely to exist that are less loaded and therefore offer better application-level performance. If diversity were integrated as a fundamental concept into a future Internet architecture, it could also help to improve the performance of wireless mesh networks.
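One concrete answer to the routing-metric question from the mesh literature is the expected transmission count (ETX), which weights each link by the expected number of 802.11 transmissions, including retries, needed to deliver a frame across it. The sketch below assumes hypothetical per-link delivery ratios and runs plain Dijkstra over cumulative ETX; it is an illustration of the metric, not of any specific deployed protocol.

```python
import heapq

# Minimal sketch of the expected-transmission-count (ETX) metric:
# a link's cost is 1 / (d_f * d_r), the expected number of 802.11
# transmissions (with per-frame acknowledgments) needed to deliver
# a frame. Delivery ratios below are hypothetical, not measured.

def etx(forward_ratio, reverse_ratio):
    return 1.0 / (forward_ratio * reverse_ratio)

def best_path(graph, src, dst):
    """Dijkstra over cumulative ETX; graph: node -> {neighbor: etx}."""
    queue = [(0.0, src, [src])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for neighbor, weight in graph[node].items():
            if neighbor not in seen:
                heapq.heappush(queue,
                               (cost + weight, neighbor, path + [neighbor]))
    return float("inf"), []

# A lossy one-hop link (50% delivery each way -> ETX 4) versus a clean
# two-hop detour (ETX ~1.23 + ~1.23): ETX prefers the detour, whereas
# plain hop count would pick the lossy direct link.
graph = {
    "A": {"C": etx(0.5, 0.5), "B": etx(0.9, 0.9)},
    "B": {"C": etx(0.9, 0.9)},
    "C": {},
}
cost, path = best_path(graph, "A", "C")
```

The example captures why metric choice matters so much in a mesh: minimizing hops and minimizing expected transmissions can select entirely different paths over the same topology.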
At the transport layer, we face two challenges. At the current stage, we know that TCP implementations do not perform well over multihop wireless networks. Thus, it is necessary to tune and adapt TCP mechanisms to deal with large round-trip time (RTT) variations, path asymmetries, and channel conditions that vary at different time scales. The challenge is to devise solutions that achieve high throughput in both wired and wireless networks, or to maintain different TCP implementations and find a way to dynamically choose one based on the underlying network.
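Why large RTT variation hurts TCP can be seen directly in its retransmission-timeout estimator, the SRTT/RTTVAR smoothing of RFC 6298: the 4×RTTVAR term inflates the timeout on jittery multihop paths, delaying loss recovery. A minimal sketch follows, with hypothetical RTT samples; the RFC's additional 1-second minimum clamp is omitted so the estimator's behavior is visible.

```python
# Minimal sketch of TCP's retransmission-timeout estimator
# (SRTT/RTTVAR smoothing per RFC 6298). RTT samples are hypothetical;
# the RFC's 1-second minimum RTO clamp is omitted for clarity.

def rto_trace(samples, alpha=1/8, beta=1/4):
    # First sample initializes the estimator (RFC 6298, step 2.2).
    srtt, rttvar = samples[0], samples[0] / 2
    rtos = [srtt + 4 * rttvar]
    for r in samples[1:]:
        # Update RTTVAR before SRTT, as the RFC specifies.
        rttvar = (1 - beta) * rttvar + beta * abs(srtt - r)
        srtt = (1 - alpha) * srtt + alpha * r
        rtos.append(srtt + 4 * rttvar)
    return rtos

# A stable wired-like path versus a jittery multihop wireless path,
# both with the same mean RTT (100 ms): the jittery path ends with a
# far larger timeout, so losses are detected much later.
stable = rto_trace([0.100] * 8)
jittery = rto_trace([0.020, 0.180, 0.030, 0.170,
                     0.040, 0.160, 0.050, 0.150])
```

Since the two traces share the same mean RTT, the gap between their final timeouts is caused entirely by variance, which is precisely the quantity that multihop wireless paths inflate.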
Finally, at the application layer, we see one dominant question: whether there is such a thing as a killer application for mesh networks. It is unlikely that current applications require significant changes in their behavior depending on whether they are deployed over a mesh network or a wired network; it can be assumed that the lower-layer protocols take care of the difference. That is, VoIP applications require routing based on delay minimization, whereas multimedia or peer-to-peer applications are likely to prefer routing protocols that achieve high bandwidth. However, a killer application would push the limits and requirements of future mesh networks in a specific direction.
Towards achieving the above goals, we should be aware that three types of work are required to make progress. First, at the theoretical level, work is required to help us understand the behavior of protocols. For example, we still largely do not know how 802.11 MACs perform over multihop backhaul networks in real deployments: how exactly is data forwarded from one hop to the next? This knowledge is vital, for example, to foster new MAC-layer protocols that rely on random access but do not suffer severe throughput and fairness drawbacks. Second, novel protocols are needed that significantly improve performance. We often see research proposals that achieve improvements of 10 or 20%. Such small advances do not help us make progress; instead, protocols are needed that double, triple, or multiply the throughput n-fold. Finally, we need solutions that are experimentally evaluated and tested under a variety of conditions. Over the last decades, for example, a plethora of routing protocols and enhancements thereof has been proposed, yet we still do not know how they would perform in a real network. In fact, they often perform well under one specific constraint but have severe drawbacks under others. It is vital for progress that protocols be experimentally evaluated.
3.2. Security
Providing security must be one of the most dominant objectives in wireless mesh network research in the near future. Without properly securing wireless networks, it is likely that users will not use wireless mesh networks, as seen in the case of San Francisco. But how does one secure a wireless mesh network? The good news is that security in wireless mesh networks often coincides with security in wired networks. Because the topology is known, mesh nodes know their neighbors and can ask for identification. Currently, the worst attack scenario is probably jamming, as jamming all frequencies leaves no room for automated countermeasures. However, jamming a network requires that the attacker be near the mesh or that a jamming device be installed near it; in either case, the jamming device can be located by following its radiation pattern.
For all other attacks, we repeat the requirements of Yang et al. [11]. In future work, the main directions are as follows: (i) any proposed security solution must be critically evaluated, including vulnerability analysis, measurements, and emulations; and (ii) security protocols must be resilient and robust, possibly even against unknown attacks. Under no circumstances should a security protocol proposal rely on idealistic assumptions.
3.3. Economy
At the economic level, we identify three key directions. First, protocols and mechanisms must be built into wireless mesh networks to provide carrier-grade services. These services are a vital requirement for ISPs to create revenues. To enable carrier-grade services, protocols must be designed to achieve predictable performance and allow for quality differentiation. At the MAC layer, TDMA could be an option, but similar efforts are required at all levels; for example, streaming services must be deployed. Moreover, AAA (authentication, authorization, and accounting) and related mechanisms must be built into meshes. In contrast to wired networks, where service guarantees are achieved today through overprovisioning, such an approach is clearly not feasible in a wireless world, at least not by scaling bandwidth.
Second, related to carrier-grade services is the question of how much spectrum is needed for wireless technology. As discussed above, the increasing deployment of wireless technology causes interference and is therefore already the main "killer" of performance. Adding more spectrum certainly helps. The key question is whether the spectrum should remain free or be licensed. Clearly, for a TDMA system to work, a licensed spectrum is a precondition, as otherwise any random-access technology in the same frequency band would interfere with the TDMA schedule. Discussions about licensing small frequency bands to ISPs at relatively low cost are already ongoing in several countries.
Third, the killer application for meshes must be found. Actually, there are two types of killer applications: the one that motivates the deployment of mesh networks, and the one that motivates users to use the mesh. These two applications may differ or coincide. For the application that motivates deployment, its use must create revenues or savings that compensate for the investment in the mesh. Potential candidates here are meters for gas, heating, power, or parking, as well as remote surveillance and emergency response. For example, if all meters were equipped with cheap WiFi senders, they could be read remotely, saving the cost of sending personnel to homes. Remote surveillance may help police, fire departments, and ambulances obtain a picture of an emergency situation at an early stage and prepare the rescue accordingly. For users, video and TV streaming is often considered the killer application. However, are we really all so addicted to TV that we need to receive high-data-rate streams all the time? Or might location-based services find the right balance between providing useful information and preserving users' privacy? Thinking along these lines, it seems that the technological challenges are far better understood than the demands of users and society.