Contents

1. Introduction
2. The global extent of urban agriculture
3. Africa
3.1 Malaria and urban agriculture
3.2 Human waste and urban agriculture
3.3 Poverty and urban agriculture
4. Latin America and the Caribbean
4.1 The Cuban story
4.2 Innovative urban agriculture programs in South America
5. Asia
5.1 Urban agriculture and Asia’s urban poor
6. Eastern Europe
6.1 Urban agriculture in Revolutionary and post-Soviet Russia
6.2 Rooftop production
7. Pacific Island countries and territories
7.1 Increasing reliance upon imported food
7.2 Obesity
7.3 Cassava’s future
8. Conclusion

1 Introduction

Sitting alongside the mastery of language, art, and tools, the development of agriculture is widely recognized as one of humankind’s most important achievements. The taming of plants and animals in Syria and Palestine about 10,000 years ago set us on a path from which there was no turning back, and today many enjoy a breadth of diet and continuity of food supply that would have been unimaginable to our ancestors (Diamond 1991). Agriculture’s legacy extends well beyond food provision though—it precipitated the development of cities. As Blainey (2000) observed, “The town and the city were impossibilities before the development of farming”. Crops and stock simultaneously necessitated and supported aggregation of people and specialization of labor, the essential ingredients for growing a city. Somewhat ironically, today, agriculture is perceived largely as a rural pursuit, and it is included in the definition of the word rural in all major English dictionaries (Macquarie Dictionary 2003; Merriam-Webster 2012; Oxford English Dictionary 2000). The term urban agriculture, therefore, would be considered an oxymoron by many. Urban agriculture has been variously defined by several authors (Mwalukasa 2000; Smit et al. 1996; van Veenhuizen 2006) and we do not wish to clutter the literature with yet another definition; rather, we take the pragmatic view that it is simply agriculture within an urban or peri-urban setting.

Of course, cities are largely centers of commerce and most agricultural production does indeed occur well beyond their boundaries, and for good reason. Cities plainly cannot offer sufficient (let alone contiguous) space and consequent economies of scale for broad-acre grains and grazing activities, for example. To a first approximation, cities occupy about 300–700 thousand km² globally (Potere and Schneider 2007), yet the current extent of agriculture spreads across 48 million km² of the Earth’s surface, with cereal production alone accounting for about 6.8 million km² (World Bank 2012a), and the annual global vegetable and fruit harvests each cover an area roughly equivalent to that of cities—546 and 552 thousand km², respectively (FAO 2012a). Nevertheless, agriculture, particularly horticulture, never completely left the city and in many parts of the world it is now making an aggressive return (Fig. 1).

Fig. 1 Urban agriculture is practiced on an unparalleled scale in Havana, Cuba. Organopónicos, such as the one shown here, are one of the most common urban production systems in Cuba. They comprise simple raised beds with small retaining walls (canteros). Photograph supplied by Pamela Morgan

Cities are not static phenomena either. When the late Neil Armstrong placed his left foot on the surface of the moon, it was immediately hailed as the most profound step in our history; yet on May 23, 2007, an anonymous person took what is arguably the most significant step in humankind’s 200,000 years of existence when she or he crossed a city boundary and made the world population predominantly urban for the first time (Wimberley et al. 2007). Whether a Chinese rural peasant relocating to Beijing or a Highlander descending the hills into Port Moresby, undoubtedly this person was seeking a more prosperous life in the city. Unfortunately, many such migrants do not realize their dreams but rather find themselves living in increasingly crowded and resource-depleted conditions. Such unceasing rural-to-urban migration and the consequent fall in average household incomes are generally agreed to be the main driving forces behind the apparent expansion of urban agriculture in developing countries, although the precise contribution of urban agriculture to alleviating poverty has been difficult to quantify (Bryld 2003; Zezza and Tasciotti 2010). It has even been argued that for many developing countries, the Structural Adjustment Programs (SAPs) of the International Monetary Fund and the World Bank (the Bretton Woods Institutions), which are intended to increase household income through development of a well-managed free-market economy, have in fact led to a worsening fiscal situation for the urban poor, which in turn has driven greater reliance on urban agriculture (Bryld 2003; Demery and Squire 1996; Jamal 1985; Ratta and Nasr 1996).

Urban agriculture does not have free rein though and is often cast in a negative light by governments trying to convey a modern and progressive city image, particularly in developing nations (Freeman 1991; Gbadegesin 1991; Kironde 1992). Chaplowe (1998) notes that active disapproval and/or repression of urban agriculture is common in Africa, whereas in Asia it is often met with apathy and indifference and is thus a somewhat passive sector. Thaman (1995) observed that both attitudes are expressed among authorities in Pacific Island Countries and Territories (PICT).

Despite the potential significance of urban agriculture in developing countries, the practice is yet to be synthesized on a global scale in the academic literature. Existing reviews cover specific aspects of urban agriculture and are also restricted in their geographical coverage (Belevi and Baumgartner 2003; Bryld 2003; De Bon et al. 2010). Here, we undertake the ambitious task of providing an overview of urban agriculture throughout developing countries; developed countries are considered in an accompanying paper (Mok et al. 2013). The scope is thus parts of the world where developing countries—low- and middle-income countries, as defined by the World Bank (2012b)—predominate, and it therefore excludes entirely North America, Western and Northern Europe, and Australia and New Zealand. We are primarily concerned with the cultivation of food plants, the dominant form of urban agriculture (Zezza and Tasciotti 2010), rather than animal husbandry, aquaculture, or arboriculture. While we wish to paint a picture of the situation across the globe, an extensive and purely descriptive roll call of case studies would provide little structured understanding of the processes underlying and interacting with urban agriculture. Therefore, we have chosen to use the selected ports of call in this global tour not only to give a sense of the status and form of urban agriculture in different parts of the world but also to highlight various aspects of the activity in an attempt to provide insight into its nature as it stands and what it might or might not have to offer in the future. For example, we investigate the role of crises as a driving force for urban agriculture (Cuba and Russia), the link between sanitation and urban agriculture (West Africa), the topical (and tropical) issue of urban agriculture as a potential contributor to malaria (Africa), the role of active governance and training in supporting the development of urban agriculture (Cuba and South America), the relationship between industrial/urban pollutants and chemical contamination of produce (China), the relationships between women’s rights and urban agriculture (Africa), and the possible role urban agriculture might have to play in mitigating malnutrition on the one hand (Africa) and obesity on the other (PICT).

The choice of case localities for investigating these themes was dictated to some degree by the availability of statistics and published literature, and we stress that the topics are not necessarily exclusive to the places where we consider them. Malaria, for example, wreaks havoc across much of the tropics, not just Africa, and governance, malnutrition, and poverty alleviation are plainly relevant to all developing countries. Also, we have attempted to strike a balance between breadth of coverage in different regions and in-depth analyses, and have therefore saved ourselves and the reader from the burden of having to consider every city for which agricultural activity has been documented. Similarly, from a temporal perspective, while we devote much effort to descriptions of contemporary situations, we lean strongly on the tenet of the historian—one must learn from the past.

2 The global extent of urban agriculture

The current extent of urban agriculture across the globe is very poorly understood. Undoubtedly, the best-known estimate is that of Smit et al. (1996, 2001), who suggested over 15 years ago that 800 million people were actively engaged in urban agriculture (many as consumers), with 200 million farmers producing for the market. Given the frequency with which these numbers, particularly the former, are cited in both the peer-reviewed (Belevi and Baumgartner 2003; Bryld 2003; Omonona et al. 2006; Pearson et al. 2010; Ruma and Sheik 2010) and gray literature (Maxwell et al. 2000; Pandya 2012; United Nations Environment Program 2002), as well as in official and recent United Nations documents (FAO 2012b), it is worth considering their provenance. The values were based on a combination of “various official censuses and professional surveys” and “the authors’ [revised to “author’s (Jac Smit)” in the 2001 edition] experiences and observations”. The specific sources of the census data and how they were combined with subjective opinions are not revealed, and it is therefore impossible to gauge the uncertainty associated with the values, let alone their accuracy. The authors stressed that the estimates were intended to provide a “thumbnail sketch” only of the situation, but it is not clear whether they even serve this purpose; and in any case, this is not how they have been used in the literature, where they have most often been presented as facts rather than factoids.

A transparent estimate of the global extent of urban agriculture throughout the developing world is sorely needed, so here we use a recent survey of urban agriculture in developing countries (Zezza and Tasciotti 2010) as the basis for doing this. Zezza and Tasciotti (2010) surveyed agricultural practices at the household level in urban and rural environments in 15 countries across Africa (Ghana, Madagascar, Malawi, and Nigeria), Asia (Bangladesh, Indonesia, Nepal, Pakistan, and Vietnam), Eastern Europe (Albania and Bulgaria), and Latin America (Ecuador, Guatemala, Nicaragua, and Panama). Oceania was the only major geographic region omitted, but this would have negligible impact on any global estimate because the region accounts for <0.07 % of the population of the developing world. Among other factors, they reported the proportion of urban households participating in crop production, which they suggest is probably a very reasonable representation of the proportion of households engaged in urban crop production: of course, some city households may cultivate crops in rural areas rather than at home. The number of households, H, engaged in urban agriculture (crops only) in developing countries can be simply estimated as

$$ H=\sum_{j=1}^{4}\left( p_j\,\alpha_j^{-1}\sum_{i=1}^{n}\lambda_{ij} \right), $$

where, for the ith of n countries in the jth of four macro-geographic regions, p is the proportion of urban households partaking in urban agriculture, α is the number of people in a household, and λ is the country’s urban population size (World Bank 2012c). If we assume that the countries covered by Zezza and Tasciotti (2010) are regionally representative in terms of p, then this parameter can be described conservatively by the following uniform distributions, U(min, max): $p_{\text{Africa}} = U(0.29, 0.45)$, $p_{\text{Asia}} = U(0.04, 0.65)$, $p_{\text{E. Europe}} = U(0.18, 0.23)$, and $p_{\text{L. America}} = U(0.17, 0.65)$. Note that it would be inappropriate to weight the contributions of the individual countries to these distributions by population size because doing so would assume that the more populous countries are more truly representative of the proportion of households engaged in urban agriculture in the region as a whole, when in fact it is possible that the less populous countries might be closer to the true macro-geographical mean. This is particularly important for Asia, not only because data for China and India—which collectively account for 73 % of the population of the developing world—are not included in Zezza and Tasciotti’s study but also because intercountry variation in p is much greater for Asia than for the other macro-geographical regions. In other words, we have assumed that the proportion of households participating in urban agriculture in Vietnam (0.65) is as likely as that in Indonesia (0.10) to represent Asia as a whole, despite Indonesia having over 2.5 times Vietnam’s population. Bongaarts (2001) presented means and standard deviations for household sizes in Asia, Latin America, the Near East/North Africa, and Sub-Saharan Africa. Therefore, if we assume that Sub-Saharan Africa provides a suitable generalization of Africa as a whole, then the following zero-truncated normal distributions, N(mean, SD), can be used: $\alpha_{\text{Africa}} = N(5.3, 0.7)$, $\alpha_{\text{Asia}} = N(5.1, 0.8)$, and $\alpha_{\text{L. America}} = N(4.8, 0.5)$. Eurostat (2012) provides household sizes for six of the United Nations’ 10 Eastern European countries (United Nations Statistical Division 2011), and this information can be used to construct the following zero-truncated normal distribution: $\alpha_{\text{E. Europe}} = N(2.75, 0.16)$.

Distributions are used for p and α because of the absence of available data for every country comprising each region. For λ, on the other hand, estimates for every country in each region are available. Countries were assigned to regions in accordance with the statistical divisions of the United Nations, with the category “Latin America and the Caribbean” being treated as synonymous with Latin America (United Nations Statistical Division 2011). A probability distribution for H was generated by taking one million Monte Carlo samples of each of the distributions, and confidence intervals were determined using the percentile method (Buckland 1984). This produced a median estimate of 266 [207, 349 CI90] million households engaged in urban crop production in developing countries, with the following regional estimates (all in millions): Africa = 29 [21, 34], Asia = 182 [129, 260], Latin America = 39 [18, 63], and Eastern Europe = 15 [13, 17] (Fig. 2). All distributions exhibit a slight positive skew, owing to the structure of the model and the multiplicative central limit theorem. Uncertainties associated with both parameters for Asia—the number of people in the household and the proportion of households actively engaged in urban agriculture—had the largest and roughly equal effects on the global prediction (Fig. 2, tornado plot of Spearman rank correlation coefficients).
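To make the procedure concrete, the following minimal Python sketch reproduces the structure of the Monte Carlo calculation described above. The distribution parameters are those given in the text, but the summed regional urban population totals (λ) are rough illustrative placeholders rather than the World Bank (2012c) country-level figures actually used, so the output will approximate, not reproduce, the published estimates.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 1_000_000  # Monte Carlo samples

def truncated_normal(mean, sd, size):
    """Zero-truncated normal: redraw any non-positive samples."""
    x = rng.normal(mean, sd, size)
    while np.any(x <= 0):
        bad = x <= 0
        x[bad] = rng.normal(mean, sd, bad.sum())
    return x

# p: proportion of urban households in crop production, U(min, max)
p = {
    "Africa":     rng.uniform(0.29, 0.45, N),
    "Asia":       rng.uniform(0.04, 0.65, N),
    "E. Europe":  rng.uniform(0.18, 0.23, N),
    "L. America": rng.uniform(0.17, 0.65, N),
}

# alpha: household size, zero-truncated N(mean, SD)
alpha = {
    "Africa":     truncated_normal(5.3, 0.7, N),
    "Asia":       truncated_normal(5.1, 0.8, N),
    "E. Europe":  truncated_normal(2.75, 0.16, N),
    "L. America": truncated_normal(4.8, 0.5, N),
}

# Summed urban population per region (persons): PLACEHOLDER values for
# illustration only, not the World Bank (2012c) country-level data.
urban_pop = {
    "Africa":     0.40e9,
    "Asia":       1.90e9,
    "E. Europe":  0.17e9,
    "L. America": 0.47e9,
}

# H = sum_j( p_j * alpha_j^-1 * sum_i lambda_ij )
H_total = sum(p[r] * urban_pop[r] / alpha[r] for r in p)

median = np.median(H_total)
lo, hi = np.percentile(H_total, [5, 95])  # percentile-method 90 % CI
print(f"H = {median / 1e6:.0f} [{lo / 1e6:.0f}, {hi / 1e6:.0f} CI90] million households")
```

Because each regional term is a product and quotient of random variables, the resulting distribution is slightly right-skewed, which is the behavior noted above.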

Fig. 2 Probability distributions of the number of households involved in urban agriculture globally and in the developing world. The median is represented by the heavy vertical line. The left and right dashed vertical lines respectively denote the 5th and 95th confidence limits. Note that the developing world and Asian panels share a different abscissa scale from the others. The tornado plot in the bottom left panel represents the influence of uncertainty in input parameters on uncertainty in the prediction of the estimate for the developing world as a whole

3 Africa

3.1 Malaria and urban agriculture

In tropical climes, urban agriculture is often seen by authorities as posing an unacceptable risk to public health through the provision of breeding sites for malaria-vectoring mosquitoes of the genus Anopheles, and this is sometimes used as a rationale for banning the practice (ZANIS 2009). The larval stages of mosquitoes develop in stagnant water, hence the concern that irrigated urban agriculture offers a breeding ground for vectors. The contentious issue of urban agriculture and malaria has been well investigated on the African continent, where malaria well and truly has its greatest global impact (Klinkenberg et al. 2008; Matthys et al. 2006; Stoler et al. 2009). In Dar es Salaam, Tanzania, Dongus et al. (2009) found that location within lowland areas, proximity to a river, and presence of relatively impervious soils were statistically significant (P < 0.05) geographic predictors of the presence of anopheline larvae, whereas seedbed type, mid-size gardens, irrigation by tap water, rain-fed agriculture, and cultivation of leguminous crops or fruit trees were significant negative predictors. Overall, the proportion of habitats containing anopheline larvae was 1.7 times greater [1.56, 1.92 CI95] in urban areas with agriculture than in those without.

Similarly, a study in Kumasi, Ghana, comparing night catches of adult anopheline mosquitoes in peri-urban and urban locations with agriculture to those in locations without agriculture found significantly higher (P < 0.05) abundances in the former (one order of magnitude higher in both the rainy and dry seasons) (Afrane et al. 2004). All specimens were confirmed by molecular techniques to be Anopheles gambiae, a well-known vector of the malaria protistan parasite Plasmodium. But the authors noted the potential for confounding owing to urban agriculture in Kumasi mostly being situated in inland valleys, which might naturally provide suitable breeding habitat for mosquitoes. Nonetheless, a cross-sectional study in Ghana’s capital, Accra, where there were no obvious confounding factors, also revealed significantly higher (P = 0.008) malaria incidence among children living near urban agriculture (16.4 %) than among those living in solely residential or industrial areas (11.4 %) (Klinkenberg et al. 2005). The link between irrigated urban agriculture and malaria in Accra was made more direct through the study of Klinkenberg et al. (2008), in which trained (and brave) personnel in three urban agriculture and three non-urban agriculture areas captured mosquitoes landing on their exposed legs at night. Not only did about three times as many A. gambiae adults bite urban agriculture participants (urban agriculture: geometric mean = 8.1 [5.1, 13.0 CI90] bites per person per night; residential/industrial = 2.8 [1.8, 4.3]) but the entomological inoculation rate, which combines the rate of these bites with the frequency of Plasmodium in the mosquitoes, was also markedly higher (19.2 cf. 6.6; confidence intervals not reported).
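As background, and as a general formulation from the malaria literature rather than a detail reported by Klinkenberg et al. (2008), the entomological inoculation rate (EIR) is conventionally computed as the product of the human biting rate and the sporozoite rate:

$$ \mathrm{EIR}=ma\times s, $$

where ma is the number of mosquito bites per person per unit time and s is the proportion of captured mosquitoes carrying infective Plasmodium sporozoites; the product is usually scaled to infective bites per person per year.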

Heavy use of insecticides in urban agriculture and the consequent selection pressure also raise the possibility of resistance development in vector mosquito populations. Reporting permethrin resistance rates of 24–82 % in A. gambiae populations from city vegetable farms in Benin, Yadouleton et al. (2009) claimed to have demonstrated that urban agriculture led to resistance development. Unfortunately, resistance was not tested for populations in areas where no urban vegetable production took place, and this lack of control sites and of information on background resistance levels means that it is impossible to ascribe the cause to urban agriculture. A subsequent study in Cameroon, however, does appear to have demonstrated a link between urban agriculture and development of resistance in anopheline larvae (Antonio-Nkondjio et al. 2011). Bioassays were used to compare mortality rates of larvae collected at three different types of sites: polluted—“semi-permanent water containing domestic wastes or organic product in decomposition”; non-polluted—“temporary water collections created after rains or resulting from a clean water source and mainly without any sign of organic pollution”; and cultivated [urban agriculture]—those that are “created by the practice of agriculture” and which contain “furrows and irrigation pits.” In both cities, resistance levels to DDT and permethrin were much higher at the urban agriculture sites than at the non-agricultural sites, and this was also observed for bendiocarb in Yaoundé but not in Douala (Table 1). Notably, resistance to malathion had not yet evolved in either city.

Table 1 Mortality (percentage) of Anopheles gambiae adults following 1 h of exposure to selected pesticides. The adults were reared from larvae collected at urban agriculture (UA), polluted (P), and non-polluted (NP) sites in Cameroon

It is likely that urban agriculture’s contribution to malaria risk is highly situation specific, and simplistic policies that outright prohibit the practice may do more harm than good, because even where it does pose a discernible malaria risk, this needs to be weighed against the benefits urban agriculture offers in terms of food security and livelihoods. Moreover, there are many other human activities in cities that can produce suitable breeding conditions for anopheline mosquitoes, and there may be circumstances where urban agriculture is unfairly labeled as the culprit. A large cross-sectional study in Malindi, Kenya, suggested that—after controlling for confounding variables associated with distance to the city center, drainage, access to resources, and population density—household-level urban agriculture may provide relatively less anopheline breeding habitat than other unnatural waters, such as open water tanks, drainage ponds and channels, and shallow depressions and wheel ruts in poorly drained areas (Keating et al. 2004). Similarly, Robert et al. (1998) found that anopheline larval densities in market garden wells in Dakar, Senegal, were poor predictors of peak adult densities. On the other hand, Klinkenberg et al. (2008) have suggested that the provision of adult resting sites in urban crops may be of equal or greater epidemiological significance than the supply of larval habitats.

It is also probably inappropriate to consider irrigated urban agriculture as a single entity in the context of malaria, as the form of irrigated agriculture is likely to be significant. Matthys et al. (2006) found that the prevalence of infection with Plasmodium in children under 15 years of age was significantly affected by the agricultural setting, with the highest rates (55–78 %) reported in communities practicing mixed-cropping and the lowest prevalence (21 %) observed in an area characterized by a mosaic of traditional smallholder rice plots and large-scale rice production. Complicating the picture even further, Brieger (2011) has sagely noted that urban malaria is not necessarily urban in origin, because urban residents can become infected when working on rural farmlands outside the city.

3.2 Human waste and urban agriculture

Africa, particularly Sub-Saharan West Africa, has served as a test bed for studying another issue highly pertinent to urban agriculture in developing countries throughout the world, namely, the use of human waste—fecal sludge and wastewater—on urban crops. Here, we consider both edges of the double-edged sword that is human waste: the jagged one laden with devastating pathogenic diseases and the equally sharp blade that cuts through impoverished soils and a paucity of irrigation water to provide food, support livelihoods, and lessen the impact of waste discharge to the environment.

The use of fecal sludge on food crops is technically illegal in Ghana, although authorities tend to turn a blind eye to the practice. A survey in Ghana’s northern municipalities of Tamale and Bolgatanga revealed that fecal sludge was used by 64 % of farmers to improve soil fertility and increase maize and sorghum yields (Cofie et al. 2005). Two- to threefold yield increases were reported by farmers using sludge, and it was preferred over inorganic NPK fertilizer because it is markedly cheaper, with a 50-kg bag of NPK costing around US$17 and fecal sludge being free, save for a ∼US$2 tip to the truck driver per load (Cofie et al. 2005). Farmers favor sludge in the form of stabilized septage, but this is difficult to obtain and consequently accounts for just under a third of sludge use, with farmers usually having to settle for either unstabilized public toilet sludge or a mixture of this and septage (Cofie et al. 2005). Fecal sludge is typically applied to a crop by the simple method of manual spreading, usually around November, but occasionally a technique known as the “pit method” is used, whereby alternating layers of sludge and straw are stored in a large pit and allowed to compost for a few months before use on the crop. Because of disease concerns, especially from helminths, its use is typically restricted to cereal crops, especially maize, sorghum, and millet. There is a large and steady supply of sludge in Ghana because only 5 % of the country’s households are connected to a sewer (Cofie et al. 2005). While not standard practice, fecal sludge in Ghana is sometimes dewatered using drying beds (Fig. 3), and well-designed drying beds have been demonstrated to produce sludge with ≥20 % total solids content (Cofie et al. 2006). Moreover, the high organic matter content of this sludge (61 %) means that it can be composted with other organic solid waste to produce a high-quality fertilizer.

Fig. 3 Fecal sludge being delivered (A) and discharged (B) to a dewatering (drying) bed (C) in Accra, Ghana. The signs on either side of the truck indicate the value of fecal sludge as a resource. Photographs taken by A.J. Hamilton

The use of fecal sludge is not without its problems though. In Ghana, complaints of itchy feet and foot rot were reported by 22 % of farmers using sludge, and symptoms of headache and catarrh (inflammation of the mucous membranes) were described by 2 % (Cofie et al. 2005). Problems associated with the use of sludge extend beyond physical health, with Ghanaian farmers complaining of malodor (47 % of farmers), transport issues (14 %), negative perceptions of consumers (4 %), and public mockery (3 %) (Cofie et al. 2005). Also, competition for fecal sludge was cited as a frustration by 24 % of farmers, with reports of having to wait for 1–2 months not uncommon (Cofie et al. 2005). Nonetheless, the increased yields it delivers will undoubtedly ensure that farmers continue to view it as a valuable resource. Residents, on the other hand, have more negative perceptions of sludge use, with a survey of the Efutu community in Ghana’s Cape Coast Metropolitan Area revealing that 84 % agree that “Human excreta is a waste and suitable only for disposal” (Mariwah and Drangert 2011).

Wastewater irrigation of crops is common throughout Africa, and, as with the use of fecal sludge, the practice has been particularly well documented for Sub-Saharan West Africa, especially Ghana, owing largely to the presence of a major International Water Management Institute research station there. In Ghana’s capital, Accra, it has been estimated that of the 80 ML of wastewater generated every day, urban vegetable production alone uses up to 11.3 ML (14 %; Lydecker and Drechsel 2010). Wastewater irrigation offers significant benefits (Raschid-Sally and Jayakody 2008), not the least being accessibility and continuity of supply (Keraita et al. 2008a, b), and can confer higher economic returns to farmers (Drechsel and Dongus 2010; Owusu et al. 2012). Irrigated urban vegetable production plays an important role in Ghana’s cities, covering ∼100 ha in Accra (Van Rooijen et al. 2010) and producing 60–90 % of the perishable vegetables consumed in Kumasi (Obuobie et al. 2006). Wastewater of varying quality/origin is available from several sources in these cities. This includes the use of greywater (i.e., all wastewater minus the blackwater/toilet component) collected in open channels, raw sewage directly mined from sewer pipes, as well as partially diluted wastewater (i.e., a mixture of stormwater, sewage, greywater, and possibly other inputs) in open street drains and urban streams (Fig. 4). The fecal contamination of these waters is of particular concern when used to irrigate vegetables consumed without cooking, such as exotic salad vegetables and lettuce (Amoah et al. 2005; Amponsah Doku 2010). While farmers typically perceive risks to consumers as low (Keraita et al. 2008a, b; Owusu et al. 2012), evidence of unsafe levels of fecal contamination on produce has been documented (Amoah et al. 2007).

Fig. 4 Sources of wastewater for irrigating urban vegetable plots in Accra, Ghana. A Greywater collection for irrigation of an adjacent crop. B Sewer-mining to flood-irrigate a crop. A hole punctured in the bottom of the pipe is stoppered with a rag (placed on top of the pipe in this photo) that can be removed when irrigation is required. C Open street drain carrying stormwater, sewage, and greywater. D Manual collection of water from an urban stream immediately downstream from a raw sewage discharge point. Photographs by A.J. Hamilton

Advanced treatment technologies offer the ideal solution for minimizing human health risks but they remain beyond the reach of under-financed local water and sanitation authorities and poor urban farmers. Acknowledging this reality, considerable work has been conducted in Ghana to determine both on-farm and post-harvest practices that can be implemented to reduce human health risks. Low-cost measures such as sedimentation ponds and filtration prior to irrigation, irrigation techniques that minimize contact with plant surfaces (e.g., low-head bucket drip kits and reduced height of watering cans to lessen the spread of water directly onto the plants’ surface), removal of outer leaves prior to consumption, and washing of produce with recommended sanitizers and for longer contact times, have been promoted (Amoah et al. 2011). Furthermore, cessation of wastewater irrigation several days before harvest can in some instances be an effective means of markedly reducing enteric virus infection risks associated with consumption of vegetable crops eaten raw (Hamilton et al. 2006; Keraita et al. 2008a), but of course, an alternative, clean source of water would need to be available during this wastewater irrigation withholding period. A combination of intervention measures at different levels (multi-barrier approach) appears to hold the greatest promise for reduced consumer health risks, but will depend largely on uptake rate (Amoah et al. 2011). An analysis conducted by Seidu and Drechsel (2010) found that cost-effectiveness ratios were highly sensitive to the adoption rates of non-treatment interventions, such that in a two-barrier approach (interventions on- and off-farm) 75 % adoption of at least one intervention was required. Yet current awareness amongst farmers is very low, with >85 % in Tamale possessing no knowledge of safe irrigation methods (Abubakari et al. 2011). Karg and Drechsel (2011) highlighted the challenges in effecting behavioral change. They found that risk awareness was very low among farmers and consumers alike, and they suggested that low-cost practices are important but insufficient on their own to elicit change; financial benefits, in terms of higher revenues, were believed to be more powerful incentives.
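The arithmetic underpinning the multi-barrier approach is simple: if barriers act independently, their log₁₀ pathogen reductions add. The Python sketch below illustrates this logic only; the reduction credits assigned to each barrier are hypothetical placeholders, not values measured in the Ghanaian studies cited above.

```python
# Illustrative multi-barrier calculation. Assumes independent barriers whose
# log10 pathogen reductions are additive; all credit values are hypothetical.
CESSATION_DAYS = 3        # days wastewater irrigation is withheld pre-harvest
DIE_OFF_PER_DAY = 0.65    # assumed log10 die-off per day on the crop surface

barriers_log10 = {
    "on-farm sedimentation/filtration": 0.5,   # placeholder credit
    "low-contact irrigation (drip kit)": 1.0,  # placeholder credit
    "pre-harvest cessation": DIE_OFF_PER_DAY * CESSATION_DAYS,
    "washing produce with sanitizer": 1.0,     # placeholder credit
}

total = sum(barriers_log10.values())
print(f"Combined reduction: {total:.2f} log10 "
      f"(surviving pathogen fraction: {10 ** -total:.1e})")
```

Non-adoption of a barrier simply removes its term from the sum, which is consistent with Seidu and Drechsel’s (2010) finding that cost-effectiveness hinges on adoption rates.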

While Africa has been the subject of much study into the management of disease risks associated with use of wastewater in urban agriculture, Keraita et al. (2008a, b) astutely note that when it comes to chemical risks, Asia has been the focal point of research. They suggest that this is largely a function of the much lower levels of industrialization in Africa, and thus less concern about heavy metals and other contaminants. Chemical risks associated with the use of human waste to grow food have not completely escaped attention in Africa though (Lawal and Audu 2011; Mapanda et al. 2005; Muchuweti et al. 2006; Snyman et al. 2000), but they remain a distinctly secondary concern behind microbiological hazards, and we therefore consider them in a broader discussion of chemical contamination of urban agricultural crops in the Asia section of this review. Also, the various benefits and risks associated with wastewater irrigation have been reviewed comprehensively elsewhere (Grant et al. 2012; Hamilton et al. 2007).

3.3 Poverty and urban agriculture

Owing to the extensive body of research on the interactions between human waste and urban agriculture, as well as between urban agriculture and malaria, the literature on Africa is heavily biased towards West Africa and several equatorial African countries. But with the backdrop of massive rural-to-city migration, urban agriculture appears to play a crucial role in the maintenance of livelihoods and the provision of food for thousands, possibly millions, in many southern African cities. In Zimbabwe’s capital, Harare, a burgeoning population, high unemployment, and inflation have driven the development of inner-city crop cultivation (Drakakis-Smith et al. 1995). It takes place largely on land adjacent to high-density housing, which is poorly drained and unsuitable for building, but also on road and railway verges and the banks of canals and ditches. The most commonly grown crops are maize, groundnuts, sweet potatoes, and leafy green vegetables. Cultivation on state land is illegal, and in the years of Zimbabwe African National Union–Patriotic Front rule authorities were not averse to destroying crops, the rationale being that urban agriculture poses environmental risks, such as contamination and sedimentation of waterways (Drakakis-Smith et al. 1995). Even today, under the power-sharing arrangement with the Movement for Democratic Change–Tsvangirai, the Harare City Council has received permission from the Zimbabwe Republic Police to slash urban crops (Anon 2011). In addition to siltation and malaria arguments, public protection has been given as a rationale, especially in relation to maize crops, which have been argued to serve as hideouts for thieves and muggers (Anon 2011). Similarly, in Lusaka, the capital of neighboring Zambia, the City Council instigated slashing of maize crops at the height of the 1992 drought, when food security was perilous (Drescher 1997).

But urban agriculture is not always met with such active official disapproval. South Africa’s second largest city, Durban, has a well-established history of hosting much urban horticultural activity. In the early 1990s, Cross et al. (1992) found that across various types of settlements on the city’s outskirts, the percentage of households engaged in urban agriculture averaged 30 %. Similarly, May and Rogerson (1995) found that 25 % of households in the KwaZulu region on Durban’s urban fringe cultivated food crops. Namibia, the driest country in sub-Saharan Africa, has a weak agricultural base and is heavily reliant upon imports from South Africa; but urban agriculture likely plays a significant part in feeding the nation (Dima et al. 2002). Twenty-three types of vegetables and fruit trees are grown in Windhoek and Oshakati alone (Dima et al. 2002). The dominant crops are maize, beans, pumpkin, watermelon, sweet potatoes, and pepper. Most people grow food for family consumption, although 13 and 17 % in Windhoek and Oshakati, respectively, supplement their income with it.

The contribution of urban agriculture to the alleviation or mitigation of malnutrition in poor communities in sub-Saharan Africa, or indeed anywhere in the world, has to date been poorly studied, despite a wealth of anecdotal claims (Alnwick 1981; Lee-Smith et al. 1987; Pinstrup-Andersen 1989). In their study of malnutrition in children under 5 years of age in Kampala, Uganda, Maxwell et al. (1998) found that—after controlling for potentially confounding variables—the frequencies of occurrence of underweight and stunted children were significantly lower in families that were involved in urban agriculture (P < 0.05, Chi-squared test); but no significant difference in wasting (abnormally low weight for height) was observed. In accordance with the WHO standards, a child was considered to be stunted, underweight, or wasted if her/his Z score on the normal distribution was <−2 for the height-for-age, weight-for-age, or weight-for-height index, respectively. It is also noteworthy that, when considering the urban farming families alone, there were no significant differences in stunting, underweight, or wasting across the four socioeconomic groups (“low” through to “upper-middle/high”) (ANOVA, P > 0.05), whereas significant group differences (P < 0.05) in stunting and underweight frequency, but not wasting, were observed across the non-farming population. To the best of our knowledge, this is the only study to directly measure the role of urban agriculture in abating malnutrition, or, more precisely, its morphometric effects. The lack of comparative studies for other parts of the world and for older children and adults is disconcerting, to say the least. Moreover, while a useful first step, the study of Maxwell et al. (1998) simply tests relationships between urban farming families and indicators of malnutrition; it does not quantify the significance or relevance of the different pathways through which the practice can improve health. Are most of the benefits reaped simply through direct access to nutritious food, or are increased cash income (and thus ability to purchase food and health care) and liberation of mothers from external employment (and thus time and resources to devote to childcare) also significant factors? (Maxwell et al. 1998; UNICEF 1990). In short, a far better understanding of urban agriculture’s contribution to assuaging malnutrition is sorely needed, particularly in the context of the broader debate on urban agriculture and population health, which includes negative health consequences such as fecal-borne disease, malaria, and chemical contamination of produce (as we shall see in the section on Asia).
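To make the anthropometric criteria explicit, the classification reduces to simple threshold tests on z-scores. This is a minimal sketch: real assessments derive the z-scores from age- and sex-specific WHO growth-reference tables, which are not reproduced here, and the example inputs are hypothetical.

```python
def classify_malnutrition(z_height_for_age, z_weight_for_age,
                          z_weight_for_height, cutoff=-2.0):
    """Apply the WHO-style -2 z-score cutoffs described above.

    Each z-score is (observed measure - reference median) / reference SD,
    computed against age- and sex-specific growth references.
    """
    return {
        "stunted": z_height_for_age < cutoff,       # low height-for-age
        "underweight": z_weight_for_age < cutoff,   # low weight-for-age
        "wasted": z_weight_for_height < cutoff,     # low weight-for-height
    }

# Hypothetical example: a stunted child of roughly normal body proportions
print(classify_malnutrition(-2.3, -1.1, -0.4))
# {'stunted': True, 'underweight': False, 'wasted': False}
```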

Sub-Saharan Africa has also hosted several studies on the relationships between urban agriculture and women’s freedom, protection, and work burden. In many countries, it is mostly women who are responsible for the cultivation of urban agricultural plots (Maxwell 1995; Mbiba 1995). Bryld (2003) suggests two reasons for this, namely that farming of small plots close to the home can readily be accommodated into women’s daily work routines and that men generally perceive urban agriculture as a marginal activity rather than a serious business endeavor. Female farmers in Malawi, for example, derive a greater percentage of their overall income from urban agriculture than males do (Mkwambisi et al. 2011). But they are typically unable to command the same prices as males for their produce because they usually sell it locally rather than directly to supermarkets. Income is only one benefit though, with urban farming by women in South Africa and Sudan having been documented to improve the diversity and nutritional value of family diets, alongside fostering a sense of community with other growers (Ibnouf 2009; Slater 2001). Looking beyond fiscal and nutritional benefits, the strength of community that urban agriculture provides has been reported to act as a refuge for women who have been sexually abused or subjected to violence (Slater 2001). On the other hand, urban agriculture can pose a threat to women’s security and freedom. For example, it has been argued that cultivated areas in cities can provide havens for prostitution and thieves (Simatele et al. 2012). Also, it has been suggested that in many situations, engagement in urban agriculture simply increases the total work burden on women and can even prevent them from gaining higher-income formal employment (Bryld 2003; Freeman 1993; Potts 1997).

4 Latin America and the Caribbean

4.1 The Cuban story

Following the dissolution of the Eastern Bloc from 1989 to 1991 and the consequent loss of trade with her communist allies through the Council for Mutual Economic Assistance, Cuba found herself in a precarious position and in urgent need of agrarian reform. Soviet-style collectivist agriculture was no longer feasible—the supply of fuel, trucks, agricultural machinery, parts, fertilizers, and pesticides ground to a virtual halt shortly after the Iron Curtain was drawn open (Rosset and Benjamin 1994). In 1990, imports of fertilizer and pesticides fell by 77 and 73 %, respectively (Rosset and Benjamin 1994). Cuba had relied upon the USSR for 99 % of her oil (Mesa-Lago 1993), and oil importation from the USSR fell by about half in 1990 (Rosset and Benjamin 1994). Prior to 1990, about 57 % of the calories in the Cuban diet were estimated to come from imported foodstuffs, mostly from the USSR (Deere 1993), and from 1984 to 1989 about 70 % of import–export trade was with the USSR, with sugar accounting for 77 % of Cuba’s exports (Pastor and Zimbalist 1995). Cuba was to enter a new phase: the “Special Period in Time of Peace”, or simply the “Special Period” (1991–1996), which was in effect Fidel Castro’s optimistic euphemism for a state of emergency—an economic recession with attendant severe austerity measures (Chaplowe 1998; Febles-González et al. 2011).

Compounding the loss of supplies from the Soviets was the ever-present threat to food security posed by embargoes. The USA instigated a near-full trade embargo in 1962, just 3 years after Fidel Castro seized power. The situation worsened in 1992 with the passage of the Cuban Democracy Act, which extended the USA’s embargo to foreign arms of US firms, upon which Cuba had been reliant for about 18 % of her imports (Pastor and Zimbalist 1995). The threat of complete isolation, and therefore the necessity for complete self-sufficiency, was real and needed to be addressed. To this end, the National Institute of State Reserves was established, with a directive to investigate subsistence potential. The development of urban agriculture on an unprecedented scale was to become the primary solution to the challenges of the Special Period, and it would ultimately define Cuba as the world leader in the practice. In retrospect, urban agriculture was the obvious answer—no fuel for tractors, trucks, or refrigeration: bring agriculture closer to, or into, the cities; no pesticides or fertilizers: switch to chemical-free and labor-intensive production that the populous cities could support; no export market for large monocultural cash crops such as sugar and limited ability to import food: produce food rather than money to buy food.

Various taxonomies of urban agricultural practices in Cuba have been proposed (e.g., Altieri et al. 1999; Campanioni et al. 1997), but the recent description of Koont (2011) is probably the most useful because it treats land tenure and production method separately. He describes nine distinct tenancy arrangements (Table 2) and the four main production methods officially recognized by the Grupo Nacional de Agricultura Urbana (National Urban Agriculture Group; GNAU), namely, patios, parcelas, huertas intensivas, and organopónicos. These can be seen simply as progressively more sophisticated production methods, with patios being basic home gardens, parcelas being unused parcels of land given to individuals in usufruct for rudimentary cultivation, and huertas intensivas and organopónicos being well-structured production systems. The last two differ only with respect to the cultivation beds themselves, with a huerta intensiva comprising rows of simple mounds of soil and an organopónico employing walled cultivation beds (canteros) roughly 1 m wide and 15–30 m long (Koont 2011). In a GNAU (2007) census, 370,653 patios were officially registered, but Koont (2011) believes that there are likely to be >1 million across the country. The same census identified 135,870 parcelas, and provided production goals for vegetables and condiments of 1,413,000, 2,093,288, and 793,712 tons for organopónicos/huertas intensivas (collectively), parcelas, and patios, respectively.

Table 2 Agricultural land tenancy arrangements in Cuba (from Koont 2011)

Crucial to the success of Cuba’s urban agriculture has been the development of a well-organized coordinating body: the GNAU. Membership of the GNAU comprises producers, agricultural specialists, and civil servants (Febles-Gonzalez et al. 2011). It has 28 sub-programs, with about half of these relating to horticultural/crop production. The vegetables and herbs sub-program saw a thousandfold increase in production (4,000 to 4.2 million tons) from 1994 to 2005 (Koont 2011). This was achieved not only through an expansion of cultivated area but also through increased yields. For example, annual crop yield in organopónicos increased by 17 % from 1994 to 2001.

Complementing the activities of the GNAU is an extensive network of extension officers within the specialized Department of Urban Agriculture of the Ministry of Agriculture (Chaplowe 1998). Their responsibilities include initiating and servicing popular gardens and horticultural clubs (>400). In 1995, there was about one extension officer for each of Havana’s 43 districts (Chaplowe 1998). This extension program was adapted from the existing rural agricultural service to meet the needs of urban agriculture in the Special Period. Another State service is the Casa de Semilla (House of Seeds). There are eight of these suppliers in Havana alone. They are places where growers can purchase seeds, manure and compost, microbial insecticides (mostly Bacillus thuringiensis), bio-control agents, and tools (Altieri et al. 1999; Chaplowe 1998). There is also a network of Centres for Reproduction of Entomopathogens and Entomophagous Agents dedicated to the rearing of bio-control agents (Altieri et al. 1999). Various local and international NGOs also provide assistance to growers, and there is a strong agricultural emphasis in schools (Chaplowe 1998).

Despite its significant contribution, urban agriculture has certainly not been a panacea for Cuba’s food provision problems. Daily per capita energy intake actually dropped by 36 % during the Special Period (Franco et al. 2008), and it was not until around 2005, almost a decade after the end of the Special Period, that the province of Havana satisfied the criterion set by the Food and Agriculture Organization of the United Nations (FAO) of a 300 g per capita per day vegetable supply to a population (Koont 2011). Considering Cuban agriculture as a whole today, the recent critical analysis of Febles-González et al. (2011) concludes that in spite of the commendable and generally successful efforts put in place in the Special Period “. . . agricultural production in Cuba continues to be depressed and is one of the main focuses of attention. The agrarian sector is far from satisfying the needs of the population . . .”

This is not to say that urban agriculture has not played a crucial role in securing food for Cubans over the last two decades. Perhaps the more important question to ask is what would have happened had Cuba not embraced the urban agricultural solution? The answer might lie some 12,000 km to the west across the Pacific. Faced with an almost identical geopolitical situation to Cuba following the disintegration of the Soviet Bloc, the Democratic People’s Republic of Korea (North Korea) was not prepared with—and did not develop—an urban agricultural/self-sufficiency response, and in short suffered a series of famines in the early 1990s, which culminated in 2.5 million deaths in 1996/1997—the largest per capita famine of the twentieth century (Bhatia and Thorne-Lyman 2002). Moreover, starvation has continued to plague the nation (Watts 2005), with the recent drought, the most severe since records began some 105 years ago, now placing about two thirds of the populace at risk of chronic food shortage (Hyon and Kim 2012).

The immediacy of urban agriculture as a rescue package in Cuba has also perhaps been overstated in the development literature: “With the onset of the crisis, urban gardens sprang up [our emphasis] all over Cuba . . . as a massive popular response of residents themselves to the food shortages . . . These new gardeners were growing food to satisfy [our emphasis] family needs” (Altieri et al. 1999). Likewise, “. . . new agricultural techniques developed in recent decades received their first extensive implementation . . .” (Funes 2002). The medical literature tells a somewhat different story—one of a massive epidemic of optic and peripheral neuropathy engulfing the entire country for the 4-year period (1991–1994) shadowing the collapse of the Bloc (Barry 2000; Johns and Sadun 1994; Kirkpatrick 1997; Roman 1994a, b, 1995). More than 50,000 Cubans were affected, with optic and peripheral forms occurring in roughly equal frequency. Malnutrition was believed to be the primary cause, but heavy tobacco use and other factors probably contributed. The Cuban Government’s admission of the nutritional problem is evidenced by the nationwide free oral vitamin (A, B, and E) supplement program it initiated to redress the disaster (Ordunez-Garcia et al. 1996). It is important to add this epidemic to the Cuban food story, particularly in the context of managing impending mass malnutrition in a broader sense. It does not detract from urban agriculture’s contribution; rather, the eventual steady supply of food it had to offer undoubtedly prevented further such nutritionally based epidemics. It also reaffirms the importance of history, yet again, because almost exactly a century earlier Cuba experienced another neuropathic epidemic, amblyopia (lazy eye), which was almost certainly the result of nutritional deficiencies stemming from the 1898 US naval blockade of Cuba in the Spanish–American War (Ordunez-Garcia et al. 1996).

The future of urban agriculture in Cuba remains uncertain. Just as political and economic factors molded its emergence, they could equally spell its demise. There are already signs of a very slow thawing of the frosty US–Cuba relationship, with Barack Obama stating that he is “not” a US President who brings “a lot of baggage” to the situation (Wilson 2012), as was evidenced by a slight loosening of travel restrictions in 2011 (Thompson 2011). The embargo still stands though and is unlikely to be lifted in the near future, but this has not quelled speculation about what its eventual removal will mean for urban agriculture in Cuba. Chaplowe (1998) notes that many believe that it will spell a return to the intensive agricultural practices and importation policies of Soviet-era Cuba. But he also observes that urban agriculture is not just about food provision, and its contribution to the social fabric of communities is not lost on Cubans. Perhaps such a cultural ethos will be its saving grace, but there is little empirical evidence to support this thesis. In fact, the only published study on farmers’ attitudes (Nelson et al. 2009) suggests the contrary: “. . . the majority of those interviewed had not chosen this path with conscious intent. Instead, all but a few expressed varying degrees of desire for more access to resources such as agrochemicals, gasoline, electricity, and machinery. In addition, when describing their ideal farm, many referred to the industrial agricultural model that predominates in developed countries, and tended to equate their current low external input model with underdevelopment. One farmer expressed . . . in an almost embarrassed tone that ‘. . . we are very backward now with agriculture in Cuba. We used to have everything. Everything was mechanized and all of the inputs were the best, but now we are incredibly backward’”.

4.2 Innovative urban agriculture programs in South America

Cuba has unquestionably been the flag bearer of urban agriculture in Latin America, and indeed the world, but the activity also has a long history throughout South America, where government support is strong and many new initiatives and programs have been established. A study in Belem, Brazil, found that nearly a third of interviewees had been farming their plot for upwards of 20 years (Madaleno 2000). In Montevideo, Uruguay, urban agriculture has been practiced since the founding of the city in 1724, and in the 1950s it received a significant boost through active government support (Santandreu et al. 2009). Archeological evidence of urban agriculture has even been uncovered within ancient Mayan cities in present-day Guatemala (Dunning et al. 1997). More recently, however, the particularly progressive actions of Brazil, Argentina, and Peru serve as exemplars of what urban agriculture might have to offer the massive population challenge facing South America.

South America has the highest rate of urbanization in the world, with the proportion of the population living within cities predicted to reach 80 % by 2020 (Prain et al. 2010). Increasingly, poverty has become an urban problem, with upwards of half of South America’s poor living in cities (Fay and Laderchi 2005). Poverty levels fluctuate, with higher rates of poverty during times of economic crisis. For example, during the economic crisis of 1999–2002, Argentina and Uruguay saw higher rates of poverty and concomitantly higher adoption of urban agriculture to help feed citizens and increase family income, but subsequent improvements in local economies led to a reduction in the number of people involved in urban agriculture (Santandreu et al. 2009). Obesity is also on the rise in South America; many of the urban poor are unable to gain access to nutritious food and now rely on cheap, high-calorie foods (Fraser 2005). Improving urban food security will require wide-ranging policy responses, including support for urban agriculture. Brazil’s “Zero Hunger” policy places much weight on urban agriculture as a means of improving food security and the nutrition of the nation (Fraser 2005; Ministry for Social Development and Fight Against Hunger 2008).

The Brazilian Ministry for Social Development and Fight Against Hunger (MDS) administers the National Urban and Peri-urban Agriculture Policy and links urban agriculture with the Social Protection Network and the Network of Public Food and Nutrition Establishments to create food networks between growers, consumers, and processors (Ministry for Social Development and Fight Against Hunger 2008); it has also worked with international organizations to provide advice and training in Peru and Argentina. There have also been initiatives at the municipality level. The Secretariat for Food Policy Supply of the city of Belo Horizonte in southeastern Brazil incorporated urban agriculture into the municipal food system, and this became the model from which the Zero Hunger policy was developed (Gopel 2009). The Belo Horizonte City Strategic Agenda (Action Plan) on Urban Agriculture, 2008–2018, is a forum enabling dialogue between government and non-government organizations to improve the management of urban agriculture within Belo Horizonte. Outcomes of the Action Plan include the passing of a law (274/2009) to establish a municipal urban agriculture policy and a review of the Master Land Use Plan, which recognizes urban agriculture as “an acceptable form of non-residential urban land use” (Lovo et al. 2011). The Action Plan also provides a framework to support national and international non-government organizations working with farmers to improve their access to capital, education, and accreditation as micro-entrepreneurs, allowing farmers to sell their produce to local schools (Lovo et al. 2011).

In Argentina, urban agriculture is well supported at both the national and provincial levels. After the economic crisis of 2003, the national government established the Pro Huerta (Pro Garden) program, which is administered by the National Institute of Agricultural Technology (Spiaggi 2005) and focuses on improving the nutrition of low-income families through home and community garden initiatives; 40 % of the participants in this program are located in urban and peri-urban areas (Yavich et al. 2010). The Pro Huerta program supports local government initiatives, such as Rosario’s Urban Agriculture Program, which focuses on improving the living conditions of the urban and peri-urban poor through local food production and the development of local markets (Yavich et al. 2010). The city of Rosario has developed a number of other local initiatives, including the Municipal Agricultural Land Bank, which was designed to link potential urban farmers with landholders who have land that could be used for urban farming (van Veenhuizen 2006). The municipality provides property tax reductions to landholders who lease out vacant land to farmers (Mubvami et al. 2006). Through this project, 10,000 families were able to collectively gain access to 60 ha of land for farming (Dubbeling 2006). With the assistance of international organizations, the municipality of Rosario developed an Urban Agriculture Action Plan, a key focus of which is the promotion and marketing of urban agricultural products.

Urban agriculture also has a well-established history in Peru, especially in its capital, Lima. The practice increased markedly there after the land reforms of the 1970s, which gradually redistributed land into small plots. Government support for family and community gardens was established in the early 1980s, although urban agriculture has recently come under threat from residential development (Villavicencio 2009). A survey of urban farmers in the Lima suburb of Carapongo found that many were concerned about water pollution, fluctuating market prices, a lack of information about markets and farmer networks, and low profit margins; interviews with authorities in the same study revealed poor knowledge of urban agriculture within the city (Villavicencio 2009). In 1999, the municipality of Villa Maria del Triunfo developed an Urban Food Security Strategy, which recognized the role of urban agriculture in improving household access to food and raising incomes. The municipality built upon this strategy by incorporating urban agriculture into its Integrated Development Plan (2001–2010) and creating a Municipal Urban Agriculture and Environmental Protection Program (PAU). This was further refined, ultimately producing the Strategic Plan for Urban Agriculture (2007–2011) (Dubbeling and de Zeeuw 2007, 2011; Dubbeling and Merzthal 2006; Dubbeling et al. 2010), which aimed to address issues such as the marketing of urban farm produce and the high use of synthetic pesticides and fertilizers (Arce et al. 2007; Prain et al. 2010). The project also raised awareness among policy makers of the offerings of urban agriculture and its extent within their constituencies; consequently, a sub-divisional office of urban agriculture was established in the eastern district of Lima (Prain et al. 2010). In 2010, the Peruvian government adopted guidelines for the reuse of wastewater in urban irrigation; while initially intended for the irrigation of urban green spaces, there is potential in coming years for the adoption of wastewater reuse in agriculture (Global Water Intelligence 2010).

5 Asia

Urban agriculture in Asia, or indeed the world, cannot be contemplated without consideration of China, if for no other reason than that it hosts one fifth of the world's population and produces the vast majority of its vegetables on the outskirts of its many populous cities (Deelstra and Girardet 2000; Drakakis-Smith 1991). Just as Africa has served as the test bed for studying the interactions of wastewater-borne diseases and malaria with urban agriculture, China has been the global laboratory for studying the effects of urban chemical pollution on urban agriculture. Chinese science has undoubtedly been the world powerhouse of research on the translocation of the chemical cocktails emitted by huge cities to urban and peri-urban agricultural lands and their subsequent interactions with soils and plants. To date, the research has focused largely on heavy metals and polycyclic aromatic hydrocarbons (PAHs) (Table 3). Heavy metals have been observed to exceed official Chinese safety standards for soils and edible plants in various urban and peri-urban production systems (Table 3). Wastewater irrigation, sludge or manure amendment of soils, and atmospheric deposition from mining and smelting have all been identified as sources of heavy metal contamination, although in other instances the origin of the contaminants remains unclear (Table 3). PAHs of high molecular weight, particularly those with four or more rings, are of serious concern with respect to their mutagenicity and carcinogenicity (Kalf et al. 1997). They are largely produced by incomplete combustion (Kong et al. 2010; Mastral and Callen 2000; Wang et al. 2010), and various chemical diagnostic ratios have been employed successfully to confirm combustion as the source of PAHs contaminating vegetable fields (Cai et al. 2007), and in some cases to pinpoint the exact source, e.g., coal or straw combustion (Yin et al. 2008). While remediation technologies for heavy metals (Mulligan et al. 2001; Peng et al. 2009) and PAHs (Bamforth and Singleton 2005; Zang et al. 2007) are being developed, they are yet to see widespread application, owing to both technological and cost-effectiveness barriers.

Table 3 Studies on potential contamination of urban agricultural sites in Chinese cities. Where a “standard” is referred to, consult the source paper for the exact standard
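To illustrate how such diagnostic ratios work in practice, the sketch below screens a single sample using two paired-isomer ratios. The concentrations are hypothetical, and the threshold values are those commonly cited in the source-apportionment literature; the studies cited above should be consulted before applying them to real data.

```python
# Minimal PAH diagnostic-ratio screen. Concentrations in ng/g dry soil
# (hypothetical); threshold values are commonly cited rules of thumb.

def pah_source_hints(conc):
    """Return qualitative source hints from two paired-isomer ratios."""
    hints = {}

    # Anthracene/(Anthracene + Phenanthrene): > 0.10 suggests combustion
    ant = conc["Ant"] / (conc["Ant"] + conc["Phe"])
    hints["Ant/(Ant+Phe)"] = "pyrogenic (combustion)" if ant > 0.10 else "petrogenic"

    # Fluoranthene/(Fluoranthene + Pyrene): > 0.50 points to coal/biomass burning
    fla = conc["Fla"] / (conc["Fla"] + conc["Pyr"])
    if fla > 0.50:
        hints["Fla/(Fla+Pyr)"] = "coal or biomass combustion"
    elif fla > 0.40:
        hints["Fla/(Fla+Pyr)"] = "petroleum combustion"
    else:
        hints["Fla/(Fla+Pyr)"] = "petrogenic"
    return hints

# Hypothetical peri-urban vegetable-field sample
sample = {"Ant": 12.0, "Phe": 80.0, "Fla": 95.0, "Pyr": 70.0}
print(pah_source_hints(sample))
# Ant/(Ant+Phe) ~ 0.13 and Fla/(Fla+Pyr) ~ 0.58: both point to combustion
```

The appeal of the approach is that it needs only routinely measured concentrations, although ratios alone cannot fully resolve mixed sources, which is why studies such as Yin et al. (2008) combine several ratios before attributing contamination to, say, coal versus straw burning.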

Despite the dominance of Chinese research, a substantial body of work also exists on chemical contaminants interacting with urban agriculture elsewhere in Asia, including, inter alia, Vietnam (Huong et al. 2010), India (Agrawal et al. 2003; Gupta et al. 2008; Singh et al. 2009, 2010), and Pakistan (Jamali et al. 2007). Of course, heavy metals and PAHs from combustion and other industrial processes in cities are not the only chemical contaminants of concern to urban agriculture. Throughout Asia and many other developing regions, heavy pesticide use has led to dangerous contamination of produce (Kannan et al. 1997). But this story is not unique to urban agriculture; it applies to agriculture generally, and it is therefore not pursued further here.

5.1 Urban agriculture and Asia’s urban poor

Setting aside the serious problem of produce contamination and the challenges still to be tackled on that front, China's peri-urban market gardens have been indispensable in meeting its cities' needs, but this has not granted them immunity from the global trend of urban sprawl engulfing productive agricultural land. From the mid-1990s, rural labor declined as agricultural land was developed for burgeoning cities, and the rate of urbanization is predicted to rise from 37 % in 2000 to 60 % by 2020 (Yang et al. 2010). China nonetheless differs from other Asian countries in its approach to urban agriculture in that it has made numerous attempts to incorporate agro-tourism into its productive land. An example is the Xiedao Green Resort in Beijing, which devotes 90 % of its area to agricultural production and 10 % to tourism, including accommodation (Yang et al. 2010). Over 30 different vegetables are produced organically, fulfilling an eco-tourism niche market, and wastes from farming and tourism are treated on-site and used for irrigation and fertilizer (Yang et al. 2010). The combined use of agricultural land for eco-tourism also ensures the long-term retention of such land for agriculture in a rapidly growing city (Yang et al. 2010). Such foresight began over 20 years ago, when the Chinese government introduced policies aimed at increasing agricultural production by households (Fan et al. 2004), resulting in some alleviation of extreme poverty (Khan et al. 2009), although the poor are now concentrated in rural rather than urban China (Fan et al. 2004).

In Vietnam, 70 % of households earn income from agriculture (Zezza and Tasciotti 2010). Urban agriculture is at its most intensive within 20 km of Hanoi (Moustier and Danso 2006), where leafy vegetables account for 70 % of urban production and supply; the proportion reaches 100 % in Phnom Penh, Cambodia, and Vientiane, Laos (Moustier and Danso 2006). Leafy vegetables have a short shelf life (about 1 day) and, as most consumers in Asian countries do not have refrigerators, freshness and the grower's proximity to market are paramount (Moustier and Danso 2006); such markets are usually within 30 km of city centers. Freshness was the main criterion cited by 74 % of respondents for favoring peri-urban horticultural purchases in Hanoi (Figuié 2004 as cited in Moustier and Danso 2006).

In 2003, allotment gardens were established in Cagayan de Oro, the fastest growing city in the Philippines, under the UN-Habitat Sustainable Cities Program. The allotments enable poor urban families to cultivate land for vegetable production and income generation. Of the vegetables produced, 25 % are eaten by the family and 68 % are sold to customers on-site (the remaining 7 % are given to friends). Customers value the freshness, convenience, and lower prices of the produce compared with that available in public markets (Holmer and Drescher 2006). In addition, vegetable consumption by 75 % of allotment members doubled, reaching the minimum intake suggested by the FAO (Agbayani et al. 2001 as cited in Holmer and Drescher 2006).

Thailand has a long history of urban agriculture, evident in Bangkok's edible street trees (tamarind, mango, jackfruit), whose fruit is freely available to the public and often preferred because of its freshness and microbial safety (Suteethorn 2009). Urban agriculture is common in Thailand, with most households growing vegetables (fertilized with solid household wastes), and a rooftop vegetable garden was established on a city building in Bangkok in 2009. The urban poor living in slum areas also grow vegetables to sustain themselves and swap food, which creates a sense of community (Suteethorn 2009). Nevertheless, in Bangkok, shrimp farming is replacing market gardening because of its higher rate of return (Vagneron et al. 2003 as cited in Moustier and Danso 2006).

As noted in our model of the global extent of urban crop production, Indonesia stands out among the Asian nations surveyed by Zezza and Tasciotti (2010) as having a relatively low (10 %) household participation rate (cf. 26 %, 52 %, and 65 % for Bangladesh, Nepal, and Vietnam, but only 4 % for Pakistan). In Jakarta, which hosts approximately 10 million people, urban agriculture occupies more than 11,000 ha, with riverbanks, roadsides, and vacant land commonly being used. The urban poor, who work the land and mostly live in slums, obtain a small income from it. Leafy vegetables are the most commonly grown crops (e.g., spinach, lettuce, Ipomoea aquatica or water convolvulus, and basil), with harvest times of 24–30 days (Purnomohadi 1999), but traffic congestion limits the ready transport of these crops from the periphery to the city center. Despite its widespread occurrence, urban agriculture is seen as a temporary use of land that arose in response to the Asian economic crisis and will not be developed further; it is not included in any of the Urban Green Open Spaces policies, despite the "greening program" initiated in the 1990s (Purnomohadi 1999).

In Nepal, 52 % of households earn income from urban crops, although this amounts to only 11 % of total income; nevertheless, 13 % of urban Nepalese households derive more than 30 % of their income from agriculture (Zezza and Tasciotti 2010). Urban agriculture is predominantly undertaken by the poor, who lack both education and employment opportunities (Tsubota 2006). In Bangladesh, gardens are traditionally grown on rooftops and only during the 4 months of the cool season (Talukder et al. 2000). Nevertheless, urban agriculture contributes 30 % of household income and shows a statistically significant (P < 0.05) positive association with calorie consumption, resulting in an improved diet through greater dietary diversity (Zezza and Tasciotti 2010). In 1990, Helen Keller International began an initiative to encourage home gardening, aiming to reduce blindness caused by vitamin A deficiency and dietary inadequacies (Marsh 1998). Indigenous vegetables with high vitamin content were grown, and not only did household incomes increase by up to 25 %, but night blindness (symptomatic of vitamin A deficiency) in children decreased from 2.3 to 1.2 % (Marsh 1998). Non-government gardening initiatives set up following the success of the Helen Keller International program reached over 700,000 households in 2000 (Talukder et al. 2000).

Although India's agricultural needs have traditionally been met from rural areas, there is increasing recognition of the role of urban agriculture in fulfilling some of the requirements of the urban poor (te Lintelo et al. 2001). Vegetables (cauliflower, cabbage, spinach, carrot, okra, and tomato) and herbs (coriander and fenugreek) are grown in urban and peri-urban areas around Delhi, India's second largest city, and agriculture contributes to the livelihoods of peripheral urban dwellers (te Lintelo et al. 2001). Within the city, the main area for urban agriculture is along the floodplain of the Yamuna River, while an additional 44 % of land around the city is used for crop production, lies fallow, or is grassland (te Lintelo et al. 2001). Rooftops are also used for urban agriculture in India. One system, known as the "Doshi method", uses 50-kg bags tightly packed with biomass (usually sugarcane refuse) and compost, and it is purported to require less water than traditional rooftop gardening (Doshi et al. 2003). Rooftop gardens in Mumbai, India's largest city, grow fruit (pomegranates and guava), sugarcane, and vegetables (tomato and radish) using this method (Nowak 2004).

6 Eastern Europe

6.1 Urban agriculture in revolutionary and post-Soviet Russia

Urban agriculture was a common feature of pre-Soviet Russia. Saint Petersburg, for example, would not have been considered urban by any modern definition; it was rather a conglomeration of single-storey wooden houses with small food gardens (Moldakov 2000). This changed drastically after the Revolution, with Communist policy overtly unsympathetic to urban agriculture, particularly of a subsistence nature. Not only were the single-storey houses replaced with apartment blocks, but under Stalin's collectivization policy food production was taken out of the hands of individuals, be they city farmers or rural peasants, with collective farms (kolkhozy) expected to meet almost all of the nation's food needs.

Khrushchev's de-Stalinization of Russia led to more lenient and even welcoming policies toward urban agriculture. By the 1960s, some agricultural activity was allowed on the premises of public utilities, such as schools and hospitals (Moldakov 2000), and kitchen-garden cooperatives were actively supported by the government (Struyk and Angelici 1996). Privileged members of Russian society, mostly intelligentsia and Communist Party functionaries, were granted dachas: second homes on about 0.1 ha of land in the exurbs (Struyk and Angelici 1996). Dachas existed well before the Revolution; indeed, they were originally given by the Tsar to loyal nobility as fiefs in feudal times (hence the literal translation, "a gift"). By the early twentieth century, many middle- and upper-class Russians owned dachas, but most were confiscated by the Communist Party at the onset of the Revolution. While dachas offered the elite an opportunity to grow food, they were used primarily for recreation, or hobby farming at best, because food security was not a concern for the upper class. This soon changed: in the 1970s many Soviet enterprises were granted dachas, primarily for the purpose of growing fruit and vegetables, and by the mid-1980s individuals were purchasing dachas through loans arranged by their employer enterprises (Moldakov 2000).

When the Iron Curtain was drawn open in 1991, massive rural-to-urban migration by citizens seeking employment placed significant stress on city food supplies. Dachas played a crucial role in the response, with the number of people owning one doubling between 1986 and 1996 (Moldakov 2000). Their function had clearly changed too: they were no longer simply a summer holiday retreat for the privileged few but a serious means of producing food for the populace. A 1996 survey described three basic types of dacha: garden-plot, dacha-plot, and country-house (Struyk and Angelici 1996). The sole function of garden-plot dachas is horticultural production; these are the smallest dachas, hosting only very simple dwellings. Dacha-plot dachas are slightly larger and, while primarily seen as simple retreats, are in most instances also used for growing food. Country-house dachas are comparable to the former Soviet dachas of the elite, but these are now clearly in the minority: just 5 years after the dissolution of the Soviet Bloc, almost 80 % of dachas in Russia were used for producing food. In 1995, almost 40 % of Russia's total agricultural production took place at the household level, either at dachas or rural household plots, equating to roughly 40 million households with 55 million plots (Seeth et al. 1998). Meat, milk, potato, and fruit and vegetable production were particularly significant at this scale, accounting for 42, 44, 82, and 90 % of the nation's output, respectively.

Dachas are not the only contributors to urban agriculture in Russia and other former Soviet lands. In a study of St Petersburg's urban agricultural sector, Moldakov (2000) identified several other well-established modes. About 2,800 sadovodstvos (gardening communities) exist in the peri-urban fringe, each comprising 500–600 plots of around 0.06 ha that are mainly used for subsistence production. There are also about 180,000 small (0.02–0.3 ha) independent garden plots known as ogorods in St Petersburg, likewise mostly in the peri-urban zone. During the Soviet era, most factories grew food in gardens and greenhouses, and such activity continues today in both private and state-owned factories.

6.2 Rooftop production

Rooftop gardening is also a key feature of many Russian cities, and the practice has been documented in detail for St Petersburg by Gavrilov (2000), from which the following summary is drawn. One clear advantage of using roofs is that no taxes are levied on the use of the space, although bureaucratic obstacles are commonly encountered when establishing a rooftop garden, and gaining official permission can be difficult. There is also some opposition to the concept, often from older residents accustomed to more traditional urban lifestyles. Nevertheless, the practice is gaining momentum and has clear benefits. Food is produced very close to the consumer, saving on transport costs, and the gardens offer a means of reusing some of the waste produced directly by the household or apartment block; vermicomposting in apartment buildings in St Petersburg, for example, has proved a useful means of recycling kitchen waste. There is also anecdotal evidence to suggest that produce from St Petersburg's rooftop gardens contains lower levels of heavy metals than that from the city's dachas (Gavrilov 2000). As with any vegetable production system, a rooftop garden can supply a family with essential vitamins and micronutrients that might otherwise be lacking in their diet. Such nutritional shortfalls are particularly likely in winter, when vegetables cannot otherwise be grown. To this end, the St Petersburg Urban Gardening Club and the Vavilov Institute for Horticulture initiated the Witloof Program, whereby rooftop gardeners were advised on how to grow witloof as a winter vegetable staple; using different varieties, witloof can be grown continuously from October through May in cool, dark conditions.

Another advantage of rooftop gardens is that they are relatively safe from theft (Gavrilov 2000). Dacha plots, on the other hand, are vulnerable, especially given that they are left unattended during the working week, and in some districts this is addressed through levies to fund police patrols or the direct hiring of private guards by plot owners (Gavrilov 2000). In 2000, a Czech man whose dacha-plot cottage had been repeatedly robbed took security into his own hands by installing a crossbow triggered automatically by a trip wire; the recidivist burglar was injured by the arrow on his next visit, and the plot owner was convicted (Ostrava 2012).

7 Pacific Island Countries and Territories

7.1 Increasing reliance upon imported food

Despite a long history of urban agriculture, Oceania has been all but ignored in the literature: of the United Nations' six macro-geographical regions, it is the only one to have almost completely escaped the attention of the major overviews of urban agriculture (Bryld 2003; De Bon et al. 2010; Smit et al. 1996; van Veenhuizen 2006). This is particularly surprising when one considers that most Pacific Island Countries and Territories (PICT), owing to their geography and small populations, have relatively little large-scale industrial agriculture. Many are heavily reliant upon food imports, but much of the food grown for local consumption is produced in the vicinity of towns and cities.

Urban agriculture takes two basic forms in the towns and cities of the Pacific: cultivation of gardens adjacent to the home, on land usually owned by the family, and gardening on vacant, undeveloped land within the town, typically away from the home (Thaman 1995). In his survey of urban agriculture in Papua New Guinea, Fiji, Tonga, Kiribati, and Nauru, Thaman (1995) found a high diversity of food crops under cultivation. The most commonly cultivated plants in the region are staple root crops such as taro (and giant taro and giant swamp taro in Tonga and Kiribati, respectively), cassava, sweet potato, and yams. Many of the other crops are grown as intercrops between rows of root vegetables; these include pineapple, sugarcane, cabbages, cucurbits, tomatoes, eggplants, onions, peanuts, corn, okra, various beans and edible legumes, spices (e.g., chilies, ginger, coriander, and mint), and stimulant and depressant plants, especially kava, betelnut, betel pepper, tobacco, and lemon grass (Thaman 1995).

In recent years, over 60 % of the population of PICT has moved from outlying islands and atolls to the capital cities. This urban drift has put increasing pressure on the supply of produce to city markets, while also creating gluts of produce in regional areas that could not be harvested or transported to cities (SPC 2010). Traditional methods of agriculture are nevertheless perceived as beneficial to the environment and can even confer a competitive advantage when exporting produce (SPC 2010), although quarantine and cool-chain issues may complicate export markets.

Food in PICT is traditionally grown locally (mainly in peri-urban areas) and transported to a municipal market. Such markets, however, are invariably overcrowded and unhygienic, with no storage facilities or quality monitoring of produce; consumers are therefore often reluctant to buy at local markets, preferring the local supermarket instead. There is a need to marry the goals of sellers and buyers at such markets through improved infrastructure (Naidu 2011). Urban agriculture in the Pacific is unusual in that it lacks the pollution constraints that may beset food grown in the urban environments of more densely populated developing countries. Growers in the Pacific face other problems instead; for example, they may not be paid for the full amount of their produce because of a lack of weighing equipment on farms (Newman et al. 2011). Additional threats to traditional urban agriculture arise from land disputes, the decline of the culture of food sharing, and the loss of traditional knowledge (Rodgers 2011).

Pacific Island countries have customarily been food secure, relying on local staples such as root crops and bananas and on sustainable agriculture and fishing, but increasing over-reliance on imported food has made their food future less secure, particularly as import costs have risen markedly since 2008 (Teo 2011). The consequence is an urban population that is poor in terms of both income and nutrient intake. The consumption of imported foods of low nutritional value has given Nauru, American Samoa, Tokelau, and Tonga the highest rates of obesity in the world, at 78.5, 74.6, 63.4, and 56 % of the adult population, respectively (WHO 2011). Additionally, population growth in some PICT (more than 2 % per annum in Melanesian countries; Bell 2008) means that local food production is not keeping up with demand, hence the reliance of more than two thirds of PICT on imported food of low nutritional value (SPC 2011), with its concomitant health costs. Beyond obesity, the prevalence of other non-communicable diseases such as diabetes and heart disease places an added burden on Pacific countries: an estimated 75 % of deaths in PICT are attributable to non-communicable diseases, and over 50 % of many PICT health budgets are spent on addressing them (SPC 2011).

School gardening programs have recently been introduced into urban schools in Suva, Fiji, not only to expose urban school children to agriculture but also to encourage them to consider it as a career (SPC 2010). Competitions in several countries also encourage youth to write blogs on urban agriculture and to share the information among Pacific, African, and Caribbean countries, with individual winners receiving up to €1,500 (SPC 2011).

A recent survey of squats in and around Nouméa, the capital of New Caledonia, revealed that while 69 % of squatter households had access to home-grown produce, for 67 % it made up less than half of their total food supply (TNS 2007). The main plants cultivated were cassava, corn, yams, and taro, as well as banana and coconut trees (Dussy 2005). The produce was more often used for bartering than for personal consumption, and most urban agriculture undertaken by the squatters was opportunistic rather than a main professional undertaking, with agricultural tasks fitted around paid employment (Dussy 2005). Recently, the city council of Nouméa began granting "family gardens" for AU$10 per month on the condition that no flowers were grown and that the only trees planted were bananas. Those who asked for such gardens were mainly relocated squatters, who cited agricultural tradition, convenience of supply, financial savings, better nutritional balance, and access to organically grown fruit and vegetables as their reasons for growing their own produce (Cochin 2011).

7.2 Obesity

Obesity began as an epidemic in the USA in the 1970s and in the ensuing 30 years has become a pandemic that has spread to the world's urban poor (Prentice 2006). The causes are linked to high-energy diets containing highly refined fats and carbohydrates and to more sedentary lifestyles (Prentice 2006). Pacific Island countries feature prominently in the global obesity statistics, occupying the top six positions (WHO 2012). Traditionally, many Pacific Island cultures have valued a larger body size as a sign of wealth, strength, beauty, and fertility (Brewis et al. 1998; Pollock 1995). Until the 1920s, young men and women were placed in separate, secluded fattening huts for a month and fed copious amounts of breadfruit, bananas, coconut, Pandanus, and fish by relatives, a practice termed haapori, "in order to grow fat and lusty and high spirited" (Corney 1915 as cited in Oliver 1974). The attractiveness of a larger body size still persists in some Pacific Island countries; a study of Samoans, for example, rated obese individuals as more attractive than those who were merely overweight (Brewis et al. 1998).

The Republic of Nauru, a small Pacific island with a population of approximately 10,000, has the highest rate of obesity in the world, with 79 % of its adult population having a body mass index (BMI) greater than 30 kg/m2 (Prentice 2006). The worldwide use of BMI as a measure of human body fat and disease propensity does have its drawbacks, especially when comparing across ethnicities with different body frames (Razak et al. 2007), and other critics note that it does not discriminate between body fat and lean mass (Romero-Corral et al. 2008). Yet, despite Nauruans' claims that their standing as the world's most obese nation does not account for their body structure (Vatucawaqa 2010), it is perhaps most telling that when McDaniel and Gowdy (2000) walked through the island's cemetery, they noted the early age at which many Nauruans had died. Changes in the health of Nauruans can be traced back to the new diseases and altered eating habits that accompanied the influx of foreign workers to mine phosphate in the early 1900s (McDaniel and Gowdy 2000). Traditional vitamin-rich foods (e.g., toddy from immature coconut flower bud sap, a source of vitamin B1) were replaced by imported, low-nutrient foodstuffs such as canned meat (McDaniel and Gowdy 2000). The life expectancy of Nauruan men is 50 years, and that of women 58 years (US Department of State 2012), and Nauru has one of the highest rates of type 2 diabetes in the world (FAO 2012c). Furthermore, almost one quarter of Nauruan children show stunted growth and more than half are anemic (SPC 2007). Food security via locally produced food is regarded as paramount to addressing the health and nutrition of this small island nation (FAO 2012c).
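For reference, BMI is simply body mass in kilograms divided by the square of standing height in meters; a worked example with illustrative figures (not data from any study cited here) is

$$\mathrm{BMI}=\frac{m}{h^{2}}, \qquad \text{e.g.,}\quad \frac{95\ \mathrm{kg}}{(1.70\ \mathrm{m})^{2}}\approx 32.9\ \mathrm{kg/m^{2}} > 30\ \mathrm{kg/m^{2}},\ \text{i.e., within the obese range.}$$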

Nauru once supplied most of the world's phosphate (derived from seabird guano) for use as fertilizer, and phosphate mining has left four fifths of the 21-km2 island unworkable and barren, with coral pinnacles up to 15 m (49 ft) high (Scott 1993; US Department of State 2012). Mining involved stripping the topsoil and vegetation, and the land was never rehabilitated (Manner et al. 1984). To reclaim this wasteland, Manner et al. (1984) suggested that the coral pinnacles be leveled and crushed, soil imported, and purposeful planting undertaken. This has not occurred, and the country relies on the importation of nearly all foodstuffs, which are processed, highly priced, and beyond the reach of many families (FAO 2012c); imported fresh fruit and vegetables are also costly. Nauru has a coastal belt, 100–300 m wide, comprising sands and corals of low water-holding capacity, along which coconut palms, Pandanus tectorius, mango trees, and breadfruit grow, as well as some banana trees and vegetables, the latter grown mainly by former indentured workers (Manner et al. 1984; Scott 1993; Thomson et al. 2006). P. tectorius fruit was a food staple in Nauru until imported foods took precedence in recent decades (Kayser 2002). The land available for agriculture in this narrow coastal strip also competes with residential housing (FAO 2012c), and gardening is therefore often limited to plants grown in containers (e.g., cassava and sweet potatoes), apart from the coconut palms and banana trees (Thaman 1995). Agriculture is further constrained by a reliance on limited bore water supplies (FAO 2012c), although funding for a new desalination plant has recently been announced (IISD 2012). In addition, food growing is frequently hampered by drought (McDaniel and Gowdy 2000), a lost desire to cultivate crops, and issues of land tenure and water rights (FAO 2012c; Thaman 1995). Any food grown locally is for household consumption only and limited in variety (FAO 2012c).

The FAO has recognized the health and sustainability time bomb ticking in Nauru, and school education programs have therefore begun, focusing on raising awareness of local food production and on propagation and production methods (FAO 2012c). In 2007, the Strategic Plan for the Sustainable Development of Agriculture was developed to improve agricultural development and food security and to promote the self-sufficiency of Nauruans. Its policy goals include increasing locally grown horticultural produce, promoting the consumption of those foods over imported, processed foods to improve nutrition, and improving irrigation through water conservation measures (FAO 2012c). Foreign aid and support from Australia, New Zealand, Japan, Taiwan, and the WHO have been vital for the implementation of these goals (FAO 2012c). The FAO is supporting home and community gardens in Nauru by rehabilitating land and implementing kitchen garden programs to improve the population's access to fresh local fruit and vegetables (FAO 2012c). Furthermore, the draft Framework for Action on Food Security in the Pacific 2011–2015 acknowledges that "healthy [food] choices need to be easy choices" that are both available and preferred (FSPWG, no date); the education of consumers is therefore also imperative if the rise in non-communicable diseases in the Pacific caused by the consumption of nutritionally poor foods is to be addressed (FSPWG, no date).

7.3 Cassava’s future

Cassava (Manihot esculenta) deserves special consideration here: it is the second most common urban agricultural crop in PICT after taro (Thaman 1995), it is the sixth most important crop worldwide by tonnage (Lebot 2009), and it is widely grown in both urban and subsistence agriculture in over 100 countries (FAO 2008). Cassava is known as "the drought, war and famine crop of the developing world" (Burns et al. 2010) because it can be grown in dry, nutrient-poor soils (Pearce 2007), and it is the staple food of nearly one billion people, mainly in sub-Saharan Africa, Asia, South America, and the Pacific (FAO 2008). It was introduced to the Pacific islands by missionaries in the early nineteenth century (FAO 1990). Both the root tubers and the leaves are used, mainly for human consumption (75 %), with the remaining quarter used as animal feed (FAO 2000). But it is the tubers that are the main dietary staple, because of both their high starch content (up to 85 % of root dry matter) and their underground storage properties (Lebot 2009). The tubers can remain underground for up to 3 years (Lebot 2009) and can be dug up when required for food or propagation, or after villagers return to a war-ravaged area (Pearce 2007). It is this underground "household food bank" (Lebot 2009) that bears an all-too-familiar resemblance to earlier famines born of reliance on a stored tuber: the Irish potato famine of the 1840s was caused by the fungus-like oomycete Phytophthora infestans rotting the underground tubers, and now Cassava brown streak virus is destroying cassava crops in East Africa, including Zanzibar (Pearce 2007) and Kenya (FAO 2011a, b). Above-ground parts of the plants appear healthy, but when the tubers are dug up they have rotted. The FAO has predicted a cassava virus epidemic around the Great Lakes region of East Africa that will affect the food security of the region, and to counteract the widespread impacts of the disease, research into virus-tolerant varieties is being fast-tracked (FAO 2011a, b).

A further cause for concern, given the developing world's reliance on cassava, is the increase in its toxicity with rising atmospheric CO2 concentrations. Cassava is well known for its cyanogenic glycoside content (McMahon et al. 1995), and although traditional preparation methods such as sun drying, soaking, and boiling remove most of the toxins (Padmaja 1995), some cyanogens remain, and toxin levels in the prepared product have been found to increase during drought (Cardoso et al. 1999). Cassava toxicity has been linked to low-protein diets (Padmaja 1996) because the sulfur-containing amino acids cysteine and methionine are required to detoxify the cyanogens (Cardoso et al. 2004). If dietary protein intake is insufficient, and particularly if it lacks cysteine and methionine, not only does the prevalence of cyanide poisoning from cassava ingestion increase, but so do the incidence of protein deficiency diseases (Padmaja 1996) and of konzo, a disease causing permanent paralysis of the lower limbs (Ernesto et al. 2002). Diets low in animal protein, such as those based on cassava (FAO 2004) or cereal grains, are often low in cysteine and methionine (FAO 2012c).

Recent studies have shown that elevated atmospheric CO2 levels will not only decrease the tuber mass of cassava but also increase the cyanogenic compounds in its leaves (Gleadow et al. 2009). The leaves contain 7 % protein, as well as some vitamins and minerals, so their use has been encouraged to counteract the high-starch diets prevalent in PICT and elsewhere in the developing world (Lancaster and Brooks 1983). It is a cruel irony, though, that the leaves lack methionine, the essential amino acid necessary for cyanogenic detoxification (Lancaster and Brooks 1983). Boiling the leaves prior to use reduces the level of toxin, but in many cultures the raw leaves are eaten as a salad (Gleadow et al. 2009). Clearly, if atmospheric CO2 levels continue to rise at the current rate of approximately 2 ppm per year, the re-education of those societies that use uncooked cassava leaves, or the introduction of other protein sources, must be considered an ethical necessity.

8 Conclusion

Urban agriculture, in various guises, has a long-established history in the developing world, and while the literature is biased toward certain countries, it appears on the whole to be making a significant contribution to the lives of many. Through this global review, we have identified six high-priority aspects that require further attention if urban agriculture in the developing world is to achieve its full potential, whatever that may be.

1. The global and regional extent of urban agriculture needs to be quantified far more rigorously. The confidence bounds on our estimates are very large indeed. This can only be achieved through comprehensive surveys and inventories, and, given the results of our sensitivity analyses, these are especially needed for Asian countries.

2. A better understanding of the contribution of urban agriculture to communicable diseases is sorely needed. In terms of lives lost in low-income countries, urban agriculture is pertinent to two of the top five causes of death, namely diarrheal disease (8.2 % of all deaths) and malaria (5.2 %) (WHO 2011). Malaria is less problematic in middle-income countries, but diarrheal disease occupies the second position, behind lower respiratory infections (which, in any case, are often a secondary rather than a primary cause of death), in both low- and middle-income countries. In addition to quantifying urban agriculture's contribution to these disease burdens in different contexts, appropriate management techniques need to be developed to lessen its impact. As outlined, some progress has been made in this regard with respect to diarrheal disease associated with wastewater irrigation (also see WHO 2006). For malaria, on the other hand, the situation is more difficult to manage, and the potential contribution of urban agriculture not only to supporting vector mosquito populations but also to the development of insecticide resistance demands serious attention. While "evolution-proof" insecticides for malaria control can in theory be developed by targeting old, malaria-infected mosquitoes (Read et al. 2009), this dream is yet to be realized, so we need to manage the arsenal at hand judiciously and limit any additional selection pressure imposed by the use of insecticides in crop protection.

3. The role that urban agriculture does and/or could play in abating both malnutrition and obesity needs to be addressed seriously, with most evidence to date being either confounded or anecdotal. With respect to obesity, we believe that urban agriculture shows particularly significant potential for stifling the epidemic that is engulfing PICT.

4. The relationships between urban agriculture and the rights of women need further investigation in a range of cultural contexts. The limited research to date points to positive outcomes, such as providing women with security and a voice through strength of numbers in farming groups, but also to negative consequences, such as an increased work burden and repression of the opportunity to enter the higher-paid formal work sector.

5. Appropriate mechanisms of governance and institutional support for urban agriculture need to be developed. Without such official backing, the activity will continue on an ad hoc, opportunistic, inefficient, and uncoordinated basis. One of the major stumbling blocks in many countries remains the view of governments that urban agriculture is a retrograde step, and it is thus met with either indifference or repression. While Cuba has undoubtedly been the leader with respect to active support of urban agriculture, it must be recognized that it is the only country in Latin America with a Democracy Index < 4 (3.52), making it the only authoritarian regime in the region (Economist Intelligence Unit 2011). This has unquestionably aided the implementation of such a sophisticated and coordinated urban agriculture program, and the limited research to date suggests that many Cubans feel that urban agriculture has been foisted upon them, that the practice is archaic, and that they would prefer a return to a productivist model.

6. The risks posed by chemical pollutants demand continued vigilance and research. As Africa becomes increasingly industrialized, attention to the safe management of wastewater irrigation will have to extend beyond microbial to chemical contaminants. In Asia, particularly China, PAHs are likely to remain of concern for some time yet, but the problem should slowly diminish as the heavy reliance on coal is replaced with fuel sources that combust much more completely, such as oil and gas. A decade ago, 30 % of the nation's energy needs were met with coal (240,459 kt of oil equivalent, KTOE), but this is predicted to decline to 4.8 % (76,191 KTOE) by the end of this decade (Adams and Shachmurove 2008). On the other hand, the other major source of PAHs, the combustion of waste products, is predicted to see a slight absolute increase as an energy source, from 216,104 KTOE in 2002 to 251,595 KTOE in 2020.

Clearly, arguments for and against urban agriculture will always have a strong element of context specificity. But, on the whole, when planning for the future, should we give peas a chance in developing countries? On balance, given its track record over space, culture, and time, we believe the answer is a qualified and cautious yes; but this immediately raises a more important and difficult question: how much of a chance? The global demand for food is increasing rapidly, and what contribution urban agriculture should, could, or will make toward meeting the challenge remains unclear. Cities have various functions to perform, notably trade, manufacturing, and housing of people; and foisting agriculture onto a city’s plan has implications for all of these.

This is ultimately a mathematical optimization problem, and a highly complicated one at that. In essence, not only do the direct benefits and risks of urban agriculture need to be weighed against each other, as has been done only in a qualitative sense in this review, but the result in turn needs to be considered in the context of the other functions of a city. Such a model would also need to consider factors beyond the city's borders. For example, urban agriculture might reduce food miles, but its necessarily smaller scale and fragmented nature lead to lower economies of scale and production inefficiencies in comparison with conventional agriculture, which ultimately mean that more land and energy are required for the same amount of production (and the energy required to power a human as a harvester appears to have escaped any attention to date in the urban agriculture literature). It has also been argued that in some instances the reduced carbon emissions achieved by growing a crop in favorable climatic and edaphic conditions elsewhere in the world far outweigh the ground gained through reduced food miles from local production (Saunders et al. 2006; Webber and Matthews 2008), although other studies have suggested the opposite (Blanke and Burdick 2005), as the worked comparison below illustrates. Overlaying this are questions relating to how food miles are calculated in the first place, and where economies of scale enter the equation. Other aspects of the carbon footprint of urban agriculture must also be considered; for example, it has been argued that the greenhouse gas emissions resulting from the spreading out of a city to accommodate urban agriculture would, like production inefficiencies, be far greater than the emission reductions arising from reduced food miles (Glaeser 2011). These issues are pursued in further detail in the sister paper on developed countries (Mok et al. 2013), because it is in that context that most of the quantitative analyses have been done; but with most of the world's urban expansion set to take place in developing countries, it is critical that these debates no longer be restricted to the developed world.
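To see why food miles alone can mislead, consider a purely illustrative calculation in which every figure is invented for the example: suppose a distant exporter produces a crop at 800 kg CO2e/t and ships it 18,000 km by sea at 0.015 kg CO2e per tonne-kilometer, while local production in an unfavorable climate emits 1,500 kg CO2e/t. Then

$$\underbrace{800}_{\text{production, exporter}}+\underbrace{0.015\times18{,}000}_{\text{sea freight}}=1{,}070\ \mathrm{kg\ CO_2e/t}\;<\;\underbrace{1{,}500}_{\text{production, local}}\ \mathrm{kg\ CO_2e/t},$$

so the imported crop has the smaller footprint despite the food miles; reversing the relative production figures reverses the conclusion, which is precisely why the studies cited above disagree.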

Theoretically, there is an optimal amount of land that should be devoted to urban agriculture in any particular city, and this could be determined only by solving an optimization model. Simplistically, at one extreme lies a city landscape so dominated by agriculture that it would no longer be considered a city, and at the other an urban metropolis completely devoid of agriculture and totally reliant upon imported food. A rigorous model could be used to determine where along this continuum a sensible answer lies, and addressing the research gaps identified in this review would certainly help with parameterizing such a model for a given city.
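As a gesture toward what even the crudest such model looks like, the toy linear program below allocates the land of a hypothetical 500-km2 city between agriculture and all other urban uses, subject to a policy floor on local production. Every coefficient is an invented placeholder; a serious model would need nonlinear benefit functions, multiple land classes, and the off-site factors discussed above.

```python
# Toy land-allocation model: maximize the total net benefit of city land
# split between agriculture (x1) and other urban uses (x2), in km^2.
# All numbers are invented placeholders.
from scipy.optimize import linprog

benefit = [1.2, 2.0]        # hypothetical net benefit per km^2 of each use
c = [-b for b in benefit]   # linprog minimizes, so negate to maximize

A_ub = [[1.0, 1.0]]         # x1 + x2 <= total city area
b_ub = [500.0]              # km^2
bounds = [(20.0, None),     # policy floor: at least 20 km^2 of agriculture
          (0.0, None)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(res.x)  # [20., 480.]: with these coefficients, agriculture sits at its
              # floor because other uses are valued more highly per km^2
```

Trivial as it is, the sketch makes the structural point: the "optimal" area of urban agriculture is whatever the benefit coefficients and constraints say it is, and the research gaps listed above are, in effect, the missing coefficients.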

Of course, the idea of constructing such models, let alone imagining the social and political will required to implement them, is somewhat utopian, particularly for developing countries, where urban agriculture has to date been largely opportunistic. Nonetheless, in response to massive population growth in the developing world, new cities are being planned and built and existing cities drastically modified, so the opportunity exists for urban agriculture to be included in food systems in an organized rather than reactionary manner. Developing countries also host the overwhelming majority of the world's biodiversity hotspots for conservation priorities (Myers et al. 2000), a further incentive to get these sums right. As demonstrated by the area statistics in the introduction, championing urban agriculture as the solution to the world's food security problems would be absurd, and the overstatements of its potential contribution that abound in the popular press (e.g., Albeman 2000; Jones 2011) are likely only to damage its offerings by marginalizing it as a fringe movement with a philosophical agenda to push, rather than a serious contributor, however large or small, to the global food crisis. The time is ripe for urban agriculture to be taken seriously and its potential contributions assessed rigorously.