Scale in GIS: An overview

Michael F. Goodchild 

Center for Spatial Studies, and Department of Geography, University of California, Santa Barbara, CA 93106-4060, USA


Geomorphology, Volume 130, Issues 1–2, 1 July 2011, Pages 5–9


Abstract

   Scale has many meanings, but in GIS two are of greatest significance: resolution and extent. Ideally models of physical process would be defined and tested on scale-free data. In practice spatial resolution will always be limited by cost, data volume, and other factors. Raster data are shown to be preferable to vector data for scientific research because they make spatial resolution explicit. The effects of resolution are discussed for two simple GIS functions. Three theoretical frameworks for discussing spatial resolution are introduced and explored. The problems of cross-scale inference, including the modifiable areal unit problem and the ecological fallacy, are described and illustrated.

Keywords: Scale; Representative fraction; Resolution; Extent; Ecological fallacy; Modifiable areal unit problem


1. Introduction 


   From the extensive literature on the topic it is clear that scale is a problematic issue in many sciences, notably those that study phenomena embedded in space and time. Numerous books and articles have provided perspectives, many of them focusing on specific disciplines (e.g., Lam and Quattrochi, 1992; Levin, 1992). Of particular interest here are the various books that have examined the role of scale in the social and environmental sciences (e.g., Foody and Curran, 1994; Quattrochi and Goodchild, 1997; Alexander and Millington, 2000; Tate and Atkinson, 2001; Sheppard and McMaster, 2004), where the space of interest is that of the Earth's surface and near-surface, and where the geographic information technologies – geographic information systems (GIS), remote sensing, and the Global Positioning System (GPS) – play an increasingly important role in support of research. The purpose of this paper is not to add anything new to this extensive literature, but to provide an overview and summary of the major issues of scale that arise when GIS is used as a research tool, with particular emphasis on geomorphology.

    The first and most obvious problem is semantic – that the noun scale is used in three distinct senses in science, and in many other senses in society generally. To a cartographer scale normally refers to the representative fraction, the parameter that defines the scaling of the Earth's surface to a sheet of paper. Like all analog representations, a paper map sets a given ratio between distance on the map and the corresponding distance on the ground, and this ratio has traditionally been used to define a map's level of detail, its content, and its positional accuracy (despite the fact that in principle the ratio cannot be exactly constant over a paper map that flattens the curved surface of the Earth, as all paper maps must). Representative fraction also plays a key role in the analog models that are still used in some areas of engineering. Proctor and I (Goodchild and Proctor, 1997) have argued that representative fraction is undefined for digital data, although a series of conventions have been adopted to give it meaning in specific circumstances.

    The remaining two meanings, both of which are defined for digital data, refer on the one hand to the extent of a study area, primarily its extent in space but also to some degree its extent in other dimensions including time; and on the other hand to its resolution, or degree of detail, again primarily in the spatial and sometimes the temporal dimensions. Both can be expressed for the spatial dimensions in either linear or areal measure (or volumetric if the third spatial dimension is included), and in other work (Goodchild, 2001) I have argued that their dimensionless ratio, which I have termed the large over small ratio (LOS), is in practice remarkably constant across a wide range of data sources and applications, within the range 10³ to 10⁴.
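
   To make the LOS ratio concrete, the following minimal sketch (with hypothetical numbers: a 30 m raster covering a study area 100 km across, neither drawn from the paper) computes the dimensionless extent-to-resolution ratio and confirms that it falls within the stated band.

```python
# Minimal sketch of the large-over-small (LOS) ratio: linear extent divided
# by linear resolution. All values are hypothetical illustrations.
extent_m = 100_000      # 100 km study area (linear extent)
resolution_m = 30       # 30 m cell size (spatial resolution)

los = extent_m / resolution_m
print(f"LOS ratio: {los:.0f}")      # 3333, inside the 10^3 to 10^4 band
assert 1e3 <= los <= 1e4
```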

   In what follows, and in the overwhelming majority of the literature on scale, the emphasis is on the small or resolution dimension of spatial scale. The Earth's surface is infinitely complex, and in principle could be mapped down to the sub-millimeter and even molecular level. In practice, however, given our limited ability to sense, capture, and handle massive volumes of data, it is essential to reduce detail, capturing only the largest and therefore likely the most important features of any spatially distributed phenomenon. Sampling achieves this, as do processes of cartographic generalization (McMaster and Shea, 1992), aggregation, and approximation. These processes tend to remove unwanted short-wavelength detail, though it is often difficult to express the effects in formal terms as the action of a low-pass filter.

   Domain sciences such as geomorphology are concerned with the modeling of processes that occur on the physical landscape. If the spatial resolution of data is always limited, it is therefore essential in the study of any given process that the data used to model the process include all of the important detail needed for accurate modeling. If the process is significantly influenced by detail smaller than the spatial resolution of the data, then the results of analysis and modeling will clearly be misleading. Thus in addition to the spatial resolution of data, it is also important to consider the spatial resolution of processes. However, few theories of process make spatial resolution explicit, most being essentially scale-free. The Darcy equations of groundwater flow, for example, or the Navier–Stokes equations of viscous fluid motion, are expressed as partial differential equations in scale-free variables. When solved using finite-difference or finite-element methods, spatial resolution is suddenly introduced in the size of the raster cells or finite elements, and it is difficult to characterize the uncertainties associated with the inevitable information loss. Thus the researcher whose model fits reality to a level that is less than perfect, as all models must, is left not knowing whether the misfit is due to the effects of spatial resolution, or due to an imperfection in the model, or both.
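
   The point about discretization can be made concrete with a toy example. The sketch below (all parameter values hypothetical, and the equation deliberately reduced to one-dimensional diffusion rather than any of the systems named above) solves a scale-free partial differential equation by explicit finite differences; the cell size dx appears nowhere in the differential equation itself but everywhere in its numerical solution.

```python
import numpy as np

# A 1D diffusion equation, dh/dt = D * d2h/dx2, solved by explicit finite
# differences. D, dx, and the boundary values are hypothetical.
D = 1.0                      # diffusivity
dx = 10.0                    # cell size: the resolution we introduce
dt = 0.4 * dx**2 / D         # stable explicit time step (CFL-type bound)

h = np.zeros(101)            # head along a 1 km transect of 10 m cells
h[0] = 1.0                   # fixed-head boundary on the left
for _ in range(5000):
    h[1:-1] += D * dt / dx**2 * (h[2:] - 2 * h[1:-1] + h[:-2])
print(h[::20].round(3))      # head profile; detail finer than dx is simply absent
```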

   The first major section discusses scale within the framework of alternative conceptualizations and representations of spatial data, with emphasis first on discrete objects and continuous fields, and then on raster and vector data structures. This is followed by a section dealing with scale and semantics, or the effects of scale on the definitions of the terms, classes and variables that are commonly acquired and used in GIS-supported research. The third major section discusses efforts to formalize scale through concepts of fractals, Fourier analysis, and the underlying assumptions of geostatistics, followed by a section discussing the difficulties of cross-scale inference. The paper ends with a short summary of the major points.

2. Scale in GIS representations 

  It is now widely accepted that two methods exist for conceptualizing phenomena distributed over the surface of the Earth (Couclelis, 1992). The discrete object conceptualization imagines that the world is a surface like a table-top, empty except where there exist discrete, countable things. These things can overlap, and in many cases will maintain their integrity through time and when moved. This conceptualization is particularly appropriate when dealing with biological organisms, vehicles, or buildings. In practice discrete objects may be represented as points, lines, areas, or volumes depending on their size and the purposes of the representation.

   However, other phenomena present distinct problems when conceptualized in this way. Many natural features, including mountains, lakes, and rivers, have only vaguely defined limits and may be better conceptualized as continuous, in what is known as the continuous-field conceptualization. In this view phenomena are expressed as mappings from location to value or class, such that every location in space–time has exactly one value of each variable (the full four-dimensional space–time is often simplified by ignoring time, in the case of static phenomena, or ignoring the third spatial dimension, or both). Thus, for example, topography is better conceptualized as a mapping everywhere from location (x,y) to value z than as a collection of vaguely defined discrete features, and soil type is better conceptualized as a mapping from location to class c than as a collection of homogeneous and non-overlapping areas separated by infinitely thin boundaries, across which class transition is instantaneous. Many physical phenomena, from air temperature to soil moisture content and land cover class, are more often conceptualized as fields than as collections of discrete objects.

   In principle both field and object conceptualizations are independent of scale. In practice, however, their digital representations always embody scale to some degree. Both discrete-object and continuous-field conceptualizations can be represented as either raster or vector data. In the raster case, resolution is always explicit in the size of the raster cells, which are almost always square in the two-dimensional case (in the spirit of the previous comment about the impossibility of a constant representative fraction on a flat map, note that a raster cannot be laid over a curved surface, and note the existence of an extensive literature on discrete global grids (e.g., Sahr et al., 2003)). In the three-dimensional case it is common for the vertical dimension to be sampled differently from the two horizontal dimensions, leading to differential resolution. In atmospheric science, for example, the intervals between sampled heights may be quite different from the horizontal intervals between sampled profiles. Modifications to simple raster structures are sometimes made to allow for a degree of sub-cell representation, for example by recording more than one attribute per cell. But spatial resolution remains unaffected unless the within-cell locations of each attribute are also recorded, as they are for example in the hierarchical quadtree structures (Samet, 2006).

  Resolution is more difficult to define in the vector case. If data are captured at irregularly spaced points, there may be some justification for using the distance between points as a measure of resolution, but little basis for deciding whether the minimum, mean, or maximum nearest-neighbor distance should be used. When data are captured as attributes of areas, it is common to represent the geometric form of each area as a polygon, and volumes are similarly represented as polyhedra. Resolution now appears in several forms: in the willingness to represent boundaries as infinitely thin, in the density with which boundaries are sampled, in the within-area or within-volume variation that has been replaced by an assumption of homogeneity, and in the sizes of areas and volumes. Unfortunately this often creates the mistaken impression that vector data sets have infinitely fine resolution. For such maps we can state in general that at finer resolutions the numbers of areas and volumes would increase, their boundaries would be given more detail, and they would be more homogeneous. In the case of lines the conventional digital representation is as a polyline, a set of points connected by straight-line segments, and a similar issue now arises over the density of sampling of the line and its infinitesimally small width.
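
   The ambiguity over point-data resolution can be illustrated with a short sketch: for a hypothetical set of irregularly spaced sample points, the minimum, mean, and maximum nearest-neighbor distances are each a defensible, and different, measure of resolution.

```python
import numpy as np

def nearest_neighbor_stats(points):
    """Min, mean, and max nearest-neighbor distance for an (n, 2) point array.
    Brute-force O(n^2) distances; adequate for small point sets."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)        # exclude self-distances
    nn = d.min(axis=1)                 # each point's nearest neighbor
    return nn.min(), nn.mean(), nn.max()

rng = np.random.default_rng(0)
pts = rng.uniform(0, 1000, size=(200, 2))   # hypothetical survey points (m)
print(nearest_neighbor_stats(pts))          # three candidate "resolutions"
```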

   In short, vector representations leave resolution poorly defined. Moreover it is difficult if not impossible to infer resolution a posteriori from the contents of a vector data set. While the choice between raster and vector is often guided in practice by the nature of existing data, by the software available for handling the data, and by the types of analysis and modeling that are to be conducted, in principle the poor definition of resolution for vector data is a strong argument for the use of raster data in rigorous scientific research.

3. The semantics of scale 

   The power of GIS lies in its ability to transform, analyze, and manipulate geographic data. But since all geographic data must be specific to resolution, as discussed in the previous section, it follows that all such transformations, analyses, and manipulations must also be scale-specific. Consider, for example, the simple task of measuring the length of a digitized line. If the line is represented in vector form as a polyline, an easily computed estimate of length will be the sum of the straight-line segments that compose the polyline. If we assume that each sampled point lies exactly on the true line, the length of each segment will be a lower bound on the length of the true line. In general, estimates of length obtained in this way (and of perimeters of areas and surface areas of volumes) will be underestimates of the truth, by an amount that depends on the sampling density. This phenomenon has long been recognized in the fractal literature, in Mandelbrot's 1967 question “How long is the coastline of Britain?” (Mandelbrot, 1967), and the relationship between length and sampling density is often termed a Richardson Plot in recognition of the work of Lewis Fry Richardson (1960). More generally, we can state that the lengths of natural features such as coastlines cannot be defined – or measured – independently of scale. Note however that in the cases of area for polygons and volume for polyhedra, there is the potential for both under- and over-estimation of measures.
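
   A minimal sketch of this effect, using a synthetic wiggly line rather than any real coastline: the same curve measured at a coarser vertex spacing returns a systematically shorter length, exactly as the argument above predicts.

```python
import numpy as np

def polyline_length(xy):
    """Sum of the straight-line segment lengths of an (n, 2) polyline."""
    return np.linalg.norm(np.diff(xy, axis=0), axis=1).sum()

# A synthetic "coastline" sampled at two densities: the coarser sampling
# cuts corners and so underestimates the length of the true curve.
curve = lambda t: np.column_stack([t, np.sin(3 * t)])
t_fine = np.linspace(0, 10, 1001)
t_coarse = np.linspace(0, 10, 51)

print(polyline_length(curve(t_fine)))    # longer estimate
print(polyline_length(curve(t_coarse)))  # shorter estimate at coarser sampling
```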

   This same conclusion about the importance of scale in length measurement applies to a remarkable number of geographic data types. Consider the measurement of slope, which is typically done from a raster of elevations, otherwise known as a digital elevation model (DEM). A common algorithm due to Horn (1981) takes the eight cells forming a given cell's Moore (or queen's case) neighborhood, weights the eight neighbors depending on their distance from the central cell, and obtains estimates of the two components of slope. But the resulting estimates are dependent on the raster cell size, and in general larger cells will yield smaller estimates of slope. Hence again slope cannot be defined or measured independently of scale. More fundamentally, topographic surfaces are often subject to breaks of slope where derivatives are undefined (another link to the fractal literature, see for example Mandelbrot, 1982).
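
   The following sketch implements a common formulation of Horn's (1981) weighted finite-difference estimator as described above. The toy surface and the three-fold aggregation are hypothetical, but they reproduce the qualitative result that larger cells yield gentler slopes.

```python
import numpy as np

def horn_slope_deg(dem, cell_size):
    """Slope in degrees via Horn's weighted finite differences over the
    eight-cell Moore neighborhood. Interior cells only; edges left as NaN."""
    z = dem.astype(float)
    a, b, c = z[:-2, :-2], z[:-2, 1:-1], z[:-2, 2:]
    d, f = z[1:-1, :-2], z[1:-1, 2:]
    g, h, i = z[2:, :-2], z[2:, 1:-1], z[2:, 2:]
    dzdx = ((c + 2 * f + i) - (a + 2 * d + g)) / (8 * cell_size)
    dzdy = ((g + 2 * h + i) - (a + 2 * b + c)) / (8 * cell_size)
    out = np.full(z.shape, np.nan)
    out[1:-1, 1:-1] = np.degrees(np.arctan(np.hypot(dzdx, dzdy)))
    return out

# Toy comparison: aggregate a rough (hypothetical) 10 m grid to 30 m by
# 3x3 block averaging; the coarser grid yields gentler mean slopes.
rng = np.random.default_rng(1)
fine = rng.normal(0, 2, size=(120, 120))                # toy rough surface
coarse = fine.reshape(40, 3, 40, 3).mean(axis=(1, 3))   # 30 m aggregation
print(np.nanmean(horn_slope_deg(fine, 10.0)))           # steeper
print(np.nanmean(horn_slope_deg(coarse, 30.0)))         # gentler
```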

   These two examples both concern geometry, but insofar as they address the ways parameters such as length and slope are defined, they can be regarded as problems of meaning. More broadly, the meanings of many other geographic data types prove to be scale-dependent. Consider land cover, for example. At a coarse scale it may be possible to define broad categories of land cover, such as urban or oak savannah. But in both of these examples definitions break down at finer scales, as the landscape begins to fragment itself into individual buildings, gardens, and roads, or into individual oak trees and surrounding grassland. At even finer scales further fragmentation occurs, as the individual leaves of the trees or tiles of the roofs become apparent. Such issues tend to be far more obvious in the raster case, particularly when data have been obtained by remote sensing or other types of imaging, since resolution is explicit in the raster cell size. Efforts are often made to extend the range of resolutions that are valid for any given set of class definitions, by recognizing mixed pixels, for example. But even though a pixel may be regarded as split between two or more classes, the inhomogeneities of those classes will become glaringly apparent as resolution becomes finer.

  Unfortunately this essential importance of scale in geographic data is often missed by the systems that have been developed for searching and discovering geographic data sets. Such systems, variously known as data warehouses, digital libraries, or portals, give greatest prominence to the area of interest and the data type or theme. For example, in the search screen for the US Geospatial One-Stop portal (www.geodata.gov; Goodchild et al., 2007), a project initiated by the Bush Administration and designed to provide a single point of entry to the geographic data resources of the US Federal government, the user is able to specify area of interest, data type, date, and storage medium, but has no ability to specify scale despite its critical importance.

4. Formalizing scale 

   The previous section ended with a comment on the paucity of useful information about scale in geographic data sets. In part this is attributable to the lack of theory about scale, and thus to the difficulty of formalizing it, a problem that was discussed earlier in the context of vector data. The purpose of this section is to review the available frameworks for a formal discussion of scale, drawing from a variety of literatures.

   Some of the earliest efforts to deal with scale in geographic data were concerned with very practical issues. If scale is an essential parameter of any data set, and the costs of acquiring, handling, storing, and processing data are dependent on scale, then it would be helpful to have some objective bases for resolving the associated issues of GIS design. How much information is lost when data are collected at coarser resolution? What would be the benefits of collecting data at finer resolution, and would these justify the increased costs? What confidence limits can be placed on estimates of properties obtained from coarse data?

  Frolov and Maling (1969; see also Maling, 1989) analyzed the errors introduced by representing a perfectly known area in a raster of a given cell size, using simple models from the literature of geometric probability. In a subsequent paper (Goodchild, 1980), I showed that their results could be framed within the theory of fractals (Mandelbrot, 1977) given that many geographic phenomena exhibit fractal behavior (see below), allowing one to estimate exactly how much information is lost by coarsening resolution. More recently Shortridge and I (Shortridge and Goodchild, 2002) showed that such problems could be addressed by an extension of Buffon's Needle, a classic problem in geometric probability.
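
   In the same spirit, though far simpler than the geometric-probability analysis of Frolov and Maling, the sketch below estimates a perfectly known area (a unit circle) by point counting at several hypothetical cell sizes, showing how the error shrinks as resolution is refined.

```python
import numpy as np

def raster_area_estimate(cell):
    """Estimate the area of a unit-radius circle by counting the centers of
    square cells of the given size that fall inside it (point counting)."""
    xs = np.arange(-1.5 + cell / 2, 1.5, cell)
    gx, gy = np.meshgrid(xs, xs)
    inside = gx**2 + gy**2 <= 1.0
    return inside.sum() * cell**2

true_area = np.pi
for cell in (0.2, 0.05, 0.01):          # hypothetical cell sizes
    est = raster_area_estimate(cell)
    print(f"cell={cell:5.2f}  area={est:.4f}  error={est - true_area:+.4f}")
```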

   Mandelbrot's essential thesis (Mandelbrot, 1977) in advancing the theory of fractals was that the rate of information loss or gain with scale change was orderly and predictable through principles that have come to be called scaling laws. The Richardson Plot referenced in the previous section shows length plotted against resolution on double-log paper, and commonly results in a close fit to a straight line. Mandelbrot generalized this result to argue that phenomena exhibiting what he termed fractal behavior would show such power-law behavior whatever the method used to obtain the measure. In the case of length measurement this might be the spacing of a pair of dividers used to step along the line, or the size of raster cells in a raster representation of the line, or a resolution parameter from any of a host of other methods. Fractal behavior includes the property of self-similarity, meaning that any part of the feature is statistically indistinguishable from the feature as a whole. Simulations of self-similar lines and surfaces show striking resemblance to certain geomorphological features, such that exceptions to power-law behavior become scientifically interesting (Goodchild and Mark, 1987).
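
   A toy Richardson Plot can be produced in a few lines. The sketch below measures a synthetic Brownian profile (whose dimension is known to be about 1.5) at successively coarser vertex samplings and fits the power law L(s) ~ s^(1-D); this illustrates the idea rather than providing a careful divider implementation.

```python
import numpy as np

# Richardson-plot sketch: measure a rough line at coarser and coarser
# sampling intervals, then regress log(length) on log(spacing). For a
# power law L(s) ~ s^(1-D), the fitted slope gives D = 1 - slope.
rng = np.random.default_rng(2)
n = 2**14
x = np.linspace(0.0, 1.0, n)
y = rng.normal(0, 1, n).cumsum() / np.sqrt(n)   # toy Brownian profile, D near 1.5

lengths, spacings = [], []
for step in (1, 2, 4, 8, 16, 32, 64):
    xs, ys = x[::step], y[::step]
    lengths.append(np.hypot(np.diff(xs), np.diff(ys)).sum())
    spacings.append(x[step] - x[0])             # sampling interval along x

slope, _ = np.polyfit(np.log(spacings), np.log(lengths), 1)
print(f"estimated fractal dimension D = {1 - slope:.2f}")
```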

   The fractal properties of geomorphic features have been the subject of an extensive literature (Xu et al., 1993). Andrle investigated the surfaces of talus slopes (Andrle and Abrahams, 1989) and coastlines (Andrle, 1996); Mark (1984) investigated coral reefs; Klinkenberg (1994) examined regional topographies of the United States; Clarke and Schweizer (1991) measured the fractal dimensions of some natural surfaces; and Tarboton et al. (1988), La Barbera and Rosso (1989), Liu (1992) and Nikora et al. (1993) studied the fractal properties of river networks.

   Geostatistics (Goovaerts, 1997) provides another powerful framework for addressing issues of scale. One of the most widely used functions in GIS is spatial interpolation (Longley et al., 2005), the task of estimating the value of some variable z at locations x where it has not been measured, based on measured values (z1,z2,…,zn) at some set of locations (x1,x2,…,xn). In effect this proposes to refine the spatial resolution of a point data set artificially, replacing a finite number of observations with potentially an infinite number. It is used for interpolating contours between point observations, and for resampling raster or vector data to a different set of points (a different support in the terminology of geostatistics). A wide range of techniques has been proposed, varying with the set of assumptions the user is willing to make.
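
   As a concrete instance of the simpler end of that range, the following sketch implements inverse-distance weighting (IDW) on hypothetical observations; kriging, discussed next, replaces such an arbitrary weighting function with one estimated from the data themselves.

```python
import numpy as np

def idw(xy_known, z_known, xy_query, power=2.0, eps=1e-12):
    """Inverse-distance-weighted interpolation: one simple member of the
    family of spatial interpolation techniques. xy_* are (n, 2) arrays."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=-1)
    w = 1.0 / (d + eps) ** power           # nearby observations dominate
    return (w * z_known).sum(axis=1) / w.sum(axis=1)

# Hypothetical measurements at four sites, interpolated to one new point.
obs_xy = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
obs_z = np.array([10.0, 12.0, 14.0, 13.0])
print(idw(obs_xy, obs_z, np.array([[0.5, 0.5]])))
```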

   Geostatistics provides what is perhaps the technique of spatial interpolation with the strongest theoretical basis. In brief, the theory of regionalized variables proposes that values of variables z distributed over geographic space (continuous fields in the sense discussed above) are not sampled independently, but instead show strong spatial autocorrelation. This is an entirely reasonable proposition, since physical laws ensure that properties such as atmospheric temperature, soil moisture, or elevation show strong autocorrelations over short distances. Geostatistics further proposes, however, that the mathematical form of the decline of spatial autocorrelation (or the increase in variance) with distance is a general and measurable property of each field that can be estimated from a sample of data points. Armed with such a correlogram (or more commonly its close relative the variogram, depicting the increase in variance with distance) estimated from sampled values of the field, it is possible to make generalized least-squares estimates of values at points where the field was not measured, and to compute estimation variances. This last property, together with the use of the data to parametrize the correlogram, is what gives this class of techniques, generally known as Kriging, its claim to theoretical rigor.

  The correlogram or variogram allows the researcher to see the distances over which values are strongly correlated. Pairs of observations separated by such distances are partially redundant, since one partially duplicates the other. More specifically, the distance beyond which observations are not even partially redundant is termed the range of the variable, and provides an empirical estimate of the detail that can be discarded without substantial loss of information. Thus we have a rigorous way of defining the spatial scales of a variable that has interval or ratio properties and is conceptualized as a continuous field.
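
   A minimal sketch of the empirical semivariogram just described, on synthetic autocorrelated data: half the mean squared difference between pairs of observations, grouped into separation-distance bins. The lag at which the values level off approximates the range.

```python
import numpy as np

def empirical_variogram(xy, z, bin_edges):
    """Empirical semivariogram: half the mean squared difference between
    pairs of observations, grouped into lag-distance bins."""
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    sq = 0.5 * (z[:, None] - z[None, :]) ** 2
    iu = np.triu_indices(len(z), k=1)          # count each pair once
    d, sq = d[iu], sq[iu]
    gamma = []
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        mask = (d >= lo) & (d < hi)
        gamma.append(sq[mask].mean() if mask.any() else np.nan)
    return np.array(gamma)

# Hypothetical sample sites measuring a smoothly varying field plus noise;
# the semivariance rises with lag and then flattens near the range.
rng = np.random.default_rng(3)
xy = rng.uniform(0, 100, size=(300, 2))
z = np.sin(xy[:, 0] / 15) + rng.normal(0, 0.1, 300)
print(empirical_variogram(xy, z, np.arange(0, 60, 10)))
```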

Recently, Boucher et al. (2008) have shown how geostatistics can be used to support the challenging task of downscaling fields. In remote sensing, for example, the researcher is often constrained by the spatial resolution of the instrument used to create images of the Earth's surface. The 1 km resolution of the AVHRR instrument, for example, clearly limits the applicability of its imagery, since it is difficult to identify and impossible to position features that are only a fraction of 1 km across. Thus it is reasonable to ask how much information has been lost due to the coarse resolution, and what uncertainties this introduces in the results of analysis. Downscaling attempts to insert the missing detail, using properties that can be identified from the coarse imagery. In a geostatistical framework, in essence this means estimating a correlogram using the coarse data, inferring its behavior at shorter distances, and then using that inferred correlogram to generate simulated data that are consistent with the coarse imagery.
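
   The consistency constraint at the heart of downscaling can be sketched very simply. The toy below ignores the correlogram-inference step entirely and inserts plain white noise, but it enforces the key property: the simulated fine field block-averages back exactly to the coarse image.

```python
import numpy as np

rng = np.random.default_rng(6)
coarse = rng.normal(size=(16, 16))       # stand-in for coarse (e.g., 1 km) pixels
k = 4                                    # split each coarse pixel into k x k

# Simulate fine detail whose within-block mean is zero, so block-averaging
# the simulated field reproduces the coarse image exactly. (Real geostatistical
# downscaling would draw this detail from an inferred short-range correlogram.)
noise = rng.normal(scale=0.3, size=(16, k, 16, k))
noise -= noise.mean(axis=(1, 3), keepdims=True)
fine = (coarse[:, None, :, None] + noise).reshape(16 * k, 16 * k)

check = fine.reshape(16, k, 16, k).mean(axis=(1, 3))
print(np.allclose(check, coarse))        # True: consistent with the coarse image
```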

  An alternative theoretical framework to geostatistics is provided by spectral or Fourier analysis, which decomposes any field variable into its harmonic components. Spatial resolution now becomes a matter for the spectrum: variations over wavelengths less than the spatial resolution are discarded or already missing. Clarke (1988), for example, has shown how Fourier analysis can be used to discard detail in topography. However Fourier analysis has not achieved the same level of popularity in the research community as geostatistics, due perhaps to the obvious lack of regular periodicity in most geographic fields. Wavelet analysis provides an interesting extension to Fourier analysis by allowing spectral properties to vary spatially in a hierarchical fashion.
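
   A minimal sketch of this spectral view of resolution, in the spirit of Clarke (1988) though not his method: zeroing all Fourier components above a cutoff frequency discards short-wavelength detail from a gridded surface. The random test surface and the cutoff value are hypothetical.

```python
import numpy as np

def fourier_lowpass(grid, keep_fraction=0.1):
    """Remove short-wavelength detail by zeroing all 2D Fourier components
    above a cutoff (expressed as a fraction of the Nyquist frequency)."""
    F = np.fft.fft2(grid)
    fy = np.fft.fftfreq(grid.shape[0])[:, None]
    fx = np.fft.fftfreq(grid.shape[1])[None, :]
    F[np.hypot(fx, fy) > keep_fraction * 0.5] = 0.0
    return np.fft.ifft2(F).real

rng = np.random.default_rng(4)
rough = rng.normal(0, 1, size=(128, 128))        # toy "topography"
smooth = fourier_lowpass(rough, keep_fraction=0.1)
print(rough.std(), smooth.std())   # variance drops as fine detail is discarded
```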

5. Cross-scale inference 

   The effects of coarsening spatial resolution on the results of analysis have long been a topic for investigation, and recently this has expanded into a major effort by the GIS research community. Consider, for example, an analysis of the correlation of two variables that are expressed on area support, in other words, their values are both known for a set of geographic areas. In the investigation of social processes the areas might be counties, or the smaller reporting zones used by the US Bureau of the Census. In hydrology, they might be the lumped properties of small watersheds. It has been known for a long time, and it is easy to show analytically, that aggregation of the small areas into larger ones, and averaging of the variables over each aggregation, results in a stronger correlation between the variables; it tends to be no more significant, however, because the improvement in correlation is offset by the loss of degrees of freedom.

   What is less well known, however, is that significant variation can also be produced by holding spatial resolution constant, but repeating the analysis over alternative sets of areas at the same spatial resolution. Openshaw and Taylor (Openshaw, 1983) provided what has become the classic example of the modifiable areal unit problem (MAUP), by showing that reaggregation of county data for Iowa could be used to produce correlations ranging from extremely negative to extremely positive. When the 99 counties were aggregated to alternative arrangements of 12 regions, correlations between % over 65 and % registered Republican voters ranged from −0.936 to +0.996. While one might think of this issue as akin to the random variation produced by alternative samples, Openshaw and Taylor argue convincingly that the effect must be treated as systematic.
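
   The aggregation effect described above is easy to reproduce synthetically. In the sketch below (synthetic data, not the Iowa example), two variables share a spatially smooth signal buried in unit-level noise; averaging adjacent units removes the noise but preserves the signal, so the correlation strengthens steadily with the level of aggregation.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 90_000
signal = np.sin(np.arange(n) / 500.0)          # spatially autocorrelated signal
x = signal + rng.normal(scale=2.0, size=n)     # two noisy variables sharing it
y = signal + rng.normal(scale=2.0, size=n)

def corr_after_aggregation(a, b, k):
    """Correlation after averaging runs of k adjacent units."""
    return np.corrcoef(a.reshape(-1, k).mean(1), b.reshape(-1, k).mean(1))[0, 1]

for k in (1, 10, 100, 1000):
    print(f"aggregation level {k:5d}: r = {corr_after_aggregation(x, y, k):+.3f}")
```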

   Closely allied to the MAUP is the problem of cross-scale inference – of inferring behavior of a system at one scale from its observed behavior at another, coarser scale. In the extreme this is known as the ecological fallacy, which is made when a researcher assumes that correlations observed for aggregates can be transferred to the individual. King (1997) has provided a comprehensive analysis of the problem and has proposed some powerful methods for appropriate inference. To my knowledge neither the MAUP nor cross-scale inference has been investigated extensively for physical properties that are typically analyzed at the aggregate level, such as the properties of watersheds or geomorphic units.

   The problem of downscaling, of which the ecological fallacy is an extreme example, has been widely examined in the geomorphological literature, and discussed in the previous section in the context of geostatistics. Luoto and Hjort (2008) downscaled data on periglacial features using simple techniques, and compared the results to ground truth. Zhu et al. (2001) and Toomanian et al. (2006) experimented with downscaling soil properties, while Zhang et al. (1999) attempted the downscaling of slope estimates.

6. Conclusions 

  One of the effects of the development and widespread adoption of GIS has been the attention that has been devoted to issues of scale. While many naïve users, including the multitude of users who have encountered digital geographic data through Web services such as Google Earth, might assume otherwise, seasoned users of GIS know well that scale, and specifically spatial resolution, is an important factor in any application. People who think about the broader implications of their use of GIS, rather than worrying about which button to push next, are increasingly referred to as critical spatial thinkers (NRC, 2006); and much of that thinking inevitably revolves around scale.

   GIS forces these issues to the fore because it insists on formalizing geographic data and analysis, by reducing both to a series of coded, reproducible binary signals. This is often perceived as an advantage, particularly in the use of GIS by regulatory agencies, because it forces all manipulations to meet standards of replicability and accountability. On the other hand it is clear that in many cases the data input to such manipulations cannot stand up to the same level of objective, scientific scrutiny.

    The norms of science, as expressed in countless volumes (e.g. Harvey, 1969), establish certain principles that are intended to apply to the process of scientific research. The issues discussed in this paper intersect with the norms of science in several important and fundamental ways. First, spatial resolution is in practice a result of a mostly implicit analysis of the benefits of detail, versus the costs. But science provides no general guidance on this issue, and no basis on which to value science-on-the-cheap against expensive science. Second, the norms of science include replicability, and oblige a researcher to report his or her results to a level of detail sufficient to allow someone else to repeat the research. But in GIS-based research it is common for the documentation of software to fall short of this standard, and the algorithms of commercial software are often regarded as valuable intellectual property. Moreover it is common in GIS for researchers to use data created by others, without full documentation of the provenance and lineage of the data. Third, reference has already been made to the dilemma faced by a scientist who cannot determine whether the failure of a model to fit perfectly is due to the model itself, or to the coarse spatial resolution of the analysis.

  Finally, the paper has drawn attention to the fact that issues of spatial resolution span a wide range of sciences, from the social to the environmental (in fact, all sciences that deal with the surface and near-surface of the Earth), and raise similar questions in each. This creates an enormous opportunity for cross-fertilization, as researchers experiment with techniques and ideas generated in disciplines that are often far removed from their own. The last section, on cross-scale inference, suggested for example that much could be gained by transferring ideas concerning the MAUP to environmental sciences that use spatially lumped models. In the four decades since its initial development, GIS has become a common technology across many disciplines, and a basis for conversation and the transfer of ideas between them.

References 

Alexander, R., Millington, A.C. (Eds.), 2000. Vegetation Mapping: From Patch to Planet. Wiley, New York. 

Andrle, R., 1996. Complexity and scale in geomorphology: statistical self-similarity vs. characteristic scales. Mathematical Geology 28, 275–293. 

Andrle, R., Abrahams, A.D., 1989. Fractal techniques and the surface roughness of talus slopes. Earth Surface Processes and Landforms 14, 197–209.

Boucher, A., Kyriakidis, P.C., Cronkite-Ratcliff, C., 2008. Geostatistical solutions for super-resolution land cover mapping. IEEE Transactions on Geoscience and Remote Sensing 46, 272–283.

Clarke, K.C., 1988. Scale-based simulation of topographic relief. The American Cartographer 15, 173–181. 

Clarke, K.C., Schweizer, D.M., 1991. Measuring the fractal dimension of natural surfaces using a robust fractal estimator. Cartography and Geographic Information Systems 18, 37–47. 

Couclelis, H., 1992. People manipulate objects (but cultivate fields): beyond the raster vector debate in GIS. In: Frank, A.U., Campari, I. (Eds.), Theories and Methods of Spatio-Temporal Reasoning in Geographic Space: Lecture Notes in Computer Science, 639. Springer-Verlag, Berlin, pp. 65–77.

Foody, G., Curran, P. (Eds.), 1994. Environmental Remote Sensing from Regional to Global Scales. Wiley, New York. 250 pp. 

Frolov, Y.S., Maling, D.H., 1969. The accuracy of area measurements by point counting techniques. Cartographic Journal 6, 21–35.

Goodchild, M.F., 1980. Fractals and the accuracy of geographical measures. Mathematical Geology 12, 85–98. 

Goodchild, M.F., 2001. Metrics of scale in remote sensing and GIS. International Journal of Applied Earth Observation and Geoinformation 3, 114–120. 

Goodchild, M.F., Mark, D.M., 1987. The fractal nature of geographic phenomena. Annals of the Association of American Geographers 77, 265–278. 

Goodchild, M.F., Proctor, J., 1997. Scale in a digital geographic world. Geographical and Environmental Modelling 1, 5–23. 

Goodchild, M.F., Fu, P., Rich, P., 2007. Sharing geographic information: an assessment of the Geospatial One-Stop. Annals of the Association of American Geographers 97, 249–265. 

Goovaerts, P., 1997. Geostatistics for natural resources evaluation. Oxford University Press, New York. 496 pp.

Harvey, D., 1969. Explanation in Geography. Edward Arnold, London. 542 pp.

Horn, B.K.P., 1981. Hill shading and the reflectance map. Proceedings of the Institute of Electrical and Electronics Engineers 69, 14–47.

King, G., 1997. A solution to the ecological inference problem: reconstructing individual behavior from aggregate data. Princeton University Press, Princeton NJ. 

Klinkenberg, B., 1994. A review of methods used to determine the fractal dimension of linear features. Mathematical Geology 26, 23–46.

La Barbera, P., Rosso, R., 1989. On the fractal dimension of stream networks. Water Resources Research 25, 735–741. 

Lam, N., Quattrochi, D.A., 1992. On the issues of scale, resolution, and fractal analysis in the mapping sciences. Professional Geographer 44, 88–98. 

Levin, S.A., 1992. The problem of pattern and scale in ecology. Ecology 73, 1943–1967. 

Liu, T., 1992. Fractal structure and properties of stream networks. Water Resources Research 28, 2981–2988. 

Longley, P.A., Goodchild, M.F., Maguire, D.J., Rhind, D.W., 2005. Geographic Information Systems and Science, 2nd Edition. Wiley, Hoboken, NJ. 

Luoto, M., Hjort, J., 2008. Downscaling of coarse-grained geomorphological data. Earth Surface Processes and Landforms 33, 75–89.

Maling, D.H., 1989. Measurement from maps: principles and methods of cartometry. Pergamon, New York. 

Mandelbrot, B.B., 1967. How long is the coastline of Britain? Statistical self-similarity and fractional dimension. Science 156 (3775), 636–638. 

Mandelbrot, B.B., 1977. Fractals: Form, Chance and Dimension. Freeman, San Francisco. 

Mandelbrot, B.B., 1982. The fractal geometry of nature. Freeman, San Francisco. 

Mark, D.M., 1984. Fractal dimension of a coral reef at ecological scales: a discussion. Marine Ecology Progress Series 14, 293–294. 

McMaster, R.B., Shea, K.S., 1992. Generalization in digital cartography. Association of American Geographers, Washington, DC.

National Research Council (NRC), 2006. Learning to Think Spatially: GIS as a Support System in the K-12 Curriculum. National Academies Press, Washington, DC. 

Nikora, V.I., Sapozhnikov, V.B., Noever, D.A., 1993. Fractal geometry of individual river channels and its computer simulation. Water Resources Research 29, 3561–3568.

Openshaw, S., 1983. The modifiable areal unit problem. Geobooks, Norwich, UK. 

Quattrochi, D.A., Goodchild, M.F. (Eds.), 1997. Scale in Remote Sensing and GIS. CRC Press, Boca Raton, FL.

Richardson, L.F., 1960. Statistics of deadly quarrels. Boxwood, Pittsburgh.

Sahr, K., White, D., Kimerling, A.J., 2003. Geodesic discrete global grid systems. Cartography and Geographic Information Science 30, 121–134. 

Samet, H., 2006. Foundations of multidimensional and metric data structures. Morgan Kaufmann, San Francisco. 

Sheppard, E., McMaster, R.B. (Eds.), 2004. Scale and Geographic Inquiry: Nature, Society, and Method. Blackwell, Malden, MA. 

Shortridge, A.M., Goodchild, M.F., 2002. Geometric probability and GIS: some applications for the statistics of intersections. International Journal of Geographical Information Science 16, 227–243.

Tarboton, D.G., Bras, R.L., Rodriguez-Iturbe, I., 1988. The fractal nature of river networks. Water Resources Research 24, 1317–1322. 

Tate, N.J., Atkinson, P.M. (Eds.), 2001. Modelling Scale in Geographical Information Science. Wiley, New York. 292 pp. 

Toomanian, N., Jalalian, A., Khademi, H., Eghbal, M.K., Papritz, A., 2006. Pedodiversity and pedogenesis in Zayandeh-rud Valley, Central Iran. Geomorphology 81, 376–393.

Xu, T., Moore, I.D., Gallant, J.C., 1993. Fractals, fractal dimensions and landscapes—a review. Geomorphology 8, 245–262.

Zhang, X., Drake, N.A., Wainwright, J., Mulligan, M., 1999. Comparison of slope estimates from low resolution DEMs: scaling issues and a fractal method for their solution. Earth Surface Processes and Landforms 24, 763–779. 

Zhu, A.X., Hudson, B., Burt, J., Lubich, K., Simonson, D., 2001. Soil mapping using GIS, expert knowledge, and fuzzy logic. Soil Science Society of America Journal 65, 1463–1472.

