Review

Panoramic Street-Level Imagery in Data-Driven Urban Research: A Comprehensive Global Review of Applications, Techniques, and Practical Considerations

Jonathan Cinnamon 1,* and Lindi Jahiu 2
1 Department of Community, Culture and Global Studies, University of British Columbia Okanagan, 3333 University Way, Kelowna, BC V1V 1V7, Canada
2 Department of Geography and Environmental Studies, Ryerson University, 350 Victoria St, Toronto, ON M5B 2K3, Canada
* Author to whom correspondence should be addressed.
ISPRS Int. J. Geo-Inf. 2021, 10(7), 471; https://doi.org/10.3390/ijgi10070471
Submission received: 22 June 2021 / Revised: 7 July 2021 / Accepted: 8 July 2021 / Published: 9 July 2021

Abstract

The release of Google Street View in 2007 inspired several new panoramic street-level imagery platforms including Apple Look Around, Bing StreetSide, Baidu Total View, Tencent Street View, Naver Street View, and Yandex Panorama. The ever-increasing global capture of cities in 360° provides considerable new opportunities for data-driven urban research. This paper provides the first comprehensive, state-of-the-art review on the use of street-level imagery for urban analysis in five research areas: built environment and land use; health and wellbeing; natural environment; urban modelling and demographic surveillance; and area quality and reputation. Panoramic street-level imagery provides advantages in comparison to remotely sensed imagery and conventional urban data sources, whether manual, automated, or machine learning data extraction techniques are applied. Key advantages include low-cost, rapid, high-resolution, and wide-scale data capture, enhanced safety through remote presence, and a unique pedestrian/vehicle point of view for analyzing cities at the scale and perspective in which they are experienced. However, several limitations are evident, including limited ability to capture attribute information, unreliability for temporal analyses, limited use for depth and distance analyses, and the role of corporations as image-data gatekeepers. Findings provide detailed insight for those interested in using panoramic street-level imagery for urban research.

1. Introduction

As a merger of cartographic and photographic forms of representation, the addition of the Street View platform to Google’s stable of geolocation applications (also including Maps and Earth) in 2007 not only introduced interactive geolocated panoramas to the masses [1,2], it also advanced the use of street-level imagery for research purposes. Although Google Street View coverage is patchy and incomplete overall, many cities are now included and regularly updated on the platform, particularly those in the global North [3]. Following Google Street View, other digital platforms have released street-level panoramic imagery products including Microsoft Bing StreetSide (US and select European cities), Apple Look Around (select US and international cities), Baidu Total View and Tencent Street View (Chinese cities), Kakao/Daum Road View and Naver Street View (South Korea), and Yandex (Russia and some Eastern European countries), as well as the corporate-owned crowdsourced platforms KartaView (formerly OpenStreetCam, operated by Telenav) and Mapillary (recently acquired by Facebook). Viewers of these platforms can navigate between images captured at regularly defined intervals along streets, virtually experiencing a diversity of urban streetscapes, built environments, and human activity from an eye-level, 360° perspective. While the interactive, immersive potential of street-level imagery platforms draws in users for informational, educational, and experiential purposes, researchers around the world are drawn to their vast repositories of panoramic imagery as a source of urban big data [4]. This paper provides the first comprehensive, global, state-of-the-art review of literature on the use of panoramic street-level imagery for data-driven urban research.
Corporate street-level imagery platforms typically provide access to panoramic images and their associated metadata (e.g., geographic coordinates, timestamp) for free or low cost to researchers. As global coverage has increased in recent years, researchers have developed both manual and computational techniques to analyze and extract location-based information from street-level images for a diverse array of urban research topics—from access to greenspace [5] and estimation of pedestrian volumes [6], to perceptions of safety [7], car ownership patterns [8], the relationship between physical attributes of homes and crime risk [9], and between street sign linguistics and area-based socioeconomic status [10]. Considerable recent growth in the analysis of street-level panoramic imagery aligns with a surging interest in ‘proximate sensing’—in which urban datasets are derived from images captured at high resolution near to the subject matter—an exciting prospect in the era of data-driven urban analytics.
Despite the rapid growth in use of street-level imagery for research on cities around the world, the few attempts to assess the scope of this research area have focused on Google Street View only, and on narrowly defined application domains (e.g., in health research, see [11,12]). This paper addresses this knowledge gap through a comprehensive global review of research using street-level imagery from corporate panoramic street view platforms from around the world. The following overall research question guides this study: how is imagery from panoramic street-level image platforms used in data-driven urban research? To answer this question, we systematically review literature on the use of street-level imagery for research on cities, focusing on identifying the imagery platforms used, research application areas, methods of data extraction and analysis, and benefits and limitations. Key findings are summarized, highlighting the potential value of panoramic imagery for a wide range of urban research applications. Notably, this review identifies an increasing interest in the use of street-level imagery as a source of data on urban research problems via both manual ‘virtual audit’ and survey approaches applied to small image samples, as well as computationally driven visual analytics and machine learning techniques applied to massive image datasets. The considerable advantages are described in detail, as well as the limitations that researchers should consider before using street-level imagery in urban research projects. Finally, we summarize new advances in panoramic imaging technology poised to further advance the collection and analysis of street-level images.

2. Methods

An English-language literature search was undertaken to capture academic literature on the use of panoramic street-level imagery for urban research published in the 14-year period between 1 January 2007 and 31 December 2020 (from the US release of Google Street View to the time of review completion). This paper follows the scoping review methodology, a comprehensive literature review method undertaken to summarize the scope and extent of an area of knowledge [13]. As a knowledge synthesis approach, scoping reviews provide a means to rapidly consolidate knowledge about an area of research or practice, and are especially useful for delineating the contours, themes, concepts, and key issues pertaining to new and emerging topics, with the objective of shaping future research priorities [14]. Although conducted similarly to systematic reviews, scoping reviews differ in that they aim to summarize the state of a research domain through more expansive inclusion criteria, rather than answer narrow research questions [15]. As such, scoping reviews are ideal for synthesizing an area of research defined by heterogeneity with regard to research design, methods, and application domains [16]. Three academic databases were used to access published academic research from journals, conference proceedings, and university theses: Google Scholar, Web of Science, and ProQuest Dissertations and Theses. Further, reference lists of key articles were hand searched to identify relevant citations not acquired through the database searches.
Guided by the project research question and based on a preliminary scan of the published literature, the research team identified five key types of information to extract from each item in the scoping review sample:
  • Urban research application area.
  • Source of panoramic imagery.
  • Methods for extracting and analyzing data from the imagery.
  • Benefits and advantages of panoramic imagery.
  • Barriers and limitations of panoramic imagery.
Since the overall aim of the review was to understand how images from street-level imagery platforms are used across a wide variety of urban research topics, an assessment of research design quality or consistency of outcomes across studies was not appropriate and, therefore, was not conducted. The final list of literature search terms used is provided in Table 1. The searches combined one or more terms from Group A with one or more terms from Group B, in an iterative process and with various truncated versions applied.
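To make the combination strategy concrete, the short sketch below generates candidate query strings from two term groups. The terms shown are hypothetical stand-ins only; the actual search terms are those listed in Table 1.

```python
from itertools import product

# Hypothetical stand-ins for the Table 1 term groups; the actual
# search terms are listed in Table 1 of this paper.
group_a = ["street view", "street-level imagery", "panoramic imag*"]  # imagery terms (* = truncation)
group_b = ["urban", "built environment", "neighbourhood"]             # application terms

# Each search combined one or more Group A terms with one or more
# Group B terms, iterating over truncated variants.
queries = [f'"{a}" AND "{b}"' for a, b in product(group_a, group_b)]
for q in queries:
    print(q)
```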
Several exclusion criteria were applied to refine the scope of the review. Articles were excluded if the study used conventional (non-panoramic) street-level images; for instance, studies based on imagery captured from a dashcam and made available on the crowdsourced KartaView or Mapillary platforms. Articles were also excluded if the main focus was not an application to an urban research topic. Most studies were based on imagery of urban settings, since coverage is most complete in cities; however, those with a primarily methodological focus were excluded, including studies centred on the development of data extraction algorithms, image processing or enhancement, and the assessment of imagery completeness across different platforms. Some studies were identified that reported on the use of panoramic imagery platforms primarily for immersive experience, orientation, and spatial cognition purposes rather than the extraction of data, and were thus excluded from the review. Further exclusions were applied to articles in which panoramic imagery was produced for the project rather than accessed from an existing street-level imagery platform, and when imagery was used solely for illustrative purposes. Both authors reviewed all articles included in the final scoping review sample. In a small number of cases the reviewers disagreed on whether to include an article in the sample, and the final decision to include or exclude was resolved through discussion.

3. Results

The literature search produced a total of 234 articles that fit the inclusion criteria. The majority were published in academic journals, followed by conference proceedings and university dissertations or theses. The final scoping review sample reveals a wide variety of disciplines in which this imagery has been used—including across the health and social sciences, and in urban planning, environmental studies, and computer science. Figure 1 illustrates the year of study publication, highlighting a progressive increase in the use of this imagery in the latter years of the review period. Note that no studies included in this review were published in 2007 or 2008. Table 2 highlights the findings for a selection of 10 articles representing the various areas of urban research and imagery platforms used. The full list of publications reviewed is available at this link: https://tinyurl.com/panoramic-images-review (accessed on 8 June 2021).

3.1. Study Locations and Street-Level Imagery Platforms Used

The scoping review identified seven panoramic street-level imagery platforms used for urban research globally. Google Street View provided the source of imagery in a majority of the studies (166, 71%). The second and third most popular sources of imagery were Baidu Maps Total View (40, 17%) and Tencent Street View (23, 10%), both of which have comprehensive coverage of cities in China only. Bing StreetSide from Microsoft—available primarily in US cities—was used in a small number of studies (5). Two South Korean platforms were used in a handful of studies: Kakao/Daum Road View (2) and Naver Street View (5). Three studies used imagery from Yandex Panorama, the Russian platform available in Russian and some Eastern European cities.
A majority of publications focused on cities in the USA (87, 37% of studies) and China (69, 29%), reflecting the wide coverage of Google Street View and Baidu Total View/Tencent Street View in those countries, respectively. European cities (primarily in the UK and France) accounted for 64 (27%) studies. A small number were conducted in Canada (7, 3%), New Zealand (6, 3%), Australia (5, 2%), and South Korea (5, 2%). No studies were from cities in Africa, likely due to the very limited coverage in any platform (apart from Google Street View in South Africa). Limited literature in the sample from South America (4 studies across Chile and Brazil) is likely due to both the exclusion of articles not published in English and the limited availability of imagery there.

3.2. Techniques Used for Data Extraction and Analysis

Most of the studies included in the scoping review used street-level imagery as a source of evidence for the presence or absence of various urban features or phenomena, based on the premise that data can be extracted from the imagery using either a manual or computational approach, and that the imagery is a good representation of urban space. Studies extracted both quantitative information (e.g., presence, absence, quantity of a given attribute) as well as qualitative information (e.g., interpretations, subjective scene ratings). Temporal change analyses were conducted in a small number of studies based on a systematic analysis of imagery captured at different time periods (available in some platforms including Google Street View and Tencent Street View). In many studies, data extracted from the imagery were then analyzed in conjunction with additional datasets and evidence to support urban theories or models, from health behaviours to crime prevention (see application areas below). In most cases the urban environments investigated by the researchers had pre-determined parameters (e.g., specific locations of interest, features or phenomena to record), with the rare exception of studies that chose to randomize these parameters [27].
A total of 162 (69%) studies used manual methods of data extraction, largely undertaken as a form of ‘virtual audit’ or survey. In such cases, imagery was analyzed either directly in the fully 360° environment of the street-level imagery platform—including the ‘drop-and-spin’ method [28]—or by first obtaining imagery through the platform’s application programming interface (API) and then viewing it in digital imaging software. Many studies used custom virtual audit techniques based on a predefined checklist or survey instrument to record the object, feature, or phenomena of interest. A number of established virtual auditing tools adapted for panoramic image scenes were used across multiple studies, including FASTVIEW [29], CANVAS [30], and SPACES [31]. These studies largely relied on trained members of the research team to conduct virtual audits, in some cases also evaluating inter-rater reliability [32]. A further subset of studies developed crowdsourced research designs in which distributed participants were provided training to carry out audits directly via the Web platform’s public interface [33].
The review identified a continued growth in the use of computational methods for analyzing street-level images, accounting for 72 (31%) studies. Many such studies aimed to go beyond the descriptive and inventorying objectives of studies based on manual methods, analyzing this imagery for the purposes of more advanced feature recognition, segmentation, modelling, and inference. In most such studies, computational methods were applied to images accessed in bulk through platform APIs, enabling analysis of vast amounts of imagery using statistical modelling and data science methodologies. Computational techniques applied to street-level imagery include a variety of automated and algorithmic methods, in which imagery is analyzed and processed under researcher-supervised conditions. A number of studies also utilized artificial intelligence (AI) and machine learning (ML) methods, particularly ‘computer vision’ feature recognition and image segmentation techniques based on convolutional neural networks (CNNs) designed to recognize, extract, categorize, and quantify visual information with lower levels of human oversight [34,35]. Some studies accessed street-level imagery from large pre-assembled datasets designed for computational analysis, including the Place Pulse dataset built from images extracted from the Google Street View API [36,37]. To train and test computer vision models, further urban imagery datasets containing pre-labelled and segmented imagery are often used, such as the CityScapes [38] and ADE20K [39] datasets.
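As a concrete illustration of the scene segmentation workflow described above, the minimal sketch below computes per-class pixel fractions for a single street-level image using an off-the-shelf pretrained model. It is not a reproduction of any reviewed study: it assumes a torchvision DeepLabV3 model, whose label set (Pascal VOC-style categories such as person, car, and bus) differs from the CityScapes and ADE20K classes used in much of the reviewed literature, and a hypothetical local file street_view.jpg.

```python
import torch
from PIL import Image
from torchvision.models.segmentation import (
    deeplabv3_resnet50, DeepLabV3_ResNet50_Weights,
)

# Pretrained segmentation model (VOC-style labels, not CityScapes/ADE20K).
weights = DeepLabV3_ResNet50_Weights.DEFAULT
model = deeplabv3_resnet50(weights=weights).eval()
preprocess = weights.transforms()

img = Image.open("street_view.jpg").convert("RGB")  # e.g., an API-downloaded image
batch = preprocess(img).unsqueeze(0)

with torch.no_grad():
    logits = model(batch)["out"]           # [1, num_classes, H, W]
labels = logits.argmax(dim=1).squeeze(0)   # per-pixel class index

# Report the share of the scene occupied by each recognized class.
for idx, name in enumerate(weights.meta["categories"]):
    frac = (labels == idx).float().mean().item()
    if frac > 0.01:
        print(f"{name}: {frac:.1%}")
```

Element-level studies typically aggregate such per-image class fractions across many sampling points along the street network.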

3.3. Urban Research Areas

The findings of the scoping review reveal a wide range of applications for panoramic street-level imagery in urban research. We organize this heterogeneity into five overarching thematic categories: built environment and land use; health and wellbeing; natural environment; urban modelling and demographic surveillance; and area quality and reputation. As illustrated in Figure 2, the two most common applications of street-level imagery were research on built environment/land use and health/wellbeing, with 91 studies in each category. In general, across all application areas, the objective of most studies was the acquisition of data based on individual aspects of the image (objects, features, or phenomena) or the whole image itself, what Kang et al. [12] refer to as ‘element’ level and ‘scene’ level observation, respectively.

3.3.1. Built Environment and Land Use

With complete or near-complete coverage in many global cities, analysis of the built environment is perhaps the most ready-made application area for street-level imagery. With regard to manual techniques to identify built environment features, Hara et al. [40] used a crowdsourced, manual audit approach in which recruited participants identified the presence of accessibility features such as curb ramps in Google Street View imagery. Plascak et al. [41] produced a high-resolution map of New Jersey’s sidewalk conditions using a manual virtual survey approach based on the CANVAS auditing tool. The Russian-owned platform Yandex Panorama has been used in a small number of instances to manually audit built environments, including to identify the locations and aesthetics of ‘third-wave’ coffeeshops in Istanbul [42], and to evaluate building quality for seismic risk assessments in Sochi [43]. More commonly, however, street-level imagery is analyzed computationally to detect built environment features such as street infrastructure [44,45], as part of a wider area of research applying computer vision methods to urban design and built environment problems. As an example of how features recognized in imagery are used as a basis to explain social processes, Jiang et al. [46] analyzed imagery from Baidu Street View in a CNN model to detect traffic signs in two Chinese cities, enabling the linkage of sign placement in the built environment with the presence of ‘traffic violation-prone’ areas.
Regarding urban land use, analyses of street-level imagery are often presented as a complement or alternative to land use classification using conventional data sources, particularly remotely sensed data (e.g., satellite imagery). Given the fully panoramic horizontal field of view captured at the level of human interaction with the city, street-level imagery offers an alternative viewpoint for land use analysis which may afford a more detailed classification of urban land use types—which can sometimes be difficult to differentiate from the aerial nadir perspective [47]. Land use applications of street-level imagery typically require classification methods different from those conventionally used in remote sensing given the absence of multiple spectral bands (only RGB bands are available), so much of the literature utilizes the array of machine learning techniques known as computer vision. In essence, these approaches undertake feature recognition and classification based on geometric and topological properties rather than spectral signatures, and, thus, can be applied to both remote and street-level imagery. Cao et al. [48] document a process for land use classification at pixel level from both viewpoints using a scene segmentation neural network for New York City, illustrating the potential of higher accuracy for classifying socioeconomic land use categories compared to aerial imagery alone (see also an analysis of house prices [34] and patterns of gentrification [49]). Notably, this method relies on a spatial interpolation technique to estimate the pixel values between street-level images (since they are captured intermittently), which adds an additional element of uncertainty when fusing remote and ‘proximate’ sensing datasets. Srivastava et al. [50] developed an urban land use detection approach that fused aerial remote sensing imagery from Google Maps with street-level imagery from Google Street View to detect element-level land use features, using a CNN model trained on a crowdsource-labelled OpenStreetMap dataset. Illustrating the added value of the horizontal viewpoint of street-level imagery, Li et al. [19] analyzed images from three US cities to classify residential land uses at the block level, based on detailed building facades as opposed to the more generic rooftops analyzed in aerial imagery classification. The results produced land use maps at the individual block level, which represents a scalar improvement over conventional neighborhood-level land use classification.
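Because street-level images are captured intermittently along the road network, values observed at capture points must be interpolated onto a continuous surface before they can be fused with aerial data. The sketch below uses generic inverse-distance weighting as a stand-in for this step; Cao et al. [48] describe their own interpolation scheme, which is not reproduced here, and the scores and coordinates are hypothetical.

```python
import numpy as np

def idw_interpolate(sample_xy, sample_vals, query_xy, power=2.0, eps=1e-9):
    """Inverse-distance-weighted estimates at query points from values
    observed at sparse street-level capture points."""
    d = np.linalg.norm(query_xy[:, None, :] - sample_xy[None, :, :], axis=2)
    w = 1.0 / (d ** power + eps)          # nearer samples weigh more
    return (w @ sample_vals) / w.sum(axis=1)

# Hypothetical per-image class scores at four capture points 50 m apart,
# interpolated at three midpoints between them.
pts = np.array([[0, 0], [50, 0], [100, 0], [150, 0]], dtype=float)
vals = np.array([0.20, 0.35, 0.30, 0.10])
grid = np.array([[25, 0], [75, 0], [125, 0]], dtype=float)
print(idw_interpolate(pts, vals, grid))
```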

3.3.2. Health and Wellbeing

Some of the earliest analysis of street-level imagery was for urban health and wellbeing studies, and this now represents an established study design in the field, as evidenced by recent reviews [11,12]. Urban health applications are situated in an understanding of health and wellbeing as shaped by social factors and the places in which we live, work, and play—known as the social and environmental determinants of health [51]. Most analyses focus on associating urban natural- and built-environment characteristics with observed health patterns. In the majority of studies, the aim is to examine the influence of potential exposures (e.g., to local environment features) on health and wellbeing, rather than to extract evidence of actual health conditions or health behaviours of individuals or groups captured in the imagery. Notable exceptions include a study of alcohol in urban streetscapes in Wellington, New Zealand, which audited ‘visible alcohol consumption’ [52], and a small number of studies of pedestrian behaviours and volumes [53]. This relationship between health, society, and place has been examined through street-level imagery across a wide range of physical and mental health conditions. A large number of studies have examined the relationship between urban design and health, including between the presence of physical activity infrastructure and rates of obesity [54,55] and mental wellbeing [56], and between street characteristics/infrastructure and walking behaviours [57,58,59], pedestrian injury [30,60], and cycling safety [23,61]. The relationship between exposure to urban natural environments and a range of health outcomes is a further substantial research focus (e.g., [62,63]), as well as between natural environments and health risk factors including stress [64]. Based conceptually on notions of salutogenesis (exposure to factors that enhance rather than impair health), a number of studies also explored the impact of urban natural environments in promoting health and a wider sense of wellbeing [65,66].
From a risk identification and mitigation standpoint, studies analyze this imagery to identify areas or individual elements of the built environment that could be targeted for intervention. Examples of this approach include a manual audit of Google Street View to identify significantly greater ‘obesogenic advertising’ than other forms of sign advertising within an 800m radius of schools in Auckland [67]. Nguyen et al. [68] analyzed 164 million Google Street View images in a machine learning model to predict areas at elevated risk of COVID-19 based on recognition of built environment features thought to be associated with the elevated virus risk, including non-single family homes, dilapidated structures, and visible wires.

3.3.3. Natural Environment

The review identified a considerable focus on the use of street-level imagery for scene-level analyses of urban greenery, blue spaces, and open space using computational techniques. In a study of Beijing, a machine learning model (the FCN8 CNN) was trained against labelled imagery from the ADE20K dataset and applied to Tencent Street View images to recognize blue and green space [69]. Also in Beijing, images from the Tencent Street View API were used to calculate the Green View Index (a measure of green in images as an indicator of vegetation presence) using an automated scene segmentation algorithm based on SegNet, a pixel-level CNN for semantic segmentation [70]. At the element level of analysis, researchers use this imagery to recognize, categorize, and quantify individual natural environment features within the image scene. To answer the question of whether virtual audits can replace in-situ visits by field tree surveyors, volunteers with varying levels of experience were recruited to manually inventory street trees present in Google Street View images of suburban Chicago [71]. Findings indicated that such approaches can replace in-person surveys for basic information such as tree location but are less likely to be a replacement if tree species or diameter estimations are required. In many studies focused on natural environments, researchers also attempted to identify a correlation with health outcomes, mobility, or socioeconomic status, illustrating considerable crossover between this area of urban research and the other thematic categories used for this review. Notably, researchers have analyzed street-level imagery to make spatial associations between the presence of greenery and various social and structural conditions, including active transportation behaviours [62,72], pollution [73], gentrification [74], and housing prices [75,76,77], as well as the relationship between the availability of open space and poverty [78].
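At each capture location, the Green View Index calculations described above reduce to the share of vegetation pixels in the segmented scene, averaged over the headings sampled from the panorama. A minimal sketch follows; the vegetation label value is a placeholder, since it depends on the label set of the segmentation model used (e.g., SegNet in [70]).

```python
import numpy as np

def green_view_index(masks, vegetation_label=1):
    """Mean fraction of vegetation pixels across the segmentation masks
    captured at one location (one mask per sampled heading).
    The default label of 1 is a placeholder; the real value depends on
    the segmentation model's label set."""
    return float(np.mean([(m == vegetation_label).mean() for m in masks]))

# Toy example: two 4x4 masks where label 1 marks vegetation.
m1 = np.array([[1, 1, 0, 0]] * 4)  # 50% vegetation
m2 = np.array([[1, 0, 0, 0]] * 4)  # 25% vegetation
print(green_view_index([m1, m2]))  # 0.375
```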

3.3.4. Urban Modelling and Demographic Surveillance

This research category includes studies that applied street-level imagery in large-scale computational modelling of urban environments or populations. In this category, the objective of most studies was to examine the potential of this imagery as an alternative or complement to computer-generated 3D media and conventional forms of demographic indicator data. Through applying a variety of computational methods, researchers have used street-level imagery to reconstruct urban scenes and scene elements, particularly for use in urban planning. With the aid of Google Street View imagery, Takizawa and Kinugawa [79] reconstructed 3D cityscapes, and Cetiner [80] focused on a specific element of the urban environment, the modelling of bridges. A few studies have examined the characteristics of ‘urban canyons’; this includes a study that used Baidu imagery to estimate daily sun duration at street level, and another that used Google Street View imagery to model the pedestrian experience in an urban canyon [81]. Across these studies, street-level imagery provided an alternative to methods requiring more costly 3D-modelled reconstructions.
With regard to the use of street-level imagery to infer socioeconomic and demographic characteristics of cities, a number of studies illustrate how it may serve as a proxy for administrative datasets for comprehensive health and demographic surveillance. Suel et al. [82] analyzed over a million Google Street View images in a neural network to categorize areas of London according to common measures of socio-demographic inequality. In comparison to conventional sources of such data (e.g., census), the predicted results demonstrated considerable alignment with observed data, suggesting a new big data source for area-level demographic surveillance. Similarly, a study by Gebru and colleagues [83] used a deep learning model to identify car year and make information from 50 million Google Street View images across 200 US cities, demonstrating how neighbourhood-level socioeconomic status could be inferred at a large scale, with considerable accuracy and time and cost efficiency (see also [84]). In another study, a CNN was trained to predict neighbourhood-level income brackets based on Google Street View images of Oakland, aiming to answer the question “what observable features predispose a locale to low or high poverty levels?” [85].
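The socioeconomic inference studies above typically fine-tune a pretrained CNN to map street scenes to area-level indicators. The sketch below shows one generic transfer-learning setup for this kind of task; the backbone and the number of income brackets are illustrative assumptions, not the architectures used in [82,83,85].

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18, ResNet18_Weights

NUM_BRACKETS = 4  # hypothetical number of neighbourhood income brackets

# Replace the classifier head of a pretrained backbone.
model = resnet18(weights=ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, NUM_BRACKETS)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def train_step(images, bracket_labels):
    """images: [N, 3, 224, 224] float tensor; bracket_labels: [N] int64."""
    optimizer.zero_grad()
    loss = criterion(model(images), bracket_labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# One step on random data, just to show the tensor shapes involved.
loss = train_step(torch.randn(8, 3, 224, 224),
                  torch.randint(0, NUM_BRACKETS, (8,)))
print(f"batch loss: {loss:.3f}")
```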

3.3.5. Area Quality and Reputation

Street-level imagery is also analyzed to assess the quality or reputation of urban areas based on both subjective human perceptions as well as computational approaches that seek to provide objective comparative assessments. As part of a larger research focus on so-called ‘neighbourhood effects’ linking social outcomes to urban spatial characteristics [86], area quality research using street-level imagery is often based on inferring the reputation of an area through the presence of particular elements within imagery, associating these features with phenomena such as crime, inequality, and ‘anti-social behavior’. In general, this research analyzes place characteristics via street-level imagery deductively in an attempt to explain existing urban social phenomena (see [87]). Two distinct objectives can be discerned—analyses aimed at characterizing ‘risky’ areas (especially related to crime, disorder, and environmental hazards), and analyses aimed at characterizing ‘livable’ urban environments. Regarding risky urban spaces, researchers have analyzed street-level imagery to identify scenes and individual elements associated with criminal activity, often drawing from established theories in environmental criminology such as broken windows theory [88,89]. For instance, Langton and Steenbeek [9] developed a method to analyze burglary susceptibility through manually auditing images of residential properties in Google Street View. Comparing findings with local crime statistics, results suggested that certain characteristics of properties (e.g., ease of escape, extent to which it is closed to surveillance from neighbours) are associated with increased risk of burglary, allowing for more local-scale (individual properties) risk assessments compared with studies identifying risk as a function of neighbourhood-level wealth.
Computational methods are comparatively less common in this research area due to the more subjective nature of assessing area quality and reputation. A significant attempt to examine subjective perceptions of urban space at a large scale was the Place Pulse project at MIT (https://www.media.mit.edu/projects/place-pulse-new/overview/) (accessed on 8 June 2021). Place Pulse assembled a large database of images from Google Street View and enrolled members of the public to compare images according to perceptions of wealth, safety, liveliness, and so on. The resulting dataset of labelled and categorized images has been used to train deep learning models to recognize urban quality of life indicators embedded in urban environments, including research that used this dataset to create the Streetscore measure of urban safety for the US [37], and to identify six quality of life indicators (safe, lively, beautiful, wealthy, boring, depressing) globally [90]. Similarly, Choiri [91] crowdsourced the labelling of 800 street-level images of Amsterdam for perceptions of ‘urban attractiveness’ and used the labelled images to train a CNN model enabling the automated identification of attractive areas in much larger imagery datasets. Several studies used Tencent Street View images to assess the ‘visual quality’ of streetscapes in Chinese cities. This includes Tang and Long’s study [92] of the historic Hutong areas of Beijing, which accessed imagery captured between 2012 and 2016 from Tencent’s ‘Time Machine’ feature, illustrating the potential for temporal analyses with street-level imagery (see also [93]).
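Perception projects such as Place Pulse collect pairwise votes (e.g., “which image looks safer?”) rather than absolute ratings, so a ranking step is needed before scores can be used as training labels. The Elo-style update below is one simple way to convert votes into per-image scores; it is a generic stand-in, not the ranking method used by the Place Pulse team, and the image names are hypothetical.

```python
from collections import defaultdict

def elo_scores(comparisons, k=32, base=1500.0):
    """Turn pairwise (winner, loser) votes into per-image scores."""
    score = defaultdict(lambda: base)
    for winner, loser in comparisons:
        # Winner's expected probability of winning this comparison.
        expected = 1.0 / (1.0 + 10 ** ((score[loser] - score[winner]) / 400))
        score[winner] += k * (1.0 - expected)
        score[loser] -= k * (1.0 - expected)
    return dict(score)

votes = [("img_a", "img_b"), ("img_a", "img_c"), ("img_c", "img_b")]
print(elo_scores(votes))  # higher score = perceived safer in this toy example
```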

3.4. Advantages and Benefits of Street-Level Imagery in Urban Research

A majority of studies explicitly considered the benefits of street-level imagery, and in many cases its added value over conventional urban data sources. Notably, however, more recent studies were less likely to explicitly indicate advantages and benefits, which, along with the substantial increase in published studies in recent years, suggests a maturation of street-level imagery as a recognized data source for urban research. From analyzing such statements, this review identified several key benefits and advantages of street-level imagery in two distinct but related areas: (1) research design, and (2) knowledge production.

3.4.1. Research Design

In studies based on manual data extraction, many researchers noted that virtually auditing urban streetscapes enables rapid data collection, across a growing number of cities around the globe covered by street-level imagery, and at a cost deemed considerably lower than in-person visits [94,95,96]. Chang et al. [97] praised Baidu imagery due to its low cost, its coverage of 95% of Chinese cities (three million kilometres of streetscapes), and regular updates which enable spatial and temporal analysis. Since street-level imagery is generally one component of a digital mapping platform, researchers also explicitly identified the ability to extract precise geographic coordinates through the API as a key advantage over other sources of imagery [45,98,99], such as Flickr, which may not include location metadata. Interestingly, however, no studies explicitly detailed the cost of accessing imagery in bulk via an API, although costs are likely to be quite low for all but the largest-scale image acquisitions. The Google Street View Static API currently charges just $5.60 USD per 1000 panoramic images (when up to 500,000 are accessed) via its ‘pay-as-you-go’ pricing model, which also provides access to image metadata including geographic coordinates and timestamp. Familiarity with street-level imagery platforms was identified as a benefit for studies employing a crowdsourced research design, since participants would not need explicit training in their use [40]. Similarly, researchers noted the potential for enhanced researcher/participant comfort and safety through the ‘remote presence’ of virtual audits, which may be particularly valued when auditing areas deemed risky or dangerous [100].
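For illustration, the sketch below requests one image and its metadata from the Google Street View Static API discussed above. The coordinates and key are placeholders; note that metadata requests (which return the capture date and panorama ID) were free of charge at the time of writing, while image requests are billed as described above.

```python
import requests

API_KEY = "YOUR_API_KEY"        # placeholder Google Maps Platform key
LOCATION = "51.5033,-0.1195"    # placeholder lat,lng

# Metadata: capture date, panorama ID, and snapped coordinates (free requests).
meta = requests.get(
    "https://maps.googleapis.com/maps/api/streetview/metadata",
    params={"location": LOCATION, "key": API_KEY},
).json()
print(meta.get("status"), meta.get("date"), meta.get("pano_id"))

# The image itself: one flat view extracted from the panorama (billed).
if meta.get("status") == "OK":
    img = requests.get(
        "https://maps.googleapis.com/maps/api/streetview",
        params={
            "location": LOCATION,
            "size": "640x640",  # maximum size on the standard plan
            "heading": 0,       # 0-360; request several headings to cover 360 degrees
            "fov": 90,          # horizontal field of view in degrees
            "key": API_KEY,
        },
    )
    with open("streetview.jpg", "wb") as f:
        f.write(img.content)
```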

3.4.2. Knowledge Production

The literature also argues the case for the epistemic advantages of street-level imagery; in other words, its unique affordances enable new knowledge about the world to be produced. A number of studies claim that street-level imagery can enhance our understanding of urban spaces, features, and processes through the unique pedestrian/vehicle point of view (POV), not easily captured in other forms of imagery or representation [62,101,102]. The possibility of new forms of analysis and knowledge production is identified in built-environment feature recognition and land use classification applications, as images are generally captured close to the subject matter, and from a POV that reveals greater detail and variation, especially compared to the aerial POV. With regard to urban modelling and demographic surveillance applications, researchers noted the general ease with which millions of street-level images could be accessed and analyzed in a deep learning environment. This facilitates large-scale comparative analysis and generalization under accelerated timescales, which could substantially improve our understanding of urban form, pattern, and process [83,84].

3.5. Limitations and Weaknesses of Street-Level Imagery in Urban Research

Much of the literature also considered limitations and weaknesses. Although only explicitly mentioned in a few studies, a key weakness identified across the studies in this review is the limited dimensionality of data extracted from images, whether using a manual or computational approach (Berland et al., 2019; Meunpong et al., 2019). In other words, many studies used street-level imagery as a source of data on the binary presence/absence of features, objects, or phenomena in geographic space—where things are located as opposed to what their attributes are (e.g., quantities, qualities, values). A number of studies did attempt to infer quality and quantity information of scenes and objects; however, the overall limited ability to definitively capture attribute information reveals a substantial limitation of street-level imagery as a source of data on cities. Estimates and inferences may become more reliable as standardized virtual audit instruments and machine learning techniques develop further and are systematically compared against other data sources. The presence of objects such as light poles, vehicles, or pedestrians can obstruct features of interest [103]. This limitation can also be further exacerbated by factors intrinsic to image capture and processing; namely, lighting, seasonality, weather conditions, or privacy blurring, which can reduce image quality and, therefore, their potential representational affordances [82]. In particular, this means that street-level images are sometimes unreliable for temporal analyses due to shifts in image quality between years and missing data during a particular time period, in addition to missing time stamps in older imagery [32,104,105,106].
While some studies based on manual data extraction interacted directly with street-level imagery in the platform’s virtual 360° environment, a considerable amount of research is based on analysis of rectangular flat images extracted from the full panorama. Depending on the area of the image captured and the specific projection used, there may be some geometrical distortions that might affect the accuracy of results in feature recognition applications [46,107,108]. There is currently limited applicability of this ‘fake 3D’ imagery [109] for analyses based on depth and distance [79]; however, the recent integration of LiDAR sensors into street-level imagery capture processes by Google, Microsoft, and Apple may provide future studies with the ability to undertake distance and depth measurements. Relatedly, however, the role of private corporations as gatekeepers of street-level imagery may be a concern for at least two reasons. First, access depends on the company continuing to apply the current low or no-cost structure for research, but this could easily change. Google, for example, is attempting to diversify their revenue generation streams away from their current business model based on personal data accumulation and targeted advertising, including increasing fees to access their APIs [110,111]. Second, corporate platforms decide what cities, and what areas within cities, are captured in street-level imagery, as well as how often updates occur. Platforms make these decisions based largely on an economic return on investment, and so relying on street-level imagery as a source of information on cities could result in a further urban inequality—between those areas deemed worthy of street-level imagery coverage (and, therefore, analysis), and those left invisible and unanalyzed [3]. On the other hand, as seen in some studies described above, large-scale automated classification of urban spaces according to risk categories (e.g., unhealthy, crime-ridden) produces ‘hypervisibility’, which may have negative consequences for identified social groups and neighbourhoods [112]. While individual privacy is addressed through blurring of faces and sensitive information, more recent articulations of ‘group privacy’ [113] suggest that such analyses can have harmful effects and so the ethical consequences require further attention. Finally, largely missing from the reviewed literature is a substantial awareness about threats to civic and national security due to the comprehensive, panoramic evidence of cities and their inhabitants provided through this imagery. For instance, imagery showing critical urban infrastructure, national security sites, and gathering locations for large groups could provide nefarious actors with a source of data for targeting their actions.

4. Conclusions and Future Directions

This paper provides the first comprehensive state-of-the-art review of the use of panoramic images from street-level imagery platforms around the world, across the full spectrum of research on cities. Results of this scoping review provide a detailed knowledge base for researchers interested in using street-level imagery, by identifying the platforms used around the world for accessing images, key areas of application in urban studies, methods of data extraction and analysis, and the key benefits and limitations. Overall, the results point to an accelerated use of street-level imagery as a source of urban data in recent years, as corporations continue to increase the amount of urban area captured in 360°, and as manual and computational methods of image-based data extraction and analysis become further established in the research community. The review identified considerable advantages and benefits including low-cost, rapid, and widespread data collection, enhanced safety through remote data collection, and a unique pedestrian POV that presents not just an alternative data source but a way to capture cities from the perspective in which people experience them. However, several limitations constrain its potential as an urban data source, including the limited ability to capture information beyond the existence/absence of spatial features, unreliability for temporal analyses, limited use for depth and distance analyses, and the role of corporations as image-data gatekeepers.
The continued growth of research applications for street-level imagery over the 14-year review period suggests expanded use of this technology as new opportunities arise, which may also address some of its current limitations. As indicated above, corporate panoramic imagery platforms are integrating additional sensors into image capture processes, including Google Street View’s addition of LiDAR for depth sensing and 3D modelling, higher resolution cameras for automatic object recognition, and Aclima air quality sensors [114]. Public access to data from these sensors is limited, although Google has now made street segment-level air quality data for Copenhagen and London available via its Environmental Insights Explorer program [115]. Further access to these and other sensor data would expand possibilities for 360° urban sensing. Although few studies were identified that used street-level imagery to analyze rural spaces, increasing coverage outside of cities suggests the potential for greater use of this imagery for analysis of rural areas in the future. The use of street-level imagery in virtual reality environments [116] is another area of research likely to see increased attention in the coming years, particularly since it is now easy to create one’s own panoramic imagery and immersive environments using low-cost technologies [3]. Consumer-grade 360° cameras are capable of capturing not just photos but also panoramic videos and spatial audio, which could make possible new forms of urban analytics independent of the restrictions of corporate street-level imagery ecosystems.
This review was conducted systematically following the scoping review approach and stands as a comprehensive synthesis of academic research on the use of street-level imagery for research on cities. However, it is possible that a small number of studies were missed due to non-inclusion in the databases used for the review. Further, some studies may have been unintentionally excluded due to human judgement or error; for instance, deciding whether an article fit the ‘urban research’ inclusion criterion was not straightforward in a minority of instances. Finally, the focus on English-language publications will likely have excluded some research published in other languages, and so future reviews could consider including research published in a variety of languages.

Author Contributions

Conceptualization, Jonathan Cinnamon; methodology, Jonathan Cinnamon, Lindi Jahiu; formal analysis, Jonathan Cinnamon, Lindi Jahiu; data curation, Jonathan Cinnamon, Lindi Jahiu; writing—original draft preparation, Jonathan Cinnamon, Lindi Jahiu; writing—review and editing, Jonathan Cinnamon, Lindi Jahiu. Both authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The full list of publications analyzed in this review is available at this link: https://tinyurl.com/panoramic-images-review, accessed on 22 May 2021.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Campkin, B.; Ross, R. Negotiating the city through Google Street View. In Camera Constructs: Photography, Architecture and the Modern City; Higgott, A., Wray, T., Eds.; Ashgate: Farnham, UK, 2012; pp. 147–158.
  2. Gilge, C. Google Street View and the Image as Experience. GeoHumanities 2016, 2, 469–484.
  3. Cinnamon, J.; Gaffney, A. Do-It-Yourself Street Views and the Urban Imaginary of Google Street View. J. Urban Technol. 2021, 1–22.
  4. Shapiro, A. Street-level: Google Street View’s abstraction by datafication. New Media Soc. 2018, 20, 1201–1219.
  5. Cheng, L.; Chu, S.; Zong, W.; Li, S.; Wu, J.; Li, M. Use of Tencent Street View imagery for visual perception of streets. ISPRS Int. J. Geo-Inf. 2017, 6, 265.
  6. Chen, L.; Lu, Y.; Sheng, Q.; Ye, Y.; Wang, R.; Liu, Y. Estimating pedestrian volume using Street View images: A large-scale validation test. Comput. Environ. Urban Syst. 2020, 81, 101481.
  7. De Nadai, M.; Vieriu, R.L.; Zen, G.; Dragicevic, S.; Naik, N.; Caraviello, M.; Hidalgo, C.A.; Sebe, N.; Lepri, B. Are safer looking neighborhoods more lively? A multimodal investigation into urban life. In Proceedings of the 24th ACM International Conference on Multimedia, Amsterdam, The Netherlands, 15–19 October 2016; pp. 1127–1135.
  8. Guo, Z. Residential street parking and car ownership: A study of households with off-street parking in the New York City region. J. Am. Plan. Assoc. 2013, 79, 32–48.
  9. Langton, S.H.; Steenbeek, W. Residential burglary target selection: An analysis at the property-level using Google Street View. Appl. Geogr. 2017, 86, 292–299.
  10. Hong, S.-Y. Linguistic Landscapes on Street-Level Images. ISPRS Int. J. Geo-Inf. 2020, 9, 57.
  11. Rzotkiewicz, A.; Pearson, A.L.; Dougherty, B.V.; Shortridge, A.; Wilson, N. Systematic review of the use of Google Street View in health research: Major themes, strengths, weaknesses and possibilities for future research. Health Place 2018, 52, 240–246.
  12. Kang, Y.; Zhang, F.; Gao, S.; Lin, H.; Liu, Y. A review of urban physical environment sensing using street view imagery in public health studies. Ann. GIS 2020, 26, 261–275.
  13. Arksey, H.; O’Malley, L. Scoping studies: Towards a methodological framework. Int. J. Soc. Res. Methodol. 2005, 8, 19–32.
  14. Colquhoun, H.L.; Levac, D.; O’Brien, K.K.; Straus, S.; Tricco, A.C.; Perrier, L.; Kastner, M.; Moher, D. Scoping reviews: Time for clarity in definition, methods, and reporting. J. Clin. Epidemiol. 2014, 67, 1291–1294.
  15. Munn, Z.; Peters, M.D.J.; Stern, C.; Tufanaru, C.; McArthur, A.; Aromataris, E. Systematic review or scoping review? Guidance for authors when choosing between a systematic or scoping review approach. BMC Med. Res. Methodol. 2018, 18, 143.
  16. Nabors, L.; Monnin, J.; Jimenez, S. A Scoping Review of Studies on Virtual Reality for Individuals with Intellectual Disabilities. Adv. Neurodev. Disord. 2020, 4, 344–356.
  17. Stubbings, P.; Peskett, J.; Rowe, F.; Arribas-Bel, D. A hierarchical urban forest index using street-level imagery and deep learning. Remote Sens. 2019, 11, 1395.
  18. Long, Y.; Liu, L. How green are the streets? An analysis for central areas of Chinese cities using Tencent Street View. PLoS ONE 2017, 12, e0171110.
  19. Li, X.; Zhang, C.; Li, W. Building block level urban land-use information retrieval based on Google Street View images. GIScience Remote Sens. 2017, 54, 819–835.
  20. Zhang, M.; Liu, Y.; Luo, S.; Gao, S. Research on Baidu Street View Road Crack Information Extraction Based on Deep Learning Method. In Proceedings of the Journal of Physics: Conference Series, Mangalore, India, 1 August 2020; p. 012086.
  21. Quinn, J.W.; Mooney, S.J.; Sheehan, D.M.; Teitler, J.O.; Neckerman, K.M.; Kaufman, T.K.; Lovasi, G.S.; Bader, M.D.; Rundle, A.G. Neighborhood physical disorder in New York City. J. Maps 2016, 12, 53–60.
  22. Marco, M.; Gracia, E.; Martín-Fernández, M.; López-Quílez, A. Validation of a Google Street View-Based Neighborhood Disorder Observational Scale. J. Urban Health 2017, 94, 190–198.
  23. Badland, H.M.; Opit, S.; Witten, K.; Kearns, R.A.; Mavoa, S. Can Virtual Streetscape Audits Reliably Replace Physical Streetscape Audits? J. Urban Health 2010, 87, 1007–1016.
  24. Zhou, H.; He, S.; Cai, Y.; Wang, M.; Su, S. Social inequalities in neighborhood visual walkability: Using street view imagery and deep learning technologies to facilitate healthy city planning. Sustain. Cities Soc. 2019, 50, 101605.
  25. Fu, X.; Jia, T.; Zhang, X.; Li, S.; Zhang, Y. Do street-level scene perceptions affect housing prices in Chinese megacities? An analysis using open access datasets and deep learning. PLoS ONE 2019, 14, e0217505.
  26. Zhang, L.; Pei, T.; Wang, X.; Wu, M.; Song, C.; Guo, S.; Chen, Y. Quantifying the Urban Visual Perception of Chinese Traditional-Style Building with Street View Images. Appl. Sci. 2020, 10, 5963.
  27. Hyam, R. Automated Image Sampling and Classification Can Be Used to Explore Perceived Naturalness of Urban Spaces. PLoS ONE 2017, 12, e0169357.
  28. Plascak, J.J.; Rundle, A.G.; Babel, R.A.; Llanos, A.A.M.; LaBelle, C.M.; Stroup, A.M.; Mooney, S.J. Drop-And-Spin Virtual Neighborhood Auditing: Assessing Built Environment for Linkage to Health Studies. Am. J. Prev. Med. 2020, 58, 152–160.
  29. Brookfield, K.; Tilley, S. Using virtual street audits to understand the walkability of older adults’ route choices by gender and age. Int. J. Environ. Res. Public Health 2016, 13, 1061.
  30. Mooney, S.J.; DiMaggio, C.J.; Lovasi, G.S.; Neckerman, K.M.; Bader, M.D.M.; Teitler, J.O.; Sheehan, D.M.; Jack, D.W.; Rundle, A.G. Use of Google Street View to Assess Environmental Contributions to Pedestrian Injury. Am. J. Public Health 2016, 106, 462–469.
  31. Gullón, P.; Badland, H.M.; Alfayate, S.; Bilal, U.; Escobar, F.; Cebrecos, A.; Diez, J.; Franco, M. Assessing Walking and Cycling Environments in the Streets of Madrid: Comparing On-Field and Virtual Audits. J. Urban Health 2015, 92, 923–939.
  32. Kelly, C.M.; Wilson, J.S.; Baker, E.A.; Miller, D.K.; Schootman, M. Using Google Street View to audit the built environment: Inter-rater reliability results. Ann. Behav. Med. 2013, 45, S108–S112.
  33. Hanibuchi, T.; Nakaya, T.; Inoue, S. Virtual audits of streetscapes by crowdworkers. Health Place 2019, 59, 1–8.
  34. Law, S.; Seresinhe, C.I.; Shen, Y.; Gutierrez-Roig, M. Street-Frontage-Net: Urban image classification using deep convolutional neural networks. Int. J. Geogr. Inf. Sci. 2018, in press.
  35. Novack, T.; Vorbeck, L.; Lorei, H.; Zipf, A. Towards Detecting Building Facades with Graffiti Artwork Based on Street View Images. ISPRS Int. J. Geo-Inf. 2020, 9, 98.
  36. Li, X.; Zhang, C.; Li, W. Does the visibility of greenery increase perceived safety in urban areas? Evidence from the Place Pulse 1.0 dataset. ISPRS Int. J. Geo-Inf. 2015, 4, 1166–1183.
  37. Naik, N.; Raskar, R.; Hidalgo, C.A. Cities are physical too: Using computer vision to measure the quality and impact of urban appearance. Am. Econ. Rev. 2016, 106, 128–132.
  38. Cai, B.; Li, X.; Ratti, C. Quantifying Urban Canopy Cover with Deep Convolutional Neural Networks. In Proceedings of the Climate Change AI Workshop at NeurIPS 2019, Vancouver, BC, Canada, 14 December 2019.
  39. Hu, F.; Liu, W.; Lu, J.; Song, C.; Meng, Y.; Wang, J.; Xing, H. Urban Function as a New Perspective for Adaptive Street Quality Assessment. Sustainability 2020, 12, 1296.
  40. Hara, K.; Le, V.; Froehlich, J. Combining crowdsourcing and Google Street View to identify street-level accessibility problems. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Paris, France, 27 April–2 May 2013; pp. 631–640.
  41. Plascak, J.J.; Schootman, M.; Rundle, A.G.; Xing, C.; Llanos, A.A.M.; Stroup, A.M.; Mooney, S.J. Spatial predictive properties of built environment characteristics assessed by drop-and-spin virtual neighborhood auditing. Int. J. Health Geogr. 2020, 19, 21.
  42. Uluengin, M.B. Istanbul and third-wave coffee shops? A match made in heaven? In Proceedings of the Archi-Cultural Interactions through the Silk Road 4th International Conference, Mukogawa Women’s University, Nishinomiya, Japan, 16–18 July 2016; pp. 53–56.
  43. Osipov, V.; Larionov, V.; Sushchev, S.; Frolova, N.; Ugarov, A.; Kozharinov, S.; Barskaya, T. Seismic risk assessment for the Greater Sochi area. Water Resour. 2016, 43, 982–997.
  44. Lu, Y.; Lu, J.; Zhang, S.; Hall, P. Traffic signal detection and classification in street views using an attention model. Comput. Vis. Media 2018, 4, 253–266.
  45. Balali, V.; Ashouri Rad, A.; Golparvar-Fard, M. Detection, classification, and mapping of U.S. traffic signs using Google Street View images for roadway inventory management. Vis. Eng. 2015, 3, 1–18.
  46. Jiang, Z.; Chen, L.; Zhou, B.; Huang, J.; Xie, T.; Fan, X.; Wang, C. iTV: Inferring Traffic Violation-Prone Locations With Vehicle Trajectories and Road Environment Data. IEEE Syst. J. 2020.
  47. Workman, S.; Zhai, M.; Crandall, D.J.; Jacobs, N. A unified model for near and remote sensing. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2688–2697.
  48. Cao, R.; Zhu, J.; Tu, W.; Li, Q.; Cao, J.; Liu, B.; Zhang, Q.; Qiu, G. Integrating Aerial and Street View Images for Urban Land Use Classification. Remote Sens. 2018, 10, 1553.
  49. Ilic, L.; Sawada, M.; Zarzelli, A. Deep mapping gentrification in a large Canadian city using deep learning and Google Street View. PLoS ONE 2019, 14, e0212814.
  50. Srivastava, S.; Vargas Muñoz, J.E.; Lobry, S.; Tuia, D. Fine-grained landuse characterization using ground-based pictures: A deep learning solution based on globally available data. Int. J. Geogr. Inf. Sci. 2018, in press.
  51. Marmot, M.; Friel, S.; Bell, R.; Houweling, T.A.J.; Taylor, S. Closing the gap in a generation: Health equity through action on the social determinants of health. Lancet 2008, 372, 1661–1669.
  52. Clews, C.; Brajkovich-Payne, R.; Dwight, E.; Fauzul, A.A.; Burton, M.; Carleton, O.; Cook, J.; Deroles, C.; Faulkner, R.; Furniss, M. Alcohol in urban streetscapes: A comparison of the use of Google Street View and on-street observation. BMC Public Health 2016, 16, 1–8.
  53. Ewing, R.; Hajrasouliha, A.; Neckerman, K.M.; Purciel-Hill, M.; Greene, W. Streetscape Features Related to Pedestrian Activity. J. Plan. Educ. Res. 2016, 36, 5–15.
  54. Bethlehem, J.R.; Mackenbach, J.D.; Ben-Rebah, M.; Compernolle, S.; Glonti, K.; Bárdos, H.; Rutter, H.R.; Charreire, H.; Oppert, J.-M.; Brug, J. The SPOTLIGHT virtual audit tool: A valid and reliable tool to assess obesogenic characteristics of the built environment. Int. J. Health Geogr. 2014, 13, 1–8.
  55. Feuillet, T.; Charreire, H.; Roda, C.; Ben Rebah, M.; Mackenbach, J.D.; Compernolle, S.; Glonti, K.; Bárdos, H.; Rutter, H.; De Bourdeaudhuij, I.; et al. Neighbourhood typology based on virtual audit of environmental obesogenic characteristics: Virtual audit and neighbourhood typology. Obes. Rev. 2016, 17, 19–30.
  56. Wang, R.; Helbich, M.; Yao, Y.; Zhang, J.; Liu, P.; Yuan, Y.; Liu, Y. Urban greenery and mental wellbeing in adults: Cross-sectional mediation analyses on multiple pathways across different greenery measures. Environ. Res. 2019, 176, 108535.
  57. Griew, P.; Hillsdon, M.; Foster, C.; Coombes, E.; Jones, A.; Wilkinson, P. Developing and testing a street audit tool using Google Street View to measure environmental supportiveness for physical activity. Int. J. Behav. Nutr. Phys. Act. 2013, 10, 1–7.
  58. Kim, E.J.; Won, J.; Kim, J. Is Seoul walkable? Assessing a walkability score and examining its relationship with pedestrian satisfaction in Seoul, Korea. Sustainability 2019, 11, 6915.
  59. Nagata, S.; Nakaya, T.; Hanibuchi, T.; Amagasa, S.; Kikuchi, H.; Inoue, S. Objective scoring of streetscape walkability related to leisure walking: Statistical modeling approach with semantic segmentation of Google Street View images. Health Place 2020, 66, 102428.
  60. Nesoff, E.D.; Milam, A.J.; Pollack, K.M.; Curriero, F.C.; Bowie, J.V.; Gielen, A.C.; Furr-Holden, D.M. Novel methods for environmental assessment of pedestrian injury: Creation and validation of the Inventory for Pedestrian Safety Infrastructure. J. Urban Health 2018, 95, 208–221.
  60. Nesoff, E.D.; Milam, A.J.; Pollack, K.M.; Curriero, F.C.; Bowie, J.V.; Gielen, A.C.; Furr-Holden, D.M. Novel methods for environmental assessment of pedestrian injury: Creation and validation of the Inventory for Pedestrian Safety Infrastructure. J. Urban Health 2018, 95, 208–221. [Google Scholar] [CrossRef]
  61. Vanwolleghem, G.; Van Dyck, D.; Ducheyne, F.; De Bourdeaudhuij, I.; Cardon, G. Assessing the environmental characteristics of cycling routes to school: A study on the reliability and validity of a Google Street View-based audit. Int. J. Health Geogr. 2014, 13, 1–9. [Google Scholar] [CrossRef] [Green Version]
  62. Lu, Y.; Sarkar, C.; Xiao, Y. The effect of street-level greenery on walking behavior: Evidence from Hong Kong. Soc. Sci. Med. 2018, 208, 41–49. [Google Scholar] [CrossRef]
  63. Xiao, Y.; Zhang, Y.; Sun, Y.; Tao, P.; Kuang, X. Does Green Space Really Matter for Residents’ Obesity? A New Perspective From Baidu Street View. Front. Public Health 2020, 8, 332. [Google Scholar] [CrossRef]
  64. Jiang, B. Establishing Dose-Response Curves for the Impact of Urban Forests on Recovery from Acute Stress and Landscape Preference; University of Illinois at Urbana-Champaign: Urbana-Champaign, IL, USA, 2013. [Google Scholar]
  65. Wu, J.; Wang, B.; Ta, N.; Zhou, K.; Chai, Y. Does street greenery always promote active travel? Evidence from Beijing. Urban For. Urban Green. 2020, 56. [Google Scholar] [CrossRef]
  66. Villeneuve, P.J.; Ysseldyk, R.L.; Root, A.; Ambrose, S.; DiMuzio, J.; Kumar, N.; Shehata, M.; Xi, M.; Seed, E.; Li, X.; et al. Comparing the Normalized Difference Vegetation Index with the Google Street View Measure of Vegetation to Assess Associations between Greenness, Walkability, Recreational Physical Activity, and Health in Ottawa, Canada. Int. J. Environ. Res. Public Health 2018, 15, 1719. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  67. Egli, V.; Zinn, C.; Mackay, L.; Donnellan, N.; Villanueva, K.; Mavoa, S.; Exeter, D.J.; Vandevijvere, S.; Smith, M. Viewing obesogenic advertising in children’s neighbourhoods using Google Street View. Geogr. Res. 2019, 57, 84–97. [Google Scholar] [CrossRef]
  68. Nguyen, Q.C.; Huang, Y.; Kumar, A.; Duan, H.; Keralis, J.M.; Dwivedi, P.; Meng, H.-W.; Brunisholz, K.D.; Jay, J.; Javanmardi, M. Using 164 Million Google Street View Images to Derive Built Environment Predictors of COVID-19 Cases. Int. J. Environ. Res. Public Health 2020, 17, 6359. [Google Scholar] [CrossRef] [PubMed]
  69. Helbich, M.; Yao, Y.; Liu, Y.; Zhang, J.; Liu, P.; Wang, R. Using deep learning to examine street view green and blue spaces and their associations with geriatric depression in Beijing, China. Environ. Int. 2019, 126, 107–117. [Google Scholar] [CrossRef]
  70. Dong, R.; Zhang, Y.; Zhao, J. How green are the streets within the sixth ring road of Beijing? An analysis based on tencent street view pictures and the green view index. Int. J. Environ. Res. Public Health 2018, 15, 1367. [Google Scholar] [CrossRef] [Green Version]
  71. Berland, A.; Roman, L.A.; Vogt, J. Can Field Crews Telecommute? Varied Data Quality from Citizen Science Tree Inventories Conducted Using Street-Level Imagery. Forests 2019, 10, 349. [Google Scholar] [CrossRef] [Green Version]
  72. Zang, P.; Liu, X.; Zhao, Y.; Guo, H.; Lu, Y.; Xue, C.Q.L. Eye-Level Street Greenery and Walking Behaviors of Older Adults. Int. J. Environ. Res. Public Health 2020, 17, 6130. [Google Scholar] [CrossRef]
  73. Wu, D.; Gong, J.; Liang, J.; Sun, J.; Zhang, G. Analyzing the Influence of Urban Street Greening and Street Buildings on Summertime Air Pollution Based on Street View Image Data. ISPRS Int. J. Geo-Inf. 2020, 9, 500. [Google Scholar] [CrossRef]
  74. Sanchez, M. Using Google Street View to Examine Green Gentrification: A Case Study in Chile; Iowa State University: Ames, IA, USA, 2019. [Google Scholar]
  75. Ye, Y.; Xie, H.; Fang, J.; Jiang, H.; Wang, D. Daily Accessed Street Greenery and Housing Price: Measuring Economic Performance of Human-Scale Streetscapes via New Urban Data. Sustainability 2019, 11, 1741. [Google Scholar] [CrossRef] [Green Version]
  76. Chen, L.; Yao, X.; Liu, Y.; Zhu, Y.; Chen, W.; Zhao, X.; Chi, T. Measuring Impacts of Urban Environmental Elements on Housing Prices Based on Multisource Data—A Case Study of Shanghai, China. ISPRS Int. J. Geo-Inf. 2020, 9, 106. [Google Scholar] [CrossRef] [Green Version]
  77. Zhang, Y.; Dong, R. Impacts of Street-Visible Greenery on Housing Prices: Evidence from a Hedonic Price Model and a Massive Street View Image Dataset in Beijing. ISPRS Int. J. Geo-Inf. 2018, 7, 104. [Google Scholar] [CrossRef] [Green Version]
  78. Meng, Y.; Xing, H.; Yuan, Y.; Wong, M.S.; Fan, K. Sensing urban poverty: From the perspective of human perception-based greenery and open-space landscapes. Comput. Environ. Urban Syst. 2020, 84, 101544. [Google Scholar] [CrossRef]
  79. Takizawa, A.; Kinugawa, H. Deep learning model to reconstruct 3D cityscapes by generating depth maps from omnidirectional images and its application to visual preference prediction. Des. Sci. 2020, 6. [Google Scholar] [CrossRef]
  80. Cetiner, B. Image-Based Modeling of Bridges and Its Applications to Evaluating Resiliency of Transportation Networks; UCLA: Los Angeles, CA, USA, 2020. [Google Scholar]
  81. Middel, A.; Lukasczyk, J.; Zakrzewski, S.; Arnold, M.; Maciejewski, R. Urban form and composition of street canyons: A human-centric big data and deep learning approach. Landsc. Urban Plan. 2019, 183, 122–132. [Google Scholar] [CrossRef]
  82. Suel, E.; Polak, J.W.; Bennett, J.E.; Ezzati, M. Measuring social, environmental and health inequalities using deep learning and street imagery. Sci. Rep. 2019, 9, 6229. [Google Scholar] [CrossRef] [PubMed]
  83. Gebru, T.; Krause, J.; Wang, Y.; Chen, D.; Deng, J.; Aiden, E.L.; Fei-Fei, L. Using deep learning and Google Street View to estimate the demographic makeup of neighborhoods across the United States. Proc. Natl. Acad. Sci. USA 2017, 114, 13108–13113. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  84. Diou, C.; Lelekas, P.; Delopoulos, A. Image-based surrogates of socio-economic status in urban neighborhoods using deep multiple instance learning. J. Imaging 2018, 4, 125. [Google Scholar] [CrossRef] [Green Version]
  85. Acharya, A.; Fang, H.; Raghvendra, S. Neighborhood Watch: Using CNNs to Predict Income Brackets from Google Street View Images; Stanford University: Stanford, CA, USA, 2017. [Google Scholar]
  86. Van Ham, M.; Manley, D.; Bailey, N.; Simpson, L.; Maclennan, D. (Eds.) Neighbourhood Effects Research: New Perspectives; Springer: Dordrecht, The Netherlands, 2012; pp. 1–21. [Google Scholar]
  87. Slater, T. Your life chances affect where you live: A critique of the ‘cottage industry’of neighbourhood effects research. Int. J. Urban Reg. Res. 2013, 37, 367–387. [Google Scholar] [CrossRef]
  88. Kronkvist, K. Systematic Social Observation of Physical Disorder in Inner-City Urban Neighborhoods through Google Street View: The Correlation between Virtually Observed Physical Disorder, Self-Reported Disorder and Victimization of Property Crimes; Malmö University: Malmö, Sweden, 2013. [Google Scholar]
  89. Fu, K.; Chen, Z.; Lu, C.-T. StreetNet: Preference learning with convolutional neural network on urban crime perception. In Proceedings of the 26th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems, Seattle, WA, USA, 6–9 November 2018; pp. 269–278. [Google Scholar]
  90. Dubey, A.; Naik, N.; Parikh, D.; Raskar, R.; Hidalgo, C.A. Deep learning the city: Quantifying urban perception at a global scale. In Proceedings of the European Conference on Computer Vision, Amsterdam, The Netherlands, 11–14 October 2016; pp. 196–212. [Google Scholar]
  91. Choiri, H.H. Quantifying and Predicting Urban Attractiveness with Street-View Data and Convolutional Neural Networks; Delft University of Technology: Delft, The Netherlands, 2017. [Google Scholar]
  92. Tang, J.; Long, Y. Measuring visual quality of street space and its temporal variation: Methodology and its application in the Hutong area in Beijing. Landsc. Urban Plan. 2019, 191, 103436. [Google Scholar] [CrossRef]
  93. Najafizadeh, L.; Froehlich, J.E. A feasibility study of using Google street view and computer vision to track the evolution of urban accessibility. In Proceedings of the 20th International ACM SIGACCESS Conference on Computers and Accessibility, Galway, Ireland, 22–24 October 2018; pp. 340–342. [Google Scholar]
  94. Rundle, A.G.; Bader, M.D.M.; Richards, C.A.; Neckerman, K.M.; Teitler, J.O. Using Google Street View to audit neighborhood environments. Am. J. Prev. Med. 2011, 40, 94–100. [Google Scholar] [CrossRef] [Green Version]
  95. Odgers, C.L.; Caspi, A.; Bates, C.J.; Sampson, R.J.; Moffitt, T.E. Systematic social observation of children’s neighborhoods using Google Street View: A reliable and cost-effective method. J. Child Psychol. Psychiatry 2012, 53, 1009–1017. [Google Scholar] [CrossRef] [Green Version]
  96. Zang, P.; Lu, Y.; Ma, J.; Xie, B.; Wang, R.; Liu, Y. Disentangling residential self-selection from impacts of built environment characteristics on travel behaviors for older adults. Soc. Sci. Med. 2019, 238, 112515. [Google Scholar] [CrossRef]
  97. Chang, S.; Wang, Z.; Mao, D.; Guan, K.; Jia, M.; Chen, C. Mapping the Essential Urban Land Use in Changchun by Applying Random Forest and Multi-Source Geospatial Data. Remote Sens. 2020, 12, 2488. [Google Scholar] [CrossRef]
  98. Larkin, A.; Hystad, P. Evaluating street view exposure measures of visible green space for health research. J. Expo. Sci. Environ. Epidemiol. 2019, 29, 447–456. [Google Scholar] [CrossRef]
  99. Liu, Z.; Yang, A.; Gao, M.; Jiang, H.; Kang, Y.; Zhang, F.; Fei, T. Towards feasibility of photovoltaic road for urban traffic-solar energy estimation using street view image. J. Clean. Prod. 2019, 228, 303–318. [Google Scholar] [CrossRef]
  100. Steinmetz-Wood, M.; Velauthapillai, K.; O’Brien, G.; Ross, N.A. Assessing the micro-scale environment using Google Street View: The Virtual Systematic Tool for Evaluating Pedestrian Streetscapes (Virtual-STEPS). BMC Public Health 2019, 19, 1246. [Google Scholar] [CrossRef] [Green Version]
  101. Li, X.; Zhang, C.; Li, W.; Ricard, R.; Meng, Q.; Zhang, W. Assessing street-level urban greenery using Google Street View and a modified green view index. Urban For. Urban Green. 2015, 14, 675–685. [Google Scholar] [CrossRef]
  102. Chen, X.; Meng, Q.; Hu, D.; Zhang, L.; Yang, J. Evaluating Greenery around Streets Using Baidu Panoramic Street View Images and the Panoramic Green View Index. Forests 2019, 10, 1109. [Google Scholar] [CrossRef] [Green Version]
  103. Pliakas, T.; Hawkesworth, S.; Silverwood, R.J.; Nanchahal, K.; Grundy, C.; Armstrong, B.; Casas, J.P.; Morris, R.W.; Wilkinson, P.; Lock, K. Optimising measurement of health-related characteristics of the built environment: Comparing data collected by foot-based street audits, virtual street audits and routine secondary data sources. Health Place 2017, 43, 75–84. [Google Scholar] [CrossRef] [Green Version]
  104. Grubesic, T.H.; Wallace, D.; Chamberlain, A.W.; Nelson, J.R. Using unmanned aerial systems (UAS) for remotely sensing physical disorder in neighborhoods. Landsc. Urban Plan. 2018, 169, 148–159. [Google Scholar] [CrossRef]
  105. Huang, D.; Brien, A.; Omari, L.; Culpin, A.; Smith, M.; Egli, V. Bus Stops Near Schools Advertising Junk Food and Sugary Drinks. Nutrients 2020, 12, 1192. [Google Scholar] [CrossRef]
  106. Cohen, N.; Chrobok, M.; Caruso, O. Google-truthing to assess hot spots of food retail change: A repeat cross-sectional Street View of food environments in the Bronx, New York. Health Place 2020, 62, 102291. [Google Scholar] [CrossRef]
  107. Li, X.; Ratti, C.; Seiferling, I. Quantifying the shade provision of street trees in urban landscape: A case study in Boston, USA, using Google Street View. Landsc. Urban Plan. 2018, 169, 81–91. [Google Scholar] [CrossRef]
  108. Zhang, W.; Witharana, C.; Li, W.; Zhang, C.; Li, X.; Parent, J. Using deep learning to identify utility poles with crossarms and estimate their locations from google street view images. Sensors 2018, 18, 2484. [Google Scholar] [CrossRef] [Green Version]
  109. Li, X.; Ratti, C.; Seiferling, I. Mapping urban landscapes along streets using google street view. In Proceedings of the International Cartographic Conference, Washington, DC, USA, 2–7 July 2017; pp. 341–356. [Google Scholar]
  110. Forbes. Can Google’s Non-Advertising Revenue Streams Mitigate Impact of Slowing Advertising Growth? Available online: https://www.forbes.com/sites/greatspeculations/2019/11/04/can-googles-non-advertising-revenue-streams-mitigate-impact-of-slowing-advertising-growth/?sh=238ed5292652 (accessed on 28 May 2021).
  111. Singh, I. Insane, Shocking, Outrageous: Developers React to Changes in Google Maps API. Available online: https://geoawesomeness.com/developers-up-in-arms-over-google-maps-api-insane-price-hike/ (accessed on 12 November 2020).
  112. Cinnamon, J. Visual Data Justice? Datafication of Urban Informality in South Africa using 360° Imaging Technologies; Global Development Institute, University of Manchester: Manchester, UK, 2019. [Google Scholar]
  113. Taylor, L.; Floridi, L.; van der Sloot, B. (Eds.) Group Privacy: New Challenges of Data Technologies; Springer: Dordrecht, The Netherlands, 2017. [Google Scholar]
  114. Trek View. Google’s Street View Cameras—More Than Meets the Eye. Available online: https://www.trekview.org/blog/2019/google-street-view-cameras-more-than-meets-the-eye/ (accessed on 9 January 2021).
  115. Google. Environmental Insights Explorer—Labs: Air Quality. Available online: https://insights.sustainability.google/labs/airquality (accessed on 22 May 2021).
  116. Carbonell-Carrera, C.; Saorín, J.L. Geospatial Google Street View with Virtual Reality: A Motivational Approach for Spatial Training Education. ISPRS Int. J. Geo-Inf. 2017, 6, 261. [Google Scholar] [CrossRef] [Green Version]
Figure 1. Scoping review sample by year of publication.
Figure 2. Urban research application areas. Note that some studies were placed in more than one research category.
Table 1. Scoping review search terms.

Group A Search Terms     Group B Search Terms
360                      Urban
Panoramic                Cities
Omnidirectional          Research
Streetview               Audit
Street View              Analysis
Street level             Model
Apple                    Environment
Baidu                    Health
Streetside               Land use
Google                   Planning
Tencent                  Construction
Yandex                   Greenery
Kakao                    Streetscape
Daum
Naver
Table 2. Sample of articles included in the scoping review by area of urban research.

Urban Research Area | Citation | Imagery Source | Data Extraction Methods | Advantages & Benefits | Limitations & Weaknesses
Natural Environment | Stubbings et al. [17] | Google | Computational (PSPNet) | Computer vision algorithms available to process street-level imagery | Obstructed imagery (e.g., vehicles, street light poles, pedestrians)
Natural Environment | Long & Liu [18] | Tencent | Computational (automated algorithm) | Time and cost efficient | Limited imagery data parameters; results may be misleading for some sites
Built Environment and Land Use | Li et al. [19] | Google | Computational (machine learning) | High-resolution representation of urban space at street level | Limited to acquiring data on buildings' physical characteristics only
Built Environment and Land Use | Zhang et al. [20] | Baidu | Computational (deep learning) | Wide coverage, large amount of data, free to access, useful in multiple urban analyses | Imagery taken from a non-orthogonal perspective limits value for analysis of ground/street conditions
Area Quality and Reputation | Quinn et al. [21] | Google | Manual (virtual audit with CANVAS) | Efficiency and reliability | Limited coverage at some sample points
Area Quality and Reputation | Marco et al. [22] | Google | Manual (virtual audit with the Neighborhood Disorder Observational Scale) | Reliable alternative to in-person auditing | Difficult to use for longitudinal studies; imagery cannot capture social characteristics (e.g., public disorder)
Health and Wellbeing | Badland et al. [23] | Google | Manual (virtual audit with SPACES) | Ability to compare between international virtual audits; updated frequently | Temporal differences between virtual audit data and physical audit data
Health and Wellbeing | Zhou et al. [24] | Baidu | Computational (DFCNN-SegNet) | Avoids response bias; time efficient; pedestrian perspective | Cannot capture the dynamic nature of the environment; imagery not available in all areas
Urban Modelling and Demographic Surveillance | Fu et al. [25] | Baidu | Computational (PSPNet) | Availability of a large imagery data set | Difficult to capture nuances of the urban environment
Urban Modelling and Demographic Surveillance | Zhang et al. [26] | Tencent | Computational (CNN) | Effectively represents streetscapes | Poor representation of populace; bias in data capture
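
Several of the computational data extraction methods summarized in Table 2 reduce, at their simplest, to classifying the pixels of a street-level image and reporting a summary index. The following Python sketch is purely illustrative and is not the method of any study reviewed here: it estimates a crude green view index by thresholding pixel hues, a simplification of the pixel-based greenery measurement of Li et al. [101]. The HSV thresholds and the filename street_view.jpg are assumptions for demonstration only; studies such as [17,24,25] instead obtain per-pixel labels from semantic segmentation networks (e.g., PSPNet).

import numpy as np
from PIL import Image

def green_view_index(image_path):
    # Convert to HSV; in Pillow's "HSV" mode every channel runs 0-255.
    hsv = np.asarray(Image.open(image_path).convert("HSV"))
    hue, sat, val = hsv[..., 0], hsv[..., 1], hsv[..., 2]
    # Illustrative guess at a "vegetation" rule: green-ish hues with
    # enough saturation and brightness to exclude grey pavement and sky.
    vegetation = (hue >= 60) & (hue <= 130) & (sat >= 40) & (val >= 40)
    # Green view index: share of image pixels classified as vegetation.
    return float(vegetation.mean())

print(round(green_view_index("street_view.jpg"), 3))

In practice, green view index studies [70,101,102] average such per-image scores over several viewing directions or a full panorama at each sample point rather than relying on a single image.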
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
