Article

Panoramic Mapping with Information Technologies for Supporting Engineering Education: A Preliminary Exploration

Department of Civil Engineering, Feng Chia University, Taichung 40724, Taiwan
* Author to whom correspondence should be addressed.
ISPRS Int. J. Geo-Inf. 2020, 9(11), 689; https://0-doi-org.brum.beds.ac.uk/10.3390/ijgi9110689
Submission received: 14 October 2020 / Revised: 9 November 2020 / Accepted: 18 November 2020 / Published: 19 November 2020
(This article belongs to the Special Issue Multimedia Cartography)

Abstract

The present researchers took multistation-based panoramic images and imported the processed images into a virtual tour platform to create webpages and a virtual reality environment. The integrated multimedia platform aims to assist students in a surveying practice course. A questionnaire survey was conducted to evaluate the platform’s usefulness to students, and its design was modified according to respondents’ feedback. Panoramic photos were taken using a full-frame digital single-lens reflex camera with an ultra-wide-angle zoom lens mounted on a panoramic instrument. The camera took photos at various angles, generating a visual field with horizontal and vertical viewing angles close to 360°. Multiple overlapping images were stitched to form a complete panoramic image for each capturing station. Image stitching entails extracting feature points to verify the correspondence between the same feature point in different images (i.e., tie points). By calculating the root mean square error of a stitched image, we determined the stitching quality and modified the tie point location when necessary. The root mean square errors of nearly all panoramas were lower than 5 pixels, meeting the recommended stitching standard. Additionally, 92% of the respondents (n = 62) considered the platform helpful for their surveying practice course. We also discussed and provided suggestions for the improvement of panoramic image quality, camera parameter settings, and panoramic image processing.

1. Introduction

Panoramic images are being made available on an increasing number of online media platforms, such as Google Maps. Virtual reality (VR) technology is also becoming more common in modern life (e.g., video games and street view maps), providing immersive and interactive experiences for users. Various industries have incorporated this technology into their businesses; for example, companies in the leisure industry, such as the Garinko Ice-Breaker Cruise 360 Experience in Hokkaido, include VR images on their official websites to attract tourists [1]. Similarly, several real estate companies showcase furnished interior spaces to potential buyers by using panoramic images, which helps customers visualize the actual setting of the houses they are interested in [2]. Multimedia approaches that integrate images, video, audio, and animation enable richer presentation and communication methods [3,4,5,6,7], such as visualizations, maps, graphical user interfaces, and interactive recommendation systems.
Surveying practice is a fundamental and essential subject for civil engineering students. However, university students with little civil engineering experience often do not know how to accomplish a surveying task because of (a) unfamiliarity with surveying points, (b) inability to connect knowledge acquired in class with actual practice, (c) unfamiliarity with establishing a record table, and (d) unfamiliarity with instrument operation. Lu [8] studied computer-assisted instruction in engineering surveying practice. According to the questionnaire survey results, 91% of the students either strongly agreed or agreed that virtual equipment helped increase their learning motivation. Additionally, 66% of the respondents correctly answered a question concerning azimuth, a foundational concept in civil engineering. Lu [8] indicated that the introduction of digital instruction is more likely to spark learning interest and motivation than conventional training is.
By integrating panoramic images into a virtual tour platform, this study adopted VR technology to create a webpage that facilitates the instruction of surveying practice. Panoramic images were selected because of their low construction cost and their ability to create realistic, immersive visual effects. The designed assistance platform included (a) surveying tips, (b) various surveying routes, (c) corresponding measurement principles, and (d) instructional videos. Students thus had access to the supplementary materials on this platform before or during a lesson, increasing learning efficiency and helping them acquire independent learning skills. This study explored student acceptance of technology-aided instruction and the practicality of such an instruction method through a questionnaire survey. Subsequently, the original web design was modified per the students' feedback. The authors also discuss and propose suggestions for the improvement of panoramic image quality, camera parameter settings, and panoramic image processing.

2. Related Works

2.1. Panorama

The word “panorama” originates from the Greek pan (“all”) and horama (“view”). In 1857, M. Garrela patented a camera in England that could rotate around its own axis and take a horizontal 360° photo; it was the first panoramic camera employing mainspring control. According to the field of capture, panoramas can be divided into three types, as presented in Table 1. In this study, the shooting targets were all objects above the ground, and the coverage was mainly of the landscape; thus, each shot did not fully cover the scene in the vertical direction. According to these range and angle settings, the photos taken in this study are classified as 360° panoramas.
Image stitching is a crucial step in panorama generation. Zheng et al. [10] described and explored shooting strategies and image stitching methods in detail. Regarding research on the stitching process, Chen and Tseng [11], by identifying the corresponding feature point in different images (i.e., tie point), determined the tie point quality of panoramic images. By using panoramic photography and photogrammetry, Teo and Chang [12] and Laliberte et al. [13] generated three-dimensional (3D) image-based point clouds and orthophotos. Studies have also applied image stitching in numerous areas, including landscape identification, indoor positioning and navigation, 3D city models, virtual tours, rock art digital enhancement, and campus virtual tours [14,15,16,17,18,19].

2.2. Virtual Reality (VR)

VR is a type of computer-based simulation. Dennis and Kansky [20] stated that VR can generate simulated scenes that enable users to experience, learn, and freely observe objects in a 3D space in real time. When a user moves, the computer immediately performs complex calculations and returns precise 3D images to create a sense of presence. VR integrates the latest techniques in computer graphics, artificial intelligence, sensing technology, display technology, and Internet parallel computing. Burdea [21] suggested defining VR according to its functions, proposing the “three Is” (interaction, immersion, and imagination) as characteristics that a VR system must possess. Furthermore, Gupta et al. [22] connected VR with soundscape data.
VR devices can be either computer-based (high resolution) or smartphone-based (portable). For example, the HTC VIVE Pro is computer-based, whereas Google Cardboard is smartphone-based. We selected a portable VR device for the experiments because it is inexpensive and does not require a computer connection. To access the designed online assistance platform for teaching, the participating students only had to click on the webpage link or scan the quick response (QR) code using their smartphones.

2.3. Education with Information Technologies

As a result of technological advancement, e-learning has become prevalent. For example, information technologies such as smartphones, multimedia, augmented reality, and the Internet of Things have been adopted to improve and investigate learning outcomes [23,24,25,26]. Lee [27] maintained that through learners’ active participation, computer-based simulation can effectively assist learners in understanding abstract concepts, which in turn increases learning motivation and improves learning outcomes. VR can also help create a learning environment without time constraints; for example, Brenton et al. [28] incorporated VR into anatomy teaching to mitigate its major impediments, such as time constraints and the limited availability of cadavers, by using 3D modeling and computer-assisted learning. The Archeoguide (Augmented Reality-based Cultural Heritage On-site Guide) proposed by Vlahakis et al. [29] demonstrated that VR is not bound by spatial constraints; this on-site guide was used to provide a customized cultural heritage tour experience for tourists. ART EMPEROR [30] observed that numerous prominent museums worldwide have established databases of their collections by using high-resolution photography or 3D scanning and modeling; these databases enable users to explore art through the Internet.
Chao [31] surveyed and conducted in-depth interviews with VR users after they participated in VR-related scientific experiments; the results indicated that such experiments provide participants with the illusion that they are in a physical environment. Without spatial constraints, the VR-simulated environment offered the participants experiences that they could not have in real life. Therefore, VR helped the participants obtain information and knowledge of various fields in a practical manner. These experiences, compared with those obtained through videos and print books, left a stronger impression on students, prompting them to actively seek answers. Similarly, Chang [32] indicated that students receiving 3D panorama-based instruction significantly outperformed their counterparts who received conventional instruction. Liao [33] examined English learning outcomes and motivation among vocational high school students by using panorama and VR technology; the results revealed that the use of said technologies effectively improved learning outcomes, motivation, and satisfaction.

2.4. Summary

According to the aforementioned literature, panoramic photography and VR technology have advanced rapidly, have a wide range of applications, provide realistic 3D experiences, and enhance teaching effectiveness. However, few studies have applied said technologies to engineering education. The present study created panorama-based VR environments on a virtual tour platform and achieved a cost-effective display of on-site scenes. The research team hopes to help civil engineering students rapidly become familiar with the surveying practice elements in question and complement theories with practical knowledge and skills.

3. Materials and Methods

Figure 1 presents the study procedures. First, capturing stations were set up and images were collected (Section 3.1). Subsequently, image stitching was performed (Section 3.2) by combining multiple photos from a single station into a panoramic image. The combined panoramic images were then transformed into web format by using a virtual tour platform (Section 3.3); additional functions could be added to the web format. Next, a VR navigation environment was constructed, finalizing the development of a teaching assistance platform. The platform was assessed using a questionnaire survey (Section 3.4); the survey results and feedback from users were referenced to improve the platform design.

3.1. Image Capturing

The researchers mounted a full-frame digital single-lens reflex camera (Canon EOS 6D Mark II, Canon, Tokyo, Japan) equipped with an ultra-wide-angle zoom lens (Canon EF 16–35 mm F/4L IS USM, Canon, Tokyo, Japan) on a panoramic instrument (GigaPan EPIC Pro V, GigaPan, Portland, OR, USA; Figure 2) and a tripod. To use the GigaPan, the horizontal and vertical coverage must be set, after which the machine automatically rotates and accurately divides the scene into several overlapping grid images; the overlap facilitates the subsequent image stitching. In addition, the camera took each photo by using bracketing, capturing shots at three brightness levels (normal, darker, and brighter) that served as material for stitching and synthesis. Please refer to https://www.youtube.com/watch?v=JTkFZwhRuxQ for the actual shooting process employing the GigaPan.
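To illustrate how coverage and overlap determine the size of the shot grid, the following sketch estimates the frame count per station. It is our own back-of-the-envelope calculation, not GigaPan firmware logic; the field-of-view values are approximations for a 16 mm focal length on a full-frame sensor, and the vertical coverage is an assumed example.

```python
import math

# Rough sketch (our own estimate, not GigaPan firmware logic) of how the shot
# grid grows with coverage and overlap. FOV values approximate a 16 mm lens
# on a full-frame sensor.
def shots_in_row(coverage_deg: float, fov_deg: float,
                 overlap: float, wraps: bool = False) -> int:
    step = fov_deg * (1.0 - overlap)       # new angle gained per shot
    if wraps:                              # a 360-degree row closes on itself
        return math.ceil(coverage_deg / step)
    return max(1, math.ceil((coverage_deg - fov_deg) / step) + 1)

cols = shots_in_row(360.0, 96.7, 0.30, wraps=True)  # horizontal sweep
rows = shots_in_row(120.0, 73.7, 0.30)              # assumed vertical coverage
print(f"{cols} x {rows} = {cols * rows} frames (x3 exposures with bracketing)")
```

Raising the overlap rate shrinks the angular step per shot, so the frame count, and hence the capturing time, grows quickly; this trade-off is revisited in Section 5.1.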

3.2. Image Stitching

After image capture, Kolor Autopano was adopted in this study for stitching. The original images underwent feature point extraction, homography estimation, warping, and blending to form the panoramic image of each station. The general process is detailed in Figure 3.

3.2.1. Homography

When two images partially overlap, they share several corresponding feature points. These points can be connected using computations; this process is known as homography, and the corresponding feature points are called tie points.
Kolor Autopano extracts, matches, and transforms feature points into tie points by using the scale-invariant feature transform (SIFT) algorithm [10]. The features extracted with this technique are invariant to image rotation and scaling as well as to changes in grayscale values. The algorithm comprises four major steps: (a) scale-space extrema detection, (b) key-point localization, (c) orientation assignment, and (d) key-point descriptor generation. In scale-space extrema detection, the image is convolved with Gaussian filters at different scales and difference-of-Gaussian (DoG) images are computed; the local maxima and minima of the DoG across scales become key-point candidates. A DoG image (D) is given by Equation (1), where L represents the convolution of the original image at pixel (x,y) with a Gaussian blur at scale kσ; k and σ indicate the scale factor and the standard deviation of the Gaussian blur, respectively. The second step localizes the key-points: because extrema detection can produce many unstable candidates, nearby data are fitted to refine each candidate’s location, and candidates with low contrast or a high ratio of principal curvatures are rejected. Orientation assignment then achieves invariance to rotation by computing the local image gradient directions using Equations (2) and (3), where θ and m represent the orientation and gradient magnitude, respectively. Finally, key-point descriptors are built from pixel neighborhoods and histogram-based statistics, yielding invariance to image location, scale, and rotation. Relevant details are provided in the research of Lowe [34].
$D(x, y, \sigma) = L(x, y, k\sigma) - L(x, y, \sigma)$ (1)
$\theta(x, y) = \tan^{-1}\left(L_y / L_x\right)$ (2)
$m(x, y) = \sqrt{L_x^2 + L_y^2}$ (3)
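As a concrete illustration of the SIFT step above, the sketch below uses OpenCV’s implementation of Lowe’s detector; Kolor Autopano’s internals are not public, so this is an equivalent open-source stand-in, and the image file names are placeholders.

```python
import cv2

# Minimal sketch of SIFT-based tie-point candidate extraction with OpenCV
# (a stand-in for Autopano's proprietary pipeline). File names are placeholders.
img1 = cv2.imread("station_shot_a.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("station_shot_b.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)  # DoG extrema -> keypoints + descriptors
kp2, des2 = sift.detectAndCompute(img2, None)

# Match descriptors and keep pairs passing Lowe's ratio test: these are the
# candidate tie points between the two overlapping shots.
matcher = cv2.BFMatcher()
matches = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]
print(len(good), "candidate tie points")
```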
After tie points were determined, homogeneous coordinates were used to express a 3 × 3 matrix (H) that describes the spatial translation, scaling, and rotation mapping feature points of one image onto another. If a feature point of one image is (x,y,1) and the corresponding point of another image is (x’,y’,1), their mapping relationship is H, as described in Equation (4), where H is to be calculated. At least four pairs of tie points are required to calculate H [35]; with exactly four pairs, the system has zero degrees of freedom and a unique solution. In practice, more than four pairs of tie points are used, leaving the degrees of freedom greater than zero, so the match with minimum error must be identified; this process is known as optimization. Kolor Autopano adopts the random sample consensus (RANSAC) algorithm to minimize the errors between H and all tie points [10]. Specifically, RANSAC randomly selects tie points as inliers to calculate the matrix H and evaluates the errors between this matrix and the remaining tie points. These tie points are then divided into inliers and outliers, and the inlier set is renewed. This process is repeated until the H with minimum error is obtained as the optimal solution. The number of iterations (N) is chosen according to Equation (5) so that, with probability p (usually set to 0.99), at least one set of randomly sampled points contains no outlier; here, u indicates the probability that a selected data point is an inlier, v = 1 − u the probability of observing an outlier, and m the minimum number of points required to estimate H.
$\begin{bmatrix} x' \\ y' \\ 1 \end{bmatrix} = \begin{bmatrix} h_{00} & h_{01} & h_{02} \\ h_{10} & h_{11} & h_{12} \\ h_{20} & h_{21} & h_{22} \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix}$ (4)
$1 - p = (1 - u^m)^N$ (5)
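The sketch below continues the earlier matching example: it estimates H with RANSAC via OpenCV and evaluates Equation (5) for an assumed inlier ratio. This illustrates the optimization described above rather than Autopano’s exact implementation; `good`, `kp1`, and `kp2` come from the previous sketch.

```python
import numpy as np
import cv2

# Estimate the homography H from the tie-point candidates with RANSAC
# (illustrative sketch, not Autopano's internals).
src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

# Points within 5 px of H's prediction are counted as inliers.
H, mask = cv2.findHomography(src, dst, cv2.RANSAC, ransacReprojThreshold=5.0)
print("inliers:", int(mask.sum()), "of", len(good))

# Equation (5) rearranged gives the iteration count N for confidence p,
# inlier ratio u, and minimal sample size m (m = 4 for a homography).
p, u, m = 0.99, 0.6, 4          # u = 0.6 is an assumed inlier ratio
N = int(np.ceil(np.log(1 - p) / np.log(1 - u**m)))
print("RANSAC iterations needed:", N)
```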

3.2.2. Image Warping and Blending

Image warping determines the distortion of an image in space by using the tie points identified through homography. One of the two selected images serves as the reference, and the other is transformed into the reference coordinate space; after computation and distortion, the transformed image can be superposed on the reference image, thus achieving image warping. During planar stitching, however, excessive distortion of the nonoverlapping areas is likely. Cylindrical and spherical projections mitigate such distortion and are therefore suitable for panoramic stitching; accordingly, we projected the images onto a cylinder or sphere.
Let the coordinates of an image point be (x,y) and its corresponding point in camera space be (x,y,f), where f denotes the focal length. The spherical coordinate system is written as (r,θ,φ), where r denotes the distance between the sphere center and the target, θ is the angle measured from the zenith (range = [0, π]), and φ represents the angle between the projection of r onto the horizontal plane and the X-axis (range = [0, 2π]). The spherical coordinate system can be converted into a Cartesian coordinate system; accordingly, the conversion between the image and spherical coordinate systems can be described using Equation (6). By converting the spherical coordinates into Cartesian ones and implementing homography, we achieved image warping.
$(r \sin\theta \cos\varphi,\; r \sin\theta \sin\varphi,\; r \cos\theta) \leftrightarrow (x, y, f)$ (6)
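The following sketch shows the standard forward spherical warp behind Equation (6): pixel coordinates centered on the principal point are mapped to angles at focal length f (in pixels) and re-scaled to panorama pixels. Autopano’s exact implementation is not public, so this is a generic textbook formulation; the image dimensions and focal length are assumed values.

```python
import numpy as np

# Standard forward spherical warp: centered pixel coordinates (x, y) at focal
# length f (in pixels) are mapped to angular coordinates on the sphere.
def to_spherical(x: np.ndarray, y: np.ndarray, f: float):
    theta = np.arctan2(x, f)                    # horizontal (longitude) angle
    phi = np.arctan2(y, np.sqrt(x**2 + f**2))   # vertical (latitude) angle
    return f * theta, f * phi                   # re-scaled to pixel units

# Example: corners of an assumed 6000 x 4000 frame; for a 16 mm lens on a
# 36 mm-wide full-frame sensor, f in pixels is roughly 16/36 * 6000 = 2667.
u, v = to_spherical(np.array([-3000.0, 3000.0]),
                    np.array([-2000.0, 2000.0]), 2667.0)
print(u, v)
```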
Blending, the last step of image stitching, involves synthesizing warped images by using color-balancing algorithms to create an image gradient over the overlapping area of two images. In this manner, the chromatic aberration of an image stitched from multiple photos can be minimized. Common methods include feather blending and multiband blending; please refer to [36] for further details.
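A minimal feather-blending sketch is given below for two already-warped, aligned color images whose overlap spans a known column range; the function name and interface are our own illustration. The weights ramp linearly across the overlap so the seam fades out; multiband blending (not shown) applies the same idea separately per frequency band.

```python
import numpy as np

# Feather-blend two aligned H x W x 3 float images over columns [start, end).
def feather_blend(left: np.ndarray, right: np.ndarray,
                  start: int, end: int) -> np.ndarray:
    out = left.astype(np.float64).copy()
    # Linear weights from 1 to 0 across the overlap, broadcast over rows/channels.
    alpha = np.linspace(1.0, 0.0, end - start)[None, :, None]
    out[:, start:end] = alpha * left[:, start:end] + (1 - alpha) * right[:, start:end]
    out[:, end:] = right[:, end:]    # right image alone beyond the overlap
    return out
```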

3.2.3. Accuracy Assessment

Kolor Autopano was then used to calculate the root mean square errors (RMSEs) of the stitched panoramic images, as shown in Equation (7), where N denotes the number of tie points and Diff_i the pixel distance between the ith pair of tie points after warping [37]. A value < 5 pixels indicates favorable stitching quality. Conversely, a value ≥ 5 indicates undesirable quality and possible mismatches; in such cases, the tie points should be reviewed.
$\mathrm{RMSE} = \sqrt{\dfrac{\sum_{i=1}^{N} \mathrm{Diff}_i^2}{N}}$ (7)
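As a small worked sketch of Equation (7), the function below computes the RMSE over the pixel distances between matched tie points; the arrays `proj` and `obs` are hypothetical N × 2 arrays of projected and observed tie-point positions in panorama coordinates.

```python
import numpy as np

# Sketch of Equation (7): RMSE over per-pair tie-point pixel distances.
def stitching_rmse(proj: np.ndarray, obs: np.ndarray) -> float:
    diff = np.linalg.norm(proj - obs, axis=1)   # Diff_i: distance per pair (px)
    return float(np.sqrt(np.mean(diff ** 2)))

# Under the standard used in this study, a result below 5 px passes;
# a result of 5 px or more means the tie points should be reviewed.
```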

3.3. Virtual Tour with VR

Panoramic images of several stations were obtained after image stitching. All the panoramic images in this study were imported into Kolor Panotour and displayed as a virtual tour. In addition to webpage construction, Kolor Panotour enables users to add points of interest and attribute data as well as insert images, videos, and hyperlinks. The software also facilitates the generation of VR navigation environments. Koehl and Brigand [16] provided an introduction to and outlined the application of Kolor Panotour.

3.4. Designed Questionnaire

The questionnaire employed in this study comprised four questions, which were rated on a 5-point Likert scale; a higher score indicates satisfaction with or interest in the designed platform. The questionnaire was designed using Google Forms with a QR code attached. A total of 62 students completed the questionnaire, and some students were also interviewed. The survey questions are as follows:
Q1.
After using the virtual tour webpages, compared with the scenario where only an introduction is provided by the instructor, can you more easily identify surveying targets on campus?
Q2.
Are you satisfied with the overall webpage design?
Q3.
Are you interested in research on surveying practice courses that employ panorama and VR technology?
Q4.
Do you like courses that incorporate information technologies (e.g., e-learning)?

4. Results

4.1. Camera Settings

Appropriate exposure and focus are essential for taking a suitable picture [38]. Exposure is determined by the shutter speed, aperture size, and ISO value, whereas the image is in focus and becomes clear only once the lens–object distance is correctly adjusted.
A typical imaging device can capture only an extremely limited portion of the complete dynamic range. Consequently, using general imaging devices can lead to a severe loss of scene information, particularly in highlights and shadows [39]. By using bracketing, we took normal, darker, and brighter photos and combined these photos, which have different exposure levels, to obtain a greater exposure dynamic range. After several tests, suitable camera parameters for the study case were determined, as shown in Table 2.
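The sketch below illustrates one way to merge the bracketed exposures (normal, darker, brighter) into a single frame with a wider effective dynamic range, using Mertens exposure fusion as implemented in OpenCV; this stands in for the camera/stitcher pipeline actually used, and the file names are placeholders.

```python
import cv2
import numpy as np

# Merge three bracketed shots via Mertens exposure fusion (no HDR radiance
# map needed). File names are placeholders for one grid position's bracket.
exposures = [cv2.imread(p) for p in
             ("shot_normal.jpg", "shot_dark.jpg", "shot_bright.jpg")]
fused = cv2.createMergeMertens().process(exposures)   # float32 in [0, 1]
cv2.imwrite("shot_fused.jpg", np.clip(fused * 255, 0, 255).astype("uint8"))
```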

4.2. Study Site and Routes

The study site was on the main campus (northwestern side) of Feng Chia University in Taichung City, Taiwan (Figure 4). The two routes (symbols 1 and 2), comprising surveying targets surrounding the Civil/Hydraulic Engineering Building and Science Building, were regarded as elevation-based measurement routes (i.e., the blue and green lines in Figure 4D). The site for angle-based measurement (symbol 3) was located on the lawn, which is represented by red lines in Figure 4D. According to the measurement tasks and targets, a total of 15 stations for panoramic photography were set up (Figure 5).

4.3. Developed Platform

4.3.1. Image Stitching

Figure 6 depicts the stitched image of a surveying station as an example; the green marks denote an RMSE < 5 pixels, whereas the red marks denote stitching errors with an RMSE ≥ 5. The figure demonstrates that most of the tie points meet the suggested standard. However, the stitching quality between the trees and the sky was lower because the targets (e.g., branches) moved while the camera was taking pictures, causing image stitching to fail locally. Therefore, manual adjustment of the tie points or postprocessing of the image was required. Figure 7 exhibits the stitching results of panoramas at four stations. These images were later imported into the virtual tour platform to enable panoramic navigation.

4.3.2. Webpage and Virtual Tour

Figure 8 illustrates the designed webpage framework and panoramic stations, which have a spatial relationship that is consistent with that in Figure 5 from the top view. The homepage displays the panorama of Station 14. When users visit Station 15, they can read instructions on angle-based measurement and watch a tutorial on angular measuring device setup. Station 11-3 features instructions on elevation-based measurement and a tutorial on relevant instrument setup. This station also provides on-site views of the two routes available for surveying. Next, the research team connected the elements in Figure 8 to a virtual tour platform to establish a webpage, where various information can be added, including points of interest and attributes. Designers could also insert pictures, videos, and hyperlinks (Figure 9).

4.3.3. Demonstration

The platform homepage is displayed in Figure 10. On the upper left (symbol 1) is a link to Feng Chia University’s website, with information about the supporting project next to it (symbol 2); control bars (symbol 4) are located on the bottom left, enabling the user to zoom in, zoom out, or switch the image to VR mode. At the bottom middle (symbol 5) are relevant instructional documents, and on the bottom right (symbol 6) is a quick map showing the navigation direction. The homepage hyperlink is located in the upper right corner (symbol 3). By navigating around the homepage 360° panorama, the user can locate the entrances for tutorials on elevation-based and angle-based measurement (Figure 11A and Figure 12A).
After clicking on the entrance for elevation-based (also called leveling) measurement, users can choose between two routes (symbols 2 and 3) in Figure 11B. Additionally, a tutorial video (symbol 4) on instrument setup for elevation-based measurement is available at Station 11-3 (Figure 11C). Figure 12B depicts the webpage shown when users click on the entrance for angle-based measurement instructions. This page not only instructs users on how to record surveying measurements (symbol 3), aim at targets, and set the surveying direction (symbol 2), but also teaches them how to set up the measuring instrument through a tutorial video (Figure 12C). By clicking the VR icon on the control bars, the image switches to VR mode; users can then place their smartphone inside a portable VR viewer, connect a joystick to the system, and enjoy the VR tour (Figure 12C).

4.4. Questionnaire-Based Results

We received 62 questionnaire responses from the participating students; the statistical results are listed in Table 3. A higher score indicates stronger approval of the use of the designed platform and information technologies in teaching. Q1 and Q2 aimed to investigate the effectiveness of the designed platform in teaching as well as its content display, and Q3 and Q4 attempted to determine whether the use of information technologies helps strengthen learning interest. Most of the ratings for these four questions were ≥ 4 (92% of the total responses), signifying positive feedback from respondents. All the students used the web-based platform, but only some tested the head-mounted equipment because of the limited number of devices; the major problem with the head-mounted device was the display layout. The developed platform was revised based on this feedback, for example, by adjusting the size of the quick map and adding other targets for measurement.

5. Discussion

5.1. Capturing Modes

To capture a high-quality image, one should observe the surrounding light, estimate the area of the surrounding environment, and adjust camera parameters (e.g., shutter, aperture, ISO value, and bracketing level) accordingly. By increasing the rate of overlap in the panoramic instrument, we expanded the overlapping area of adjacent photos, increased the number of tie points, and reduced the RMSEs, thereby enhancing the success rate of image matching.
When GigaPan capturing yields undesirable matching results, the cause must be identified in the photos. If overexposure or underexposure is confirmed, the aforementioned bracketing and high-dynamic-range mode can be selected to overcome difficulties in feature point extraction caused by exposure problems. If the scene in question has few feature points, we recommend retaking the photos with a higher rate of overlap; a larger overlapping area yields corresponding increases in the numbers of feature and tie points.
For example, Figure 13 exhibits a stitched panorama, and Table 4 lists the capturing times required and the post-stitching RMSEs at different rates of overlap. When all the targets in a scene are far from the viewer’s perspective, the rate of overlap contributes little to the stitching quality. However, when some targets are far from the viewer’s perspective and others are nearer, the rate of overlap can positively influence the stitching quality. Furthermore, the rate of overlap is directly proportional to the capturing time.
In outdoor photography, objects often move, causing ghosting effects or mismatches during image stitching with GigaPan capturing (e.g., people or clouds moving, or grass moving in the wind). To address this problem, we recommend on-site investigation and planning in advance; for example, pictures could be taken when winds are light or when fewer people are in the area. Alternatively, the effects of object movement can be mitigated by adjusting the shooting sequence of the panoramic instrument. For instance, letting the camera shoot a vertical column of photos before moving left or right (column-left/right) can reduce the ghosting effects caused by people, and the shooting process can be paused and resumed after people pass by the target site. Shooting row by row downward (row-down) after the camera completes a 360° horizontal sweep also helps avoid problems related to clouds and sunlight; the photos can later be adjusted using postproduction software.

5.2. Contributions, Comparison and Limitations

There are two general approaches to capturing a panoramic image. One is to use a 360° spherical camera; the other is to adopt a platform with a camera for capturing and stitching the images, such as the GigaPan. The former creates a panoramic image easily and simply, but its resolution is lower than that of the latter, and there is no opportunity to improve the image quality. On the other hand, the platform-based method requires stable conditions for capturing images. This is the trade-off between the two approaches to producing a panoramic image. High resolution [40] was necessary in this study to display the measurement targets in the field; thus, the GigaPan-based approach was chosen.
This study contributes to knowledge by (a) discussing panoramic photography and image stitching quality as well as providing suggestions on camera parameters and GigaPan settings, (b) proposing strategies for the production of panoramic images that are more operable and have a higher resolution than Google Street View, (c) using information technologies (i.e., virtual tour tools and VR) to develop an assistance platform for teaching, (d) applying the designed platform to an engineering course, and (e) assessing teaching effectiveness through a questionnaire survey.
Ekpar [41] proposed a framework for the creation, management, and deployment of interactive virtual tours with panoramic images. Based on this concept, many cases displaying campuses [42,43], cathedrals [44], and cultural heritage sites [45] have been explored in the literature. This study not only visualized reality-based scenes by using panorama- and virtual tour-based technologies but also connected related documents for engineering education, with particular emphasis on the teaching site. E-learning is a trend in education; it can reduce teachers’ workload, allowing them to concentrate on the professional issues of their subjects. Furthermore, this study provided a useful platform to help students who cannot come to the classroom because of special circumstances (e.g., COVID-19).
In terms of limitations, before students could use the platform designed in this study, instructors had to explain relevant theories and provide detailed instructions for equipment operation. If students lack a basic understanding of surveying, their learning outcomes might not meet expectations. Additionally, we did not recruit a control group and thus could not compare the learning results of students who used the designed platform with those of students who did not; we intend to test and verify the effect of the designed platform on learning outcomes in future research. The questionnaires and assessments for improving engineering education could also put more emphasis on user testing and student responses, for example:
  • Sampling design for statistical tests:
    - Testing differences in final grades between student groups taught with and without the IT (information technology)-based method.
    - Grouping samples by the backgrounds of the sampled students.
  • Asking more questions for a comprehensive assessment:
    - “Would you recommend this to your friends and colleagues?” followed by “What points do you recommend/not recommend?”
    - “How long did you take to complete the IT-based program?”
    - “Does the virtual tour seamlessly/comfortably guide you?”
    - “Does the virtual tour sufficiently represent the real world?”
  • Comparing and exploring the problems of IT-based and traditional learning.

6. Conclusions

To create a multimedia platform that assists students in a surveying practice course, we initially took multiple overlapping images and stitched them into panoramas; subsequently, we used information technologies including virtual tour tools and VR. A full-frame digital single-lens reflex camera with an ultra-wide-angle zoom lens was mounted on a GigaPan panoramic instrument to obtain a 360° horizontal field of vision. The effectiveness of said visualization and information technology application was verified through a questionnaire survey.
The research results indicated that the RMSEs of stitched images were mostly < 5 pixels, signifying favorable stitching quality. The designed platform also features elevation-based and angle-based measurement instructions as well as instrument setup tutorials and documents as supplementary materials. A total of 15 panorama stations were set up for students to navigate. Of the 62 survey respondents, more than 92% agreed that the information technology-based platform improved their engagement in the surveying practice course. Moreover, we discussed and explored the improvement of panoramic image quality (RMSEs), camera parameter settings, capturing modes, and panoramic image processing, as shown in Section 4.1 and Section 5.1. In the future, we plan to compare the learning outcomes in students who used the designed platform (experimental group) with those who did not (control group).

Author Contributions

Jhe-Syuan Lai conceived and designed this study; Jhe-Syuan Lai, Yu-Chi Peng, Min-Jhen Chang, and Jun-Yi Huang performed the experiments and analyzed the results; Jhe-Syuan Lai, Yu-Chi Peng, and Min-Jhen Chang wrote the paper. All authors have read and agreed to the published version of the manuscript.

Funding

This study was supported, in part, by the Ministry of Education in Taiwan under Project B of the New Engineering Education Method Experiment and Construction program from June 2019 to February 2021.

Acknowledgments

The authors would like to thank Fuan Tsai and Yu-Ching Liu in the Center for Space and Remote Sensing Research, National Central University, Taiwan, for supporting the software and panoramic instrument operation.

Conflicts of Interest

The authors declare that they have no conflict of interest.

References

  1. AIR-360. Available online: https://www.youtube.com/watch?v=W3OWKEtVtUY (accessed on 30 September 2020).
  2. Super 720. Available online: https://super720.com/?page_id=8603 (accessed on 30 September 2020).
  3. Lorek, D.; Horbinski, T. Interactive web-map of the European freeway junction A1/A4 development with the use of archival cartographic sources. ISPRS Int. J. Geo-Inf. 2020, 9, 438.
  4. Cybulski, P.; Horbinski, T. User experience in using graphical user interfaces of web maps. ISPRS Int. J. Geo-Inf. 2020, 9, 412.
  5. Kato, Y.; Yamamoto, K. A sightseeing spot recommendation system that takes into account the visiting frequency of users. ISPRS Int. J. Geo-Inf. 2020, 9, 411.
  6. Wielebski, L.; Medynska-Gulij, B.; Halik, L.; Dickmann, F. Time, spatial, and descriptive features of pedestrian tracks on set of visualizations. ISPRS Int. J. Geo-Inf. 2020, 9, 348.
  7. Medynska-Gulij, B.; Wielebski, L.; Halik, L.; Smaczynski, M. Complexity level of people gathering presentation on an animated map–objective effectiveness versus expert opinion. ISPRS Int. J. Geo-Inf. 2020, 9, 117.
  8. Lu, C.-C. A Study on Computer-Aided Instruction for Engineering Surveying Practice. Master’s Thesis, National Taiwan University, Taipei, Taiwan, 2009.
  9. Panos. Available online: http://www.panosensing.com.tw/2018/03/28/faq6/ (accessed on 30 September 2020).
  10. Zheng, J.; Zhang, Z.; Tao, Q.; Shen, K.; Wang, Y. An accurate multi-row panorama generation using multi-point joint stitching. IEEE Access 2018, 6, 27827–27839.
  11. Chen, P.-Y.; Tseng, Y.-T. Automatic conjugate point searching of spherical panorama images. J. Photogram. Remote Sens. 2019, 24, 147–159.
  12. Teo, T.-A.; Chang, C.-Y. The generation of 3D point clouds from spherical and cylindrical panorama images. J. Photogram. Remote Sens. 2018, 23, 273–284.
  13. Laliberte, A.S.; Winters, C.; Rango, A. A procedure for orthorectification of sub-decimeter resolution imagery obtained with an unmanned aerial vehicle (UAV). In Proceedings of the ASPRS Annual Conference, Portland, OR, USA, 28 April–2 May 2008.
  14. Wrozynski, R.; Pyszny, K.; Sojka, M. Quantitative landscape assessment using LiDAR and rendered 360° panoramic images. Remote Sens. 2020, 12, 386.
  15. Huang, T.C.; Tseng, Y.-H. Indoor positioning and navigation based on control spherical panoramic images. J. Photogram. Remote Sens. 2017, 22, 105–115.
  16. Koehl, M.; Brigand, N. Combination of virtual tours, 3D model and digital data in a 3D archaeological knowledge and information system. In Proceedings of the XXII ISPRS Congress, Melbourne, Australia, 25 August–1 September 2012.
  17. Lee, I.-C.; Tsai, F. Applications of panoramic images: From 720° panorama to interior 3D models of augmented reality. In Proceedings of the Indoor-Outdoor Seamless Modelling, Mapping and Navigation, Tokyo, Japan, 21–22 May 2015.
  18. Quesada, E.; Harman, J. A step further in rock art digital enhancement. DStretch on Gigapixel imaging. Digit. Appl. Archaeol. Cult. Herit. 2019, 13, e00098.
  19. Bakre, N.; Deshmukh, A.; Sapaliga, P.; Doulatramani, Y. Campus virtual tour. IJARCET 2017, 6, 444–448.
  20. Dennis, J.R.; Kansky, R.J. Electronic slices of reality: The instructional role of computerized simulations. In Instructional Computing: An Action Guide for Educators; Foresman: Glenview, IL, USA, 1984; 256p.
  21. Burdea, G. Virtual reality systems and applications. In Electro/93 International Conference Record; Edison: Middlesex, NJ, USA, 1993; 504p.
  22. Gupta, R.; Lam, B.; Hong, J.-Y.; Ong, Z.-T.; Gan, W.-S.; Chong, S.H.; Feng, J. 3D audio VR/AR capture and reproduction setup for auralization of soundscapes. In Proceedings of the 24th International Congress on Sound and Vibration, London, UK, 23–27 July 2017.
  23. Shambour, Q.; Fraihat, S.; Hourani, M. The implementation of mobile technologies in higher education: A mobile application for university course advising. J. Internet Technol. 2018, 19, 1327–1337.
  24. Wu, Z.-H.; Zhu, Z.-T.; Chang, M. Issues of designing educational multimedia metadata set based on educational features and needs. J. Internet Technol. 2011, 12, 685–698.
  25. Hsieh, M.-C.; Chen, S.-H. Intelligence augmented reality tutoring system for mathematics teaching and learning. J. Internet Technol. 2019, 20, 1673–1681.
  26. Lin, C.-L.; Chiang, J.K. Using 6E model in STEAM teaching activities to improve university students’ learning satisfaction: A case of development seniors IoT smart cane creative design. J. Internet Technol. 2019, 20, 2109–2116.
  27. Lee, J. Effectiveness of computer-based instruction simulation: A meta-analysis. Int. J. Instruct. Media 1999, 26, 71–85.
  28. Brenton, H.; Hernandez, J.; Bello, F.; Strutton, P.; Purkayastha, S.; Firth, T.; Darzi, A. Using multimedia and Web3D to enhance anatomy teaching. Comput. Educ. 2007, 49, 32–53.
  29. Vlahakis, V.; Ioannidis, N.; Karigiannis, J.; Tsotros, M.; Gounaris, M.; Stricker, D.; Gleue, T.; Daehne, P.; Almeida, L. Archeoguide: An augmented reality guide for archaeological sites. IEEE Comput. Graph. Appl. 2002, 22, 52–60.
  30. ART EMPEROR. Available online: https://www.eettaiwan.com/news/article/20161226NT22-MIC-Top-10-technology-trends-for-2017 (accessed on 30 September 2020).
  31. Chao, T.-F. The Study of Virtual Reality Application in Education Learning—An Example on VR Scientific Experiment. Master’s Thesis, National Kaohsiung First University of Science and Technology, Kaohsiung, Taiwan, 2009.
  32. Chang, S.-Y. Research on the Effects of English Teaching with 3D Panoramic Computer Skill Application. Master’s Thesis, National University of Tainan, Tainan, Taiwan, 2018.
  33. Liao, Y.-C. Research on Using Virtual Reality to Enhance Vocational Students’ Learning Performance and Motivation. Master’s Thesis, National Taiwan University of Science and Technology, Taipei, Taiwan, 2019.
  34. Lowe, D.G. Object recognition from local scale-invariant features. In Proceedings of the 7th IEEE International Conference on Computer Vision, Kerkyra, Greece, 20–27 September 1999.
  35. Wang, G.; Wu, Q.M.J.; Zhang, W. Kruppa equation based camera calibration from homography induced by remote plane. Pattern Recogn. Lett. 2008, 29, 2137–2144.
  36. Sakharkar, V.S.; Gupta, S.R. Image stitching techniques—An overview. Int. J. Comput. Sci. Appl. 2013, 6, 324–330.
  37. Zeng, B.; Huang, Q.; Saddik, A.E.; Li, H.; Jiang, S.; Fan, X. Advances in multimedia information processing. In Proceedings of the 19th Pacific-Rim Conference on Multimedia, Hefei, China, 21–22 September 2018.
  38. Busch, D.D. Mastering Digital SLR Photography; Course Technology: Boston, MA, USA, 2005; 254p.
  39. Wang, Y.-C.; Shyu, M.J. A study of brightness correction method for different exposure in rebuilding scene luminance. J. CAGST 2015, 271–284.
  40. Li, D.; Xiao, B.J.; Xia, J.Y. High-resolution full frame photography of EAST to realize immersive panorama display. Fusion Eng. Des. 2020, 155, 111545.
  41. Ekpar, F.E. A framework for interactive virtual tours. EJECE 2019, 3.
  42. Perdana, D.; Irawan, A.I.; Munadi, R. Implementation of web based campus virtual tour for introducing Telkom University building. IJSSST 2019, 20, 1–6.
  43. Suwarno, N.P.M. Virtual campus tour (student perception of university virtual environment). J. Crit. Rev. 2020, 7, 4964–4969.
  44. Walmsley, A.P.; Kersten, T.P. The imperial cathedral in Königslutter (Germany) as an immersive experience in virtual reality with integrated 360° panoramic photography. Appl. Sci. 2020, 10, 1517.
  45. Mah, O.B.P.; Yan, Y.; Tan, J.S.Y.; Tan, Y.-X.; Tay, G.Q.Y.; Chiam, D.J.; Wang, Y.-C.; Dean, K.; Feng, C.-C. Generating a virtual tour for the preservation of the (in)tangible cultural heritage of Tampines Chinese Temple in Singapore. J. Cult. Herit. 2019, 39, 202–211.
Figure 1. The conceptual procedure used in this study.
Figure 2. Panoramic instrument of GigaPan EPIC Pro V.
Figure 3. Procedure for image stitching.
Figure 4. Study site (A–C) and stations and routes (D) for the Surveying Practice course at Feng Chia University (FCU), where blue and green indicate the routes for elevation-based measurement and red represents the station for angle-based measurement.
Figure 5. Panoramic stations (red points) for capturing images.
Figure 6. An example of image stitching with root mean square errors (RMSEs) where green color represents excellent results.
Figure 7. Examples of the stitched images, located at symbol 1 (A), symbol 2 (B), symbol 3 (C), and symbol 4 (D) in (E).
Figure 8. Webpage framework and panoramic stations, where blue and red colors represent elevation- and angle-based measurements.
Figure 9. An example of setting a virtual tour.
Figure 10. Homepage of the developed platform. Symbol 1: hyperlink to the FCU official webpage; symbol 2: information of the supporting project; symbol 3: back to the homepage control bar; symbol 4: control bars (zoom in, zoom out, or switch the image to VR mode); symbol 5: hyperlink to the related documents; symbol 6: quick map.
Figure 11. Webpage for teaching the elevation-based measurement. (A) Entrance for elevation-based (also called leveling) measurement (symbol 1); (B) Route selection (symbols 2 and 3 represent routes 1 and 2, respectively); (C) Teaching video (symbol 4) for setting up the instrument.
Figure 12. Webpage for teaching the angle-based measurement. (A) Entrance for angle-based measurement (symbol 1); (B) Aimed direction from left for measurement (symbol 2) and how to record the observations (symbol 3); (C) Teaching video for setting up the instrument with VR mode.
Figure 13. Cases for comparing different overlap ratios and RMSEs after image stitching. (A) Case 1; (B) Case 2.
Table 1. Classification of Panorama [9].
Field of Capturing    Panorama    360° Panorama    Spherical Panorama
Horizontal            <180°       360°             360°
Vertical              <180°       <180°            180°
Table 2. Used parameters for camera settings in this study.
Shutter Speed (s)    Aperture Size    ISO Value
1/500~1              F/11~F/16        100~400
Table 3. Results of the questionnaire.
Score (%)    5     4     3     2     1
Q1           53    44    3     -     -
Q2           48    44    8     -     -
Q3           37    60    3     -     -
Q4           45    55    -     -     -
Table 4. Statistics of Figure 13 for comparing different overlap ratios and root mean square errors (RMSEs).
Overlap    Capturing Time (min)    RMSE (Pixel), Case 1    RMSE (Pixel), Case 2
30%        ~3                      3.39                    3.48
50%        ~5                      3.37                    3.39
70%        ~9                      3.35                    3.29
