Article

Utility and Usability of Two Forms of Supplemental Vibrotactile Kinesthetic Feedback for Enhancing Movement Accuracy and Efficiency in Goal-Directed Reaching

by Ramsey K. Rayes 1,2,†, Rachel N. Mazorow 1,†, Leigh A. Mrotek 1 and Robert A. Scheidt 1,*

1 Joint Department of Biomedical Engineering, Marquette University and the Medical College of Wisconsin, Milwaukee, WI 53233, USA
2 Medical School, Medical College of Wisconsin, Milwaukee, WI 53226, USA
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Submission received: 3 April 2023 / Revised: 25 May 2023 / Accepted: 6 June 2023 / Published: 9 June 2023
(This article belongs to the Special Issue Applications, Wearables and Sensors for Sports Performance Assessment)

Abstract:
Recent advances in wearable sensors and computing have made possible the development of novel sensory augmentation technologies that promise to enhance human motor performance and quality of life in a wide range of applications. We compared the objective utility and subjective user experience for two biologically inspired ways to encode movement-related information into supplemental feedback for the real-time control of goal-directed reaching in healthy, neurologically intact adults. One encoding scheme mimicked visual feedback encoding by converting real-time hand position in a Cartesian frame of reference into supplemental kinesthetic feedback provided by a vibrotactile display attached to the non-moving arm and hand. The other approach mimicked proprioceptive encoding by providing real-time arm joint angle information via the vibrotactile display. We found that both encoding schemes had objective utility in that after a brief training period, both forms of supplemental feedback promoted improved reach accuracy in the absence of concurrent visual feedback over performance levels achieved using proprioception alone. Cartesian encoding promoted greater reductions in target capture errors in the absence of visual feedback (Cartesian: 59% improvement; Joint Angle: 21% improvement). Accuracy gains promoted by both encoding schemes came at a cost in terms of temporal efficiency; target capture times were considerably longer (1.5 s longer) when reaching with supplemental kinesthetic feedback than without. Furthermore, neither encoding scheme yielded movements that were particularly smooth, although movements made with joint angle encoding were smoother than movements with Cartesian encoding. Participant responses on user experience surveys indicate that both encoding schemes were motivating and that both yielded passable user satisfaction scores. However, only Cartesian endpoint encoding was found to have passable usability; participants felt more competent using Cartesian encoding than joint angle encoding. These results are expected to inform future efforts to develop wearable technology to enhance the accuracy and efficiency of goal-directed actions using continuous supplemental kinesthetic feedback.

1. Introduction

When performing goal-directed actions, people typically integrate information from multiple senses (e.g., vision and proprioception) to control their body movements ([1,2,3,4]; but see [5]). Usually, multimodal sensory integration is performed without conscious thought ([6,7]; see also [8]). Sometimes, however, one of the senses may be unavailable in the short- or long-term (cf., [5]); visual attention might need to be diverted when reaching for a cup of coffee, or proprioception may be permanently impaired after a neuromotor injury such as stroke ([9]; see also [10]). The current study is based on the idea that it may be desirable in some cases to enhance the accuracy and efficiency of movements by providing continuous supplemental kinesthetic feedback of limb or tool motion in real time using sensory augmentation techniques (cf., [10,11,12,13,14,15,16,17,18,19]).
We envision a future where intelligent wearable technologies enhance functional movements by sensing the state of the body and its physical surroundings, synthesizing real-time sensory feedback optimized to achieve control goals, and delivering the feedback in an inconspicuous way that does not interfere with other critical behaviors. Recent advances in wearable sensors and computing have made possible the development of novel augmentation technologies that have potential applications in a wide array of fields, including aerospace [20,21], navigation [22], virtual reality [23], sports [24,25], and healthcare [26,27]. In healthy individuals, supplemental feedback has been used to augment motor performance in complex tasks, such as robot-assisted surgery [28,29,30], and to promote motor learning while playing an instrument or sports [31,32,33]. In individuals with sensorimotor deficits, vibrotactile stimuli have been applied to improve sensorimotor control by exciting corticospinal pathways contributing to the regulation of movement and/or reflex activity [34]. As such, efforts to enhance real-time feedback control of the arm and hand draw on a rich body of prior research.
The requirement that a sensory augmentation system be continuously wearable places strict constraints on the technology. The system should pose minimal risk of injury, be easy to don and doff, place minimal demands on visual attention, not interfere with conversation, and be inconspicuous enough to minimize social stigmatization (cf., [35,36,37]). There are several ways that supplemental sensory information can be delivered by a wearable system, including acoustic stimuli [38,39,40], visual stimuli [41], and electrotactile stimuli [42,43]; we propose that only vibrotactile stimulation readily satisfies all of these criteria: electrotactile stimulation can cause skin breakdown [44] or interfere with speech [45], acoustic stimulation can interfere with perception of the spoken word, and visual augmentation places a high load on visual attention. Several stimulation sites proposed by Kaczmarek et al. [11] can be discounted if the wearable system is to be inconspicuous and easy for individuals to don and doff: these include the abdomen, back, fingertip, forehead, and tongue. By contrast, donning and doffing can be relatively easy for wearable technology applied to the proximal arm segments. Preliminary experimental studies have found that a supplemental vibrotactile feedback system consisting of two tactors per degree of freedom can be intuitive to use [10,17] and can promote degrees of movement accuracy and efficiency that meet or exceed those observed during movements guided by intrinsic proprioceptive feedback in the absence of visual feedback [18,19].
There are at least four ways to encode information relevant to the control of limb motion into supplemental vibrotactile feedback. These include continuous feedback of limb state relative to an arbitrary reference body configuration [17,18]; continuous error feedback relative to a goal [12,14,16,32,46]; continuous optimal feedback relative to some arbitrary cost function [47]; and intermittent alarms indicating undesirable conditions [48]. Regardless of which form the vibrotactile feedback takes, the cues must be designed and applied such that the encoded information can be easily perceived and interpreted by the user [49,50,51,52]. User experience research notes that the usefulness of a system derives from two main aspects of its design evaluated within a specific context. These are utility (the capacity of a system to fulfill a real user need) and usability (the capacity for users to readily understand and learn how to use the system). The immediate goal of the current study is to compare the objective utility and subjective user experience offered by two biologically inspired ways of encoding limb kinematic information into vibrotactile feedback for the real-time control of goal-directed movements. These include the encoding of hand position within a reference frame modeled in some respects after visual feedback (Cartesian Endpoint Encoding; CEE) and the encoding of arm configuration within a reference frame modeled after intrinsic proprioceptive feedback (Joint Angle Encoding; JAE).
We asked healthy human participants to hold the handle of a robotic manipulandum while making point-to-point, goal-directed reaches in the horizontal plane. We compared the utility of the two supplemental kinesthetic feedback encoding schemes for enhancing the accuracy and efficiency of reaches performed in the absence of concurrent visual feedback. We also compared the extent to which the two encoding schemes could foster positive user experiences, which we assessed using standard surveys of system usability [53], intrinsic motivation [54], and user satisfaction [55,56]. We specifically tested the following hypotheses: (1) after a brief training period, supplemental vibrotactile feedback can improve reach accuracy in the absence of concurrent visual feedback for both encoding schemes; (2) when reaching without vision, one or the other encoding scheme will promote better reach accuracy and/or efficiency; and (3) participants will prefer one encoding method over the other. The first two hypotheses were tested via analysis of kinematic data, whereas we tested the last hypothesis via analysis of survey responses.

2. Materials and Methods

2.1. Participant Recruitment

Based on the results of prior studies using similar experimental techniques [17,18], a convenience sample of 15 healthy right-handed individuals (8 female, 7 male; age range: 20–27 years) gave informed consent to participate in this study. All procedures were approved by the Institutional Review Board of Marquette University in compliance with the 1964 Declaration of Helsinki. No participant reported a history of neurological disease, sensory deficits, colorblindness, or visual difficulty. Each participant completed two experimental sessions performed on separate days (2–17 days apart). Each session required about 1 to 1.5 h to complete, including experimental setup and testing.

2.2. Experimental Set-Up

We adapted techniques previously used to study the integration of supplemental vibrotactile kinesthetic feedback into the ongoing control of goal-directed actions [17,18]. Participants were seated in an adjustable, high-backed chair that was positioned directly in front of a horizontal planar robotic manipulandum (Figure 1A). They used their right hand to grasp a spherical handle affixed to the robot’s endpoint. Handle location was resolved within 0.038 mm using joint angular position data from two 17-bit encoders (A25SB17P180C06E1CN; Gurley Instruments Inc., Troy, NY, USA) [57]. Hand position data were collected at 200 samples/s using the MATLAB XPC real-time computing environment (The Mathworks Inc., Natick, MA, USA). An opaque, horizontal shield was placed directly above the plane of motion of the robotic arm to prevent a direct view of the participant’s arm and robot. The seat position and height were adjusted so that the participant was able to reach all positions of the workspace without moving the torso. Movement of the torso was further minimized using nylon shoulder restraints attached to the chair. The right arm rested in a four-degree-of-freedom, lightweight arm support that was fixed to the seat; this linkage acted to counteract the effect of gravity and to maintain a shoulder abduction angle of approximately 75° to 85°. Wrist motions were minimized using an orthopedic brace.
A vertical 40″ LED screen (Westinghouse WD40FX1170) was mounted approximately 70 cm directly in front of the participant and was used to present visual stimuli. Twenty-five visual targets (1 cm diameter) were arranged in a 5 × 5 grid that had an edge length of 8 cm (Figure 1B); we refer to this grid of targets as the visual workspace. During certain trials, hand position was represented onscreen with a white cursor (0.5 cm diameter) that provided honest real-time visual feedback of hand location relative to the visual targets. Hand motion in the physical workspace was mapped to cursor motion in the visual workspace in a 1:1 manner, whereby a 1 cm movement of the hand to the participant’s right corresponded to a 1 cm displacement of the cursor to the right. Likewise, a 1 cm movement of the hand away from the participant’s body corresponded to a 1 cm displacement of the cursor toward the top of the visual display.
We attached a “vibrotactile display” to the non-moving left arm and used it to provide supplemental kinesthetic feedback regarding the motion of the moving right hand (Figure 1A). The vibrotactile display comprised four eccentric rotating mass (ERM) vibration motors (Precision Microdrives Inc., London, UK; Model: 308–102), which have an operational frequency range of 60 Hz to 380 Hz and a covarying amplitude range of 0.0 to 6.3 G. Each vibration motor was encased in a polyolefin sheath that could be affixed directly to the skin of the arm with athletic tape and adhesive bandages. With reference to anatomic position, the placement of the four vibration motors, shown in Figure 1C, included the (i) C7 dermatome on the dorsal hand, about 5 cm proximal to the middle finger knuckle; (ii) C8 dermatome on the ventromedial forearm, approximately 5 cm proximal to the ulnar styloid process; (iii) C6 dermatome on the ventrolateral forearm, approximately 5 cm distal to the antecubital fossa; and (iv) T1 dermatome on the posterior arm, about 5 cm proximal to the olecranon (cf., [58]). The spacing between motors was greater than 6 cm at all sites, thereby avoiding undesirable mechanical crosstalk between adjacent sites [59,60]. Participants wore noise-canceling headphones (Boltune, Cupertino, CA, USA; Model BT-BH011) that played white noise throughout experimental testing to minimize audible cues from the vibration motors and other potential audible distractions.

2.3. Vibrotactile Feedback Encoding Schemes

Instantaneous hand position, as measured by the horizontal planar robot, was encoded into supplemental vibrotactile feedback using two different encoding schemes: Cartesian Endpoint Encoding (CEE) and Joint Angle Encoding (JAE) (Figure 1C). In Cartesian Endpoint Encoding, the vibrotactile feedback was encoded in a Cartesian frame of reference with its origin at the center of the central target of the 5 × 5 grid, which we refer to as the “home position”. The motors did not produce any vibrations when the hand position was within 0.25 cm of the home position. Movement of the hand to the right of the home position produced vibrations in the X+ motor, whereas movement of the hand to the left of the home position produced vibrations in the X− motor (see Figure 1B,C). Likewise, hand movements that would move the cursor higher than the home position in the visual workspace corresponded to vibrations in the Y+ motor, and movements in the lower half of the workspace caused the Y− motor to vibrate. The X+ and X− motors were not activated with vertical hand movement along the Y-axis of the visual workspace (i.e., when X = 0). Similarly, the Y+ and Y− motors did not produce vibrations with horizontal hand movement along the X-axis in the visual workspace (i.e., when Y = 0). Vibration frequency scaled with distance from the home position, ranging from 75 Hz at 0.25 cm from the home position to 200 Hz at the center of the most distant target (4.0 cm from the home position). The vibrotactile feedback saturated at maximum activation at distances greater than 5.5 cm from the home position.
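To make the mapping concrete, the following illustrative Python sketch implements the Cartesian Endpoint Encoding described above. The anchor points (no vibration within 0.25 cm of home, 75 Hz at 0.25 cm, 200 Hz at 4.0 cm, saturation beyond 5.5 cm) are taken from the text; the linear interpolation between anchors and all names are our assumptions, not the authors' implementation.

```python
# Illustrative sketch of Cartesian Endpoint Encoding (CEE).
# Units: cm for position, Hz for ERM drive frequency.

DEAD_ZONE = 0.25            # cm: no vibration within this radius of home
R_TARGET = 4.0              # cm: distance to the most distant target center
R_SAT = 5.5                 # cm: activation saturates beyond this distance
F_MIN, F_MAX = 75.0, 200.0  # Hz at DEAD_ZONE and R_TARGET, respectively


def axis_frequency(displacement):
    """Drive frequency (Hz) for one axis given signed displacement (cm)."""
    mag = min(abs(displacement), R_SAT)   # saturate at maximum activation
    if mag < DEAD_ZONE:
        return 0.0
    # Linear scaling anchored at (DEAD_ZONE, F_MIN) and (R_TARGET, F_MAX),
    # extrapolated out to R_SAT (an assumption).
    return F_MIN + (mag - DEAD_ZONE) / (R_TARGET - DEAD_ZONE) * (F_MAX - F_MIN)


def cee_motor_frequencies(x, y):
    """Frequencies for the X+, X-, Y+, and Y- tactors at hand position (x, y)."""
    fx, fy = axis_frequency(x), axis_frequency(y)
    return {"X+": fx if x > 0 else 0.0, "X-": fx if x < 0 else 0.0,
            "Y+": fy if y > 0 else 0.0, "Y-": fy if y < 0 else 0.0}
```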
For Joint Angle Encoding, hand position as measured by the robot was used to infer the participant’s instantaneous shoulder angle (θs) and elbow angle (θe) using inverse kinematics analysis [Equations (1a) and (1b)] and individualized measurements as described in Figure 2 (see also Appendix A). With the participant grasping the handle at the home position, measurements were taken between (a) the robot handle and the participant’s sagittal midline, (b) the participant’s sagittal midline and the shoulder’s center of rotation, (c) the shoulder’s center of rotation and the elbow’s center of rotation, and (d) the elbow’s center of rotation and the center of the robot handle. Measurements a and b were used to reframe the current hand position from the robot reference frame into a shoulder-centered reference frame (x, y) in real time. The distance (h) between the participant’s shoulder and hand was then calculated to find the current shoulder and elbow joint angles using the static measurements c and d.
$$\theta_s = \tan^{-1}\!\left(\frac{y}{x}\right) - \cos^{-1}\!\left(\frac{h^2 + c^2 - d^2}{2ch}\right) \tag{1a}$$

$$\theta_e = \pi - \cos^{-1}\!\left(\frac{h^2 - c^2 - d^2}{2cd}\right) \tag{1b}$$
Joint Angle Encoding utilized the same visual workspace as Cartesian Endpoint Encoding with comparable scaling, such that vibrotactile feedback varied over the same range of intensities in the θe−/θe+ or θs−/θs+ motors in response to changes in the elbow or shoulder angles, respectively.
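The inverse kinematics of Equations (1a) and (1b) can be written compactly. The Python sketch below (function and variable names are ours) computes the shoulder and elbow angles from a hand position expressed in the shoulder-centered frame, assuming the hand lies within the arm's reachable workspace.

```python
from math import acos, atan2, hypot, pi


def shoulder_elbow_angles(x, y, c, d):
    """Planar inverse kinematics per Equations (1a) and (1b).

    x, y : hand position in the shoulder-centered frame (cm)
    c    : shoulder-to-elbow segment length (cm)
    d    : elbow-to-handle segment length (cm)
    Returns (theta_s, theta_e) in radians. Assumes the hand is reachable,
    i.e., |c - d| <= hypot(x, y) <= c + d, so both acos arguments are valid;
    subtracting in (1a) selects the elbow configuration natural for the
    right arm in this task (an assumption).
    """
    h = hypot(x, y)  # shoulder-to-hand distance
    theta_s = atan2(y, x) - acos((h**2 + c**2 - d**2) / (2 * c * h))  # (1a)
    theta_e = pi - acos((h**2 - c**2 - d**2) / (2 * c * d))           # (1b)
    return theta_s, theta_e
```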

2.4. Testing Procedures

In one of the two sessions, we tested the capability of participants to use vibrotactile feedback with Cartesian Endpoint Encoding to enhance the accuracy and efficiency of goal-directed reaching movements. In the other session, we similarly tested vibrotactile feedback with Joint Angle Encoding. The order of the two sessions was counterbalanced across participants. Prior to testing on each day, participants were introduced to that day’s vibrotactile display, informed about how to interpret the vibrotactile cues, and invited to freely explore the robot’s workspace. During this familiarization period, participants were repeatedly asked to report which motors were activated at any given time. If they made errors in detecting vibration on any motor, the motor’s location was adjusted by 2 or 3 cm within the same dermatome so that each participant could reliably detect and report vibration. Participants were then encouraged to explore the vibrotactile display by making self-guided reaching movements until comfortable with the encoding scheme. This introduction and exploration procedure took between 2 and 5 min to complete and established how the workspace was encoded within the vibrotactile feedback.
During the main part of both experimental sessions, participants performed 9 blocks of 25 reach-to-target movements, one movement per trial (Figure 1D). Participants started each block with their right hand centered within the home position. The target grid was always displayed on-screen as a set of low-contrast gray dots, and the current target was presented in vivid green (Figure 1B). Participants were instructed to “capture the target as quickly and accurately as possible”. Upon completing a reach, they were to announce that they had arrived at the target, and the experimenter ended the trial unless 10 s had elapsed, whereupon the trial ended automatically. At the end of the trial, the previous target became an empty green dot, and after a random delay period (2.3 ± 0.7 s), a new location in the workspace turned green, cueing the participant to move to that location. Target sequences were pseudo-randomized across each block of 25 trials. The distance between consecutive targets ranged from 4.0 to 6.32 cm.
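One plausible way to generate such pseudo-randomized target sequences is sketched below in Python; the study does not describe its sequencing algorithm, so this greedy retry scheme and all names are hypothetical. Targets are drawn from the 5 × 5 grid (8 cm edge length, so 2 cm spacing) under the constraint that consecutive targets lie 4.0 to 6.32 cm apart.

```python
import random
from math import hypot

# 5 x 5 target grid with an 8 cm edge length (2 cm spacing), centered on home.
GRID = [(x, y) for x in (-4, -2, 0, 2, 4) for y in (-4, -2, 0, 2, 4)]  # cm
HOME = GRID.index((0, 0))


def target_sequence(rng):
    """Pseudo-random ordering of all grid targets, starting from home, with
    consecutive-target distances constrained to the 4.0-6.32 cm range."""
    while True:  # retry until a complete ordering is found
        order, remaining = [HOME], set(range(len(GRID))) - {HOME}
        while remaining:
            cx, cy = GRID[order[-1]]
            feasible = [i for i in remaining
                        if 4.0 <= hypot(GRID[i][0] - cx, GRID[i][1] - cy) <= 6.32]
            if not feasible:  # greedy dead end; start over
                break
            order.append(rng.choice(feasible))
            remaining.discard(order[-1])
        if not remaining:
            return order


sequence = target_sequence(random.Random(2023))
```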
The block order and descriptions were as follows:
1. Visual Feedback: Participants were able to see a cursor representing their hand position in the visual workspace. No vibrotactile feedback was provided. This block served to familiarize participants with the reaching task. This was the only block where participants were able to see the cursor representing hand position in the visual workspace.
2. No Vision 1: Neither visual feedback nor vibrotactile feedback was provided. This block served to provide a baseline assessment of performance guided only by proprioceptive feedback (for comparison with blocks where visual or vibrational cues were also provided).
3–7. VTF Training 1–5: Participants completed 5 blocks of 25 reaches, each with supplemental vibrotactile feedback (VTF Training) and without concurrent visual feedback. If the center of the robot’s handle was more than 0.25 cm from the center of the target when the participants indicated they had completed a reach, the robot smoothly moved the hand to the center of the indicated target with a 1 s movement time (see the trajectory sketch after this list). These training blocks were the only trials where terminal corrections were applied. No visual knowledge of results was provided during the training blocks, to encourage participants to learn the mapping from hand position in the physical workspace to patterns of vibrotactile feedback within the vibrotactile display.
8. No Vision 2: Neither visual feedback nor vibrotactile feedback was provided. This block served to establish a post-training measure of performance guided only by proprioceptive feedback. No robot corrections were provided during this block to allow for a fair comparison with both the pre-training No Vision 1 block (to quantify the general benefits of training) and with the post-training VTF Test block (to quantify the benefits of concurrent vibrotactile guidance).
9. VTF Test: No visual feedback or robot corrections were provided during this final test of reach performance guided by concurrent vibrotactile feedback. We sought to compare performance improvements in this block relative to the No Vision 2 block across encoding schemes as a primary outcome of this study.
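The terminal corrections in the training blocks moved the hand smoothly to the target center over 1 s. The study does not specify the trajectory profile; a minimum-jerk profile, sketched below in Python, is one standard way to generate such a smooth repositioning at the robot's 200 samples/s rate, and is our assumption rather than the authors' documented method.

```python
import numpy as np


def repositioning_trajectory(p0, p1, T=1.0, fs=200):
    """Smooth point-to-point repositioning of the hand from p0 to p1 (cm).

    A minimum-jerk position profile is assumed; the study states only that
    the robot moved the hand smoothly with a 1 s movement time. Returns an
    array of hand positions sampled at fs samples/s.
    """
    t = np.linspace(0.0, T, int(T * fs) + 1)
    s = t / T
    blend = 10 * s**3 - 15 * s**4 + 6 * s**5  # 0 -> 1, zero vel/accel at ends
    return np.asarray(p0) + np.outer(blend, np.subtract(p1, p0))
```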
At the end of each session, participants completed three surveys that assessed their subjective experiences with each encoding scheme in terms of usability, motivation, and satisfaction. As described below, these included the System Usability Scale (SUS; [53]), the Intrinsic Motivation Inventory (IMI; [54]), and the Quebec User Evaluation of Satisfaction with Assistive Technology (QUEST 2.0; [55,56]).

2.5. Analysis of Kinematic Data

Analysis of kinematic performance focused principally on the final hand position, which was recorded when participants verbally indicated that they had acquired the intended target. Our primary measure of movement accuracy was target capture error, defined as the Euclidean distance between the center of the illuminated target and the final hand position. Our primary measure of movement efficiency was target capture time, defined as the time that elapsed from the moment the target was illuminated to when the participant gave a verbal cue to the researcher or after 10 s had expired, whichever came first. We also computed two secondary outcome measures pertaining to kinematic efficiency. One was the normalized path length, defined as the total hand path length within a trial divided by the length of the ideal straight-line path between the hand’s starting location and the desired target. Another was a decomposition index [DI; Equation (2)] that is sensitive to different strategies for using vibrotactile feedback to solve the target capture task in the absence of visual feedback [18]. The decomposition index is a unitless measure that quantifies the extent to which sampled-data hand paths in any given trial move exclusively parallel to the cardinal axes of the vibrotactile display. In the case of Cartesian Endpoint Encoding, for example, the cardinal axes of the vibrotactile display correspond to the {x, y} axes of the visual display. If we let the generalized variable q1 correspond to the hand’s x-axis location and the variable q2 correspond to its y-axis location, we obtain:
$$DI = \sum_{n=2}^{N}\left\{\frac{1}{2}\left|\frac{q_1(n)-q_1(n-1)}{\sum_{n=2}^{N}\left\{q_1(n)-q_1(n-1)\right\}}\right|\left|\frac{\dot{q}_{2max}-\dot{q}_2(n)}{\dot{q}_{2max}}\right|+\frac{1}{2}\left|\frac{q_2(n)-q_2(n-1)}{\sum_{n=2}^{N}\left\{q_2(n)-q_2(n-1)\right\}}\right|\left|\frac{\dot{q}_{1max}-\dot{q}_1(n)}{\dot{q}_{1max}}\right|\right\}\tag{2}$$
where N corresponds to the maximum number of data samples within a given trajectory, n is the sample number within that trajectory, and $\dot{q}_{1max}$ and $\dot{q}_{2max}$ correspond to the peak hand speeds along the cardinal {x, y} axes. For computing the decomposition analysis in trials with Joint Angle Encoding, the variable q1 corresponds to the shoulder angle θs, and q2 corresponds to the elbow angle θe. Hand movements represented as smooth straight lines in joint angle coordinates (e.g., [61]) have low decomposition index values, whereas movements composed of sequential motions at the two joints have high decomposition index values. Per Equation (2), off-axis, straight-line trajectories with bell-shaped (Gaussian) velocity profiles in the {q1, q2} reference frames yield decomposition index values equal to 0.19, whereas there is no upper limit on decomposition index values for highly decomposed movements.
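For concreteness, the Python sketch below (variable names are ours) computes the normalized path length and the decomposition index of Equation (2) from one trial's sampled coordinates, estimating speeds by finite differences at the 200 samples/s acquisition rate; the differencing scheme is our assumption.

```python
import numpy as np


def trial_metrics(q1, q2, fs=200):
    """Normalized path length and decomposition index [Equation (2)].

    q1, q2 : sampled generalized coordinates for one trial (x/y hand
             position for CEE; shoulder/elbow angle for JAE).
    """
    q1, q2 = np.asarray(q1, float), np.asarray(q2, float)
    dq1, dq2 = np.diff(q1), np.diff(q2)

    # Normalized path length: traversed path / ideal straight-line distance.
    path_ratio = (np.sum(np.hypot(dq1, dq2))
                  / np.hypot(q1[-1] - q1[0], q2[-1] - q2[0]))

    # Decomposition index, Equation (2); speeds from finite differences.
    v1, v2 = np.abs(dq1) * fs, np.abs(dq2) * fs
    v1max, v2max = v1.max(), v2.max()
    di = np.sum(
        0.5 * np.abs(dq1 / np.sum(dq1)) * np.abs((v2max - v2) / v2max)
        + 0.5 * np.abs(dq2 / np.sum(dq2)) * np.abs((v1max - v1) / v1max))
    return path_ratio, di
```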

2.6. Assessment of Subjective User Experience

Usability is defined as the “appropriateness to a purpose of any particular artefact” [53]. We used the System Usability Scale (SUS; [53]) to assess the usability of each vibrotactile feedback encoding scheme within the context of enhancing the accuracy and efficiency of goal-directed reaching movements. The SUS is a 10-item questionnaire using a 5-option Likert scale (ranging from “Strongly Disagree” to “Strongly Agree”). Scores were summed across items and scaled to yield a total score ranging from 0 to 100. Higher scores indicate better usability, with scores greater than 68 generally regarded as indicating passable usability [62].
We used a 30-question subset of the Intrinsic Motivation Inventory (IMI; [54]) to assess the extent to which participants perceived the supplemental kinesthetic feedback to be motivating. The questionnaire spanned five dimensions of the original survey, including “interest/enjoyment”, “effort/importance”, “value/usefulness”, “perceived competence”, and “felt pressure/tension.” The remaining two sections of the original survey, “perceived choice” and “relatedness”, were removed because they assessed aspects of user interaction beyond the scope of this study. Participants responded to each of the 30 prompts using a 7-option Likert scale (ranging from “Not At All True” to “Very True”). The IMI “interest/enjoyment” subscale is generally considered to be the self-report measure of intrinsic motivation [63,64]. Higher scores on most subsections indicate better subjective user experience (except for the “pressure/tension” subsection, where higher scores indicate that the participant felt more pressured or tense during the exercise). Average “interest/enjoyment” scores greater than or equal to four indicate that a given system is perceived as motivating to use.
Finally, we assessed user satisfaction using the Quebec User Evaluation of Satisfaction with Assistive Technology (QUEST 2.0; [55,56]). We used an 8-item version of the QUEST that assessed user satisfaction in terms of the system’s physical characteristics (e.g., size, weight), physical and cognitive fit (e.g., physical comfort, ease in adjusting, ease in learning), and functional characteristics (e.g., how successfully the device performed); we did not include the original QUEST subsection that assesses satisfaction with services associated with the device because such services were neither rendered nor necessary for the completion of our study. The QUEST questionnaire used a 5-option Likert scale ranging from “Not Satisfied At All” to “Very Satisfied”. Higher scores indicate higher user satisfaction. The individual item scores were averaged across the eight questions to obtain a final score. Scores equal to or greater than 3 were considered to indicate passable satisfaction with the assistive device. Lastly, the QUEST asked participants to identify from a list of eight options (12 in the original questionnaire; 4 pertaining to services were removed) the three most important aspects of a wearable vibrotactile feedback system that would impact their satisfaction in using it.
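As a concrete illustration of the survey scoring described above, the Python sketch below applies the standard SUS scoring rule and the simple averaging used for the IMI subscales and the 8-item QUEST; the thresholds are those cited in the text, and function names are ours.

```python
def score_sus(ratings):
    """Standard SUS scoring: 10 items rated 1-5. Odd-numbered items
    contribute (rating - 1) and even-numbered items (5 - rating); the sum
    is scaled by 2.5 to give 0-100, with scores above 68 conventionally
    regarded as passable usability [62]."""
    assert len(ratings) == 10
    return 2.5 * sum((r - 1) if i % 2 == 0 else (5 - r)
                     for i, r in enumerate(ratings))  # index 0 = item 1


def subscale_mean(ratings):
    """Mean rating, used both for IMI subscales (1-7 scale; an
    interest/enjoyment mean >= 4 was taken to indicate a motivating system)
    and for the 8-item QUEST (1-5 scale; a mean >= 3 was taken to indicate
    passable satisfaction)."""
    return sum(ratings) / len(ratings)
```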

2.7. Statistical Hypothesis Testing

This study tested three main hypotheses. The first posits that after a brief training period (~30 min), supplemental vibrotactile feedback can improve reach accuracy in the absence of concurrent visual feedback for both encoding schemes. Given the within-subject research design, this hypothesis was tested using planned one-sided paired t-tests to compare target capture error across the No Vision 2 and VTF Test blocks within each encoding scheme. The second posits that when reaching without vision, one or the other vibrotactile encoding scheme will promote better reaching accuracy and/or efficiency. On the one hand, intrinsic proprioceptors normally used for real-time control of arm movements are embedded in muscles spanning the shoulder and elbow joints, suggesting that supplemental vibrotactile feedback encoded in a joint angle coordinate system may be more intuitive and effective to use. On the other hand, Cartesian Endpoint Encoding of supplemental vibrotactile feedback largely conforms to the visual reference frame, and so vibrotactile feedback encoded in Cartesian coordinates could be more effective and easier to use due to vision’s dominant influence on the specification of movement vectors [1]. To test this hypothesis, we performed planned, paired-sample t-tests comparing the primary performance measures across the vibrotactile feedback encoding schemes during the VTF Test blocks. The third hypothesis predicts that participants will prefer one way of encoding supplemental vibrotactile feedback over the other (i.e., that user experience will differ between the Cartesian Endpoint and Joint Angle Encoding schemes). To test this hypothesis, we operationally defined a “more positive user experience” as one where participants would find the system to be more usable, more motivating, and/or more satisfying. Separate planned paired-samples t-tests were therefore performed to determine whether SUS values, IMI values, and QUEST values systematically favored Joint Angle Encoding or Cartesian Endpoint Encoding. Subsequently, 95% confidence intervals (CI) were calculated to classify cohort subjective experiences. Secondary outcome measures were analyzed using repeated measures ANOVA and post-hoc paired-samples t-tests to compare performance across trial blocks and encoding schemes. Statistical testing was performed in RStudio 2022.07.1. Statistical significance was set at a family-wise error rate of α = 0.05.
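The study reports performing these tests in R; for readers who prefer Python, an equivalent set of planned comparisons is sketched below. The simulated data (seeded random draws using group statistics reported in Section 3 as illustrative parameters) merely stand in for the per-participant block means, and all variable names are ours.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 15  # participants (within-subject design)

# Simulated per-participant mean capture errors (cm); illustrative only.
err_no_vision2 = rng.normal(5.1, 2.7, n)
err_vtf_test_cee = rng.normal(3.1, 1.0, n)
err_vtf_test_jae = rng.normal(4.1, 2.6, n)

# Hypothesis 1: VTF Test error < No Vision 2 error (one-sided, paired).
t1, p1 = stats.ttest_rel(err_vtf_test_cee, err_no_vision2, alternative="less")

# Hypothesis 2: primary measures differ between encoding schemes (two-sided).
t2, p2 = stats.ttest_rel(err_vtf_test_cee, err_vtf_test_jae)

# Hypothesis 3: e.g., paired comparison of SUS scores plus a 95% CI on the
# cohort mean, used to classify subjective experience against thresholds.
sus_cee = rng.normal(63, 17, n)
ci_low, ci_high = stats.t.interval(0.95, n - 1, loc=sus_cee.mean(),
                                   scale=stats.sem(sus_cee))
```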

3. Results

All participants completed both testing sessions (Cartesian Endpoint Encoding and Joint Angle Encoding), including post-test surveys, and all were attentive throughout the study. As such, data from all participants were included in the statistical hypothesis testing described in the following paragraphs.

3.1. Effects of Supplemental Vibrotactile Feedback on Primary Measures of Reach Accuracy and Efficiency

Figure 3 shows examples of hand paths (gray traces) and movement endpoints (red dots) relative to the workspace targets (blue circles) within all nine trial blocks performed by selected participants for each of the two experimental sessions: Cartesian Endpoint Encoding and Joint Angle Encoding. During baseline testing with concurrent visual cursor feedback (Vision), the hand and its cursor moved directly from one target to the next, following approximately straight paths. All movement endpoints were within their intended targets. Upon removing the cursor (No Vision 1; in the absence of supplemental vibrotactile feedback), hand paths failed to achieve their targets. Instead, movements became much longer than ideal such that target capture errors accumulated from one movement to the next. In both sessions, the space spanned by the hand paths drifted to the left (i.e., toward the participant’s midline) and greatly exceeded the space spanned by the target set. These features of disordered reaching decreased immediately upon providing supplemental vibrotactile feedback in both sessions. Recall that the robot repositioned the hand to the intended target at the end of each reach in the training blocks; this provided proprioceptive knowledge of results that participants could use to learn how hand location in the physical workspace relates to patterns of vibrations in the vibrotactile display. Passive repositioning of the hand also minimized the accumulation of errors from one trial to the next within each training block, i.e., proprioceptive drift [65]. Interestingly, VTF Training trials performed with a Cartesian Endpoint Encoding of hand position appear to be decomposed into separate horizontal (x-axis) and vertical (y-axis) motions. Although less apparent from these hand path plots, the participant also tended to decompose training reaches into separate motions at the shoulder and elbow joints in the Joint Angle Encoding session. With Cartesian Endpoint Encoding, decomposition appeared to persist after training was completed, even when vibrotactile feedback was removed in the No Vision 2 block; this carry-over was far less apparent with Joint Angle Encoding. For both encoding schemes, movement accuracy degraded in the absence of vibrotactile feedback and concurrent vision in the No Vision 2 test block. By contrast, movement accuracy improved considerably when vibrotactile feedback was reinstated in the VTF Test block, even without robotic repositioning of the hand at the end of each reach, which was limited to the five training blocks.
The single-subject trends depicted in Figure 3 were characteristic of movements made by the entire study cohort. Figure 4A presents the across-participants average target capture error (our primary outcome measure of reach accuracy) for each trial block in both sessions. Target capture errors were minimal (averaging only 0.1 ± 0.03 cm, mean ± SD) when participants could view an onscreen cursor that represented the location of their moving hand in the workspace. Reach accuracy degraded markedly when the cursor was removed in the No Vision 1 trial block (CEE: 6.9 ± 3.2 cm; JAE: 6.2 ± 2.9 cm). Accuracy improved substantially during initial training with vibrotactile feedback (Training Block 1), regardless of the encoding scheme (CEE: 2.5 ± 0.9 cm; JAE: 3.5 ± 1.3 cm). Accuracy improved progressively throughout the five training blocks, although the magnitude of target capture errors in the Joint Angle Encoding session appeared to exceed those in the Cartesian Endpoint Encoding session at the end of the training blocks (CEE: 1.4 ± 0.4 cm; JAE: 2.2 ± 0.7 cm). When vibrotactile feedback was removed in the No Vision 2 Test block, target capture errors increased dramatically in both sessions (CEE: 5.1 ± 2.7 cm; JAE: 5.1 ± 3.4 cm). Whereas average target capture errors decreased upon reinstating vibrotactile guidance in both sessions, there was a greater decrease with Cartesian Endpoint Encoding than Joint Angle Encoding (CEE: 3.1 ± 1.0 cm; JAE: 4.1 ± 2.6 cm).
We used a planned, one-sided, paired-sample t-test to test our first hypothesis (i.e., after a brief training period, both forms of supplemental vibrotactile feedback facilitate reach accuracy in the absence of concurrent visual feedback). To do so, we compared target capture errors across the No Vision 2 and VTF Test blocks for each participant and for each experimental session. Consistent with the hypothesis, both encoding schemes were effective at reducing target capture error in the absence of visual feedback (CEE: t14 = 4.56, p < 0.001; JAE: t14 = 2.34, p = 0.017), even in the absence of passive robotic repositioning of the hand, which was present during the VTF Training blocks but absent in the VTF Test block.
Figure 4B presents the across-participants average target capture times (our primary outcome measure of temporal efficiency) for each trial block in both experimental sessions. Target capture times were uniformly low when participants performed baseline reaches with and without concurrent cursor feedback of hand position (Vision—CEE: 2.8 ± 0.6 s; JAE: 2.9 ± 0.7 s. No Vision 1—CEE: 3.0 ± 0.8 s; JAE: 3.1 ± 0.8 s). By contrast, target capture times exceeded 5 s in both sessions, even at the end of training (CEE: 5.4 ± 1.2 s; JAE: 5.1 ± 1.5 s). Although capture times dropped nearly to baseline levels in the No Vision 2 block (CEE: 3.9 ± 1.3 s; JAE: 3.8 ± 1.4 s), they increased to end-of-training levels in the VTF Test block (CEE: 5.5 ± 1.1 s; JAE: 5.1 ± 1.4 s). This decrease in temporal efficiency suggests that the integration of supplemental vibrotactile feedback into the ongoing control of movement comes at a significant cost in terms of cognitive workload.
We used a set of planned, two-sided, paired-sample t-tests to assess our second hypothesis (i.e., when reaching without vision, one or the other vibrotactile feedback encoding scheme would promote better reaching accuracy and/or efficiency). Here, we compared our primary measures of reach accuracy (target capture error) and efficiency (target capture time) across the two vibrotactile feedback encoding schemes in the VTF Test blocks. We found that Cartesian Endpoint Encoding was better than Joint Angle Encoding in enabling participants to reduce target capture errors, not only in absolute terms (i.e., comparing VTF Test blocks: t14 = 3.33, p < 0.005) but also in terms of error reduction relative to the No Vision 2 test block in each session (t14 = 3.81, p < 0.002). A similar trend toward better temporal efficiency with Cartesian Endpoint Encoding did not quite achieve statistical significance (comparing VTF Test blocks: t14 = 1.89, p = 0.080). It should also be noted that VTF Test block capture times were considerably longer than capture times in the No Vision 2 test blocks for both the Cartesian Endpoint (t14 = 7.93, p < 0.001) and Joint Angle (t14 = 3.92, p < 0.002) sessions.

3.2. Secondary Analyses of Kinematic Performance during Reaching with Supplemental Vibrotactile Feedback

We analyzed secondary measures of kinematic performance (path length ratio and decomposition index) to gain insight into strategies participants may have used to integrate supplemental vibrotactile kinesthetic feedback into the ongoing control of reaching. Figure 5A presents the cohort-average path length ratio, a measure of the spatial efficiency of reaching, for each trial block in both experimental sessions. Average path length ratios were uniformly low (nearly ideal) when participants could view a cursor representing the location of their hand within the workspace (CEE: 1.40 ± 0.13; JAE: 1.39 ± 0.15). The spatial efficiency of reaching was preserved when the cursor was removed in the No Vision 1 trial block (CEE: 1.4 ± 0.28; JAE: 1.3 ± 0.22). By contrast, path length ratios increased markedly during initial training with vibrotactile feedback (Training Block 1), regardless of the encoding scheme (CEE: 2.5 ± 0.92; JAE: 2.4 ± 0.62). The spatial efficiency of reaching improved progressively throughout the five training blocks, with path length ratios achieving nearly identical values in the two sessions at the end of training (CEE: 2.0 ± 0.59; JAE: 2.0 ± 0.53). When vibrotactile feedback was removed in the No Vision 2 test block, path length ratios dropped to near baseline levels (CEE: 1.5 ± 0.43; JAE: 1.4 ± 0.43). Although the average path length ratio appeared to increase upon reinstating vibrotactile guidance with Cartesian Endpoint Encoding (CEE: 2.1 ± 0.74), Joint Angle Encoding did not demonstrate as large an increase (JAE: 1.7 ± 0.52).
Because we did not have specific hypotheses pertaining to the secondary performance measures, we performed an omnibus repeated measures, 2-way ANOVA that examined the impact of the block number and encoding scheme on path length ratio; we found significant effects of both factors (Trial Block: F(8,238) = 29.19, p < 0.0005; Encoding Scheme: F(1,238) = 8.32, p = 0.004), but no interaction between them (F(8,238) = 0.95, p = 0.48). Notably, path length ratios decreased significantly from the start to end of training for both encoding schemes (CEE: t14 = 3.78, p = 0.002; JAE: t14 = 4.07, p = 0.002). After training, VTF Test block reaches with Joint Angle Encoding exhibited lower path length ratios than did reaches with Cartesian Endpoint Encoding (t14 = 3.16, p = 0.007), although neither form of vibrotactile guidance promoted test block reaches with path length ratios as low as those observed in the No Vision 2 trial blocks (CEE: t14 = 2.89, p = 0.012; JAE: t14 = 4.19, p < 0.001).
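For reference, an equivalent omnibus test can be sketched in Python with statsmodels' AnovaRM (the study used R; the column and factor names below are ours, and seeded random values merely stand in for the per-participant block means).

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(1)
rows = [{"participant": p, "block": b, "scheme": s,
         "path_length_ratio": rng.normal(1.8, 0.5)}  # placeholder values
        for p in range(15)
        for b in ["Vision", "NoVision1", "T1", "T2", "T3", "T4", "T5",
                  "NoVision2", "VTFTest"]
        for s in ["CEE", "JAE"]]
df = pd.DataFrame(rows)

# Two-way repeated-measures ANOVA: Trial Block x Encoding Scheme, with
# one observation per participant per cell (fully balanced design).
result = AnovaRM(df, depvar="path_length_ratio", subject="participant",
                 within=["block", "scheme"]).fit()
print(result)
```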
We used the decomposition index of Equation (2) to quantify the extent to which vibrotactile-guided reaching tended to be decomposed into separate motions along the cardinal axes of the vibrotactile display. Figure 5B presents average decomposition index values for each trial block as computed from the moment the target was illuminated to when the participant’s hand speed dropped below 10% of its peak value on that trial (i.e., the total movement). In both sessions, DI Total values were uniformly low when participants could view a cursor representing the location of their hand within the workspace (Vision block: CEE: 0.48 ± 0.03; JAE: 0.45 ± 0.03) and when the cursor was removed in the No Vision 1 trial block (CEE: 0.48 ± 0.11; JAE: 0.44 ± 0.04). By contrast, DI Total values increased at the onset of training (Training block 1: CEE: 0.64 ± 0.12; JAE: 0.51 ± 0.08) and remained elevated through the end of training (Training block 5: CEE: 0.64 ± 0.12; JAE: 0.52 ± 0.07). Relative to the No Vision 1 trial block, the extent to which DI Total values increased during vibrotactile-guided reaching appeared to be greater for Cartesian Endpoint Encoding than Joint Angle Encoding. After training with Cartesian Endpoint Encoding, DI Total values remained elevated both in the No Vision 2 block (CEE: 0.60 ± 0.15) and in the VTF Testing block (CEE: 0.67 ± 0.10). By contrast, elevated decomposition index values did not carry over into the No Vision 2 block after training with Joint Angle Encoding (JAE: 0.46 ± 0.07) but instead re-emerged in the VTF Testing block (JAE: 0.52 ± 0.06). These observations were supported by results of repeated measures, 2-way ANOVA that examined the impact of block number and encoding scheme on DI Total values; we found significant effects of both factors (Trial Block: F(8,238) = 38.06, p < 0.001; Encoding Scheme: F(1,238) = 106.77, p < 0.001), as well as an interaction between the two factors (F(8,238) = 3.47, p < 0.001). We noted no significant improvement in the decomposition index from the first to the fifth VTF Training block for either Cartesian Endpoint Encoding (t14 = 0.19, p = 0.85) or Joint Angle Encoding (t14 = 0.50, p = 0.63). After training, reaching movements remained significantly more decomposed in the No Vision 2 block relative to the No Vision 1 block for Cartesian Endpoint Encoding (t14 = 3.35, p < 0.005) but not Joint Angle Encoding (t14 = 1.68, p = 0.115). VTF Test blocks had significantly higher decomposition indices than their respective No Vision 2 blocks for both encoding schemes (CEE: t14 = 2.99, p < 0.01; JAE: t14 = 4.23, p < 0.001). DI Total values were significantly higher for Cartesian Endpoint Encoding than for Joint Angle Encoding in the VTF Test blocks (t14 = 7.4, p < 0.001).
To address the possibility that DI Total values might be dominated by iterative feedback corrections that were sometimes observed at the end of the reaching movements, we repeated the decomposition index analysis using hand path data from only the first half of each trial (i.e., DI Initial; Figure 5C). Although not shown here, the statistical analysis yielded a similar pattern of results as described in the previous paragraph, suggesting that participants used a decomposition strategy from the start of movement under both encoding schemes, although the kinematic effects of this strategy were strongest with Cartesian Endpoint Encoding.

3.3. Subjective User Experience between Cartesian Endpoint Encoding and Joint Angle Encoding

We next assessed subjective user experiences after one day of exposure to each form of information encoding (Figure 6). We used the System Usability Scale (SUS) as our primary measure of system usability. Although there was considerable variation in SUS scores across the study cohort, participants generally rated Cartesian Endpoint Encoding as more usable than Joint Angle Encoding (Figure 6A: CEE: 63 ± 17; JAE: 45 ± 15; t14 = 4.3, p = 0.001). Of the two encoding schemes, only Cartesian Endpoint Encoding achieved SUS scores whose 95% confidence interval included the threshold of passable usability (68).
We used the Intrinsic Motivation Inventory (IMI) as our primary measure of the system’s ability to motivate use. Of the five IMI subscales assessed at the end of each session (see Table 1), only the “perceived competence” subscale exhibited a significant difference between the two encoding methods (t14 = 3.3, p = 0.005; Figure 6B right); participants felt significantly more competent using Cartesian Endpoint Encoding compared to Joint Angle Encoding. The 95% CI for Cartesian Endpoint Encoding included the threshold of satisfactory perceived competence, whereas the 95% CI for Joint Angle Encoding did not. By contrast, we found no compelling evidence of significant differences between encoding methods for any of the remaining IMI subsets (t14 < 1.92, p > 0.07 in all four cases). In particular, we note that for both encoding schemes, average scores for the “interest/enjoyment” subscale exceeded the threshold value demarcating whether or not the task was motivating (Figure 6B left).
We used the Quebec User Evaluation of Satisfaction with Assistive Technology (QUEST) to assess each user’s perceived satisfaction with each of the two movement encoding schemes. Eleven of the 15 participants indicated that they found Cartesian Endpoint Encoding to be more satisfactory than Joint Angle Encoding (Figure 6C, left). On average, QUEST scores equaled or exceeded the threshold value of 3.0, indicating acceptable user satisfaction for both encoding schemes (CEE: 3.8 ± 0; JAE: 3.4 ± 0.8). The difference in satisfaction between the two encoding methods did not reach significance (t14 = 2.0, p = 0.066). Finally, QUEST responses identified effectiveness, ease of use, and physical comfort as the three aspects of the system most important to participants’ satisfaction in using it.
In summary, only Cartesian Endpoint Encoding provided a satisfactory user experience in the sense that participants found Cartesian Endpoint Encoding but not Joint Angle Encoding to have passable usability. Moreover, while both encoding schemes yielded IMI scores suggesting that they were marginally motivating, participants felt more competent using Cartesian Endpoint Encoding as compared to Joint Angle Encoding. Both encoding schemes appeared to generate acceptable user satisfaction, as determined by scores on the QUEST survey.

4. Discussion

We compared the objective utility and subjective user experience of two different forms of supplemental kinesthetic feedback with regard to their ability to enhance the accuracy and efficiency of goal-directed reaching in the absence of visual feedback in healthy, neurologically intact adults. One form of feedback, Cartesian Endpoint Encoding, converted real-time hand position in a Cartesian frame of reference into supplemental kinesthetic feedback provided by a vibrotactile display attached to the non-moving arm and hand. The other encoding approach, Joint Angle Encoding, provided real-time arm configuration information via the vibrotactile display.
After a brief training period, both forms of supplemental feedback promoted improved reach accuracy in the absence of concurrent visual feedback over levels of performance achieved using proprioception alone. We found that both encoding schemes had merit, depending on which performance metric was examined. On the one hand, Cartesian Endpoint Encoding was better than Joint Angle Encoding in enabling participants to reduce target capture errors, not only in absolute terms but also in terms of error reduction relative to a control condition wherein neither visual feedback nor vibrotactile feedback was provided. However, accuracy improvements came at a cost in terms of temporal efficiency. Target capture times during vibrotactile-guided reaching were considerably longer than capture times in the No Vision Test Blocks for both encoding schemes. Neither encoding scheme yielded movements that were particularly smooth; in both cases, vibrotactile-guided reaches tended to be decomposed into distinct movements along the principal axes of the vibrotactile display, although movements made with Joint Angle Encoding did so to a lesser degree than movements with Cartesian Endpoint Encoding. These efficiency results suggest that integration of supplemental vibrotactile feedback into ongoing control of movement is cognitively demanding, at least within the time frame of this study (i.e., one brief training session with each form of feedback). Finally, we analyzed participant responses to standard surveys to assess subjective experience in terms of usability, motivation, and user satisfaction. Although participant responses suggest that both encoding schemes were motivating and that both yielded passable user satisfaction scores, only Cartesian Endpoint Encoding was found to have passable usability; participants felt more competent using Cartesian Endpoint Encoding than Joint Angle Encoding. Although a case could be made that Joint Angle Encoding promoted smoother movements, future projects aiming to develop wearable technology to enhance the terminal accuracy of goal-directed reaching may wish to focus on Cartesian Endpoint Encoding for continuous supplemental kinesthetic feedback, especially considering the user perceptions of effectiveness, ease of use, and comfort reported here.

4.1. Information Encodings for Supplemental Guidance of Movement

The idea of providing meaningful supplemental feedback to mitigate sensory loss (sensory substitution) or impairment (sensory augmentation) has been pursued for decades (see [36] for a review). While early investigations were largely confined to the research laboratory [66,67,68], advances in mobile computing systems have made it possible to create a variety of novel wearable augmentation technologies, including those intended for use by the visually impaired [69,70], patients with balance deficits [15,16,71], or users of myoelectric forearm prostheses [72]. The information provided as feedback should also be intuitive to interpret (cf., [73,74]) in the sense that the user should be able to detect and/or discriminate stimuli of practical relevance [75]. Of the many ways to deliver supplemental feedback, we proposed that vibrotactile stimulation poses the least risk for skin breakdown or interference with other senses and activities of daily living. Consequently, we focus the remainder of our discussion on issues pertaining to the provision of informative vibrotactile feedback. We do not consider here the application of vibrotactile stimuli absent of meaningful information encodings (e.g., stochastic resonance), which has been described in detail elsewhere (cf., [34,76,77]).
The literature describes several ways to encode relevant information into vibrotactile feedback for the guidance of movement. One characteristic that is useful for classifying encoding schemes pertains to whether the feedback is intended to be provided periodically or continuously. Periodic feedback has been used to convey symbolic information representing complex concepts or messages [78] or to alert users to an important event or sudden change in task conditions [48,79]. In one example, Cuppone and colleagues describe a system that can promote proprioceptive learning of wrist movements in the absence of vision [12,80,81]. After three to five days of wrist movement training wherein participants were given vibrotactile cues when wrist trajectory errors exceeded certain limits, neurologically intact adults demonstrated improvements in wrist proprioceptive acuity that were retained up to 10 days after training ceased [81]. By contrast, continuous feedback systems are designed to enhance sensorimotor control by providing continuous feedback on some aspects of behavior. Exemplar applications include postural stabilization in patients with vestibular deficits [15,16,71], grasp force regulation and hand aperture control in users of myoelectric forearm prostheses [72], waypoint navigation [22], and the training of specific patterns of limb movements [13,14,17,32,47,82]. Because a long-term goal of our work is to re-establish real-time feedback control of limb movements in patients with neuromotor injury via supplemental kinesthetic feedback, we limit further discussion to issues relevant to continuously wearable sensory substitution/augmentation technologies.
Two main types of information encoding schemes are used for continuous supplemental feedback. These include state-based approaches, which inform the user about a limb’s current position in space relative to some fixed reference configuration (e.g., [18]), and “goal-aware” approaches that either inform the user about the difference (error) between current and desired body configurations (cf., [14,16]) or about how the user should ideally move their body to achieve some task goal (cf., [83]). Krueger and colleagues recently compared the ability of neurologically healthy people to use limb state information or hand position error to enhance the performance of stabilization and reaching tasks performed with the arm [17]. The authors compared objective performance using measures of kinematic error, and they compared subjective assessments of usefulness provided on a 7-point Likert scale. Both encoding schemes were found capable of enhancing stabilization and reaching performance in the absence of vision, although error encoding yielded somewhat superior outcomes, both objective and subjective, due to the additional task-relevant information it contains [17]. However, the state feedback approach is the simpler of the two from an implementation perspective. Whereas it is relatively straightforward to estimate limb orientation and hand position using low-cost wearable inertial measurement systems (see [37] for a relevant review), it is a much harder problem to infer with accuracy the user’s intended movement objective at any given time. For the near future, at least, limb state encoding will have greater practical utility for wearable supplemental feedback systems.
The current study compared two approaches regarding how limb states could be best encoded. On the one hand, Joint Angle Encoding imitates the way limb state information is provided by intrinsic muscle spindle proprioceptors, which provide feedback about joint angles in the limb via signals sensitive to muscle stretch. Joint Angle Encoding is also easy to implement using wearable IMUs, which can provide real-time estimates of limb segment angles relative to a user-defined reference configuration. On the other hand, Cartesian Endpoint Encoding mimics how the cursor tracks changes in hand position via movement on the visual display. The VTF Testing results depicted in Figure 4A strongly favor Cartesian Endpoint Encoding if target capture accuracy is a priority. By contrast, the VTF Testing results depicted in Figure 5 favor Joint Angle Encoding if spatial efficiency of movement is a priority. The use of either encoding scheme incurred a steep cost in terms of movement time (Figure 4B; see also [84]), although this aspect of performance may be expected to improve with additional practice. In one recent study, participants trained for approximately 10 h to use Cartesian Endpoint Encoded supplemental kinesthetic vibrotactile feedback to guide goal-directed reaching in the horizontal plane [19]. As we also showed in the current study, the initial performance of vibrotactile-guided reaching demonstrated that people were able to rapidly interpret and use supplemental vibrotactile feedback to improve reaching accuracy, although in doing so, movement durations effectively doubled. Throughout the course of extended training, however, Shah and colleagues found that movement durations for vibrotactile-guided reaches asymptotically approached those for movements made in baseline conditions performed without either vibrotactile or visual feedback [19]. Although no long-term training study has yet been performed using supplemental kinesthetic vibrotactile feedback with Joint Angle Encoding, we expect that a similar pattern of improvement will evolve as experience accrues.
While several previous studies have surveyed user perspectives on sensory augmentation technologies and approaches, most utilized either a single standardized questionnaire or in-house, open-ended questions to characterize subjective user experience [85,86,87]. In contrast, the current study used three formal, validated surveys (SUS, IMI, QUEST) to quantify the usability of, intrinsic motivation engendered by, and satisfaction with each vibrotactile feedback encoding method. Just as patients are encouraged to assume an active role in their healthcare, it is imperative to consider the user’s perspective when designing assistive devices. Satisfaction is one of the most important indicators of quality healthcare [88], and patients who are more satisfied may be more likely to utilize healthcare systems [89]. Additionally, satisfaction, along with effectiveness and efficiency, is a key factor influencing the usability of a system [53]. Intrinsic motivation in the context of exercise, for example, can contribute to long-term engagement [90,91,92]. While both encoding schemes were marginally motivating, participants felt more competent using Cartesian Endpoint Encoding, and this encoding scheme demonstrated acceptable usability. These findings suggest that users may be more likely to adopt Cartesian Endpoint Encoding for long-term use in a system designed to enhance movement quality by providing real-time supplemental vibrotactile kinesthetic feedback.
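As an aside for readers unfamiliar with the SUS, the snippet below shows Brooke’s standard scoring procedure [53], which converts ten 1-to-5 Likert responses into a single score from 0 to 100; the sample responses are invented for illustration, and the reading of scores near or above 70 as “passable” to “good” follows the adjective rating scale of Bangor and colleagues [62].

```python
def sus_score(responses):
    """Brooke's standard SUS scoring [53]: odd-numbered items
    contribute (response - 1), even-numbered items contribute
    (5 - response); the summed contributions are scaled by 2.5
    so the final score spans 0-100."""
    assert len(responses) == 10, "the SUS has exactly ten items"
    total = sum((r - 1) if i % 2 == 0 else (5 - r)  # i = 0 is item 1 (odd)
                for i, r in enumerate(responses))
    return 2.5 * total

# Invented example responses; prints 82.5.
print(sus_score([4, 2, 5, 1, 4, 2, 4, 2, 5, 2]))
```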

4.2. Limitations and Future Directions

The limitations of this study are important to consider. The vibrotactile display was fixed to the nonmoving arm using athletic tape to secure the eccentric rotating mass motors against the skin. Although the study team attempted to apply consistent pressure when mounting the vibrotactile display, we had no effective way to measure or control that pressure. A future implementation of the vibrotactile display could embed the motors in an elastic sleeve (see, for example, [14,46,93]), which could facilitate consistent placement and setup of the vibrotactile display across don/doff cycles and across participants. Another limitation stems from the fact that all participants in this study were under 28 years old. Research has shown that the acuity of intensity discrimination for vibrotactile feedback declines somewhat in older adults [94], although this decline may be counteracted with long-term training [95], which would be a natural byproduct of long-term use of the system. A further limitation results from the fact that this study implemented only a two-dimensional workspace, using a hand-held device that constrained motions to the horizontal plane. Behaviors limited to two dimensions do not fully capture the complexity of real-world interactions with objects in three dimensions. Although it would be simple to implement, adding another pair of motors corresponding to a third dimension of vibrotactile feedback could increase cognitive loading beyond that described here, further exacerbating the observed decomposition strategy through the phenomenon of masking, in which the presence of one vibratory stimulus degrades the acuity with which another vibratory stimulus can be perceived when the stimuli are presented simultaneously or very close in time [52,96]. Consequently, presenting more than one channel of information within a single sensory feedback modality may cause participants to decompose their movements by strategically processing each dimension one at a time. We speculate that it may be possible to avoid decomposition by using cross-linked, multimodal sensory stimuli (see [97]), such as vibrotactile, auditory (cf., [98]), and skin-stretch feedback (cf., [99]), to obtain smoother, more efficient movements. Future studies should examine the effects of extended training with supplemental kinesthetic feedback in three-dimensional workspaces to determine the extent to which the technology can support functional interactions with real-world objects. For example, it is not yet clear whether the integration of supplemental feedback into the real-time control of the arm and hand is promoted better by training all three dimensions of feedback from first exposure onward or by introducing the feedback gradually, one dimension at a time.

5. Conclusions

Although there have been recent advances in wearable technologies for sensory augmentation, it remains unclear how movement information should be encoded into the supplemental feedback provided by such technology. This study investigated the objective utility and subjective user experience of two biologically inspired methods of encoding arm kinematics into supplemental vibrotactile feedback. Cartesian Endpoint Encoding mimicked visual feedback by translating hand position into vibrotactile feedback in a Cartesian reference frame. Joint Angle Encoding was inspired by intrinsic proprioception and conveyed hand position in a reference frame defined by shoulder and elbow joint angles. We found that both methods provided objective utility by enhancing reach accuracy in the absence of concurrent visual feedback, with Cartesian Endpoint Encoding yielding a greater reduction in target capture errors than Joint Angle Encoding. While both encoding methods achieved passable user satisfaction and both were found to be intrinsically motivating, only Cartesian Endpoint Encoding had passable usability as reported on surveys. This work will help inform future studies aiming to evaluate the benefits of continuous supplemental kinesthetic feedback in guiding goal-directed reaches.

Author Contributions

Conceptualization, L.A.M. and R.A.S.; Data curation, R.K.R. and R.N.M.; Formal analysis, R.K.R., R.N.M., L.A.M. and R.A.S.; Funding acquisition, L.A.M. and R.A.S.; Investigation, R.K.R. and R.N.M.; Methodology, L.A.M. and R.A.S.; Project administration, L.A.M. and R.A.S.; Resources, R.A.S.; Software, R.N.M. and R.A.S.; Supervision, R.A.S.; Validation, R.K.R., R.N.M. and R.A.S.; Visualization, R.K.R., R.N.M., L.A.M. and R.A.S.; Writing—original draft, R.K.R., R.N.M., L.A.M. and R.A.S.; Writing—review & editing, R.K.R., R.N.M., L.A.M. and R.A.S. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the National Institutes of Health (NICHD award number R15HD093086 and NIA award number T35AG029793). Any opinions, findings, conclusions, or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the NIH.

Institutional Review Board Statement

The study protocol was approved by the IRB at Marquette University (HR-3303).

Informed Consent Statement

All subjects gave informed written consent to participate in this study.

Data Availability Statement

De-identified data will be made available upon reasonable request.

Acknowledgments

We thank John McGuire for helpful discussions contributing to this work.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

We used the following anthropometric and home position data (Table A1), along with Equations (1a) and (1b), to compute joint angles in real time when providing joint angle feedback (see Figure 2 for a visualization of these measurements).
Table A1. Participant demographics and anthropometric data. Columns 4 and 5 locate the participant’s home position relative to their shoulder. Columns 6 and 7 refer to measurements c and d as defined in Figure 2.
| Subject Number | Sex | Age | YHome to YShoulder (cm) | XHome to XShoulder (cm) | c: Shoulder to Elbow (cm) | d: Elbow to Handle (cm) |
|---|---|---|---|---|---|---|
| 1 | Male | 26 | 23 | 9 | 26 | 32 |
| 2 | Male | 25 | 24 | 8 | 35 | 36 |
| 3 | Female | 24 | 19 | 10 | 30 | 31 |
| 4 | Male | 24 | 28 | 8 | 34 | 36 |
| 5 | Female | 22 | 20 | 9 | 26 | 28 |
| 6 | Male | 27 | 23 | 8 | 30 | 35 |
| 7 | Female | 24 | 24 | 10 | 29 | 29 |
| 8 | Male | 27 | 24 | 8 | 29 | 36 |
| 9 | Female | 25 | 22 | 8 | 26 | 29 |
| 10 | Male | 25 | 19 | 10 | 30 | 31 |
| 11 | Female | 26 | 20 | 7 | 24 | 29 |
| 12 | Female | 27 | 23 | 10 | 27 | 26 |
| 13 | Female | 23 | 24 | 9 | 29 | 32 |
| 14 | Male | 20 | 21 | 17 | 31 | 36 |
| 15 | Female | 23 | 20 | 9 | 27 | 28 |
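For concreteness, the sketch below shows a standard two-link planar inverse-kinematics computation that recovers shoulder and elbow angles from the measurements tabulated above. Because Equations (1a) and (1b) and the angle conventions of Figure 2 are defined in the main text, the zero references, signs, and elbow-branch choice used here are illustrative assumptions rather than our exact formulas.

```python
import math

def planar_joint_angles(x, y, c, d):
    """Two-link planar inverse kinematics (sketch). (x, y) is the handle
    position relative to the shoulder's center of rotation; c is the
    shoulder-to-elbow length and d the elbow-to-handle length (Table A1,
    columns 6 and 7). Returns (shoulder, elbow) angles in radians, with
    elbow angle 0 corresponding to a fully extended arm."""
    r2 = x * x + y * y
    cos_e = (r2 - c * c - d * d) / (2.0 * c * d)
    cos_e = max(-1.0, min(1.0, cos_e))  # guard against numerical round-off
    elbow = math.acos(cos_e)            # one of two mirror-image solutions
    shoulder = math.atan2(y, x) - math.atan2(d * math.sin(elbow),
                                             c + d * math.cos(elbow))
    return shoulder, elbow

# Participant 1 (c = 26 cm, d = 32 cm) with the hand at the home position,
# 23 cm anterior and 9 cm lateral to the shoulder (axis signs assumed here).
print(planar_joint_angles(9.0, 23.0, 26.0, 32.0))
```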

References

1. Sober, S.J.; Sabes, P.N. Multisensory Integration during Motor Planning. J. Neurosci. 2003, 23, 6982–6992.
2. Lateiner, J.E.; Sainburg, R.L. Differential Contributions of Vision and Proprioception to Movement Accuracy. Exp. Brain Res. 2003, 151, 446–454.
3. Scheidt, R.A.; Conditt, M.A.; Secco, E.L.; Mussa-Ivaldi, F.A. Interaction of Visual and Proprioceptive Feedback during Adaptation of Human Reaching Movements. J. Neurophysiol. 2005, 93, 3200–3213.
4. Sarlegna, F.R.; Sainburg, R.L. The Roles of Vision and Proprioception in the Planning of Reaching Movements. Adv. Exp. Med. Biol. 2009, 629, 317–335.
5. Judkins, T.; Scheidt, R.A. Visuo-Proprioceptive Interactions during Adaptation of the Human Reach. J. Neurophysiol. 2014, 111, 868–887.
6. Ghez, C.; Scheidt, R.; Heijink, H. Different Learned Coordinate Frames for Planning Trajectories and Final Positions in Reaching. J. Neurophysiol. 2007, 98, 3614–3626.
7. Liu, X.; Scheidt, R.A. Contributions of Online Visual Feedback to the Learning and Generalization of Novel Finger Coordination Patterns. J. Neurophysiol. 2008, 99, 2546–2557.
8. Lantagne, D.D.; Mrotek, L.A.; Slick, R.; Beardsley, S.A.; Thomas, D.G.; Scheidt, R.A. Contributions of Implicit and Explicit Memories to Sensorimotor Adaptation of Movement Extent during Goal-Directed Reaching. Exp. Brain Res. 2021, 239, 2445–2459.
9. Scheidt, R.A.; Stoeckmann, T. Reach Adaptation and Final Position Control Amid Environmental Uncertainty after Stroke. J. Neurophysiol. 2007, 97, 2824–2836.
10. Ballardini, G.; Krueger, A.; Giannoni, P.; Marinelli, L.; Casadio, M.; Scheidt, R.A. Effect of Short-Term Exposure to Supplemental Vibrotactile Kinesthetic Feedback on Goal-Directed Movements after Stroke: A Proof of Concept Case Series. Sensors 2021, 21, 1519.
11. Kaczmarek, K.A.; Webster, J.G.; Bach-y-Rita, P.; Tompkins, W.J. Electrotactile and Vibrotactile Displays for Sensory Substitution Systems. IEEE Trans. Biomed. Eng. 1991, 38, 1–16.
12. Cuppone, A.V.; Squeri, V.; Semprini, M.; Masia, L.; Konczak, J. Robot-Assisted Proprioceptive Training with Added Vibro-Tactile Feedback Enhances Somatosensory and Motor Performance. PLoS ONE 2016, 11, e0164511.
13. Bark, K.; Khanna, P.; Irwin, R.; Kapur, P.; Jax, S.A.; Buxbaum, L.J.; Kuchenbecker, K.J. Lessons in Using Vibrotactile Feedback to Guide Fast Arm Motions. In Proceedings of the 2011 IEEE World Haptics Conference, Istanbul, Turkey, 21–24 June 2011; pp. 355–360.
14. Bark, K.; Hyman, E.; Tan, F.; Cha, E.; Jax, S.A.; Buxbaum, L.J.; Kuchenbecker, K.J. Effects of Vibrotactile Feedback on Human Learning of Arm Motions. IEEE Trans. Neural Syst. Rehabil. Eng. 2015, 23, 51–63.
15. Sienko, K.H.; Balkwill, M.D.; Oddsson, L.I.E.; Wall, C. Effects of Multi-Directional Vibrotactile Feedback on Vestibular-Deficient Postural Performance during Continuous Multi-Directional Support Surface Perturbations. J. Vestib. Res. 2008, 18, 273–285.
16. Lee, B.-C.; Chen, S.; Sienko, K.H. A Wearable Device for Real-Time Motion Error Detection and Vibrotactile Instructional Cuing. IEEE Trans. Neural Syst. Rehabil. Eng. 2011, 19, 374–381.
17. Krueger, A.R.; Giannoni, P.; Shah, V.; Casadio, M.; Scheidt, R.A. Supplemental Vibrotactile Feedback Control of Stabilization and Reaching Actions of the Arm Using Limb State and Position Error Encodings. J. Neuroeng. Rehabil. 2017, 14, 36.
18. Risi, N.; Shah, V.; Mrotek, L.A.; Casadio, M.; Scheidt, R.A. Supplemental Vibrotactile Feedback of Real-Time Limb Position Enhances Precision of Goal-Directed Reaching. J. Neurophysiol. 2019, 122, 22–38.
19. Shah, V.A.; Thomas, A.; Mrotek, L.A.; Casadio, M.; Scheidt, R.A. Extended Training Improves the Accuracy and Efficiency of Goal-Directed Reaching Guided by Supplemental Kinesthetic Vibrotactile Feedback. Exp. Brain Res. 2023, 241, 479–493.
20. Raj, A.K.; Braithwaite, G. The Tactile Situation Awareness System in Rotary Wing Aircraft: Flight Test Results. Curr. Aeromed. Issues Rotary Wing Oper. 1998, 19, 5.
21. van Erp, J.B.F.; van Veen, H. A Multi-Purpose Tactile Vest for Astronauts in the International Space Station. In Proceedings of Eurohaptics 2003, Dublin, Ireland, 6–9 July 2003.
22. van Erp, J.B.F.; van Veen, H.A.H.C.; Jansen, C.; Dobbins, T. Waypoint Navigation with a Vibrotactile Waist Belt. ACM Trans. Appl. Percept. 2005, 2, 106–117.
23. Vasudevan, M.K.; Isaac, J.H.R.; Sadanand, V.; Muniyandi, M. Novel Virtual Reality Based Training System for Fine Motor Skills: Towards Developing a Robotic Surgery Training System. Int. J. Med. Robot. Comput. Assist. Surg. 2020, 16, 1–14.
24. Spelmezan, D.; Jacobs, M.; Hilgers, A.; Borchers, J. Tactile Motion Instructions for Physical Activities. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Boston, MA, USA, 4–9 April 2009; ACM: New York, NY, USA, 2009; pp. 2243–2252.
25. Peeters, T.; Vleugels, J.; Garimella, R.; Truijen, S.; Saeys, W.; Verwulgen, S. Vibrotactile Feedback for Correcting Aerodynamic Position of a Cyclist. J. Sports Sci. 2020, 38, 2193–2199.
26. Lindeman, R.W.; Yanagida, Y.; Hosaka, K.; Abe, S. The TactaPack: A Wireless Sensor/Actuator Package for Physical Therapy Applications. In Proceedings of the 14th Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems, Alexandria, VA, USA, 25–26 March 2006; IEEE: Piscataway, NJ, USA, 2006; pp. 337–341.
27. Held, J.P.; Klaassen, B.; van Beijnum, B.-J.F.; Luft, A.R.; Veltink, P.H. Usability Evaluation of a VibroTactile Feedback System in Stroke Subjects. Front. Bioeng. Biotechnol. 2017, 4, 98.
28. Schoonmaker, R.E.; Cao, C.G.L. Vibrotactile Force Feedback System for Minimally Invasive Surgical Procedures. In Proceedings of the 2006 IEEE International Conference on Systems, Man and Cybernetics, Taipei, Taiwan, 8–11 October 2006; IEEE: Piscataway, NJ, USA, 2006; pp. 2464–2469.
29. Abiri, A.; Juo, Y.-Y.; Tao, A.; Askari, S.J.; Pensa, J.; Bisley, J.W.; Dutson, E.P.; Grundfest, W.S. Artificial Palpation in Robotic Surgery Using Haptic Feedback. Surg. Endosc. 2019, 33, 1252–1259.
30. Kent, B.; Rossa, C. Development of a Tissue Discrimination Electrode Embedded Surgical Needle Using Vibro-Tactile Feedback Derived from Electric Impedance Spectroscopy. Med. Biol. Eng. Comput. 2022, 60, 19–31.
31. Ruffaldi, E.; Filippeschi, A.; Frisoli, A.; Sandoval, O.; Avizzano, C.A.; Bergamasco, M. Vibrotactile Perception Assessment for a Rowing Training System. In Proceedings of the World Haptics 2009—Third Joint EuroHaptics Conference and Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems, Salt Lake City, UT, USA, 18–20 March 2009; IEEE: Piscataway, NJ, USA, 2009; pp. 350–355.
32. Lieberman, J.; Breazeal, C. TIKL: Development of a Wearable Vibrotactile Feedback Suit for Improved Human Motor Learning. IEEE Trans. Robot. 2007, 23, 919–926.
33. van der Linden, J.; Schoonderwaldt, E.; Bird, J.; Johnson, R. MusicJacket—Combining Motion Capture and Vibrotactile Feedback to Teach Violin Bowing. IEEE Trans. Instrum. Meas. 2011, 60, 104–113.
34. Conrad, M.O.; Scheidt, R.A.; Schmit, B.D. Effects of Wrist Tendon Vibration on Arm Tracking in People Poststroke. J. Neurophysiol. 2011, 106, 1480–1488.
35. Jones, L.A.; Sarter, N.B. Tactile Displays: Guidance for Their Design and Application. Hum. Factors 2008, 50, 90–111.
36. Shull, P.B.; Damian, D.D. Haptic Wearables as Sensory Replacement, Sensory Augmentation and Trainer—A Review. J. Neuroeng. Rehabil. 2015, 12, 59.
37. Wang, Q.; Markopoulos, P.; Yu, B.; Chen, W.; Timmermans, A. Interactive Wearable Systems for Upper Body Rehabilitation: A Systematic Review. J. Neuroeng. Rehabil. 2017, 14, 20.
38. Abboud, S.; Hanassy, S.; Levy-Tzedek, S.; Maidenbaum, S.; Amedi, A. EyeMusic: Introducing a “Visual” Colorful Experience for the Blind Using Auditory Sensory Substitution. Restor. Neurol. Neurosci. 2014, 32, 247–257.
39. Amedi, A.; Stern, W.M.; Camprodon, J.A.; Bermpohl, F.; Merabet, L.; Rotman, S.; Hemond, C.; Meijer, P.; Pascual-Leone, A. Shape Conveyed by Visual-to-Auditory Sensory Substitution Activates the Lateral Occipital Complex. Nat. Neurosci. 2007, 10, 687–689.
40. Dahl, L.; Knowlton, C.; Zaferiou, A. Developing Real-Time Sonification with Optical Motion Capture to Convey Balance-Related Metrics to Dancers. In Proceedings of the 6th International Conference on Movement and Computing, Tempe, AZ, USA, 10–12 October 2019; ACM: New York, NY, USA, 2019; pp. 1–6.
41. Karatsidis, A.; Richards, R.E.; Konrath, J.M.; van den Noort, J.C.; Schepers, H.M.; Bellusci, G.; Harlaar, J.; Veltink, P.H. Validation of Wearable Visual Feedback for Retraining Foot Progression Angle Using Inertial Sensors and an Augmented Reality Headset. J. Neuroeng. Rehabil. 2018, 15, 78.
42. Dosen, S.; Markovic, M.; Strbac, M.; Belic, M.; Kojic, V.; Bijelic, G.; Keller, T.; Farina, D. Multichannel Electrotactile Feedback with Spatial and Mixed Coding for Closed-Loop Control of Grasping Force in Hand Prostheses. IEEE Trans. Neural Syst. Rehabil. Eng. 2017, 25, 183–195.
43. Ptito, M. Cross-Modal Plasticity Revealed by Electrotactile Stimulation of the Tongue in the Congenitally Blind. Brain 2005, 128, 606–614.
44. Kaczmarek, K.A.; Tyler, M.E.; Bach-Y-Rita, P. Electrotactile Haptic Display on the Fingertips: Preliminary Results. In Proceedings of the 16th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Baltimore, MD, USA, 3–6 November 1994; IEEE: Piscataway, NJ, USA, 1994; pp. 940–941.
45. Stronks, H.C.; Mitchell, E.B.; Nau, A.C.; Barnes, N. Visual Task Performance in the Blind with the BrainPort V100 Vision Aid. Expert Rev. Med. Devices 2016, 13, 919–931.
46. Kapur, P.; Premakumar, S.; Jax, S.A.; Buxbaum, L.J.; Dawson, A.M.; Kuchenbecker, K.J. Vibrotactile Feedback System for Intuitive Upper-Limb Rehabilitation. In Proceedings of the World Haptics 2009—Third Joint EuroHaptics Conference and Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems, Salt Lake City, UT, USA, 18–20 March 2009; IEEE: Piscataway, NJ, USA, 2009; pp. 621–622.
47. Tzorakoleftherakis, E.; Bengtson, M.C.; Mussa-Ivaldi, F.A.; Scheidt, R.A.; Murphey, T.D. Tactile Proprioceptive Input in Robotic Rehabilitation after Stroke. In Proceedings of the 2015 IEEE International Conference on Robotics and Automation (ICRA), Seattle, WA, USA, 26–30 May 2015; pp. 6475–6481.
48. Ferris, T.K.; Sarter, N. Continuously Informing Vibrotactile Displays in Support of Attention Management and Multitasking in Anesthesiology. Hum. Factors 2011, 53, 600–611.
49. Kinnaird, C.; Lee, J.; Carender, W.J.; Kabeto, M.; Martin, B.; Sienko, K.H. The Effects of Attractive vs. Repulsive Instructional Cuing on Balance Performance. J. Neuroeng. Rehabil. 2016, 13, 29.
50. Elsayed, H.; Weigel, M.; Müller, F.; Schmitz, M.; Marky, K.; Günther, S.; Riemann, J.; Mühlhäuser, M. VibroMap: Understanding the Spacing of Vibrotactile Actuators across the Body. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 2020, 4, 1–16.
51. Bao, T.; Su, L.; Kinnaird, C.; Kabeto, M.; Shull, P.B.; Sienko, K.H. Vibrotactile Display Design: Quantifying the Importance of Age and Various Factors on Reaction Times. PLoS ONE 2019, 14, e0219737.
52. Shah, V.A.; Casadio, M.; Scheidt, R.A.; Mrotek, L.A. Spatial and Temporal Influences on Discrimination of Vibrotactile Stimuli on the Arm. Exp. Brain Res. 2019, 237, 2075–2086.
53. Brooke, J. SUS: A Quick and Dirty Usability Scale. Usability Eval. Ind. 1995, 189, 4–7.
54. McAuley, E.; Duncan, T.; Tammen, V.V. Psychometric Properties of the Intrinsic Motivation Inventory in a Competitive Sport Setting: A Confirmatory Factor Analysis. Res. Q. Exerc. Sport 1989, 60, 48–58.
55. Demers, L.; Weiss-Lambrou, R.; Ska, B. Item Analysis of the Quebec User Evaluation of Satisfaction with Assistive Technology (QUEST). Assist. Technol. 2000, 12, 96–105.
56. Demers, L.; Monette, M.; Lapierre, Y.; Arnold, D.L.; Wolfson, C. Reliability, Validity, and Applicability of the Quebec User Evaluation of Satisfaction with Assistive Technology (QUEST 2.0) for Adults with Multiple Sclerosis. Disabil. Rehabil. 2002, 24, 21–30.
57. Scheidt, R.A.; Lillis, K.P.; Emerson, S.J. Visual, Motor and Attentional Influences on Proprioceptive Contributions to Perception of Hand Path Rectilinearity during Reaching. Exp. Brain Res. 2010, 204, 239–254.
58. Whitman, P.A.; Adigun, O.O. Anatomy, Skin, Dermatomes. Available online: https://0-www-ncbi-nlm-nih-gov.brum.beds.ac.uk/books/NBK535401/ (accessed on 14 March 2023).
59. Nolan, M.F. Two-Point Discrimination Assessment in the Upper Limb in Young Adult Men and Women. Phys. Ther. 1982, 62, 965–969.
60. Shah, V.A.; Casadio, M.; Scheidt, R.A.; Mrotek, L.A. Vibration Propagation on the Skin of the Arm. Appl. Sci. 2019, 9, 4329.
61. Flanagan, J.R.; Rao, A.K. Trajectory Adaptation to a Nonlinear Visuomotor Transformation: Evidence of Motion Planning in Visually Perceived Space. J. Neurophysiol. 1995, 74, 2174–2178.
62. Bangor, A.; Kortum, P.; Miller, J. Determining What Individual SUS Scores Mean: Adding an Adjective Rating Scale. J. Usability Stud. 2009, 4, 114–123.
63. Colombo, R.; Pisano, F.; Mazzone, A.; Delconte, C.; Micera, S.; Carrozza, M.C.; Dario, P.; Minuco, G. Design Strategies to Improve Patient Motivation during Robot-Aided Rehabilitation. J. Neuroeng. Rehabil. 2007, 4, 3.
64. Prange, G.B.; Kottink, A.I.R.; Buurke, J.H.; Eckhardt, M.M.E.M.; van Keulen-Rouweler, B.J.; Ribbers, G.M.; Rietman, J.S. The Effect of Arm Support Combined with Rehabilitation Games on Upper-Extremity Function in Subacute Stroke: A Randomized Controlled Trial. Neurorehabil. Neural Repair 2015, 29, 174–182.
65. Wann, J.P.; Ibrahim, S.F. Does Limb Proprioception Drift? Exp. Brain Res. 1992, 91, 162–166.
66. Gault, R.H. Touch as a Substitute for Hearing in the Interpretation and Control of Speech. Arch. Otolaryngol.–Head Neck Surg. 1926, 3, 121–135.
67. White, B.W.; Saunders, F.A.; Scadden, L.; Bach-Y-Rita, P.; Collins, C.C. Seeing with the Skin. Percept. Psychophys. 1970, 7, 23–27.
68. Bach-Y-Rita, P.; Collins, C.C.; Saunders, F.A.; White, B.; Scadden, L. Vision Substitution by Tactile Image Projection. Nature 1969, 221, 963–964.
69. Bach-y-Rita, P.; Kercel, W.S. Sensory Substitution and the Human–Machine Interface. Trends Cogn. Sci. 2003, 7, 541–546.
70. Danilov, Y.; Tyler, M. BrainPort: An Alternative Input to the Brain. J. Integr. Neurosci. 2005, 4, 537–550.
71. Lee, B.-C.; Kim, J.; Chen, S.; Sienko, K.H. Cell Phone Based Balance Trainer. J. Neuroeng. Rehabil. 2012, 9, 10.
72. Witteveen, H.J.; Rietman, H.S.; Veltink, P.H. Vibrotactile Grasping Force and Hand Aperture Feedback for Myoelectric Forearm Prosthesis Users. Prosthet. Orthot. Int. 2015, 39, 204–212.
73. Nesbitt, K.V. Designing Multi-Sensory Displays for Abstract Data. Ph.D. Thesis, University of Sydney, Sydney, Australia, 2003.
74. Tannert, I.; Schulleri, K.H.; Michel, Y.; Villa, S.; Johannsen, L.; Hermsdorfer, J.; Lee, D. Immediate Effects of Vibrotactile Biofeedback Instructions on Human Postural Control. In Proceedings of the 43rd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), Guadalajara, Mexico, 1–5 November 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 7426–7432.
75. Sigrist, R.; Rauter, G.; Riener, R.; Wolf, P. Augmented Visual, Auditory, Haptic, and Multimodal Feedback in Motor Learning: A Review. Psychon. Bull. Rev. 2013, 20, 21–53.
76. Priplata, A.; Niemi, J.; Salen, M.; Harry, J.; Lipsitz, L.A.; Collins, J.J. Noise-Enhanced Human Balance Control. Phys. Rev. Lett. 2002, 89, 238101.
77. Conrad, M.O.; Gadhoke, B.; Scheidt, R.A.; Schmit, B.D. Effect of Tendon Vibration on Hemiparetic Arm Stability in Unstable Workspaces. PLoS ONE 2015, 10, e0144377.
78. Brewster, S.A.; Brown, L.M. Tactons: Structured Tactile Messages for Non-Visual Information Display. In Proceedings of the Australasian User Interface Conference, Dunedin, New Zealand, 18–22 January 2004.
79. Sklar, A.E.; Sarter, N.B. Good Vibrations: Tactile Feedback in Support of Attention Allocation and Human-Automation Coordination in Event-Driven Domains. Hum. Factors 1999, 41, 543–552.
80. Cuppone, A.; Squeri, V.; Semprini, M.; Konczak, J. Robot-Assisted Training to Improve Proprioception Does Benefit from Added Vibro-Tactile Feedback. In Proceedings of the 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Milan, Italy, 25–29 August 2015; pp. 258–261.
81. Cuppone, A.V.; Semprini, M.; Konczak, J. Consolidation of Human Somatosensory Memory during Motor Learning. Behav. Brain Res. 2018, 347, 184–192.
82. Rinderknecht, M.D.; Kim, Y.; Santos-Carreras, L.; Bleuler, H.; Gassert, R. Combined Tendon Vibration and Virtual Reality for Post-Stroke Hand Rehabilitation. In Proceedings of the 2013 World Haptics Conference (WHC), Daejeon, Republic of Korea, 14–17 April 2013; IEEE: Piscataway, NJ, USA, 2013; pp. 277–282.
83. Tzorakoleftherakis, E.; Murphey, T.D.; Scheidt, R.A. Augmenting Sensorimotor Control Using “Goal-Aware” Vibrotactile Stimulation during Reaching and Manipulation Behaviors. Exp. Brain Res. 2016, 234, 2403–2414.
84. Stepp, C.E.; Matsuoka, Y. Relative to Direct Haptic Feedback, Remote Vibrotactile Feedback Improves but Slows Object Manipulation. In Proceedings of the 2010 Annual International Conference of the IEEE Engineering in Medicine and Biology, Buenos Aires, Argentina, 31 August–4 September 2010; IEEE: Piscataway, NJ, USA, 2010; pp. 2089–2092.
85. Kärcher, S.M.; Fenzlaff, S.; Hartmann, D.; Nagel, S.K.; König, P. Sensory Augmentation for the Blind. Front. Hum. Neurosci. 2012, 6, 37.
86. Muijzer-Witteveen, H.; Sibum, N.; van Dijsseldonk, R.; Keijsers, N.; van Asseldonk, E. Questionnaire Results of User Experiences with Wearable Exoskeletons and Their Preferences for Sensory Feedback. J. Neuroeng. Rehabil. 2018, 15, 112.
87. Goeke, C.M.; Planera, S.; Finger, H.; König, P. Bayesian Alternation during Tactile Augmentation. Front. Behav. Neurosci. 2016, 10, 187.
88. Vuori, H. Patient Satisfaction--Does It Matter? Int. J. Qual. Health Care 1991, 3, 183–189.
89. Zastowny, T.R.; Roghmann, K.J.; Cafferata, G.L. Patient Satisfaction and the Use of Health Services: Explorations in Causality. Med. Care 1989, 27, 705–723.
90. Mullan, E.; Markland, D.; Ingledew, D.K. A Graded Conceptualisation of Self-Determination in the Regulation of Exercise Behaviour: Development of a Measure Using Confirmatory Factor Analytic Procedures. Pers. Individ. Dif. 1997, 23, 745–752.
91. Vallerand, R.J.; Bissonnette, R. Intrinsic, Extrinsic, and Amotivational Styles as Predictors of Behavior: A Prospective Study. J. Pers. 1992, 60, 599–620.
92. Vallerand, R.J. Intrinsic and Extrinsic Motivation in Sport and Physical Activity: A Review and a Look at the Future. In Handbook of Sport Psychology; Wiley: Hoboken, NJ, USA, 2007; pp. 59–83.
93. Tang, F.; McMahan, R.P.; Allen, T.T. Development of a Low-Cost Tactile Sleeve for Autism Intervention. In Proceedings of the 2014 IEEE International Symposium on Haptic, Audio and Visual Environments and Games (HAVE), Richardson, TX, USA, 10–11 October 2014; IEEE: Piscataway, NJ, USA, 2014; pp. 35–40.
94. Verrillo, R.T. Age Related Changes in the Sensitivity to Vibration. J. Gerontol. 1980, 35, 185–193.
95. Pomplun, E.; Thomas, A.; Corrigan, E.; Shah, V.A.; Mrotek, L.A.; Scheidt, R.A. Vibrotactile Perception for Sensorimotor Augmentation: Perceptual Discrimination of Vibrotactile Stimuli Induced by Low-Cost Eccentric Rotating Mass Motors at Different Body Locations in Young, Middle-Aged, and Older Adults. Front. Rehabil. Sci. 2022, 3, 895036.
96. Verrillo, R.T. Vibration Sensation in Humans. Music Percept. 1992, 9, 281–302.
97. Carson, R.G.; Kelso, J.A.S. Governing Coordination: Behavioural Principles and Neural Correlates. Exp. Brain Res. 2004, 154, 267–274.
98. Huang, J.; Sheffield, B.; Lin, P.; Zeng, F.-G. Electro-Tactile Stimulation Enhances Cochlear Implant Speech Recognition in Noise. Sci. Rep. 2017, 7, 2196.
99. Sullivan, J.L.; Dunkelberger, N.; Bradley, J.; Young, J.; Israr, A.; Lau, F.; Klumb, K.; Abnousi, F.; O’Malley, M.K. Multi-Sensory Stimuli Improve Distinguishability of Cutaneous Haptic Cues. IEEE Trans. Haptics 2020, 13, 286–297.
Figure 1. Experiment setup and protocol. (A) A participant grasping the handle of the planar manipulandum with the dominant right hand while seated in front of a grid of targets presented on a vertical computer display. An opaque horizontal shield obstructed their view of the physical workspace. Red markers indicate the location of eccentric rotating mass vibration motors (the vibrotactile display) fixed to the non-dominant arm. (B) Visual workspace showing the 5 × 5 grid of targets and a cursor corresponding to the position of the right hand in the physical workspace. (C) Location and information content of the individual vibration motors affixed to the non-preferred arm under the Cartesian Endpoint Encoding scheme (left) and Joint Angle Encoding scheme (right). (D) The structure of each of the two experimental sessions: Nine blocks of 25 reaches under 4 different feedback conditions.
Figure 2. Measurements involved in the calculation of participant joint angles. With the participant grasping the handle at the home position, distance measurements were made between (a) the robot handle and the participant’s sagittal midline, (b) the participant’s sagittal midline and the shoulder’s center of rotation, (c) the shoulder’s center of rotation and the elbow’s center of rotation, and (d) the elbow’s center of rotation and the center of the robot handle. θe: elbow angle. θs: shoulder angle.
Figure 3. Representative hand paths (grey) and final positions (red) in all trial blocks performed by selected participants in the two encoding schemes. (Top): Cartesian Endpoint Encoding (CEE); (Bottom): Joint Angle Encoding (JAE). Each block is shown as a separate plot with block conditions labeled below. Block conditions with dashed delineators (i.e., baseline, testing) were performed without robotic repositioning of the hand at the end of each reach. Blocks with the solid delineator above the label (training) were subject to the robotic repositioning of the hand at the end of each reach. Five training blocks were performed. Horizontal and vertical scale bars: 8 cm.
Figure 4. Cohort results: primary outcome measures for each trial block in each experimental session. (A) target capture error; (B) target capture time. Orange open squares: average performance in the Cartesian Endpoint Encoding session (CEE). Teal solid circles: average performance in the Joint Angle Encoding session (JAE). The training blocks, marked with a solid black line on the x-axis, included robotic repositioning of the hand to the intended target at the end of each trial. Baseline and testing trials did not include robotic repositioning. Error bars: 95% CI of the mean.
Figure 5. Cohort results: secondary measures of spatial efficiency. Panel (A) (top) shows the cohort average path length ratio averaged across trials within each block; the cohort mean (symbols) and 95% CI of the cohort mean (error bars) for all blocks of endpoint encoding (orange) and joint encoding (teal). Panel (B) (middle) shows the cohort average decomposition index for the entire distance traveled in each trial with 95% CI of the mean for all blocks of endpoint and joint encoding. Panel (C) (bottom) shows the cohort average decomposition index for the initial half of the distance traveled in each trial for each block in endpoint and joint encoding. The training blocks, marked with a solid black line along the x-axis, included robot corrections to target at the end of trials.
Figure 6. Subjective user experience. Survey results for SUS (A), IMI (B) and QUEST (C) show average cohort scores with 95% CI of the mean for the endpoint (orange open square) and joint (solid teal circle) encoding. Solid gray lines link the corresponding scores of each encoding method from individual participants. Dashed lines represent the scores that demarcate positive experience thresholds. The pie chart shows the most important attributes of an assistive device as indicated by participants on the QUEST. The pie chart values shown here were rounded to the nearest integer percentage.
Table 1. Intrinsic Motivation Inventory (IMI) scores.

| Encoding | Interest & Enjoyment | Perceived Competence * | Effort & Importance | Pressure & Tension | Value & Usefulness |
|---|---|---|---|---|---|
| CEE | 4.2 ± 1.2 | 3.7 ± 1.5 | 5.5 ± 0.9 | 2.6 ± 1.1 | 5.3 ± 1.1 |
| JAE | 4.1 ± 1.3 | 2.5 ± 1.0 | 5.7 ± 0.9 | 3.0 ± 1.2 | 4.6 ± 1.1 |
CEE: Cartesian Endpoint Encoding; JAE: Joint Angle Encoding; Values represent mean score ± 1 SD. * p < 0.01 across encoding schemes.
