So you are standing in front of a refrigerated counter in the supermarket, reaching for a bottle of milk. What seems like an automatic action is in fact a complex interplay between vision and proprioception, coordinated by neural circuits that, in a split second, reactivate memory traces of previous bottle grasps along with probabilistic expectancies of potential goal-directed movement paths, which are automatically evaluated against min-max criteria (minimize energy, maximize outcome).
Your brain initiates the hand movement, maybe you have to step forward to keep your balance. Your eyes zip back and forth between the milk bottle and the tips of your fingers. You anticipate the coolness of the glass bottle, its weight and surface characteristics. Your brain triggers an optimal thumb-digit arrangement and precomputes the grip strength – of course you wouldn’t want to drop the bottle or “over-lift” it. Your fingers finally touch the bottle and feel a thin layer of water, requiring a slight adaptation of your grip, a widening of your fingers, a tighter grasp (“wow, it is heavier than I thought!”). You lift the bottle gently from the shelf and contentedly place it into your shopping cart.
“Brain processes relevant for learning and recall are also determined by bodily actions.”
Human behavior is an expression of the underlying cognitive, emotional and physiological processes. Interestingly, brain processes relevant for learning and recall are also shaped by bodily actions. Research by Bruce McNaughton and colleagues, for example, indicates that active walking produces deeper learning and richer memorization than driving or passive movement. We simply learn better when we involve the entire motor apparatus. Conversely, merely imagining limb movements (motor imagery) activates the same brain areas involved in producing the actual movements. This interaction of body and brain has recently been summed up under the term “embodied cognition” and pushed forward by researchers in philosophy, psychology, cognitive science, and artificial intelligence. Prof. Luc Steels of the Vrije Universiteit Brussel even postulated that there is no intelligence without behavior.
We cannot not behave!
Apparently, we cannot not behave. With this statement, psychologist Paul Watzlawick captured quite nicely that no matter what we do (or don’t do), behavioral processes are taking place. They occur on multiple scales. Some actions are apparent and visible (so-called “overt behavior” such as talking, gazing, reaching and grasping), while others are hidden from the eye (“covert behavior” such as thoughts, perceptions, attitudes, feelings, or physiological processes). The crucial point is that all of these rich, multifaceted behavioral outcomes (“observable variables”) are manifestations of the underlying perceptual, cognitive and emotional processes (“latent variables”).
Empirical researchers have used various tools to capture the latent variables of interest. Observations can be done in the field (i.e., in the natural surroundings of a participant) or in the lab (i.e., in a controlled environment where all factors that are not supposed to affect behavioral performance are held constant). For example, classical field research in human factors and organizational psychology included factory visits (sometimes lasting several months), where work efficiency was observed and evaluated using predefined rating scales or coding schemes. This “quantification of behavior” involved noting down the frequencies, onsets and durations of behavioral actions that were indicative of certain cognitive processes. Nowadays, behavioral observation is often done using video recordings, which are watched and manually classified by trained experts. For example, an observer, often referred to as a “rater”, could watch a video of the person reaching for the glass milk bottle and classify the “effectiveness of the reach” based on several aspects such as general posture, corrective actions, time to arm/hand lift, or number of saccades/fixations. Whenever any of these overt actions occurs in the video, the rater takes a note or places a marker. In the end, the number of counts for each action can be analyzed statistically.
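As a minimal sketch of this kind of quantification, the snippet below aggregates a rater’s annotation log into frequencies and total durations per behavior code. The behavior labels, onsets and offsets are invented here purely for illustration:

```python
from collections import defaultdict

# Hypothetical annotation log from a single rater: each entry is a coded
# event with its label, onset and offset (in seconds of video time).
annotations = [
    ("saccade",         0.4, 0.6),
    ("arm_lift",        0.9, 1.5),
    ("saccade",         1.1, 1.3),
    ("corrective_grip", 1.8, 2.1),
    ("saccade",         2.0, 2.2),
]

def summarize(events):
    """Aggregate frequency and total duration for each behavior code."""
    counts = defaultdict(int)
    durations = defaultdict(float)
    for label, onset, offset in events:
        counts[label] += 1
        durations[label] += offset - onset
    return dict(counts), dict(durations)

counts, durations = summarize(annotations)
print(counts)  # e.g. {'saccade': 3, 'arm_lift': 1, 'corrective_grip': 1}
```

The resulting per-behavior counts and durations are exactly the kind of tallies that would then feed into a statistical analysis.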
Behavioral research has identified three well-known effects that can alter the results significantly. Luckily, you can counteract them!
- Inter-Rater Reliability. One of the most crucial prerequisites of behavioral observation is to ensure that the coding system used for classification is replicable – by you and by any colleague. Whoever watches a video must apply the exact same definition of a behavior and what it represents, so that every rater arrives at the same classification results. In psychology, this is referred to as inter-rater or inter-observer reliability. The higher the inter-rater reliability, the more consistently a given behavior is classified.
- Reactivity. Sometimes the mere fact that we are being observed changes our behavior – psychology generally refers to this as the Hawthorne Effect. One way to counteract it is to be discreet in your observations while at the same time communicating your research goals clearly to the people you observe.
- Observer Bias. The outcome of an observation is often biased by the expectancies of the observer. For example, if a behavioral researcher knows that a certain child suffers from ADHD, the child’s behavior is more likely to be classified as “ADHD-like” simply because the observer expects it. One way to reduce this effect is to use so-called “blind observers” or fully “double-blind” experimental designs, where neither the observer nor the participants are aware of the assignment to experimental or control groups.
Within the last couple of years, however, manual coding schemes have progressively been replaced or extended by automatic classification procedures, mostly due to major breakthroughs in machine learning and computational neuroscience. For example, Emotient’s FACET engine, as implemented in the iMotions Biometric Research Platform, provides an automated way to track and analyze the emotional responses of users in real time, detecting and tracking facial expressions of primary emotions, positive and negative valence, and blended composites of two or more emotions, as well as 19 facial action units (AUs).
While behavioral observation per se is already a powerful tool for detecting indicators of cognitive, emotional or physiological processes, it is often worthwhile to combine it with physiological measurements such as EEG, EMG, ECG or GSR. These measurements are, to a certain extent, less prone to the observation biases described above and provide additional insight into the electro-neuro-muscular processes accompanying human emotions, thoughts and complex actions. You might observe the beauty of their interplay the next time you reach out for a bottle of milk…
Here at iMotions we are passionate about human behavior research and providing the world-leading software platform for experimental stimulation, synchronized recording and online/offline annotation of multimodal physiological and behavioral data (eye tracking, facial expressions, EEG, EMG, GSR etc.). Our team at iMotions is looking forward to assisting you in synchronizing your data streams and extracting the core information relevant for your particular research endeavor!
Please contact us anytime at email@example.com!