Our keynote speakers:

Harald Martens:
Quantitative Intuition:
Combining prior knowledge and big data

Marieke E. Timmerman:
Segmentation with complex data: Arriving at an insightful representation

Michael Meyners:
Controversy regarding relevance and rigor of Sensometrics for industrial applications

Harald Martens:
Quantitative Intuition:
Combining prior knowledge and big data
Harald Martens(1,2), Andreas Wulvik(1) and Silje Skeide Fuglerud(3)
1) Idletechs AS, Trondheim, Norway (harald.martens@idletechs.com). 2) Dept. of Engineering Cybernetics, NTNU, Trondheim, Norway. 3) St. Olavs University Hospital, Trondheim, Norway.
Clear, logical thinking is just the tip of the iceberg. Intuition, first impression, gut feeling: this is the brain on autopilot, the result of unconscious mental parallel processing of many types of input at the same time. The mind does some sort of multivariate analysis without being fully aware of it. Our beliefs, values and preferences are multivariate summaries of past experiences, cumulative knowledge, peer pressure and culture.
The body does multivariate analysis too, and brilliantly so; otherwise we could not ride a bike. Our heartbeats are beyond our control, like other cyclic processes in our biology. Our breath is a link between body and mind – half physical, half mental. Modern measurements show our brain hard at work. Drink a double espresso, or perhaps just think of that espresso, and everything changes.
Over time, the classical data in sensometrics – third-person sensory perception reports, e.g. descriptive sensory panel data and consumer reports – are increasingly supplemented by technical observation data from cameras, microphones etc., as well as continuous streams of high-dimensional physiological data from body measurements by EEG, ECG, skin resistance, fMRI, blood or saliva analyzers etc.
How to make sense of this new torrent of multichannel data?
For each given setting, there is usually a limited number of independent causalities behind the observational data. Some of these causalities are already well known. We can specify their effects in advance in terms of e.g. mathematical models, and estimate their state variables by fitting these models to the stream of empirical data.
But the data will usually be affected by other, unexpected causalities too, whether we like it or not. These unexpected and unmodelled variation patterns may create so-called alias errors – they interfere with the quantification of the known causalities unless corrected for. Moreover, these new patterns may be very informative in their own right.
With the right software tools for multivariate hybrid modelling, you can discover these unexpected but clearly varying causalities empirically, as systematic patterns in the multichannel observational data. The known and the unknown causalities may then be interpreted simultaneously, in light of your tacit background knowledge, to eliminate the alias problems and gain new insight. Here we demonstrate how some of the standard methods for multivariate data modelling in sensometrics have been modified and extended to yield tools for quantitative intuition. In other words: explainable AI in practice.
See also: https://www.camo.com/iatl_haraldmartens/
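
As a concrete starting point, the following minimal Python sketch illustrates the general idea described above. It is not the authors' Idletechs software, and the data and names are simulated and hypothetical throughout: a known causality is quantified by projecting the multichannel data onto its pre-specified effect profile, and PCA on the residuals then reveals any unexpected, systematic variation patterns.

```python
# Minimal illustrative sketch (simulated data; all names hypothetical):
# quantify a known causality by least-squares projection, then look for
# unexpected systematic patterns in the residuals with PCA.
import numpy as np

rng = np.random.default_rng(0)
n_obs, n_chan = 200, 50                       # 200 time points, 50 channels
t = np.linspace(0.0, 10.0, n_obs)

# Known causality: a pre-specified effect profile across channels, driven
# by a sinusoidal state variable (stand-in for a mathematical model).
known_profile = rng.normal(size=n_chan)
X = np.outer(np.sin(t), known_profile)

# Unexpected causality: an unmodelled drift pattern, plus measurement noise.
unexpected_profile = rng.normal(size=n_chan)
X += np.outer(t / t.max(), unexpected_profile)
X += 0.05 * rng.normal(size=(n_obs, n_chan))

# Step 1: estimate the state of the known causality by projecting each
# observation onto the known profile (ordinary least squares, one regressor).
est_state = X @ known_profile / (known_profile @ known_profile)
residuals = X - np.outer(est_state, known_profile)

# Step 2: PCA (via SVD) on the mean-centred residuals. A dominant first
# component flags an unmodelled causality; its loading vector Vt[0] can be
# interpreted against background knowledge.
residuals -= residuals.mean(axis=0)
U, s, Vt = np.linalg.svd(residuals, full_matrices=False)
explained = s**2 / np.sum(s**2)
print("Share of residual variance per component:", explained[:3].round(3))
```

Note that, exactly as the abstract warns, the unmodelled drift leaks into est_state whenever its profile is not orthogonal to the known one – the alias error that joint interpretation of both patterns is meant to remove.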

Marieke E. Timmerman:
Segmentation with complex data: Arriving at an insightful representation
Marieke E. Timmerman
Psychometrics and Statistics, Heymans Institute for Psychological Research,
University of Groningen, the Netherlands; m.e.timmerman@rug.nl
Consumer segmentation aims at dividing consumers into groups of individuals who are similar in specific ways, as expressed in their empirical data. In this talk I will present mixture modelling as a useful cluster analysis approach to achieve an insightful segmentation. As mixture modelling is highly versatile, it can deal with complex data, giving the opportunity to define similarity in various ways. In this regard, it is useful to distinguish different types of variables (e.g., continuous, ordinal) and different data structures. I will discuss various prototypical data structures, including single-set and multi-set multivariate data, longitudinal data and multilevel data. The associated potentially useful models include mixture factor models, growth mixture models and Markov models. The models will be presented from a bird's-eye view. Then I will turn to empirical practice and discuss how to arrive at a proper model for the empirical segmentation problem at hand. As I will discuss, key steps are to identify the data structure and the aspects to cluster on. I will present several illustrative empirical cases from psychology to show the approach and its usefulness for achieving insight into subgroups of individuals.
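
As a hedged illustration of the simplest case – mixture modelling of single-set continuous multivariate data – the following Python sketch uses scikit-learn's GaussianMixture on simulated consumer ratings. It is not one of the more elaborate mixture factor, growth mixture or Markov models discussed in the talk; data and segment profiles are invented for illustration.

```python
# Minimal sketch of mixture-based segmentation (simulated data; assumes
# scikit-learn is installed). Two latent consumer segments rate 6 attributes;
# BIC is used to choose the number of segments.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
segment_a = rng.normal(loc=[7, 6, 2, 3, 5, 5], scale=0.8, size=(120, 6))
segment_b = rng.normal(loc=[3, 4, 7, 6, 5, 5], scale=0.8, size=(80, 6))
X = np.vstack([segment_a, segment_b])

# Fit candidate models with 1..5 segments and pick the one with lowest BIC.
models = {k: GaussianMixture(n_components=k, n_init=5, random_state=0).fit(X)
          for k in range(1, 6)}
best_k = min(models, key=lambda k: models[k].bic(X))
best = models[best_k]

print("Chosen number of segments:", best_k)
print("Segment sizes:", np.bincount(best.predict(X)))
print("Segment mean profiles:\n", best.means_.round(2))
```

Ordinal variables, longitudinal trajectories or multilevel structures would call for the other model families named in the abstract rather than this plain Gaussian mixture.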

Michael Meyners:
Controversy regarding relevance and rigor of Sensometrics for industrial applications
Michael Meyners
Procter & Gamble Service GmbH, 65824 Schwalbach am Taunus, Germany
What makes a method from sensometrics get adopted and used in industry – or not? When does “quick and dirty” become a bit too dirty? How many different methods do we need to address the same question? Or aren’t they addressing the same question after all?
Academic research and industry needs do not always closely align. Many (sensory and statistical) methods are developed and deployed, but they are rarely compared exhaustively and objectively with alternative existing methods. Why would I adopt any new method when I have something that currently (seemingly) addresses the same task in a similar way? What benefit does it bring, and is it important enough for me to bother? How can I make findings actionable to inform product design?
Simplified and faster methods and their related analyses have become popular over the last decade, with at best anecdotal evidence that results are comparable to those of classical, often more laborious efforts. Simultaneously, research is conducted that seems to suggest that lower effort (i.e. fewer panelists, fewer replications, less training, …) may still give similar results. Is similar good enough? Was the standard itself good enough to begin with, or might it already have degraded over a few previous iterations of cost optimization (think of the famous salami slicing)? What gold standards do we adhere to in order to avoid degradation of data quality over time?
This presentation aims at triggering discussions about what we are doing well today and what opportunities exist. It will become a plea to strengthen both the relevance and the statistical rigor across the discipline. If we continue on our current path, Sensometrics might undermine its own credibility.