I am glad to say that my work with Jean-Remi King and Stan Dehaene has been recently accepted for publication in the prestigious journal Neuron. You can find a preliminary version here.
Recent studies of “unconscious working memory” challenge the notion that only visible stimuli can be actively maintained over time. In the present study, we investigated the neural dynamics underlying the brief maintenance of subjectively invisible stimuli, using machine learning and magnetoencephalography. Subjects were presented with a masked Gabor patch whose angle had to be briefly memorized. We show that the stimulus is encoded in early brain activity independently of its visibility, and that the maintenance of its presence and orientation can be decoded throughout the retention period, even in the invisible condition. Source and temporal generalization analyses revealed that perceptual maintenance depends on a deep hierarchical network ranging from early visual cortex to temporal, parietal and frontal cortices. Importantly, the representations coded in the late processing stages of this network specifically predict subjective reports. These results challenge several predictions of consciousness theories and suggest that unseen information can be briefly maintained within the higher processing stages of visual perception.
The link between working memory and visual awareness has recently been challenged
We here study the mechanism of unconscious maintenance with MEG & machine learning
Unseen stimuli can be partially maintained within high-level cortical assemblies
We show how to revise awareness theories to account for the maintenance of unseen stimuli
Following my recent presentation at the Experimental Psychology Society meeting in Oxford, I will write about how I came to conceive the Opinion Space and how I now use it to represent social interactions.
The idea came to me during a project using support vector machines, machine-learning algorithms that represent data points (called instances) in a multi-dimensional geometric space. Separating categories and learning in this multivariate space is easier than along each individual dimension constituting the space. These sorts of machine-learning algorithms are great for representing complex data. The signal from any single sensor on the scalp, taken alone, is only weakly indicative of a given brain state. Taking multiple sensors into account at the same time (hence “multivariate”), however, allows us to discern patterns that are not found otherwise.
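To make this concrete, here is a minimal, hypothetical sketch of multivariate decoding with a linear support vector machine. Two simulated “sensors” share a large common noise source, so neither one alone separates the two conditions well, but the pair taken together does. The data, names and parameters are invented for illustration:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 400
condition = rng.integers(0, 2, n)  # two hypothetical brain states

# A strong noise source shared by both sensors masks the univariate signal.
shared_noise = rng.normal(0, 3.0, n)
sensor1 = shared_noise + 1.5 * condition + rng.normal(0, 0.4, n)
sensor2 = shared_noise - 1.5 * condition + rng.normal(0, 0.4, n)
X = np.column_stack([sensor1, sensor2])

clf = SVC(kernel="linear")
acc_multi = cross_val_score(clf, X, condition, cv=5).mean()   # both sensors
acc_s1 = cross_val_score(clf, X[:, [0]], condition, cv=5).mean()  # one sensor

print(acc_s1)     # single sensor: only slightly above chance
print(acc_multi)  # both sensors together: near perfect
```

The multivariate classifier exploits the (sensor1 − sensor2) direction, where the shared noise cancels out: exactly the kind of pattern a single-sensor analysis misses.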
Social interaction is similar in nature to the problems studied in neuroscience with respect to its complexity and non-linearity. When two people interact (for example, when you talk to a friend), they affect each other in very unpredictable ways and with no clear direction of cause and effect. They exchange opinions, views and beliefs. Each individual in the interaction affects the other (willingly or not) but is at the same time influenced by the other. By representing social interaction as movement through a higher-dimensional space, the Opinion Space, we can better understand the mechanisms, better describe the phenomena and better predict the behaviour of human interaction.
How do we construct an Opinion Space? Each dimension of the space (feature) is one person’s belief or opinion about a certain variable of interest (e.g. is it going to rain tomorrow? Is the restaurant to the left or to the right going to be better?) or decision (e.g. shall I take my umbrella with me? Shall I try the restaurant on the right-hand side of the road or the one on the left-hand side?). In my research I typically measure the confidence associated with the variable of interest to gauge the strength of the participant’s opinion.
We can now construct from these orthogonal axes a Cartesian space, which we have called the Opinion Space, where the information state of the group is represented as a single point. This full Opinion Space is characterised by two agreement quadrants and two disagreement quadrants. We can simplify things and reduce this full space to a more parsimonious version that gets rid of the subjects’ identities (Yellow and Blue in the video above) and the choice identities (left and right in the video above). We now care only about two questions: whose opinion was supported by the strongest confidence (x-axis)? And how does the other person relate to this opinion (y-axis): do they agree or disagree, and with what level of confidence?
We can now represent the state of the group’s opinion at each moment in time as a point in the space. As soon as one of the two participants changes their mind or their confidence as a function of the social interaction, the group’s state shifts to a different point. We can thus track the group’s opinion state as a trajectory through the Opinion Space.
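As a sketch, here is how one might compute the reduced Opinion Space coordinates from a toy time series of signed confidences. All values are hypothetical, and the encoding (sign for choice, magnitude for confidence) is one possible convention, not necessarily the one used in my experiments:

```python
import numpy as np

# Hypothetical signed confidences: sign encodes the choice (e.g. left/right),
# magnitude encodes confidence on a 0-1 scale. One row per time step,
# one column per participant (A, B).
trajectory = np.array([
    [0.8, -0.3],   # A confident "right", B weakly "left": disagreement
    [0.7,  0.2],   # B switches choice: weak agreement
    [0.6,  0.5],   # confidences converge
])

t = np.arange(len(trajectory))
stronger = np.argmax(np.abs(trajectory), axis=1)  # who holds the stronger opinion

# Reduced space: x = confidence of the stronger opinion,
# y = the other person's confidence, signed relative to the stronger opinion
# (positive = agreement, negative = disagreement).
x = np.abs(trajectory[t, stronger])
y = trajectory[t, 1 - stronger] * np.sign(trajectory[t, stronger])

reduced = np.column_stack([x, y])  # the group's trajectory in the reduced space
```

Each row of `reduced` is the group’s state at one moment; plotting the rows in order draws the trajectory.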
This method is incredibly useful for comparing different social contexts or communication systems, as it allows us to quickly visualise the dynamics of opinion formation and social influence. It is also effective for predicting the future state of the group given its past and present trajectory. Finally, a simple expansion to more than two dimensions can be used to represent groups composed of more than two members.
Inspired by the wonderful talks I am listening to these days at ICCSS2016, and by the recent outcome of the UK referendum, I was wondering how the opinions of experts, broadcast in the news, can influence the population.
We know the well-established Wisdom of Crowds effect: the average opinion is more accurate because uncorrelated noise (how wrong each individual opinion is) averages out over large numbers, thus enhancing the signal.
We also know that some people are better than others. We call these people experts. Experts’ opinions are better than those of average individuals because they are less variable (more clustered) and closer to the true signal.
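Both points can be illustrated in a few lines of Python (the numbers below are arbitrary, chosen only to make the effect visible):

```python
import numpy as np

rng = np.random.default_rng(1)
truth = 10.0

# A large crowd of noisy but unbiased estimates, and one expert
# whose error has much smaller variance.
crowd = truth + rng.normal(0, 5.0, size=1000)   # individual error sd = 5
expert = truth + rng.normal(0, 1.0)             # expert error sd = 1

err_individual = np.abs(crowd - truth).mean()   # typical single-person error
err_crowd_mean = abs(crowd.mean() - truth)      # error of the crowd average

print(err_individual, err_crowd_mean, abs(expert - truth))
```

The crowd average beats the typical individual by roughly a factor of sqrt(N), as long as the individual errors are uncorrelated; the expert beats the typical individual simply by having less variance.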
Question: what effects do news have when they broadcast experts opinions to the whole population?
Does the average error of the population reduce after knowing the expert’s opinion?
Does the average error of the population increase after knowing the expert’s opinion?
Simulation

I created a Matlab simulation (that you can download here) that tries to answer this question. You can tweak different parameters like:
How gullible the population is, that is, how much it is swayed by the expert.
How many people there are in the population.
How much closer to the true value the expert’s opinion is compared to the population’s. This is the ratio between the variance of the expert’s error and the variance of the population’s error (in the graph this is called “expert’s error”, sorry for the approximation).
The number of observations that we average across.
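For readers without Matlab, here is a minimal Python sketch of the same idea. This is my re-implementation under simplifying assumptions (unbiased Gaussian errors, everyone equally gullible), not the original code, and the function name and defaults are invented:

```python
import numpy as np

def expert_broadcast_gain(n_people=100, gullibility=0.5,
                          expert_sd_ratio=0.3, n_runs=2000, seed=0):
    """Average improvement in the crowd mean's error after everyone
    shifts toward a broadcast expert opinion (positive = expert helped)."""
    rng = np.random.default_rng(seed)
    truth = 0.0
    gains = np.empty(n_runs)
    for i in range(n_runs):
        opinions = truth + rng.normal(0, 1.0, n_people)   # population error sd = 1
        expert = truth + rng.normal(0, expert_sd_ratio)   # expert error sd = ratio
        error_before = abs(opinions.mean() - truth)
        # Each person moves a fraction `gullibility` toward the expert.
        swayed = (1 - gullibility) * opinions + gullibility * expert
        error_after = abs(swayed.mean() - truth)
        gains[i] = error_before - error_after
    return gains.mean()

print(expert_broadcast_gain(expert_sd_ratio=0.05))  # very accurate expert: helps
print(expert_broadcast_gain(expert_sd_ratio=1.0))   # mediocre expert: hurts
```

The key mechanism: the crowd average already has a very small error (its variance shrinks with the number of people), so the expert helps only when their error is smaller still; otherwise the broadcast injects a shared, correlated bias that averaging cannot remove.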
Below is the result for a population of 100 people. The colour represents the improvement in accuracy (that is the distance from the true value) of the population average from before to after the expert’s opinion is broadcast.
The contour line shows the areas of the parameter space where the expert improves the average opinion. Warmer colours indicate better improvement. The blue colour means bad news. I was surprised by two things:
the contrast between the tiny area of improvement (on the left of the contour line) and the huge area to the right.
the magnitude of the improvement (a small improvement) compared to the magnitude of the decrease in performance (a disaster).
Running the simulation will also output another image showing the decrease in the diversity of the population’s opinion. If you don’t think diversity is important, check this out.
Conclusions? No matter how accurate an expert is, there will always be some residual error in their judgement. Broadcasting that opinion to the whole population has the effect of biasing it rather than helping it. The effect seems negligible at best, and disastrous in the worst-case scenario, that is, when the expert is not so much of an expert.