Proudly sponsored by PyMC Labs, the Bayesian Consultancy. Book a call, or get in touch!
Did you know that the way your brain perceives speed depends on your priors? And it’s not the same at night? And it’s not the same for everybody?
This is another of these episodes I love, where we dive into neuroscience, how the brain works, and how it relates to Bayesian stats. It's actually a follow-up to episode 77, where Pascal Wallisch told us how the famous black and blue dress tells us a lot about the priors we use to perceive the world. So I strongly recommend listening to episode 77 first, and then coming back here to have your mind blown away again, this time by Alan Stocker.
Alan was born and raised in Switzerland. After a PhD in physics at ETH Zurich, he somehow found himself doing neuroscience during a postdoc at NYU. And then he never stopped: he still leads the Computational Perception and Cognition Laboratory at the University of Pennsylvania.
But Alan is also a man of music (playing the piano when he can), a man of coffee (he'll never refuse an Olympia Cremina or a Kafatek) and a man of the outdoors (he loves thrashing through deep powder with his snowboard).
Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work at https://bababrinkman.com/ !
Thank you to my Patrons for making this episode possible!
Yusuke Saito, Avi Bryant, Ero Carrera, Giuliano Cruz, Tim Gasser, James Wade, Tradd Salvo, William Benton, James Ahloy, Robin Taylor, Thomas Wiecki, Chad Scherrer, Nathaniel Neitzke, Zwelithini Tunyiswa, Bertrand Wilden, James Thompson, Stephen Oates, Gian Luca Di Tanna, Jack Wells, Matthew Maldonado, Ian Costley, Ally Salim, Larry Gill, Joshua Duncan, Ian Moran, Paul Oreto, Colin Caprani, Colin Carroll, Nathaniel Burbank, Michael Osthege, Rémi Louf, Clive Edelsten, Henri Wallen, Hugo Botha, Vinh Nguyen, Raul Maldonado, Marcin Elantkowski, Adam C. Smith, Will Kurt, Andrew Moskowitz, Hector Munoz, Marco Gorelli, Simon Kessell, Bradley Rode, Patrick Kelley, Rick Anderson, Casper de Bruin, Philippe Labonde, Michael Hankin, Cameron Smith, Tomáš Frýda, Ryan Wesslen, Andreas Netti, Riley King, Yoshiyuki Hamajima, Sven De Maeyer, Michael DeCrescenzo, Fergal M, Mason Yahr, Naoya Kanai, Steven Rowland, Aubrey Clayton, Jeannine Sue, Omri Har Shemesh, Scott Anthony Robson, David Haas, Robert Yolken, Or Duek, Pavel Dusek, Paul Cox, Trey Causey, Andreas Kröpelin, Raphaël R, Nicolas Rode, Gabriel Stechschulte, Arkady and Kurt TeKolste.
Visit https://www.patreon.com/learnbayesstats to unlock exclusive Bayesian swag 😉
Links from the show:
- Alan’s website: https://www.sas.upenn.edu/~astocker/lab/members-files/alan.php
- Noise characteristics and prior expectations in human visual speed perception: https://www.nature.com/articles/nn1669
- Combining efficient coding with Bayesian inference as a model of human perception:
- Video: https://vimeo.com/138238753
- Paper: https://www.nature.com/articles/nn.4105
- LBS #77 How a Simple Dress Helped Uncover Hidden Prejudices, with Pascal Wallisch: https://learnbayesstats.com/episode/77-how-a-simple-dress-helped-uncover-hidden-prejudices-pascal-wallisch/
- LBS #72 Why the Universe is so Deliciously Crazy, with Daniel Whiteson: https://learnbayesstats.com/episode/72-why-the-universe-is-so-deliciously-crazy-daniel-whiteson/
In episode 81 of the podcast, Alan Stocker helps us update our priors about how the brain works. Alan, born in Switzerland, studied mechanical engineering and earned his PhD in physics before being introduced to the field of neuroscience through an internship. He is now an Associate Professor at the University of Pennsylvania.
Our conversation covers various topics related to the human brain and whether what it does can be characterised as Bayesian inference.
Low-level visual processing, such as identifying the orientation of moving gratings, can be explained with reference to Bayesian priors and updating under uncertainty. We go through several examples of this, such as driving a car in foggy conditions.
More abstract cognitive processes, such as reasoning about politics, may be more difficult to explain in Bayesian terms.
We also touch upon the question of to what degree priors may be innate and how to educate people to change their priors.
In the end, Alan gives two recommendations for improving your Bayesian inferences in a political context: 1) Go out and get your own feedback and 2) try to give and receive true feedback. Listen to the episode for details.
Please note that the transcript was generated automatically and may therefore contain errors. Feel free to reach out if you're willing to correct them.
Yeah, so thank you for taking the time. I'm really happy about this episode. I have so many questions for you.
Good, good. But uh, wait, who are you? I mean, I don't know who you are, and I didn't take the time to actually look into it. You probably know a little bit about me, but I don't know anything about you. So maybe you can tell me a little bit about who you are. Sure. Yeah...
So, you know... a forecasting website. It was in:
Did you study anything, undergraduate or graduate, at any level? I mean, what's your background, so to speak?
Yeah, I did. So basically, I did a master's in management and political science. I went to business school in France. I don't know how familiar you are with the postgraduate landscape in France, but the best schools there are actually not universities. Anyway, I ended up in one of those schools, which was basically a business school. Thanks to that, I ended up doing my last year in Berlin, actually at the university, doing a political science master's. I didn't really know what to do and picked it kind of randomly.
Well, that's totally fine. Totally fine. I think banking is a necessary business. We need banks, there's no doubt about it. Nothing bad about that.
Yeah, no, that wasn't my point. The point was that it's really not the working environment that...
like wearing a suit and a tie and stuff like that.
Yeah. So, like, a very hierarchical environment... it just wasn't for me.
It's good. I'm happy to have done that, because in the end it ended up quite good, and I made friends there.
That's cool. Like École... I said École Nationale, no, École Normale Supérieure. Is it "normale" or "supérieure"? Was I explained right that it's a top-level school that basically makes the norms for all the other universities, or something like that?
Yeah, it's the top school that makes the norms, so it's "normale", but nobody talks like that. Nobody will say something is "normale" because it makes the norms. But anyways, that's basically that, and I'm curious about your background, but let's start recording.
Yes. So you're good. You just have to tell me what to do. I think I would do the checks...
yeah. So you can just click on the red button on Audacity, and then you're good.
So I'm recording it locally, and then I'm going to send that to you or something like that. Okay. Yeah, exactly. Because the quality's better. Yeah, okay. You can edit it. Yep. But your audio is also going to be on it.
It must be, right? No, my audio is on my end.
Ah, then you have to sync it up like that. Yeah, exactly.
Okay. Well, I'm not the one doing that. I have someone doing that really well for me: Marco. If you're listening, thanks a lot! And it's basically our patrons helping me to do that. They just pay a bit every month, and that helps me pay, especially for the editing. Okay. Yeah. So, Audacity is recording on my end. Is it on yours?
Yeah. Okay. The volume level is always green. If I really say something loud, like this, it goes red.
Yeah, perfect. That's good. Okay. And Zencastr, you don't have to worry about, that's all on my side. Okay. So let's start. Oh yeah, making sure I pronounce your name right: Alan Stocker?
"Stocker", you know, yeah, it's better. It's more original.
The German pronunciation. Perfect. Alan Stocker, welcome to Learning Bayesian Statistics.
Yeah, thanks for having me.
Yeah, you bet. I want to thank Pascal Wallisch for putting us in contact. I recorded a very interesting episode with him a few weeks ago, talking about the famous bi-color dress. So I'll put that in the show notes, episode 77 if I remember correctly. And yeah, of course, we got to talk about Bayesian priors, mainly about priors. And then Pascal suggested that I talk to you, because it's also something you work a lot on. So we're going to come to that next; that will be the main dish. But first, we need to start with your background, or your origin story, as I like to say. How did you come to the world of neuroscience and psychology? And was it a sinuous or a straight path?
How much time do we have? So my path was pretty non-direct, actually. I grew up in Switzerland, I went to ETH Zurich, and studied mechanical engineering, actually. So my original dream was to become an engineer, in particular an engineer of Formula One race cars. That was kind of my dream. I always liked cars, I liked technology, and for me that seemed to be kind of the pinnacle of, you know, car engineering. But then I looked into some options and the market is pretty small; it was hard to get an internship. And so at some point I gave up on that. And then, I mean, I always cared about how we think, just thinking in general, logic, playing games, solving riddles. And I think it was really more accidental that I came to neuroscience, in the sense that I did an exchange semester. I'm usually very short-term, I don't like to plan much ahead. So I think it was my last year in undergrad, I wanted to do an exchange semester, and I went to the exchange office and they said, look, here are two options, you can go to Finland... and I forgot the other option. So I said, oh, Finland, that sounds cool. So I went to Finland, and there I did some project over the summer. Finland in the summer, doing any kind of research, is kind of crazy because nobody works. But anyway, I did it, and the project was about using neural networks to analyze brain scan images of people with tumors and stuff like that. And you have to remember this was in the 90s, the late 90s of last century, and artificial neural networks were kind of in their first version, right, the shallow version; that was kind of the heyday. You got Hopfield networks and all kinds of networks. And so I was working with those networks trying to do this classification.
So we had these brain scan MRI images, and we tried to figure out what is white matter and gray matter and tumor and stuff like that. And through that I came into kind of the context of the brain in some way, and of neural networks as a means of, these days I would say, intelligent processing. And that gave me a way towards my PhD direction, which still had nothing really directly to do with Bayes and behavior and what I'm doing right now, but at least it was going towards that direction. So during my PhD I actually built neural networks in hardware, physical hardware; we built integrated circuits that did some kind of neural computation. And yeah, I spent a lot of time on the engineering problem of how to design the circuits, really semiconductor physics, you know, you have to deal with noise and all kinds of things; that took most of the time. And actually the neural computation part was the interesting part, and I could only spend a little time on it. So I decided to do postdoc studies after my PhD, now really focusing on... oh, we got somebody, somebody...
Oh no, what's next? What is that? That's something that just started...
The joys of recording. So yeah, you can get back on track.
Okay, so, I decided to do a postdoc and I moved to NYU... oh yeah, that's when I met Pascal, at that time. And I joined a lab led by Eero Simoncelli, who did some early work about Bayesian inference, and Bayes as a kind of model of how, for example, the visual motion perception system would resolve some kind of ambiguity in the image domain when you translate that to some kind of motion domain. And yeah, so that was kind of my path. And then there I started building these Bayesian models, which we can probably talk a little more about, and since then I have been in that field. No more Formula One car design.
That's so cool. Yeah, you're definitely the first person to tell me that their dream was to become a mechanical engineer for Formula One racing. That's extremely precise; I am impressed. That's so cool, to have that very precise dream. But, so I'm guessing you still watch a lot of Formula One, and, like, have you watched the Netflix series?
I did not. Because, you know, modern Formula One is so... it's like the fun has been taken out of it. So I don't really care about that anymore. But I still follow the technology; I think it's still fascinating. It's kind of the same thing that fascinates me about Bayes in some ways: there are some really brilliant minds trying to optimize something. In this case it's cars, to go as fast as possible, right, given some constraints that are given by the rules. And yeah, it's just human ingenuity, the minds at work, which is kind of fascinating to look at.
Yeah. It's a really fun game to play in. Also, it's a lot of physics, but then you have the constraints of the game, and that's fascinating to me. And also, it's kind of an endless endeavor: you can always make it better on that front, or that one, or that one. And almost everything is a trade-off, things like that. So you can never have the perfect car, I guess, at least in absolute terms. Something I would like to do is actually drive...
Modern simulators, or the games, right? Video games are pretty good at simulating the physics, and you get a sense of, you know, how tricky these cars are to drive and things like that.
Oh yeah, for sure. But it must be something else to really be in the car, to get the sensations, the adrenaline. That's something entirely different. So if any listeners are in the Formula One world, please let me know. So yeah, thanks so much for digging back into your past like that. I love how sinuous that was; reminds me of mine, I guess. And so actually, yeah, can you define the work you're doing nowadays, and what are the topics that you're particularly interested in?
Okay, wow, that's another big question. Yeah, so, as an academic, of course you have certain directions that you go in, but I'm pretty open to a lot of different questions. You know, if you have a curious mind, there are a lot of curious questions out there. At the moment it's still about the human mind. We want to understand, ultimately, how humans operate, how the human mind works, what human intelligence is, but also subconsciously, right, how the cortical processes, or neural processes in general, that you're not aware of operate. And since my postdoc years, when I started looking into these Bayesian models, I think the general approach is still the same, in the sense that I assume that the brain, or the mind, operates as well as it can, given some constraints. That means that if the brain makes a mistake, or chooses something that is not ideal, it's not because it doesn't try to solve the problem as well as possible. It's simply that it cannot, or it has to make trade-offs, right, that prevent the brain from operating at some maximal expectation. And I think this is still something that, for example, economists, and people working in banking, have trouble understanding, right? This seemingly irrational behavior of people when they make economic decisions. I don't think, at the large scale, it's really that they just don't care. Obviously it's about money; they probably want to do as well as possible. But it's always a question of what information they have, and what additional constraints actually prevent them from finding the rational, optimal decision. And that's similar for the questions we address, which are not in the economic world but more in the cognitive world: how do people solve cognitive problems, how do they make decisions, how do they perceive the world?
Because perceiving is a very difficult inference problem. You get a lot of sensory information, and the brain has to make sense of it. And again, it's very sensible to think the brain tries to get as good an answer about what's out there in the world as possible. It doesn't like to make mistakes. And so when it makes mistakes, they're kind of honest mistakes, in the sense that they're just caused by limitations in the sensory information, or limitations in the compute power, or limitations in information and prior beliefs, if you want, about the world, right? So that's kind of the general hypothesis, and based on that, we look into different things, like low-level perception, very basic things. We use very low-level experiments simply to test our modeling approach, or this general theory, at the best possible level, the most quantitative level that we can, because we believe that when you really try to do that, then you can really nail down some of these constraints that I was talking about. On the surface, a lot of things look Bayesian: oh yeah, look, I give human subjects some experimental task, and you see, oh, they're slightly biased in favor of one answer over the other. And then it's easy to say, yeah, it's because there's some prior that kind of pushes them to that answer versus the other. Yes, it might be, but that might not be the full story. Or it might not be because of that at all. And we have published work, I don't know whether you're aware of it, where we actually show that people kind of behave anti-Bayesian, in the sense that they're biased away from their prior beliefs, and make mistakes which actually favor solutions that are less probable.
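To make the "honest mistakes" idea concrete for readers: in the simplest Gaussian case, a Bayesian observer's estimate is a precision-weighted average of the prior and the noisy measurement. Here is a minimal sketch; the function name and all numbers are invented for illustration, not taken from Alan's papers:

```python
def posterior_mean(prior_mean, prior_var, meas, meas_var):
    """Posterior mean for a Gaussian prior combined with a Gaussian likelihood:
    a precision-weighted average of the prior mean and the measurement."""
    w_prior = 1.0 / prior_var
    w_meas = 1.0 / meas_var
    return (w_prior * prior_mean + w_meas * meas) / (w_prior + w_meas)

# A slow-speed prior centered at 0 deg/s; the stimulus actually moves at 5 deg/s.
# Clear viewing: reliable measurement (small variance) -> estimate stays near 5.
clear = posterior_mean(prior_mean=0.0, prior_var=1.0, meas=5.0, meas_var=0.25)
# Fog or night: noisy measurement (large variance) -> estimate is pulled toward
# the slow-speed prior, i.e. speed is underestimated.
foggy = posterior_mean(prior_mean=0.0, prior_var=1.0, meas=5.0, meas_var=4.0)
print(clear)  # 4.0
print(foggy)  # 1.0
```

The same machinery is behind the fog example from the show notes: the noisier the sensory evidence, the more weight the slow-speed prior gets, so perceived speed drops.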
Biased away from the prior, so that they go against their priors?
Yes, yes. But
whereas they should go towards their prior.
Yes. So naively, let's call it the naive Bayesian view: there's a prior, and if there's some uncertainty in the evidence, people should be biased towards the prior. Well, we show, and people have observed, that under certain situations they go against that prior. So if they have uncertain evidence, they actually seem to be biased away from their prior beliefs. And nobody could really explain that satisfactorily. And we showed that you can still explain it at a Bayesian level, okay, if you kind of step back and leave this naive thinking, and you consider all the parameters of the Bayesian model, in particular if the evidence representation is not homogeneous. For example, if we have cognitive resources that allow us to represent certain things better, more precisely, than others. That plays a role in the Bayesian inference sense, because then your likelihood is affected by this inhomogeneous evidence encoding, if you want, and that can lead to these counterintuitive situations where the bias is actually away from the prior, even though the system is doing a Bayesian inference computation. So, things like that. And you can only discover these things if you really try to go down to the nitty-gritty details and try to model this decision behavior at the finest level. Okay, so...
Do you have an example to illustrate what you just said?
Yeah, sure, of course. So the example that we also used, and maybe published, is that of perceiving visual orientation. Right, we have a sense of the world through our visual senses; we can judge whether some stimulus has a certain orientation. I show you a little line segment and you can pretty much tell me if it's vertical, or tilted a little to this side or that side. You can make that type of task a little harder by adding some noise and showing it only briefly. If you do that, you will observe that people's estimates of the orientation of such a little stimulus will be biased away from the cardinal orientations, which are vertical and horizontal. If I show you a stimulus which is roughly vertically aligned but tilted a little bit, let's say clockwise, people will perceive the orientation of that little clockwise-oriented stimulus to be much more clockwise-oriented than it actually is. So that seems like a bias away from the cardinals. The same is true for horizontal: a stimulus that's almost horizontal will be perceived as less horizontal than it actually is. And depending on the uncertainty in the stimulus, that bias will be even larger. If there's more uncertainty, if the evidence is even weaker, or presented for an even shorter period, then this tendency to be pushed away from the cardinal orientations is even larger. Okay, so now, easily, one could say, well, it's probably because people have some prior belief that things are not cardinal, that a priori the orientation is more likely to be oblique.
That could be, but if you look into the statistics of our visual environment, so you take pictures of your everyday environment, your office, your backyard, the mountains where you go skiing, whatever, and you analyze how the local edges are oriented, what's the distribution of these orientations, you see that we have two peaks: one at the vertical cardinal orientation, one at the horizontal cardinal orientation. Meaning the prior has peaks at the cardinals and troughs at the obliques. And so, if the system is trying to use the statistics of the environment to give you a good estimate of the presented orientation of the stimulus, it would have a prior in exactly the opposite direction, so we would expect people to report more cardinal orientations, not fewer. And we were able to show that you can still explain this with a Bayesian framework with a prior at the cardinals, if you incorporate another very fundamental theory about neural encoding, which is called efficient coding theory. In short, efficient coding postulates that the brain is allocating its encoding resources efficiently. It does not just encode everything with the same precision; it encodes things that are more important and more frequent with higher precision. So in addition to a prior at the cardinals, you would also expect that cardinal orientations are better represented, with finer resolution. And that will affect the Bayesian inference prediction, because finer resolution means you have better, lower uncertainty, which means a narrower likelihood. And that, together with the Bayesian prior for cardinal orientations, will ultimately lead to the behavior that we actually see in humans.
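For the curious, here is a toy numerical sketch of that idea, loosely in the spirit of the Wei and Stocker paper linked above. The prior shape, the noise level, and all numbers are invented for illustration; this is not the fitted model from the paper. Efficient coding is implemented by mapping orientation through the prior's cumulative distribution, so frequent (cardinal) orientations get more of the internal coding range, which makes the likelihood asymmetric around the stimulus:

```python
import numpy as np

# Orientation grid on [0, pi): cardinals at 0 and pi/2, obliques in between.
theta = np.linspace(0.0, np.pi, 2000, endpoint=False)
dtheta = theta[1] - theta[0]

# Toy environmental prior: peaks at cardinal orientations, troughs at obliques.
prior = 2.0 - np.abs(np.sin(2.0 * theta))
prior /= prior.sum() * dtheta

# Efficient coding: map orientation through the prior's CDF, so frequently
# occurring (cardinal) orientations occupy more of the internal coding range.
F = np.cumsum(prior) * dtheta

sigma = 0.04   # internal sensory noise, in CDF units (invented)
theta0 = 0.25  # stimulus: tilted a bit clockwise of the vertical cardinal

# Noise-free internal measurement of theta0.
m = np.interp(theta0, theta, F)

# Likelihood back in stimulus space: narrow on the cardinal side (precise
# coding), long-tailed toward the oblique (coarse coding).
like = np.exp(-0.5 * ((m - F) / sigma) ** 2)
post = like * prior
post /= post.sum() * dtheta
estimate = (theta * post).sum() * dtheta
print(theta0, estimate)
```

Because coding is precise near the cardinal and coarse toward the oblique, the likelihood has a long tail away from the cardinal; in the full model, this asymmetry is what can push the estimate away from the prior peak even though the computation is fully Bayesian.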
So I think that was a really convincing case for going one step further than the naive Bayesian approach in modeling the mind, if you want. I don't know if I'm talking a little too specifically or technically here; just ask me questions if you think this is not something that people will understand. I'm happy to...
The audience is fairly technical, so don't worry, they love technical details. So that's really cool. But yeah, I love that, that's really fascinating. And we'll talk a bit more about that, especially when we talk about your study of human visual speed perception, because I guess it will also help us understand a bit more. For now, a question I'd like to ask is: do you remember how you first got introduced to Bayesian methods?
...workshop, and I think it was:
Yes, sure. Nice. Yeah, that's so cool that you can, like, pinpoint the exact date and time. That's quite rare. Okay, thanks, that's cool. And already some perception work, I can see that, that's good. And actually, I'm wondering how common Bayesian stats are in your field, because there are some fields where they are quite uncommon, but it seems to me like your field is using these kinds of stats way more. Or is it that the brain is quite Bayesian in some ways? So yeah, how much of a black sheep are you in your field, in a way?
I'm not a black sheep in terms of my research, I don't think so, not at all. I think this idea that the human mind is an inference machine is much older than I am, of course. And so it was just natural that cognitive scientists, and then maybe also the perceptual scientists, picked up on this notion that maybe the Bayesian formalism is actually a good model of what the brain is trying to do, the computational goal, right? So it's basically describing, in some ways, the brain's computation. I mean, there's something called the Bayesian brain hypothesis, which I fully support, right? It's basically saying that the brain is more or less a Bayesian inference machine. I mean, it is of course simplistic, but the basic components are there, right? It has to deal with imperfect information, incomplete and uncertain information. It has the notion of learning: we learn priors, we have previous experiences that we reuse, right? That's a big trait of the human mind. And we combine these things in a way that we kind of try to optimize our guesses, our output, right, to be as accurate as possible. And so I think, yeah...
Yeah, sorry, I'm interrupting you, but I like that, also because it reminds me of what you said at the beginning. Basically, if you're trying to optimize these outputs, then the biases that we can see are actually also a way to optimize the outputs, right? It's just a different optimization function; a confirmation bias, for instance, is something that is actually super helpful in some ways. But then in some other situations, it's actually not what you would want to use; you would want to be able to consciously change the optimization function you're trying to optimize, basically. And I guess that's one of the important things that neuroscience is trying to do, I mean, mapping all of these, and then afterwards trying to understand: okay, now that we know how that works, can we actually change these pathways, and things like that? And it's so challenging also because I can see emerging research that basically shows that it's not because you know about a bias that you're going to automatically correct for it. You have to really be extremely diligent in trying to correct for, for instance, your confirmation bias.
Yeah. Well, there's a fundamental problem here, right? As a Bayesian, if I have the right assumptions, meaning the right prior, then the bias, if my response is biased, is actually the result of being as good as possible. Meaning, these biases, yes, they are an error; I make an error, but I cannot do better. This is basically the best possible behavior. And so biases in that sense are not really a bad thing. In an absolute sense, yes, they're bad, but they're just reflecting this trade-off that the system has to make, by not having enough evidence, or having uncertain evidence, right, traded off with prior assumptions which are well founded on my prior experiences. And so this notion that you want to correct biases in someone, from a Bayesian perspective, is actually not the thing you want to do. That doesn't make sense, because the biased behavior, in the Bayesian sense, represents the best possible thing you can do, or could do. Now, if that is not the desired output, if the feedback says no, you could adapt: either your beliefs are wrong, or they are not specific enough, right? It could be that, generally, that's true, but you should have realized that we are not in a general setting, we are in a specific setting, and you should have used the more specific prior there. That is true, but it immediately poses another decision question, which is: how can I decide that I'm not in this context, but in that context, and therefore I have to use that prior, not this one? That's another inference problem, right? And so we have these hierarchical inference problems, which is actually one of the big questions in cognitive science at this point: how are these hierarchies interacting? And so, yeah, it's not trivial. But this notion that the bias is actually a good thing is something that is really kind of difficult to wrap your head around.
And I remember when I first presented some work about this, biases in motion, priors in motion. Some old perceptual scientist came to me after my presentation and said: look, I don't understand how this can be. How can you say this is optimal? This doesn't make sense. If I have a bias in estimating, for example, the speed of a wild animal running towards me, and that wild animal is a lion, and I underestimate the speed, that's really bad for me; I'm going to be eaten by the lion. And the answer there is really: yes, in that particular example, it's a bad thing, but on average, statistically seen, your percept is kind of the optimal percept, right? If I had known that I'm in an environment where there are a lot of lions, then I probably would have corrected my inference. Probably not in terms of the statistics, but probably in terms of the value function. I would be super cautious; my loss function would be such that if I make a mistake in underestimating the speed of a lion approaching me, that would be a huge penalty. So I would shy away from that. And that would, of course, introduce other biases, in the opposite direction.
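Alan's lion example can be sketched in code: keep the posterior belief fixed and change only the loss function. With a linear asymmetric loss, where underestimating the speed costs more than overestimating it, the optimal Bayesian estimate shifts above the posterior mean. All numbers here are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Posterior belief about the lion's speed (m/s), represented by samples.
posterior = rng.normal(10.0, 2.0, size=50_000)

def bayes_estimate(samples, cost_under, cost_over, grid):
    """Estimate minimizing expected linear asymmetric loss:
    underestimating by 1 m/s costs `cost_under`, overestimating costs `cost_over`."""
    losses = [
        np.mean(np.where(samples > a,
                         cost_under * (samples - a),
                         cost_over * (a - samples)))
        for a in grid
    ]
    return grid[int(np.argmin(losses))]

grid = np.linspace(0.0, 20.0, 401)
symmetric = bayes_estimate(posterior, 1.0, 1.0, grid)  # ~ posterior median
cautious = bayes_estimate(posterior, 9.0, 1.0, grid)   # underestimating is 9x worse
print(symmetric, cautious)
```

With equal costs, the best estimate is essentially the posterior median; making underestimation nine times costlier moves the optimal estimate up toward the posterior's upper quantiles (for this loss, the 0.9 quantile). The cautious observer systematically overestimates the lion's speed without changing any belief, only the value function.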
Yeah, exactly. And so yeah, that's definitely context-dependent, for sure. And you can also think about it for confirmation bias, for instance, right? In this case it's actually super useful: if you're in the forest, or in the savanna, and you think you've heard a lion, it's way better to optimize the function that says be super cautious and look for the lion, so try to confirm what you already think you know. Whereas if you are doing science, or trying to understand who to vote for in the next election, well, in that case the confirmation bias is really bad, because that's not what you should optimize for in those contexts. The confirmation bias is actually something that will save you in one case, and will not save you when it comes to science or politics, on the contrary. So it's not that that way of thinking is bad in itself; it's just that switching the way you're thinking should be context-dependent.
Right. But that switching is another inference problem, right? So you really have a chicken-and-egg problem — it just goes down the hierarchy, right?
Yeah, exactly. In a way, you have to be able to take enough perspective to understand that you have to change the way you're thinking — and who is doing that? That's exactly the same person who was thinking in the first place.
How do you do that, right?
Actually, you've set up a perfect segue, because you just mentioned your study of human speed perception. That's actually something I really wanted to talk with you about. So can you walk us through that, basically?
Yeah, so this of course emerged from what I said before, when I had this epiphany in my first encounter with the Bayesian framework, let's call it that way. I had been working on artificial neural networks, instantiated in physical hardware, that were doing speed or motion perception. So I had already worked on that perceptual problem. And when I started my postdoc — and my postdoc advisor worked on that problem as well — it was kind of natural to continue that work. The goal there was really: can we figure out — let's assume humans are Bayesian observers — when they perceive something moving, what kind of priors they actually use? By running some appropriate experiments, doing some modeling, fitting the model, and then extracting the priors. That was the goal. Because up to that point, the Bayesian idea for perception and cognition was around, but nobody had really taken a model and formulated it in a way that you could directly predict or model human behavior in the kind of experimental setting we typically have: you put people in a room with a screen, they look at some moving stimuli, and then they have to make some decision — is that moving faster than that one? At that decision level, nobody had really formulated or incorporated these Bayesian models, and therefore nobody was able to directly, quantitatively fit them and figure out what these priors are that people use. And I think my work was the first that did that. It took quite a while to figure out how to incorporate this Bayesian model into a larger model that you could then apply to human behavior, but yeah, we finally did it. So, should I talk a little bit about the experiment?
What's the most interesting thing about it?
Yeah, like, tell us about the experiment, and then what you learned from it. — Yeah. Okay.
So at that time, we knew that things that have a low contrast — let's say a moving grating; a grating is just a black and white striped pattern, and now imagine it's drifting, almost like a ball or a roll with some stripes on it that's rotating slowly, and you look at it and see these drifting gratings. If you reduce the contrast of that grating — make the black and white very faint — and you ask people how fast that thing is moving, then the lower you make the contrast, the lower the perceived speed that people report. That had been known. And we had some hint that, yeah, if you reduce the contrast, you make the visual signal very weak, so you have a lot of uncertainty in it. The evidence — the visual evidence — is low; the likelihood is broad, if you want, in a Bayesian sense, and therefore the prior will play a larger role. So we had this hunch that the prior should probably be a prior towards slow speeds, meaning people assume that things generally don't move, or move slowly, and only rarely move very fast. With such a prior, and with this explanation that reducing the contrast basically introduces more uncertainty in the sensory signal, that would qualitatively explain how the percept would move towards slower speeds. So that was the starting point. And so we said: okay, can we do an experiment where we systematically vary the contrast of these grating stimuli and their speed, and then collect a lot of data — a large dataset — and then basically apply this Bayesian model and reverse-engineer this prior from the data and the model? Yeah.
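A toy version of that observer model can be sketched in a few lines of Python. The exponential prior shape and the noise-versus-contrast relationship here are made-up assumptions for illustration, not the fitted quantities from the actual study:

```python
import numpy as np

# Toy Bayesian observer: a fixed prior favoring slow speeds, and a
# likelihood whose width grows as contrast drops. All numbers
# (prior scale, noise level) are illustrative assumptions.
speeds = np.linspace(0.0, 10.0, 1001)   # candidate speeds (deg/s)
prior = np.exp(-speeds / 1.5)           # "slow world" prior
prior /= prior.sum()

def perceived_speed(true_speed, contrast):
    """Posterior-mean speed estimate for a grating at a given contrast."""
    sigma = 0.5 / contrast               # low contrast -> noisier measurement
    likelihood = np.exp(-0.5 * ((speeds - true_speed) / sigma) ** 2)
    posterior = prior * likelihood       # Bayes' rule (unnormalized)
    posterior /= posterior.sum()
    return np.sum(speeds * posterior)

# Same physical speed, two contrasts: the low-contrast grating
# should be perceived as moving slower.
high = perceived_speed(4.0, contrast=1.0)
low = perceived_speed(4.0, contrast=0.2)
print(high, low)
```

Running this, the low-contrast estimate comes out well below the high-contrast one even though the physical speed is identical — the qualitative effect the experiment was designed to measure.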
Trying to uncover people's priors.
Yeah. So basically, does that mean — if I understand correctly — that, for instance, when we have less contrast, like at night, we will estimate that things are moving slower than during the day?
Yes, yes. I think there are some studies...
That's also why it's — like, it makes it even more dangerous to drive at night, for instance, or to cross the street at night.
Yeah, I mean, there are a couple of issues there, but that's certainly one of them. Other issues are that the visual system is in a different state — different photoreceptors are active when it's dark, and they're slower, and so on and so forth. But certainly, what people have shown — and it's actually pretty interesting — is that when people drive in foggy conditions, where you can't see very much even though it's not dark — there's enough light, so the visual system is in a higher-light state, but the visibility is very low, and if you see something it's very faint — people seem to drive faster. And under this account, that makes total sense, because they perceive their own speed as being lower, right? Unless they look at the speedometer and say, oh shit, I'm driving too fast. But apparently they have measured that, and it seems to be a real effect.
Yeah, that's so fascinating. That reminds me of George Carlin's famous line: did you ever notice that anybody driving slower than you is a moron, and anybody driving faster than you is a maniac?
They might have the wrong prior. Or, you know, I don't know — maybe they should wash their windshields, right?
But yeah, that makes total sense. I was actually going to ask you something related to that. So that's for when I'm static: if, for instance, I'm a pedestrian crossing the street at night, I would estimate that an approaching car is coming slower than I would judge the same car during the day. But then I was wondering: how does that change when I'm moving myself? For instance, if I'm in a car and there's a car coming my way — do the priors stay the same, or do we have different priors when we're actually moving too?
Yeah, that's a good question. Our study couldn't address that, right? We had people being stationary and looking at, you know, just drifting patterns — these weren't even objects moving; as I told you, they were like these rotating rolls you can imagine looking at. So the question of whether people have the same kind of priors when they move themselves, whether that changes anything — that's a really good one, but we haven't tested it, and it's a little hard to test. And then there's also another question — because when we tested this, the stimuli weren't associated with any particular object; it was really just drifting gratings — are these priors generalizable to moving objects? You could imagine that every moving object might have different motion characteristics, right? If I see a lion, I know the lion probably has a different motion distribution than, you know, a turtle. So if I know that, and I recognize this as a lion, it would be clever to use the specific prior for that specific object, rather than some generic overall statistical prior. We haven't gone in that direction at all, even though it's a really interesting one. Wouldn't that be great?
If anybody wants to do a PhD in neuroscience, they already have the topic.
Come join our program. You could do that.
Yeah, you've heard it, folks. That sounds fascinating — I have so many questions. I mean, I'm really impressed by the plasticity of the brain, because I'm guessing these priors are quite old in our history, but we're still able to update them — I don't know if we do update them, but I guess we must, because if you think about it, the advent of really fast motion is really, really recent, right? Almost everybody has had cars for, like, I don't know, 50 years tops, and we're still able to cross the street pretty safely. We take trains, we take planes. It's pretty fascinating to me how the brain can adapt extremely quickly to these new conditions when you compare them to the history of our evolution — if our evolution were written down over one hour, cars appeared in, like, the last millisecond, you know? So it's a fairly new thing.
Yeah. To me, I would say, in terms of the motion statistics, it might not be so different — or let's say it changed gradually, so I think the system had enough time to slowly adapt to it. You know, even before cars, people were riding horses, and horses go pretty fast, right? You observe birds flying by, and they're pretty fast too. Or you play ball — I don't know how old football, soccer, is, right? I'm just saying, even at that time, people were exposed to a broad spectrum of motion speeds and patterns. But yeah, I remember there was this fear when the train was invented, right? The railway.
There are always amazing fears around new technology. Yeah — what's the story?
Yeah, I don't know — I don't have a specific story. But, you know, there was this concern that people would die if they went on a train and experienced speeds faster than, I don't know, 50 miles per hour or something like that. Probably some medical doctors had concerns that the organism wouldn't handle it well.
I mean, sure, now it's super easy to know, but I don't know what they knew at the time about the human body and things like that. And that kind of concern definitely happens: if you're talking about space travel, when you have to do any huge acceleration — I think we cannot take more than seven G's or something like that, right? Past roughly seven G's you're going to start to have a lot of trouble, especially if you're not trained. So I can see how they could think it would be a concern. Actually — and I'm guessing this is extremely hard to know, maybe impossible — are there any theories about why we would develop such a prior? Like, why, in situations where the contrast is lower, we would get that prior that things are moving slower?
Yeah, but see, that's the beauty of this Bayesian explanation, right: the prior is the same. The prior is a stationary, fixed prior that's just reflecting, let's say, the statistics of how things move in our environment — or, more precisely, the dynamics of the visual information on our retina, because that's the input we provided in these experiments. It's fixed. So it doesn't matter if the contrast is low or high, right? It's fixed, because that's basically the statistical ground truth. And the effect that low-contrast stimuli are perceived to move slower is not because the prior changes; it's because the evidence — the sensory evidence, the eye telling us something is actually moving — gets weaker and weaker as the signal gets lower. And if you have some noise later on in the brain, that leads to less strong evidence. And what you do in a Bayesian sense, if your evidence is weak and tells you, yes, something might move at about that speed, but it could be faster or it could be slower: you call upon the prior. You multiply your prior with your evidence, and if the evidence is weak, the prior is going to dominate your posterior probability. In the worst case — the extreme case — you don't see anything moving, right? It's black, or it's uniform. Well, it could be that something is moving; you just can't see it. But your default assumption is: no, nothing moves. And that's totally explainable by having a prior for slow speeds. If the likelihood is basically uniform — it could be any motion, but I can't see it — my prior says: no. I have a peak, or my center of mass, or whatever you want to use there, at zero, and therefore I would predict that most likely nothing moves. So that's the idea.
And it's nice how this framework, with this fixed prior, can explain all these effects when you start playing with the strength of the evidence, and so on and so forth.
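That limiting case is easy to see numerically. In this sketch — the prior shape and the likelihood widths are, again, illustrative assumptions — the posterior-mean estimate slides toward the prior's mass near zero as the likelihood flattens toward uniform:

```python
import numpy as np

# As the sensory evidence flattens toward a uniform likelihood, the
# posterior collapses onto the prior, and a slow-speed prior then
# predicts "almost nothing moves". Numbers are illustrative.
speeds = np.linspace(0.0, 10.0, 1001)
prior = np.exp(-speeds / 1.5)            # illustrative slow-speed prior
prior /= prior.sum()

estimates = []
for sigma in (0.5, 2.0, 1e6):            # sharp, weak, essentially uniform evidence
    likelihood = np.exp(-0.5 * ((speeds - 4.0) / sigma) ** 2)
    posterior = prior * likelihood       # Bayes' rule: multiply prior and evidence
    posterior /= posterior.sum()
    estimates.append(np.sum(speeds * posterior))  # posterior-mean estimate

print(estimates)  # estimates fall toward the prior's mean as evidence flattens
```

The same fixed prior produces all three estimates; only the strength of the evidence changes.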
Yeah, super fascinating. And that makes me think — it's something we talked about with Pascal as well — have you done some work on how we could change people's priors?
Yeah, that's a good idea — or, a good question.
Probably a good idea in a lot of circumstances.
It's a question of, you know, over what timescale you think these priors are established. As you said before, there are some very old priors — these might even be priors you're kind of born with, maybe, or that you learn over development from the statistics of your environment. The speed prior, or the other prior we talked about — the orientation of little visual line segments, where your environment has more cardinal orientations than oblique ones — these are things that you might learn over a long time period, so it's probably going to be really hard to change them, because they're well ingrained in the system. But then, on the other hand, those are low-level perceptual priors. Of course, there are higher-level, more cognitive priors, right? Priors about — I don't have a good example — you talked about politics: priors about how certain parties will behave, about the kind of program they will have, and stuff like that. That's probably pretty rigid too, so it's probably not a good example. The stock market, maybe: there are certain classes of companies which, you know, could be a good investment, so maybe you have a prior skewed towards certain technology companies. That can change quickly if the technology falls apart or somebody new comes along. So how can you change these priors? Well, I think the easiest way to think about it is simply through experience and feedback. If I have the right prior, and I use it the Bayesian way, then my decision making will be as good as possible and my feedback will be good. I will make mistakes, but everybody is going to make mistakes.
And if I'm the one with the perfect prior, the best prior, then on average I will make the fewest mistakes compared to everyone else — and you realize that if you get feedback and compare. So you get a constant notion of how well calibrated your belief system is, given the environment. And if you want to change that, for whatever reason, you have to start changing the statistics of the environment, the statistics of the decision tasks. Before, it was good to hold these beliefs; now, in this new environment, they give you a disadvantage. If you learn and adapt, you will pick up on those new priors and incorporate them. That's how I understand learning. So: give feedback. If you want people to learn the right thing, you give truthful feedback. If you want people to learn the wrong beliefs, you also give feedback, but you lie to them — you give them wrong feedback. And that's something people try to do all the time, right?
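As a cartoon of that learning-from-feedback story — the learning rule, learning rate, and speed distributions here are all illustrative assumptions, not a model anyone has proposed on the show — an observer's prior can track the statistics of its environment and then re-adapt when those statistics change:

```python
import numpy as np

# An observer keeps a running estimate of the mean speed in its
# environment, updated from feedback (observed speeds). Halfway
# through, the environment's statistics change.
rng = np.random.default_rng(0)

prior_mean = 1.0             # initial belief: things move slowly
learning_rate = 0.05         # how strongly each observation updates the belief

history = []
for t in range(2000):
    true_mean = 1.0 if t < 1000 else 4.0   # environment speeds up at t = 1000
    observed = rng.exponential(true_mean)  # feedback: an actually observed speed
    prior_mean += learning_rate * (observed - prior_mean)  # exponential moving average
    history.append(prior_mean)

print(history[999], history[-1])  # belief tracks the old, then the new statistics
```

With truthful feedback the belief recalibrates to the new environment; feeding the same update rule biased observations would, in the same way, install a wrong prior.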
Yeah, and that's why the topic of information bubbles and things like that is even more worrisome, because that's exactly the kind of situation where people are less and less exposed to ideas they don't agree with. So they get less and less feedback, and they also become less and less able to accept feedback, because it becomes so rare to encounter ideas outside of what you think that it starts to feel like an aggression. You don't like it, so you don't want it anymore — and so, in the end, you don't get the feedback, which makes everything harder.
Yes — so you can really extrapolate this to a social context, right? Particularly nowadays, with social media and all these kinds of manipulations of social or mainstream opinions. I think what would protect people, to some degree, from these manipulation attempts — let's call them that — is if they can get their own feedback. That would be helpful. If you sit in front of your TV or your computer, and the only feedback and information you get is from the same channel, you have no way to calibrate what you've been told against reality — no way to know whether that belief system is actually correct. But you can still listen to whatever mainstream media or information you receive if you then go outside and interact with people — talk directly, yourself, to these people you've heard are bad people or good people. You get direct feedback yourself, from a source you'll probably also trust much more, and that will recalibrate your belief system. So yeah, if you want a lesson for modern society, I think it's really two things. The first is what you said: you should not shut out information that goes against your belief system, because then you will shrink your bubble even more. And the second is to get real feedback — and you can only get that by doing it yourself, going out and getting that experience firsthand, or as directly as possible.
Yeah, it's definitely a skill to develop. Extremely useful, but it's something that requires dedication and practice. So it's hard.
Not something we usually like to do. It's hard because you have to actually do something actively — you have to get up, okay, be active. I mean, that's what I try to teach my kids, right? Listen to what people say, but then always try to get your own experiences. That calibration — that's the way to do it. Get your own feedback, try it out, see what's happening.
So I'll just transition from there — I don't want to take too much of your time anyway; we've been recording for quite a while, so it's going to be time to wrap up in a few minutes. But that's actually super fascinating, and you've inspired another question in me, something I'd be super curious about: when we are born, do we come into the world without priors — with flat priors — and then develop them, or are the priors already something we have, encoded in the brain? That sounds fascinating to me. Are we born frequentists and then become Bayesians, or are we already Bayesians when we're born into the world? There's another PhD thesis to make.
Yeah, that's hard to work on, right? Because you cannot really test that experimentally, I guess — or, well, you can: people work with children, you see how they learn, and you can deduce something there. But it's a good question. Ultimately, I don't think it's so important, because clearly we cannot have priors about more abstract, high-level things, right? Babies don't know anything about social structures; they don't know anything about politics or anything like that, I assume. But at the lowest level — these priors we talked about, speed and orientation and things like that — it's hard to say, because as soon as they see things, they get a lot of data already. As soon as they open their eyes, as soon as their optics get to a level where they can actually see something, they immediately get blasted with a lot of statistical data. So I think it's not difficult to believe that, building up the priors at that point, they would pretty quickly get some decent representation of those priors. I don't think there's any need for these kinds of priors to be built in. Maybe there are some really fundamental priors — more about, I don't know — people would probably call that more behavioral.
You know, yeah — the important pieces, right?
Yeah, like with the mother and a newborn, right?
Yeah, exactly. How do you know it's your mother? There must be something; it's quite intuitive. And also — I don't know if that's pop science, but I seem to remember there are some studies that show we already hear things even before being born. I don't know how true that is; I've never looked at those studies. But if that's true, then maybe you'd already have some priors, because you already hear and have feedback from the outside world. You're