Proudly sponsored by PyMC Labs, the Bayesian Consultancy. Book a call, or get in touch!
I love Bayesian modeling. Not only because it allows me to model interesting phenomena and learn about the world I live in. But because it’s part of a broader epistemological framework that confronts me with deep questions — how do you make decisions under uncertainty? How do you communicate risk and uncertainty? What does being rational even mean?
Thankfully, Gerd Gigerenzer is there to help us navigate these fascinating topics. Gerd is the Director of the Harding Center for Risk Literacy at the University of Potsdam, Germany.
Also Director emeritus at the Max Planck Institute for Human Development, he is a former Professor of Psychology at the University of Chicago and Distinguished Visiting Professor at the School of Law of the University of Virginia.
Gerd has written numerous award-winning articles and books, including Risk Savvy, Simple Heuristics That Make Us Smart, Rationality for Mortals, and How to Stay Smart in a Smart World.
As you’ll hear, Gerd has trained U.S. federal judges, German physicians, and top managers to make better decisions under uncertainty.
But Gerd is also a banjo player, has won a medal in Judo, and loves scuba diving, skiing, and, above all, reading.
Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work at https://bababrinkman.com/ !
Thank you to my Patrons for making this episode possible!
Yusuke Saito, Avi Bryant, Ero Carrera, Giuliano Cruz, Tim Gasser, James Wade, Tradd Salvo, William Benton, James Ahloy, Robin Taylor,, Chad Scherrer, Zwelithini Tunyiswa, Bertrand Wilden, James Thompson, Stephen Oates, Gian Luca Di Tanna, Jack Wells, Matthew Maldonado, Ian Costley, Ally Salim, Larry Gill, Ian Moran, Paul Oreto, Colin Caprani, Colin Carroll, Nathaniel Burbank, Michael Osthege, Rémi Louf, Clive Edelsten, Henri Wallen, Hugo Botha, Vinh Nguyen, Marcin Elantkowski, Adam C. Smith, Will Kurt, Andrew Moskowitz, Hector Munoz, Marco Gorelli, Simon Kessell, Bradley Rode, Patrick Kelley, Rick Anderson, Casper de Bruin, Philippe Labonde, Michael Hankin, Cameron Smith, Tomáš Frýda, Ryan Wesslen, Andreas Netti, Riley King, Yoshiyuki Hamajima, Sven De Maeyer, Michael DeCrescenzo, Fergal M, Mason Yahr, Naoya Kanai, Steven Rowland, Aubrey Clayton, Jeannine Sue, Omri Har Shemesh, Scott Anthony Robson, Robert Yolken, Or Duek, Pavel Dusek, Paul Cox, Andreas Kröpelin, Raphaël R, Nicolas Rode, Gabriel Stechschulte, Arkady, Kurt TeKolste, Gergely Juhasz, Marcus Nölke, Maggi Mackintosh, Grant Pezzolesi, Avram Aelony, Joshua Meehl, Javier Sabio, Kristian Higgins, Alex Jones, Gregorio Aguilar, Matt Rosinski, Bart Trudeau and Luis Fonseca.
Visit https://www.patreon.com/learnbayesstats to unlock exclusive Bayesian swag 😉
Links from the show:
- Visit https://www.patreon.com/learnbayesstats to unlock exclusive Bayesian swag 😉
- Gerd’s website: https://www.mpib-berlin.mpg.de/staff/gerd-gigerenzer
- Do children have Bayesian intuitions: https://psycnet.apa.org/doiLanding?doi=10.1037%2Fxge0000979
- What are natural frequencies: https://www.bmj.com/content/343/bmj.d6386
- HIV screening: helping clinicians make sense of test results to patients: https://www.bmj.com/content/347/bmj.f5151
- Teaching Bayesian Reasoning in Less Than Two Hours: https://www.apa.org/pubs/journals/releases/xge-1303380.pdf
- How to Stay Smart in a Smart World – Why Human Intelligence Still Beats Algorithms: https://www.amazon.com/How-Stay-Smart-World-Intelligence/dp/0262046954
- Gut Feelings – The Intelligence of the Unconscious: https://www.amazon.com/Gut-Feelings-Intelligence-Gerd-Gigerenzer/dp/0143113763
- Better Doctors, Better Patients, Better Decisions: https://www.amazon.com/Better-Doctors-Patients-Decisions-Envisioning/dp/026251852X
- LBS #50, Ta(l)king Risks & Embracing Uncertainty, with David Spiegelhalter: https://learnbayesstats.com/episode/50-talking-risks-embracing-uncertainty-david-spiegelhalter/
- LBS #87, Unlocking the Power of Bayesian Causal Inference, with Ben Vincent: https://learnbayesstats.com/episode/87-unlocking-the-power-of-bayesian-causal-inference-ben-vincent/
- As a bonus, Gerd playing the banjo: https://www.youtube.com/watch?v=qBllveuj8RI
Abstract
In this episode, we have none other than Gerd Gigerenzer on the show, an expert in decision making, rationality, and communicating risk and probabilities.
Gerd is a trained psychologist and worked at a number of distinguished institutes like the Max Planck Institute for Human Development in Berlin or the University of Chicago. He is director of the Harding Center for Risk Literacy in Potsdam.
One of his many topics of study is heuristics, a term often misunderstood, as he explains. We talk about the role of heuristics in a world of uncertainty, how they interact with analysis, and how they relate to intuition.
Another major topic of his work, and of this episode, is natural frequencies: a more natural way than conditional probabilities to express information such as the probability of having cancer after a positive screening.
Gerd studied the usefulness of natural frequencies in practice and contributed to them being taught in high school in Bavaria, Germany, as an important tool to navigate the real world.
In general, Gerd is passionate about not only researching these topics but also seeing them applied outside of academia. He has taught thousands of medical doctors how to understand and communicate statistics and has also worked on a number of economic decision-making scenarios.
In the end we discuss the benefits of simpler models for complex, uncertain situations, as for example in the case of predicting flu seasons.
Transcript
This is an automatic transcript and may therefore contain errors. Please get in touch if you’re willing to correct them.
Gerd Gigerenzer, welcome to Learning Bayesian Statistics.
2
I'm glad to be here.
3
Yeah, thanks a lot for taking the time.
4
I am very happy to have you on the show.
5
A few patrons have asked for your episode,
so I'm glad to have you here today.
6
And thank you very much to all of you in
the Slack, in the LBS Slack who
7
recommended Gert for an episode on the
show.
8
And yeah, I have a lot of questions for
you because you've done a lot of things.
9
You have a lot of, there is a lot of
questions I want to ask you on a lot of
10
different topics, but first, as usual,
let's start with your origin story.
11
Gerd, and basically, how did you come to the study of rationality and
12
decision-making under uncertainty?
13
Now, I have been observing myself, how I
make decisions.
14
For instance, in an earlier career, I was
a musician playing dixieland, jazz, and
15
other things.
16
And when I did my PhD work, I had to make
a decision.
17
Did I want to continue a career on stage as a musician, or try an academic
18
career?
19
Mm-hmm.
20
And for me, music was the safe option,
because I knew, and also I earned much
21
more money than an assistant professor.
22
And an academic career, I couldn't know
whether I could make it, whether I would
23
ever become a professor, but it was the
risky option.
24
So this is, if you want, an initial story: I decided then to take the uncertainty and
25
risk.
26
That makes sense.
27
And so that was like pretty early in your
career, or is that something that came
28
later on when you already had started
studying other things, or you started
29
doing that as soon as you started your
undergrad studies?
30
What came later was that I learned about
theories about decision making, and some
31
of them I found very unrealistic and
strange, and about topics that were not
32
really the topics where I thought are
important, like which job do you take,
33
what do you do with the rest of your life,
but were about monetary gambles: do you
34
want a hundred dollars for sure, or two hundred with a probability of 0.4
35
or 0.6?
36
And I also spent an important year of my
life at the Center for Interdisciplinary
37
Research in Bielefeld on a group called
the Probabilistic Revolution.
38
That's an international and
interdisciplinary group that investigated
39
how science changed from a deterministic
worldview to a probabilistic one.
40
And I learned so much.
41
I was one of the young guys in this group.
42
There were people like Thomas Kuhn, Ian
Hacking, Nancy Cartwright.
43
And that also taught me something.
44
It's important not to read in your own
discipline and do what the others do.
45
But to fall in love with a topic like decision making and uncertainty in the
46
real world.
47
And then read everything
48
that people have written about that.
49
And that means from areas like biology,
animal behavior, to economics, to
50
sociology, to the history of science.
51
Yeah, that was something really
interesting when preparing the episode
52
with you to see the whole arc of your
career being basically around these topics
53
that you've studied really a lot and
in-depth.
54
So that was really super interesting to
notice.
55
And so something I'm wondering is, if you
remember...
56
how you first got introduced to Bayesian
methods.
57
Now, for instance, I read Fisher's book,
Statistical Methods and Scientific Inference, where he praised
58
Mm-hmm.
59
Thomas Bayes for having the insight not to publish his paper.
60
Because, according to Fisher, that's not
what you need in science.
61
And I got very much interested in the
fights between statisticians, in something
62
that could be called insult and injury.
63
And Fisher, for instance, in the same
book, he destroys Carl Pearson, his
64
successor, saying
65
the terrible weakness of his mathematical
and scientific work flowed from his
66
incapacity of self-criticism.
67
So if you want to get anyone interested in
statistics, then start with the
68
controversies.
69
That's my advice.
70
And the pity is that in the textbooks, in
psychology certainly,
71
All the controversies have been
eliminated, one doesn't mention them, and
72
talks as if there would be only one kind
of statistics.
73
So that could be Fisher's null hypothesis
testing, which has been turned into a very
74
strange ritual that Fisher never would accept,
or on the other side there are also
75
Bayesians who think it's the only tool in
the toolbox.
76
And neither of these attitudes is realistic; it's more religious.
77
There is a statistical toolbox.
78
And there are different instruments and
you need to look at the problem to choose
79
the right one.
80
And also within Bayes, there are so many different kinds of Bayesianism.
81
There's not one.
82
64,000.
83
It's a lot.
84
Yeah, so, okay, that makes it clear.
85
And that helps me also understand your
work because, yeah, something I saw is in
86
your work, you often emphasize the role of
heuristics in decision-making.
87
So I'm curious if you could explain how
Bayesian thinking and heuristics intersect
88
and...
89
how do these approaches complement each
other in navigating uncertainty?
90
First, the term heuristic is often
misunderstood.
91
I mean the term in the sense that Herbert
Simon used it to make a computer program
92
smart, or the Gestalt psychologist used
it, or Einstein used it in the title of
93
his Nobel Prize winning paper of 1905.
94
I don't use it in the sense that it has
been very popular in psychology and other
95
fields.
96
as heuristics and biases.
97
That's a clear misunderstanding.
98
So to make it very short, in a world that
Jimmy Savage, who is often called the
99
father of Bayesian statistics, called a
small world where the entire state space
100
is known and nothing else can happen.
101
In that world,
102
This is the ideal world for Bayesianism
and also for most of statistics.
103
In a world where you do not know the state
space that the economist Frank Knight
104
called uncertainty, or as I have called
true uncertainty or radical uncertainty,
105
you can't optimize by definition.
106
You cannot find the best solution.
107
And here...
108
People and other animals, just like
managers and scientists, use heuristics.
109
So a heuristic is a rule that helps you,
under uncertainty, to find a good
110
solution.
111
For instance, Pólya, the mathematician, distinguished between analysis and
112
heuristics.
113
You need heuristics to find a proof and
you need analysis to check.
114
whether it was right.
115
Most important, heuristics and analysis
are not opposites, as it's now become very
116
popular in system one and system two
theories.
117
They're not opposites.
118
They go together.
119
And for instance, a study of 17 Nobel laureates reported that almost all of them
120
attributed their
121
success to going back and forth between heuristics/intuition and analysis.
122
So that's an important thing.
123
It's not binary opposites.
124
So your question, where does Bayes meet
heuristics?
125
Now, of course, for instance, in the
determination of the prior probability
126
distribution: a uniform prior.
127
That's also known as one over N.
128
So you divide, for instance, your assets
equally over the funds or the stocks that
129
you have.
130
It's a reasonable assumption when you know
little.
131
And just as one over N is reasonable in some situations, it's not always.
132
And the real challenge is to find out in
what situation does a certain heuristic or
133
does Bayes work, and where does it not
work.
134
That's what I call the study of ecological
rationality.
135
So in short, there's no single tool that's
always the best.
136
We need to face...
137
The difficult question, can we identify
the structure of environments where a
138
simple heuristic like equal distribution
or imitate others works and where does it
139
mislead?
140
Hehehe
141
Yeah, yeah, this is really interesting
because something also I'm always like, I
142
always try to reconcile and actually you
talk about it in your book, Gut Feelings,
143
The Intelligence of the Unconscious.
144
And you talk also about intuitions and how
they can sometimes outperform more complex
145
analytical processes.
146
And this is a claim that you can see in a
lot of fields, right?
147
From, I don't know, politics to medicine
to sports, when basically people don't
148
really want the analytical process to be
taken too seriously because maybe it
149
doesn't go, it doesn't confirm their...
150
Yeah.
151
their previous analysis or their own bias.
152
So what I'm wondering is how do Bayesian
methods in your research, how do Bayesian
153
methods accommodate the role of intuitive
judgment and how can individuals strike a
154
balance between intuitive thinking and the
systematic updating of beliefs that we use
155
under Bayesian reasoning?
156
So let me first define what I mean by
intuition.
157
So intuition is a kind of unconscious
intelligence that is based on years of
158
experience with a topic where one feels
quickly what one should do, what one
159
should not do, but one cannot explain it.
160
So when a doctor sees a patient and the
doctor may feel something is wrong with
161
that patient but cannot explain it, that's
an intuition based on years of experience.
162
And then the doctor will go on and do
tests and analysis in order to find out
163
what's wrong if there's something.
164
So remember, intuition and analysis
165
always go together.
166
It's a big error what we have today in
so-called dual processing theories, where
167
they're presented as opposites.
168
And then usually one side is always right,
like analysis and intuition is blamed, and
169
heuristics are blamed if things go wrong.
170
I see.
171
Yeah.
172
And so how does that then integrate into
the Bayesian framework according to you?
173
Like in the systematic analysis of beliefs
that we have in the Bayesian framework.
174
So applications of Bayes use heuristics
such as 1 over n, so equal distribution,
175
equal priors.
176
And they also use, more silently, an independence assumption and such things.
177
But I would not phrase the problem as how
to integrate heuristics in the Bayesian
178
framework.
179
I would also not say...
180
how to integrate Bayes in the heuristics
framework.
181
I think of both, so there are many
Bayesian methods and also other
182
statistical methods, the old optimizing
methods, and there are heuristic methods
183
which are non-optimizing methods.
184
I think of them as part of an adaptive
toolbox that humans have, that they can
185
use, and the real art is the choice of the
right
186
tool.
187
So when should I use Bayes, and what kind of Bayes, or when should I use a heuristic, a
188
social heuristic, for instance do what
Alex tells me to do or for instance simple
189
heuristics like take-the-best, which just go lexicographically through reasons and
190
stop with the first one that allows you to make a decision.
191
And that's the question of ecological
rationality.
192
I see.
193
And do you have, yeah, do you have
examples?
194
Bayes' rule is a rule that is reasonable
to apply in situations where the world is
195
stable, where no unexpected things happen,
where you have good estimates for the
196
priors and also good estimates for the
likelihoods.
197
For instance, mammography screening is a
case.
198
So...
199
We know that the, or we can expect that
the results of mammography screening won't
200
change very much.
201
We have to take into account that the base
rates differ from country to country or
202
from group to group.
203
But besides that, it is a good framework
to understand what is the probability that
204
a person has breast cancer.
205
if she tests positive.
206
Mm-hmm.
207
But that's a good situation.
208
But if you have something which is highly
volatile, like, okay, I worked with the
209
Bank of England on a method for
regulation, for banking regulation, and
210
that world is highly volatile, and you're
not getting very far with standard
211
statistical methods.
212
But you may evaluate whether a bank is in
trouble
213
by something that we call a fast and
frugal tree that only looks at maybe three
214
or four important variables and doesn't
combine them in the way that Bayes or linear
215
models do, but lexicographically.
216
Why?
217
Because, so if you first look, for
instance, think about medical diagnosis.
218
If your heart fails, a good kidney cannot
compensate that.
219
And this is the idea of lexicographic
models.
220
And a number of heuristics are
lexicographic, as opposed to compensatory
221
models like Bayes or linear regressions.
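For readers who want to see this concretely, here is a minimal sketch of a fast-and-frugal tree in Python. The cues and thresholds are hypothetical, chosen purely for illustration — this is not the model Gerd built with the Bank of England.

```python
# A minimal sketch of a fast-and-frugal tree with hypothetical cues and
# thresholds (NOT the actual Bank of England model). Each cue is checked
# in order; the first one that triggers an exit gives the decision, so
# cues are never traded off against each other (lexicographic, non-compensatory).

def bank_vulnerable(leverage_ratio, liquidity_ratio, loan_growth):
    """Classify a bank as 'red flag' or 'green flag'."""
    if leverage_ratio < 0.04:      # thin capital buffer -> exit immediately
        return "red flag"
    if liquidity_ratio < 0.10:     # only checked if the first cue passed
        return "red flag"
    if loan_growth > 0.25:         # rapid expansion as a last check
        return "red flag"
    return "green flag"

print(bank_vulnerable(leverage_ratio=0.03, liquidity_ratio=0.20, loan_growth=0.10))
# -> 'red flag': a good liquidity ratio cannot compensate for thin capital,
# just as a good kidney cannot compensate for a failing heart.
```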
222
Oh, I see, okay.
223
Yeah, continue.
224
Yeah, I have myself trained about a
thousand doctors in understanding and
225
doing Bayesian diagnosis and Bayesian
thinking.
226
And you should realize that most doctors
and also most gynecologists would not be
227
able to answer the question I posed
before.
228
What is the...
229
probability that a woman has breast cancer
in screening when the mammogram is
230
positive.
231
And if I give them the numbers in
conditional probabilities, they're equally
232
lost.
233
Alex, I do a test with you.
234
Are you ready?
235
So the point will be, I give you the
information in, as usual, in conditional
236
probabilities.
237
And I hope you will be confused.
238
And also to the listeners.
239
And then I give you the same.
240
information in what we call natural
frequencies.
241
And then insight will come.
242
Ready?
243
Okay.
244
So assume you conduct a mammography
screening.
245
What you know is that among the group of
women who participate, there is a one
246
percent chance that a woman has breast
cancer undetected.
247
You also know that the probability that a
woman tests positive if she
248
has breast cancer is 90%.
249
And you know that the probability that
a woman tests positive if she does not
250
have breast cancer is 9%.
251
Okay?
252
You have a base rate of 1%, a sensitivity
or hit rate of 90%, and a false alarm rate
253
of 9%.
254
Now a woman in that group just tested
positive and you know nothing
255
about her because it's a screening. She asks you: doctor, tell me, do I now have breast
256
cancer?
257
Or how certain is it?
258
99%, 90, 50, please tell me.
259
What do you say?
260
If there is now fog in your mind, that's
the typical situation of most doctors.
261
Mm-hmm.
262
And there have been conclusions made in
psychological research that the human mind
263
has not evolved to think statistically, or
here, the Bayesian way.
264
Now the problem is not in the mind, the
problem is in the representation of the
265
information.
266
Conditional probabilities are something
quite new.
267
And few of us have been trained in it.
268
Now how did humans...
269
before Thomas Bayes,
270
Mm-hmm.
271
or animals do Bayesian reasoning? Not with conditional probabilities, but with what we
272
call natural frequencies.
273
That is, I give you first a demonstration,
then explain what it is.
274
Okay, we use the same situation.
275
You do the mammography screening and
translate the probabilities into concrete
276
frequencies.
277
Okay?
278
Think about a hundred women.
279
We expected one of them has breast cancer
and she likely tests positive.
280
That's the 90%.
281
Among the 99 who do not have breast
cancer, we expected another 9 will
282
nevertheless test positive.
283
So we have a total of 10 who test
positive.
284
Question, how many of them do actually
have cancer?
285
It's one out of 10.
286
So a woman who tests positive in screening
most likely does not have cancer.
287
That's good news.
288
So that's natural frequencies and you
basically see through.
289
And natural frequencies, we call them
because they're not relative frequencies.
290
They're not normalized.
291
You start with a group like 100 and you
just break it down.
292
And then the computation becomes very
simple.
293
just imagine Bayes rule for this problem.
294
And then with natural frequencies, the representation does the computation.
295
It's just one out of the total number of
positives, 10.
296
That's all.
297
And once doctors have learned that and
tried with a few problems, they can
298
generalize it and use the method for other
problems.
299
And then we can avoid.
300
the errors that are currently still in
place and also doctors can better
301
understand what tests like HIV tests or
pregnancy tests actually mean.
302
And the interesting theoretical point is,
as Herbert Simon said, the solution to the
303
problem is in its representation.
304
And he took that from the Gestalt psychologists.
305
Yeah, this is really interesting.
306
I really love the...
307
And in a way that's quite simple, right,
to just turn to natural frequencies.
308
So I really love that because it gives a
simple solution to a problem that is
309
indeed quite pronounced, right?
310
Where it's just like when you're...
311
Even if you're trained in statistics, you
have to make the conscious effort of not
312
falling into the fallacy of...
313
thinking, well, if the woman has a
positive test and the test has a 99% hit
314
rate, she's got a 99% probability of
having breast cancer.
315
I have one part of my brain which knows
that completely because I deal with
316
statistics all the time, but there is
still the intuitive part of my brain,
317
which is like, wait, why should I even
wonder if that's the true answer?
318
So I like the fact that natural
frequencies
319
are kind of an elegant and simple solution to
that issue.
320
And so I will put in the show notes your
paper about natural frequencies and also
321
the one you've written about HIV screening
and how that relates to natural
322
frequencies.
323
So that's in the show notes for listeners.
324
And I'm also curious, basically
concretely,
325
how you did that with the professionals
you've collaborated with.
326
Because your work has involved
collaborating with professionals from
327
various domains.
328
That means physicians, that means judges.
329
I'm curious how you have applied these
principles of risk communication in
330
practice with these professionals and what
challenges.
331
and what successes have emerged from these
applications.
332
Yeah, so I have always tried to connect my
theoretical work with practical work.
333
So in that case of the doctors, I have
been teaching continuing medical education
334
for doctors.
335
So the courses that I give, they are
certified and the doctors get points for
336
that.
337
and it may be a group of 150 or so doctors
who are assembled for a day or two days of
338
continuing medical education, and I may do
two hours with them.
339
And that has been for me a quite
satisfying experience because the doctors
340
are grateful because they have muddled
through these things for their lives.
341
And now they realize there's a simple
solution.
342
They can learn within a half an hour or
so.
343
And then it sticks for the rest of their
lives.
344
I've also trained in the US, so I have
lived many years in the US and taught as a
345
professor at the University of Chicago.
346
And I have trained together with a program
from George Mason University, US Federal
347
Judges.
348
These are very smart people and I enjoyed
that.
349
So these trainings were...
350
and in illustrious places like Santa Fe.
351
And the judges were included, and their partners also.
352
And there was also a series of things like
about how to understand fibers.
353
And I was teaching them how to understand
risks and decision making and heuristics.
354
And...
355
If you think that federal judges who are
among the best ones in the US would
356
understand Bayes' rule, good luck.
357
No, there may be a few, most not.
358
And actually, by the way, Bayes' rule is
forbidden in UK law.
359
interesting.
360
And so, but going back, these are examples
of training that every psychologist could
361
do.
362
But you have to leave your lab and go
outside and talk to doctors and have
363
something to offer them for teaching.
364
By now, the term natural frequencies is a
standard term in evidence-based medicine.
365
And I'm very...
366
proud about that.
367
And there's also a review, a Cochrane review that has looked at various
368
representations and found that natural
frequencies are among the most powerful
369
ones.
370
And we have with some of our own students
who were more interested in children than
371
in doctors, we posed ourselves the question: can we teach children,
372
and how early.
373
And one of the papers I sent you, it's a
paper in the Journal of Experimental
374
Psychology General, I think two years ago,
has for the first time tested fourth
375
graders, fifth graders, sixth graders, and
second graders.
376
So when we did this with the teachers,
they were saying, and they were looking at
377
the problems,
378
They were saying, no, that's much too
difficult.
379
The children will not be able to do that.
380
They haven't even had fractions.
381
But you don't need fractions.
382
And for instance, we use problems that are more childlike.
383
So here we put that type of problems.
384
And when they are in natural frequencies,
385
And the numbers are two-digit numbers.
386
You can't do larger numbers with fourth
graders.
387
Then the majority of the fourth graders
got the exact Bayesian answer.
388
Of course, with conditional probabilities, they would be totally lost.
389
And also we have found that some, maybe
20% of the second graders find the
390
Bayesian answer.
391
The title of the paper is "Do Children Have Bayesian Intuitions?"
392
Yeah, it's in the show notes.
393
And again, it's in the representation.
394
It's a general message in mathematics that representations of numbers matter.
395
And if you don't believe it, just think
about doing a calculation or Bayes' rule
396
with Roman numerals.
397
Good luck.
398
And that's well known in mathematics.
399
For instance, the physicist...
400
Feynman has made the point that mathematically equivalent forms of a formula,
401
despite being mathematically equivalent, are not psychologically
402
the same.
403
Because, as I said, you can see new
directions, new guesses, new theories.
404
In psychology, that is not always
realized.
405
And what Feynman, Richard Feynman was
talking about would be called framing in
406
psychology.
407
And by many of my colleagues, it's
considered an error to pay attention to
408
framing.
409
It's not.
410
It's an enabler for intelligent
decision-making.
411
Yeah, this is fascinating.
412
I really love that.
413
And I really recommend the paper that you're talking about:
414
Do children have Bayesian intuitions?
415
Because first, I really love the
experiment.
416
I found that super, super interesting to
watch that.
417
And also, yeah, as you were saying,
418
in a way, the conclusion that we can draw
from that and basically how this could be
419
integrated into how statistics education
is done, I think is extremely important.
420
And actually, yeah, I wanted to ask you
about that.
421
Basically, if you, what would be the main
thing you would change in the way
422
statistical education is done?
423
Well, so you're mainly based in Germany,
so I would ask in Germany, maybe just in
424
general in Europe, since our countries are
pretty close on a lot of metrics.
425
So I guess what you're saying for Germany
could also be applied for a lot of other
426
European countries.
427
it's actually starting to change.
428
So some of my former post-docs are now
professors, and some are in education.
429
And for instance, they have done
experiments in schools in Bavaria, where
430
the textbooks, in the 11th class, have Bayes' rule.
431
And they show trees, but with relative
frequencies.
432
not natural frequencies.
433
And I've run a study which basically
showed that when pupils learn in these
434
textbooks Bayes' rule with relative
frequencies or conditional probabilities,
435
and you test them later,
436
90% can't do it anymore.
437
They've done something like rote learning.
438
Never understood it.
439
And then, in class, teachers taught the
students natural frequencies they had
440
never learned before.
441
And then 90% could do it.
442
Something they had never heard of.
444
so my former students convinced the
Bavarian government with this study.
445
And now natural frequencies and thus
understandable Bayes is part of the math
446
curriculum in Bavaria.
447
So that's a very concrete example where
one can help young persons to understand.
448
And when they will be older and will be
doctors or have another profession where
449
they need base, they will not be so
blocked and have to muddle through and not
450
understand.
451
And if they are patients, then they know
what to ask and how to find out what a
452
positive HIV screening test really means
or a positive COVID test and what
453
information one needs for that.
454
So I think that statistical literacy is
one of the most important topics that
455
should be taught in school.
456
We still have an emphasis on the mathematics of certainty.
457
So algebra, geometry, trigonometry,
beautiful systems.
458
But what's most important for everyone in
later life is not geometry, it's
459
statistical thinking.
460
I mean in practical life.
461
And we are failing to do that.
462
The result is that...
463
If you test people, including medical
professionals, or we have tested
464
professional lawyers, with problems that
require Bayesian thinking, most are lost.
465
And the level of statistical thinking
is...
466
is often so low that you really can't
imagine it.
467
Here's an example.
468
Two years ago, the Royal Statistical
Society of London asked members of
469
parliament whether they would be willing
to do a simple statistical test.
470
And about 100 agreed.
471
The first question was, if you throw a
fair coin twice, what's the chance that it
472
will land twice on head?
473
Now, if you think that every member of
parliament understands that there are four
474
possibilities and two heads or two...
475
So two heads, that's one in four?
476
No.
477
About half understood and the others not.
478
And the most common wrong guess was that it's still a
half.
479
It's just an illustration of the level of
statistical thinking in our society.
480
And I don't think that if we tested German
politicians, we would do much better.
481
And that's a, you might say, yeah, who
cares about coins?
482
But look, there was COVID with all these
probabilities.
483
There is investment.
484
There are taxes.
485
There are tons of numbers that need to be
understood.
486
And if you have politicians that don't
even understand the most basic things,
487
what can we expect?
488
No, for sure.
489
I completely agree.
490
And these are topics we already tackled in
these podcasts, especially in episode 50,
491
where I had David Spiegelhalter here on
the podcast.
492
And we talked about these topics of
communication of uncertainty and all these
493
very interesting topics, especially
education and how
494
how to include all that in the education.
495
So that these are very interesting and
important topics and I encourage people to
496
listen to that episode, number 50 with
David Spiegelhalter.
497
I will put it in the show notes.
498
Yeah.
499
I may add here that David and I have been
working together for many years.
500
And he has been running the Winton Centre for Risk and Evidence
501
Communication in Cambridge.
502
And I'm still directing the Harding Center
for Risk Literacy.
503
And both centers were funded by the same
person, David Harding, a London Investment
504
Banker, who had insight that there's a
problem.
505
But the rest of philanthropists don't
really seem to realize that it would be
506
important to fund these centers.
507
The Winton Centre is now closed down,
508
which is a great pity.
509
And yeah.
510
So there's very little funding for that.
511
So there's funding for research.
512
So when I do the studies like this,
children, there's lots of funding for
513
that.
514
But the moment you apply what you learn
into the real world to help the society,
515
funding stops.
516
Except for...
517
Philanthropists like David Harding.
518
Mm-hmm.
519
Any idea why that would be the case?
520
The research agencies, I think, have not realized that
521
science is more than having publications,
522
but that much of the science that we have
is actually useful.
523
That's being realized if it's about engineering and about patents, yes,
524
but that there are similarly positive tools, like natural frequencies, that help people
525
to understand their world, and that you
can teach them, and then you need a few
526
guys who just go out and teach doctors,
lawyers or school children.
527
That is not really in the mind of
politicians.
528
Yeah, which is, which clearly is a shame,
right?
529
Because you can see how important
probabilistic thinking is in a lot of, in
530
a lot of fields.
531
And, and, and especially in politics,
right?
532
Even electoral forecasting, which is
something I've done a lot.
533
Probabilistic thinking is absolutely,
absolutely of utmost importance.
534
And yet, it's not there yet.
535
and not a lot of interest in developing
this, at least in France, which is where I
536
have done these experiments.
537
That's always been puzzling to me,
actually.
538
And even in sports, one of the recent
episodes I've done about soccer analytics
539
with Maximilian Goebel, well,
540
That was also an interesting conversation
about the fact that basically the methods
541
are there to use the data more
efficiently, but a lot of European
542
football clubs don't really use them for
some reason, which for me is still a
543
mystery because that would help them make
better use of their finite resources and
544
also be more competitive.
545
So.
546
Yeah, that's definitely something I'm
passionate to understand.
547
So yeah, thanks a lot for doing all that
work.
548
I'm here to try and help us understand all
that.
549
everyone can help here.
550
And for instance, most people are with the
doctors at some point, like COVID-19 or
551
HIV tests or cancer screening.
552
And everyone could ask the doctor, what's
the probability that I actually have the
553
disease?
554
or the virus, if it is positive.
555
And then you likely will learn that your
doctor doesn't know that.
556
Or excuse.
557
Then you can help your doctor understand
that.
558
And bring a natural frequency tree and
show them.
559
I've done this not with many doctors, but quite a few.
560
Well, as I said, I'm training doctors.
561
I've trained more than 1,000; my own researchers from the Harding Center have
562
trained more than 5,000 extra.
563
And the last time I was with my home
physician, I spent maybe 50 minutes with
564
him.
565
and 40 minutes explaining to him where on the internet he can find reliable
566
information.
567
The problem is not in the doctor's mind,
the problem is in the education, at the
568
medical departments, where doctors learn
lots of things, but one thing they do not
569
learn, statistical thinking.
570
Mm-hmm.
571
Yeah.
572
with very few exceptions.
573
And I'm curious, did you do some follow-up
studies on some cohorts of those doctors
574
where you basically taught them those
tools, it seemed to work in the moment
575
when they applied it, and then I'm curious
basically of the retention rate of these
576
methods, basically is it something like,
oh yeah, when you force them in a way to
577
use them, yeah, they see it's useful,
that's good.
578
But then when you go away, they just don't
use them anymore.
579
And they just refer to the previous way
they were doing things, which is of
580
course, suboptimal.
581
So yeah, I'm curious how that...
582
In continuing medical education, I have about
90 minutes and I teach them many things,
583
not just natural frequencies.
584
And when I teach them natural frequencies,
somewhere in the beginning, and I test
585
them towards the end.
586
So that's, yeah, a short time, a little
bit more than an hour.
587
There is no way for me to find these
doctors again.
588
But we have done follow-up studies up to
three months with students and teaching
589
them how to translate conditional
probabilities into natural frequencies.
590
And the interesting thing is that the
performance, which is after the training,
591
around 90%, that means 90% of all tasks,
they get exactly right.
592
After several months it stays at the same
level.
593
Whereas in the control group where they
are taught conditional probability,
594
exactly your problem is there.
595
So they learn it not as well as natural
frequencies, but then a few days later it
596
goes away and after three months they are
basically back to where they started.
597
Yeah.
598
Some representations do not stick in the
minds.
599
And frequency representations do, if they
are not relative frequencies.
600
Yeah, this is definitely super
interesting.
601
So basically to make it stick more, the
idea would be definitely use more natural
602
frequencies.
603
Is that what you were saying?
604
Yes, and of course it doesn't hurt if you
continue thinking this way and do some
605
exercise.
606
Hmm, yeah.
607
Yeah, yeah.
608
I see.
609
And something I'm also curious about and
that a lot of, a lot of beginners ask me a
610
lot is what about priors, right?
611
So I'm curious in your job, how did you
handle priors and the challenges regarding
612
confirmation bias, persistence of...
613
persistence of incorrect beliefs.
614
So in a more general way, what I'm asking
is, how can individuals, particularly
615
decision makers in fields like law or
medicine that you know very well, avoid
616
the pitfalls associated with biased prior
beliefs and harness the power of
617
Bayesian reasoning?
618
Yeah, so in the medical domain,
particularly in diagnostics, the priors
619
are usually from, they're usually
frequencies and they are estimated by
620
studies.
621
There's always the possibility that a
doctor might adjust the frequency base
622
rate a bit because he or she has some kind
of belief that
623
this patient may not be exactly from that
group.
624
But again, there's huge uncertainty about
priors.
625
And also, one should not forget, there's
also uncertainty about likelihoods.
626
Often in Bayesian statistics, the discussion centers on priors.
627
How do you know the likelihoods?
628
So for instance, take the mammography
problem again, the probability that you
629
test positive if you don't have cancer, which in the example I gave is 9%, which
630
is roughly correct, but it varies.
631
It depends on the age of the woman.
632
It depends on quite a number of factors.
633
And one should not forget that
634
Also the likelihoods have to have some
kind of subjective element and judgment.
635
And then there's a third more general
assumption, namely the assumption that all
636
these terms, the likelihoods and the base
rates, which are from somewhere, maybe a
637
study in Boston, would actually apply to a
study in Berlin.
638
Mm-hmm.
639
And I can name you a few more assumptions.
640
For instance, that the world would be
stable, that nothing has happened.
641
There's no different kind of cancer that
has different statistics.
642
So one always has to assume a stable world
to do Bayes.
643
And one should be aware that it might not
be.
644
And that's why I use the term statistical
thinking.
645
Because you need to think about the
assumptions all the time and about the
646
uncertainty in the assumptions.
647
And also realize that often, particularly
if you have more complex problems, not
648
just one test, but many, and many other
variables, you might get into situations
649
where Bayes slowly gets intractable.
650
Mm-hmm.
651
You might think about using a different
representation, like what we call a fast
652
and frugal tree, that's a simple way.
653
It's just like think about a natural
frequency tree, but it is an incomplete
654
one, where you basically focus on the
important parts of the information and
655
don't even try to estimate the rest in
order to avoid estimation error.
656
And that's the key logic of heuristics.
657
Under uncertainty, the big danger is that
you overfit.
658
You overfit the data.
659
You are wrongly assuming that the future
is like the past.
660
And in order to avoid overfitting, as the
bias-variance dilemma shows in more
661
detail, one needs to make things more
simple.
662
Maybe not too simple, but more simple.
663
and trying to estimate all conditional
probabilities may give you a great fit,
664
but not good predictions.
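As a generic illustration of that fit-versus-prediction point — not Gerd's own example — here is a small Python sketch: a model with many free parameters fits a small sample almost perfectly but typically predicts new data worse than a simpler one.

```python
# Overfitting in miniature: a flexible polynomial versus a straight line
# on a few noisy points drawn from a truly linear relationship.
import numpy as np

rng = np.random.default_rng(2)
x_train = np.sort(rng.uniform(0, 1, 12))
x_test = np.sort(rng.uniform(0, 1, 200))
f = lambda x: 1.0 + 2.0 * x                       # the true, simple relationship
y_train = f(x_train) + rng.normal(0, 0.3, x_train.size)
y_test = f(x_test) + rng.normal(0, 0.3, x_test.size)

for degree in (1, 7):                             # simple line vs. many parameters
    coeffs = np.polyfit(x_train, y_train, degree)
    fit_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    pred_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: fit error {fit_err:.3f}, prediction error {pred_err:.3f}")
# The degree-7 fit error is tiny, but its prediction error is typically much
# larger than the straight line's: estimation variance dominates.
```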
665
Yeah, so thanks a lot for this perfect
segue to my next question, because this is
666
a recurring theme in your work and in your
research, simplicity.
667
You often emphasize simplicity in
decision-making strategies.
668
And so that was something I was wondering
about, because, well, I, of course, love
669
Bayesian methods.
670
They are extremely powerful.
671
They are, most of the time,
672
really intuitive to interpret, especially
the model parameters.
673
But they are complex sometimes.
674
And they appear even more complex than
they are to people who are unfamiliar with
675
them, precisely because they are
unfamiliar with them.
676
So anything you're unfamiliar with seems
extremely complex.
677
So
678
I'm wondering how we can bridge the gap
between the complexity of Bayesian
679
statistics, whether real or fantasized,
and the need for simplicity in practical
680
decision-making tools, as you were talking
about, especially for professionals and
681
the general public, because these are the
audiences we're talking about here.
682
Now there are two ways.
683
One is you stay within the Bayesian
framework and for instance avoid
684
estimating conditional probabilities.
685
And that would be what's called naive
Bayes.
686
And naive Bayes can be amazingly good.
687
It also has the advantage that it is much easier to understand than regular
688
Bayes.
689
The second option is to leave the Bayesian
framework.
690
and study how adaptive heuristics can give
you what Bayes makes too complicated.
691
And also there's too much overfitting.
692
For instance, we have studied
investment problems, so assume you have a
693
sum of money and want to invest it in N
assets.
694
How do you do it?
695
And there are basic methods that tell you
how to weigh your money in each of these
696
N assets.
697
There is Markowitz's Nobel Prize winning method, that's standard statistics, the
698
mean variance portfolio that tells you how
you should do that.
699
But when Harry Markowitz made his own
investments for the time after his
700
retirement...
701
You might think he used his Nobel Prize
winning optimization method.
702
No, he didn't.
703
He used a simple heuristic that's called 1
over n, or divide equally, the same as a
704
Bayesian equal prior.
705
And a number of studies have asked how
good is 1 over n compared to the Nobel
706
Prize
707
winning Markowitz model and also modern
variants including Bayesian methods.
708
The short answer is that 1 over n is
mostly as good as Markowitz and also
709
better, and also the most modern
sophisticated models that use any kind of
710
complexity cannot really beat it.
711
The more interesting question is the
following.
712
Can we identify in what situation
713
A heuristic like 1 over n or any other of
the complicated models is ecologically
714
rational.
715
Because before we have talked about
averages.
716
And you can see, so 1 over n has no free
parameter, very different from Bayes.
717
That means nothing needs to be estimated
from data.
718
It actually doesn't need any data.
719
Thus, in the statistical terms of bias and
variance, it may have a bias, and likely
720
it has.
721
So bias is the difference from the average
investment to the true situation, but it
722
has no variance because it doesn't
estimate any parameters from data.
723
And variance means it's the deviation.
724
of individual estimates from different
samples around the average estimate.
725
And since there is no estimate, there is
no variance.
726
So Markowitz or Bayesian models, they
suffer from both errors.
727
And the real question is whether the sum
of bias and variance of one method is
728
larger than
729
of the other one.
730
And then what ecologically rational means — let me illustrate this with
731
Markowitz versus 1 over N.
732
So if you have more, if n is larger, then
you have more parameters to estimate
733
because the covariances, they just
increase.
734
That means more measurement error.
735
So you can...
736
derive from that, that in situations
where we have a large number of assets,
737
then the complex methods will likely not
be as good.
738
While 1 over n doesn't have more
estimation error, it has none anyhow.
739
And then another thing is, if the true
distribution of
740
the so-called optimal weights that you
only can know in the future, is highly
741
skewed.
742
Then 1 over n is not a good model for
that.
743
But if it's roughly equal, then it is a good model.
744
So these are, and then sample size plays a
role for the estimation.
745
So the more data you have, the more the Bayesian or Markowitz model will profit, while it
746
doesn't matter
747
for the 1 over n heuristic because it
doesn't even look at the data.
748
So that's the kind of ecological
rationality thinking.
749
And there are some estimates just to give
you some flesh into that.
750
One study found that mostly in seven out of eight, I
751
think, tests 1 over n made more money in
terms of Sharpe ratio and similar
752
criteria than the optimal Markowitz
portfolio and with 10 years of data.
753
So they asked the question how many years
of data would one need so that the
754
estimates get precise so that eventually
the complex model outperforms the simple
755
heuristic.
756
And that depends on the number of assets
you have.
757
And if they are 50, for instance, then the
estimate is you need 500 years of stock
758
data.
759
So in the year 2500, we can turn to the
complex models, provided the same stocks
760
are still around in the stock market in
the first place.
761
That's a very different way to think about
a situation.
762
It's the Herbert Simonian way: don't think about a method by itself, and don't
763
ever believe that a method is rational in
every situation.
764
But think about how this method matches
with the structure of environment.
765
And that's a much more difficult question
to answer than just claiming that
766
something is optimal.
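Here is a toy simulation of that logic in Python — not the study Gerd refers to, and with made-up numbers throughout — comparing the out-of-sample Sharpe ratio of the 1/N portfolio with a plug-in mean-variance (Markowitz-style) rule estimated from a short sample.

```python
# 1/N versus an estimated mean-variance portfolio on simulated returns.
# All quantities (number of assets, sample sizes, return distribution)
# are invented for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n_assets, n_train, n_test = 20, 120, 1200   # e.g. 10 years of monthly training data

# A made-up "true" world: monthly means and a correlated covariance matrix.
true_mu = rng.uniform(0.002, 0.008, n_assets)
A = rng.normal(size=(n_assets, n_assets))
true_cov = 0.002 * (A @ A.T / n_assets + np.eye(n_assets))

def sharpe(weights, returns):
    """Mean over standard deviation of the portfolio returns."""
    port = returns @ weights
    return port.mean() / port.std()

results = {"1/N": [], "plug-in mean-variance": []}
for _ in range(200):
    train = rng.multivariate_normal(true_mu, true_cov, n_train)
    test = rng.multivariate_normal(true_mu, true_cov, n_test)

    # 1/N: no parameter is estimated from the data at all.
    w_equal = np.full(n_assets, 1.0 / n_assets)

    # Markowitz-style plug-in: weights proportional to inv(cov) @ mean.
    # The Sharpe ratio is scale-free, so we skip normalizing the weights.
    mu_hat = train.mean(axis=0)
    cov_hat = np.cov(train, rowvar=False)
    w_mv = np.linalg.solve(cov_hat, mu_hat)

    results["1/N"].append(sharpe(w_equal, test))
    results["plug-in mean-variance"].append(sharpe(w_mv, test))

for name, vals in results.items():
    print(name, round(float(np.mean(vals)), 3))
# With few observations relative to the number of parameters to estimate,
# the fitted portfolio's extra variance tends to eat up whatever bias it removes.
```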
767
Yeah, I see.
768
That's interesting.
769
I love the very practical aspect of that.
770
And also that, I mean, in a way that focus
on simplicity is something I found also
771
very important in the way of basically
thinking about parsimony.
772
Why make something more difficult when you
don't have to?
773
And it's something that I always use also
in my teaching, where I teach how to build
774
a model.
775
Don't start with the hierarchical time
series model, but start with a really
776
simple linear regression, which is just
one predictor, maybe.
777
And don't make it hierarchical yet, even though that makes sense for
778
the problem at hand. Because, from a very practical standpoint, if the model fails —
779
and it will, at first, if it's too complex — you will not know which part to take apart
780
and make better. So parsimony makes it way easier to
781
build the model and also to choose the priors. Just don't make your priors
782
too complicated. Find good enough priors, because you won't find
783
perfect ones anyway, and then go with that.
784
I mean, the frequent use of the term optimal
is mostly misleading.
785
Under uncertainty or intractability, you
cannot find the optimal solution and prove
786
it.
787
It's an illusion.
788
And under uncertainty, so when you have to
make predictions, for instance, about the
789
future and you don't know whether the
future is like the past,
790
quite simple heuristics outperform highly
complex methods.
791
An example is, remember when Google
engineers tried to predict the flu with a
792
system that's called Google Flu Trends.
793
and it was a secret system and it started
with 45 variables, they were also secret,
794
and the algorithm was secret.
795
And it ran from 2008 till 2015.
796
And at the very beginning in 2009 the
swine flu occurred.
797
And out of season in the summer.
798
And Google Flu Trends, so the big data
algorithm had learned that the flu is high
799
in the winter and low in the summer.
800
So it underestimated the flu-related
doctor visits, which was the criterion.
801
And the Google engineers then tried to
revise the algorithm to make it better.
802
And here are two choices.
803
One is what I call the complexity
illusion, namely you have a complex
804
algorithm and the high uncertainty, like
the flu is a virus that mutates very
805
quickly, and it doesn't work.
806
What do you do now?
807
You make it more complex.
808
And that's what the Google engineers did.
809
So they used a revision with about 160
variables, also secret.
810
and thought they would solve the problem,
but it didn't improve at all.
811
The opposite reaction would have been...
812
You have a complex and high uncertain
problem.
813
You have a complex algorithm.
814
It doesn't work.
815
What do you do now?
816
You make it simpler.
817
Because you have too much estimation
error.
818
The future isn't like the past.
819
We have tested this and published a paper on a
very simple heuristic that just takes one
820
data point.
821
So remember that.
822
Google Flu Trends estimated next week's or
this week's flu-related doctor visits.
823
So the one data point algorithm is you
take the most recent data, it's usually
824
one week or two weeks in the past, and
then make the simple prediction that's
825
what it will be this or next week.
826
That's a heuristic called the recency
heuristic, which is well documented in
827
human thinking, and is often mistaken as a bias.
828
And we showed it for the entire run of
Google Flu Trends for eight years.
829
The simple heuristic outperformed Google Flu Trends in all updates — a total of, I
830
think, three updates.
831
for every year and for each of the updates
and reduced the error by about half.
832
You can intuitively see that.
833
So a big data algorithm gets stuck like if
something unexpected happened like in the
834
swine flu.
835
The recency heuristic can quickly adapt to
the new situation.
836
So that's another example showing that you
always should test a simple algorithm
837
first.
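Here is what that recency heuristic looks like in code, on made-up weekly data rather than the actual Google Flu Trends series: the most recent observation is the forecast, and a purely seasonal predictor misses an out-of-season spike.

```python
# A minimal sketch of the recency heuristic on synthetic weekly data
# (NOT the actual Google Flu Trends comparison).
import numpy as np

rng = np.random.default_rng(1)
weeks = np.arange(104)                                   # two years of weekly data
seasonal = 100 + 80 * np.cos(2 * np.pi * weeks / 52)     # flu is high in winter
visits = seasonal + rng.normal(0, 10, weeks.size)
visits[70:80] += 150                                     # out-of-season spike, like the swine flu

recency_pred = visits[:-1]          # last week's value predicts this week
seasonal_pred = seasonal[1:]        # the pattern learned from normal seasons
actual = visits[1:]

print("recency MAE :", round(np.mean(np.abs(actual - recency_pred)), 1))
print("seasonal MAE:", round(np.mean(np.abs(actual - seasonal_pred)), 1))
# The recency heuristic tracks the unexpected spike within a week,
# while the purely seasonal predictor misses it entirely.
```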
838
And you can learn from the human brain.
839
So the heuristics we use are not what the
heuristics-and-biases people think, always
840
second best.
841
No.
842
You need to see in a situation of high
uncertainty.
843
Pick the right heuristic.
844
A way to find it is to study what humans
do in these situations.
845
I call this psychological AI.
846
Yeah, I love that.
847
Um, and actually that, so before closing
up the show that, um, sets us up nicely
848
for one of my last questions, which is a
bit more, uh, formal thinking.
849
Because so you, you've been talking about
AI and, and these decision-making science.
850
So I'm wondering how you see the future of
decision science.
851
And where do Bayesian statistics fit into
this evolving landscape, especially
852
considering the increasing availability of
data and computational power?
853
And that may be related to your latest
book.
854
Yeah.
855
My latest book is about, it's called How
to Stay Smart in a Smart World, and it
856
teaches one thing, a distinction between
stable worlds and unstable worlds.
857
Stable worlds are like what the economist
Frank Knight called a situation of risk,
858
where you can calculate the risk as
opposed to uncertainty.
859
That's unstable worlds.
860
If you have a stable world,
861
That's the world of optimization
algorithms, at least if it's tractable.
862
And here more data helps, because you can
fine-tune your parameters.
863
If you have to deal with an unstable
world — and most things are
864
unstable, not just viruses, but human
behavior.
865
And complex algorithms typically do not
help in predicting human behavior.
866
In my book I have a number of examples.
867
And here you need to study smart adaptive
heuristics that help.
868
And for instance, we are working with the
largest credit rating company in Germany.
869
And they have...
870
intransparent, secret, complex algorithms.
871
That has caused an outcry in the public
because these are decisions that decide
872
whether you are considered for, if you
want to rent a flat or not, and other
873
things.
874
And we have shown them that if they make
the algorithms simpler,
875
then they actually get better and more
transparent.
876
And that's an interesting combination.
877
Here is one future about solving the
so-called XAI problem.
878
First try a simple heuristic, that means a
simple algorithm, and see how good it is.
879
And not just test competitively, a handful
of complex algorithms.
880
Because the simple algorithm may be
881
do as well or better than the complex
ones.
882
And also they are transparent.
883
And that means that doctors, for instance,
may accept an algorithm because they
884
understand it.
885
And a responsible doctor would not really
want to have a neural network diagnostic
886
system that he or she doesn't understand.
887
So the future of decision making would be,
if you want it in a few sentences, take
888
uncertainty seriously,
889
and distinguish it from situations of
risk.
890
We are not foreign, I hear this.
891
And second, take heuristics seriously and
don't confuse them with biases.
892
And third...
893
If you can, go out in the real world and
study decision making there.
894
How firefighters, as Gary Klein studied, make
decisions, how chess masters make
895
decisions, how scientists come up with
their theories.
896
And you will find that standard decision
theory that's geared on small worlds of
897
calculated risk will have little to tell
you about that.
898
and then have the courage to study
empirically what experienced people do, how
899
to model this as heuristics and find out
their ecological rationality.
900
That's what I see will be the future.
901
Nice.
902
Yeah, I find that super interesting in the
sense that it's also something I can see
903
as an attractive feature of the Bayesian modeling framework for people coming to
904
us for consulting or education, where the
fact that the models are clear on the
905
assumptions.
906
and the priors and the structure of the
model make them much more interpretable.
907
And so way less black boxy than classic AI
models.
908
And that's, yeah, definitely a trend we
see and it's also related to causal
909
inference.
910
People most of the time wanna know if X
influences Y and in what way, and if that
911
is, you know, in a predictable way.
912
And so for that causal inference,
913
fits extremely well in the Bayesian
framework.
914
So that's also something I'm really
curious about to see evolve in the coming
915
years, especially with some new tools that
start to appear.
916
Like I had Ben Vincent lately on the show
for episode 97, and we talked about
917
CausalPy and how to do causal inference in PyMC.
918
And now we have the new do-operator
919
in PyMC, which helps you do that.
920
So, yeah, I really love seeing all those
tools coming together to help people do
921
more causal inference and also more state
of the art causal inference.
922
And for the curious, we will do with
Benjamin Vincent a modeling webinar in the
923
coming weeks, probably in September, where
he will demonstrate how to use the
924
do-operator in PyMC.
925
So if you're curious about that, follow
the show.
926
And if you are a patron of the show, you
will get early access to the recording.
927
So if you want to support the show with...
928
a café latte per month,
929
I'm really thanking
you from the bottom of my heart.
930
Well, Gerd, I have so many other questions, but I think it's a
931
good time to stop.
932
I've already taken a lot of your time,
so I want to be mindful of that.
933
But before letting you go,
934
I'm going to ask you the last two
questions I ask every guest at the end of
935
the show.
936
Number one, if you had unlimited time and
resources, which problem would you try to
937
solve?
938
I would try to solve the problem of understanding the ecological rationality
939
of strategies, in particular heuristics.
940
Hmm.
941
That's a next.
942
Yeah.
943
You're the first one to answer that.
944
And that's a very precise answer.
945
I am absolutely impressed.
946
And second question, if you could have
dinner with any great scientific mind,
947
dead, alive, or fictional, who would it
be?
948
Oh, I would love to have dinner with two
women.
949
The first one is a pioneer of computers,
Ada Lovelace.
950
And the second one is a woman of courage
and brain, Marie Curie.
951
The only woman who got two Nobel Prizes.
952
And Marie Curie said something very
interesting.
953
"Nothing in life
954
is to be feared.
955
It is only to be understood.
956
Now is the time to understand more so that we may fear less." Curie said this when she
957
discovered that she had cancer and was
soon to die.
958
extremely inspiring.
959
Yeah, thanks, Gerd.
960
That's really inspiring.
961
But having courage is something that's
very important for every researcher.
962
And also having courage to look forward,
to dare, to find new avenues, rather than
963
playing the game of the time.
964
Well, on that note, I think, well, thank
you for coming on the show, Gerd.
965
That was an absolute pleasure.
966
I'm really happy that we could have that
more, let's say epistemological discussion
967
than we're used to on the podcast.
968
I love doing that from time to time.
969
Also filled with applications and
encourage people to take a look at the
970
show notes.
971
I put
972
your books over there, some of your
papers, a lot of resources for those who
973
want to dig deeper.
974
So thank you again, Gert, for taking the
time and being on this show.
975
It was my pleasure.
976
Bye bye.