GLOM: How to represent part-whole hierarchies in a neural network (Geoff Hinton's Paper Explained)
2021-02-27 15:47:03
https://youtu.be/cllFzkvrYmE

color. So what he's saying right here is that you could give preference to similar-color locations when you decide what you want to attend to. But the color isn't as easy as simply saying what color is at the location you are at. If this location is green and this one here is blue, then the bottom layer would say "yes, I'm green" and "yes, I'm blue", but they could also be saying "well, I am part of a green-blue object". And then the higher layer here, attending to or caring about a bigger region, would have the color green-blue, and the consensus could be reached on "we are a green-blue object", even though the object isn't pure green or pure blue all throughout. So I think it's a side suggestion.

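To make that side suggestion concrete, here is a toy sketch (my own illustration, not an equation from the paper): attention logits between locations get an extra, hypothetical bias term that rewards color agreement, so similar-color locations are preferred when deciding what to attend to. The `bias_strength` knob is invented for the example.

```python
import numpy as np

def color_biased_attention(embeddings, colors, bias_strength=2.0):
    """embeddings: (n, d) location embeddings; colors: (n, c) color features.
    Returns (n, n) attention weights that prefer similar-color locations."""
    logits = embeddings @ embeddings.T            # plain dot-product similarity
    color_sim = colors @ colors.T                 # hypothetical color-agreement bias
    logits = logits + bias_strength * color_sim
    weights = np.exp(logits - logits.max(axis=1, keepdims=True))
    return weights / weights.sum(axis=1, keepdims=True)  # row-wise softmax
```
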
Maybe he has this as a core motivation behind the system, but it's just interesting to see how he thinks of things. And he extends the color here to textures and even shapes: the individual texture elements have their own shapes and poses in spatial relationships, but an object with a textured surface has exactly the same texture everywhere at the object level. GLOM extends this idea to shapes. An object may have parts that are very different from one another, but at the object level, it has exactly the same compound shape in all of the locations that it occupies. He's basically saying that every pixel that's part of a cat head has the shape of a cat head, even though the individual locations might not recognize that, and that information could be passed around through this consensus mechanism over time.

So, cluster discovery versus cluster formation: we've seen that, and he makes a lot of analogies to face recognition. The islands of similar embedding vectors at a level can be viewed as clusters, but these clusters are not discovered in immutable data. They are formed by the interaction between an intra-level process that favors islands of similarity and dynamically changing suggestions coming from the location's embeddings at adjacent levels. So the core here is really this consensus algorithm that creates these clusters. The clustering algorithm doesn't work by simply looking at embeddings and deciding which ones go together; rather, the embeddings themselves update themselves in order to form clusters.

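As a minimal sketch of that "formation, not discovery" idea (my own toy code, assuming plain dot-product similarity and an externally supplied inter-level suggestion), the embeddings themselves move at every step, and the clusters appear as a by-product of the iteration:

```python
import numpy as np

def update_level(x, inter_level_suggestion, steps=10, step_size=0.5):
    """x: (n, d) embeddings at one level. Each step mixes an intra-level pull
    toward similar embeddings (which favors islands) with the dynamically
    changing suggestions from adjacent levels; clusters emerge from the loop."""
    for _ in range(steps):
        sim = x @ x.T                                  # intra-level similarities
        w = np.exp(sim - sim.max(axis=1, keepdims=True))
        w /= w.sum(axis=1, keepdims=True)              # attention-style weights
        intra_target = w @ x                           # island-favoring target
        x = x + step_size * 0.5 * ((intra_target - x) + (inter_level_suggestion - x))
    return x
```
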
And then, replicating embedding vectors: this is a response to a criticism that I guess he got, where someone said, well, why do you represent it like this? If you have these columns, at the bottom it makes sense that you have all the different vectors, but as you go up, you have kind of the same vector for all locations, because it's the same object. Why does it make sense to replicate that everywhere and not just have one? In a database, we just have one. And he basically says that in order to reach the consensus, it's important, first of all, to have different vectors. They might be slightly different; they might have some nuance in them, because they might get pulled in different directions by the bottom-up signal than by the consensus algorithm on the same layer. So I believe that that is important. I think this is just a criticism he got, and then he decided to put this in here.

Learning islands. What we haven't discussed about this yet is how it is trained, and Hinton says this is trained as a denoising autoencoder: let us assume that GLOM is trained to reconstruct at its output the uncorrupted version of an image from which some regions have been removed. So he goes into self-supervised learning with this system. This objective should ensure that information about the input is preserved during the forward pass, and if the regions are sufficiently large, it should also ensure that identifying familiar objects will be helpful for filling in the missing regions.

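A minimal sketch of that denoising objective (assuming `model` is any image-to-image network; the region size and zero-fill are arbitrary choices for the example):

```python
import numpy as np

def masked_reconstruction_loss(model, image, rng, region=16):
    """Remove a random square region from the image and score the model on
    reconstructing the uncorrupted original: the denoising-autoencoder
    objective described above."""
    h, w = image.shape[:2]
    top = rng.integers(0, h - region)
    left = rng.integers(0, w - region)
    corrupted = image.copy()
    corrupted[top:top + region, left:left + region] = 0.0   # knock the region out
    reconstruction = model(corrupted)                       # full forward pass
    return ((reconstruction - image) ** 2).mean()           # reconstruct everything

# e.g. loss = masked_reconstruction_loss(model, image, np.random.default_rng(0))
```
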
To encourage islands of near identity, we need to add a regularizer, and experience shows that a regularizer that simply encourages similarity between the embeddings of nearby locations can cause representations to collapse: all the embedding vectors may become very small, so that they are all very similar, and the reconstruction will then use very large weights to deal with the very small scale. To prevent collapse, he says contrastive learning is the answer.

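Here is the kind of naive regularizer he is warning about (a sketch of the failure mode, not anything proposed in the paper):

```python
def naive_similarity_regularizer(x, neighbor_pairs):
    """x: (n, d) embeddings; neighbor_pairs: list of (i, j) nearby locations.
    Minimized trivially by shrinking every embedding toward zero, so that all
    vectors become very small and very similar: the collapse described above."""
    return sum(((x[i] - x[j]) ** 2).sum() for i, j in neighbor_pairs)
```
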
So how do you regularize the model such that this consensus is formed? He says contrastive learning might be useful, but you can't simply apply it straight out: it learns to make the representations of two different crops of the same image agree and the representations of two crops from different images disagree. But this is not a sensible thing to do if our aim is to recognize objects: if crop one contains objects A and B and crop two from the same image contains objects B and C, it does not make sense to demand that the representation of the two crops is the same at the object level.

Okay, so he says that contrastive learning is good, but you have to pay very careful attention to which layer you employ it at. Because if you go down far enough, then for contrastive learning, especially this type where you crop the image into different parts and say that, since it's the same image, the representations should agree, I would say: at the top layer, yes, but at the bottom layer, certainly not, because they display different things. So you have to be careful where you apply this contrastive learning. And he gives a bunch of suggestions on how to solve that. He says things like, well, negative examples might not even be needed.

Well, sorry, that's a different thing. So the obvious solution is to regularize the bottom-up and top-down neural networks by encouraging each of them to predict the consensus opinion. This is the weighted geometric mean of the predictions coming from the top-down and bottom-up networks, the attention-weighted average of the embeddings at nearby locations at the previous time step, and the previous state of the embedding. Training the inter-level predictions to agree with the consensus will clearly make the islands found during feed-forward inference more coherent. So he says you could regularize the model to regress to the consensus opinion; it's sort of like a self-regression. And he asks whether or not that will lead to a collapse.

Because if you don't have negative examples in contrastive learning, this could simply lead to a collapse. An important question is whether this type of training will necessarily cause collapse if it is not accompanied by training the inter-level predictions to be different for negative examples that use the consensus opinions for unrelated spatial contexts. So here is that problem, right? If you use the consensus opinion for unrelated spatial contexts, that might be a problem. He says using layer or batch norm should reduce the tendency to collapse, but a more important consideration may be the achievability of the goal.

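On the layer-norm remark: a minimal sketch of why normalization blocks the shrink-to-zero route (standard layer normalization, nothing GLOM-specific):

```python
import numpy as np

def layer_norm(x, eps=1e-5):
    """Normalizes each embedding to zero mean and unit variance across its
    dimensions, so the population of vectors can no longer shrink toward
    zero together: that particular collapse route is closed off."""
    mean = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mean) / np.sqrt(var + eps)
```
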
He goes into why regularization could help, and he says: if, however, an embedding at one location is free to choose which embeddings at other locations it should resemble, the goal can be achieved almost perfectly by learning to form islands of identical vectors and attending almost entirely to other locations that are in the same island. And I don't know if this is what I suggested. I guess this is kind of a convoluted paragraph; I had to read it multiple times as well, and I still don't exactly know what he's trying to say right here.

But I think what he's saying is that what we want to do is to sort of regularize the network to produce this consensus. So we have a bottom-up signal, a top-down signal, a current value, and the signal from the attention mechanism. Now, what we want to do is reach a consensus such that these islands form. However, if you attend to things here that have nothing to do with you, you might not be able to reach this consensus. I think that's the problem I mentioned before. So what he says is that you should simply attend to things that are in the same island already: if an embedding at one location is free to choose which embeddings at other locations it should resemble, the goal can be achieved by learning to form islands of identical vectors and attending almost entirely to other locations that are in the same island.

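A minimal sketch of that attention pattern (my own illustration, assuming dot-product similarity within one level; the sharpness factor is invented): once near-identical islands exist, the softmax puts almost all attention mass inside the island.

```python
import numpy as np

def island_attention(x, sharpness=4.0):
    """x: (n, d) embeddings at one level. Attention weights grow with the
    similarity between embeddings, so once islands of near-identical vectors
    have formed, each location attends almost entirely to its own island."""
    logits = sharpness * (x @ x.T)
    w = np.exp(logits - logits.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)
    return w @ x    # information is drawn in almost only from the same island
```
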
Now, I think what he's doing here is making the case for the attention mechanism itself. So he says, if we simply draw in information from the same layer here, you know, anything,