GLOM: How to represent part-whole hierarchies in a neural network (Geoff Hinton's Paper Explained)
Published: 2021-02-27 15:47:03
https://youtu.be/cllFzkvrYmE
any old information might come in, and we might collapse, or we might never reach consensus, because any old information might come in. However, if we introduce the attention mechanism into this whole thing, and only draw in information from the selected neighbors that are already in the same group, in the same island, as me, then this consensus algorithm works. So the network is now kind of forced to learn to build these islands of similar things in order to make this consensus work, if we regularize this consensus. So I believe he makes the case for the attention mechanism. I don't think he considers the islands of the next layer up in this case; what I would say is that you need to consider the island membership all the way up the columns in order to decide which locations, which embeddings at other locations, a given location should resemble. But yeah, I think this is the case for the attention mechanism. Okay, I hope you're still half with me. If not, I'm a bit confused too.
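As a rough illustration of that attention step (my own sketch, not code from the paper), a consensus update could be a similarity-weighted average over the embeddings at other locations on the same level, so that locations that already agree pull together into islands:

```python
import numpy as np

def attention_consensus(embeddings: np.ndarray, temperature: float = 1.0) -> np.ndarray:
    """One consensus step over same-level embeddings at different locations:
    each location draws in the others weighted by similarity, so similar
    locations reinforce each other into 'islands'. embeddings: (locations, dim)."""
    scores = embeddings @ embeddings.T / temperature  # pairwise similarities
    scores -= scores.max(axis=1, keepdims=True)       # stabilize the softmax
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)     # row-wise softmax
    return weights @ embeddings                       # similarity-weighted average
```

With a low temperature, the weights concentrate on near-duplicates, which is exactly the "only draw in information from my island" behavior described above.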
But I think what he's doing is saying: contrastive learning would be good, and you can use it, but you have to be careful about which layer you do it at. Another regularizer to form these islands would be to regularize the network to conform to the consensus opinion. However, if you simply aggregated information from everything in the same layer, that wouldn't work, because different things in the same layer might correspond to completely different parts of the image, and drawing in information from there would not help you. How do you solve this? By introducing the very attention mechanism he introduced, in order to only draw in information from the parts of the same layer that are actually related to you.
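Combining those two ideas, a minimal sketch of such a regularizer (my own construction, not the paper's objective) would pull each embedding toward an attention-weighted consensus, so the target is defined only by related locations:

```python
import numpy as np

def consensus_regularizer(embeddings: np.ndarray) -> float:
    """Hypothetical penalty: squared distance of each location's embedding
    from an attention-weighted consensus, so each target is set mainly by
    locations that are already similar (the island), not the whole layer."""
    scores = embeddings @ embeddings.T
    scores -= scores.max(axis=1, keepdims=True)
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)  # row-wise softmax
    consensus = weights @ embeddings               # per-island target
    return float(np.mean(np.sum((embeddings - consensus) ** 2, axis=1)))
```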
Okay, the next consideration he makes is representing coordinate transformations. How does this model represent coordinate transformations? There was a capsule network paper where he explicitly represented coordinate transformations, in a kind of four-dimensional quaternion space. Here he says that is probably not needed: you could represent these by 4x4 matrices. However, if you simply allocate 16 numbers in each embedding vector to represent the part-whole coordinate transformation, the transformation that relates the part to the whole, that does not make it easy to represent uncertainty about some aspects of the pose and certainty about others.
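To make that explicit option concrete (my own sketch; the paper gives no code), reserving 16 embedding slots for a flattened 4x4 homogeneous part-whole transform would look like this:

```python
import numpy as np

def part_whole_transform(rotation: np.ndarray, translation: np.ndarray) -> np.ndarray:
    """4x4 homogeneous matrix relating a part's frame to its whole's frame."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

# The explicit scheme he argues against: 16 slots hold the flattened matrix.
# A single matrix can't express "sure about position, unsure about rotation".
embedding = np.zeros(256)  # hypothetical embedding size
embedding[:16] = part_whole_transform(np.eye(3), np.array([0.1, 0.0, 0.5])).ravel()
```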
So the problem here is that we know humans, when they watch something, when they watch a scene, say a chair with a very tiny person on it, don't necessarily see the coordinate frame of the world. What we see is the coordinate frame of the chair, maybe with its center here, and we see the person in relation to the chair. Our brain seems to do this intuitively, and Hinton thinks that a system like this should also do it intuitively. So somehow the coordinate transformations involved, going from the eye to the reference frame of the chair, and then from the chair to the person, should be encoded in this network.
However, he also says that it's probably not necessary to encode them explicitly, as explicit coordinate transformations, because not only does that probably make them harder to learn, you also can't represent uncertainty. In fact, and this is the next point, you can represent uncertainty much better by having a higher-dimensional thing that you're trying to guess. If you are trying to guess a distribution with three components and you simply have a three-dimensional vector, you have no way of representing uncertainty. However, if you have a nine-dimensional vector, you can have three opinions about the distribution: this is an opinion, this is an opinion, and this is an opinion. And then you can aggregate them and say: well, I'm pretty sure about these two things, because all my opinions are pretty close, but about this one here I'm not so sure, because my individual opinions say different things.
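As an illustration of that "three opinions" idea (my own sketch), you can read a 9-dimensional vector as three 3-dimensional guesses and use their spread as a confidence signal:

```python
import numpy as np

def opinions_to_estimate(vec9: np.ndarray):
    """Read a 9-d vector as three 3-d 'opinions' about the same quantity.
    Agreement across opinions signals certainty; spread signals doubt."""
    opinions = vec9.reshape(3, 3)   # three opinions, three components each
    mean = opinions.mean(axis=0)    # aggregated guess per component
    spread = opinions.std(axis=0)   # high spread = low confidence
    return mean, spread

mean, spread = opinions_to_estimate(np.array([
    1.0, 2.0,  0.1,   # opinion 1
    1.1, 2.0,  0.9,   # opinion 2
    0.9, 2.1, -0.5,   # opinion 3: agrees on the first two components only
]))
```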
All right, this video is too long already. So that's his argument right here: we don't need explicit representations of uncertainty, because by simply over-parameterizing we can already represent uncertainty well. And we also don't need disentangled position information, because, again, the network can take care of that. He gives a good example of why you wouldn't want a disentangled coordinate frame: say you have an image, and the picture in it is this. How do you know whether that is a rhomboid shape, or a rectangular piece of paper viewed from the side? (I should probably draw it way closer, something like this. I suck at this; you probably get what I mean.) If it is a different object, then the object and the coordinate transformation depend on each other, so it makes sense for the neural network to actually entangle the two. In essence, he's just saying: don't worry about explicitly representing all of these different things; the neural network can handle all of them, like uncertainty, position, and pose transformations.
He then compares this to other architectures: a comparison to CNNs, a comparison to transformers, a comparison to capsule models. And at the end, it goes into video. At the very beginning, he says the paper is actually about a video system, and you can kind of see that, because we go through this algorithm in multiple time steps, right? You analyze an image with these columns, which gives you sort of a 3D tensor with the image at the bottom; in the next time step you have a new 3D tensor, and you pass this whole information around, again with the image at the bottom. And he says: well, why does that need to be the same image? It could also be different images, so you could use the system to analyze video.
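Concretely (my own reading, with made-up sizes), the per-time-step state he's gesturing at is one embedding per location per level, with the image sitting at the bottom:

```python
import numpy as np

# Hypothetical GLOM-like state: one embedding per (row, col, level) location.
H, W, LEVELS, DIM = 32, 32, 5, 128
state = np.zeros((H, W, LEVELS, DIM))  # the "3D tensor" of column embeddings
frame = np.random.rand(H, W, 3)        # at the bottom; swap it per step for video
```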
So what he says is: while you do these time steps to find agreement, you could actually swap out the video frame, the x, and feed in a slightly different video frame each time. That could actually have a kind of ensemble, regularizing effect: while the whole system, all the columns, comes to a consensus over time, you feed in different information at the bottom. And what he says is that if the video is slow enough, the top layers could probably still reach an agreement while the bottom layers change rapidly, and that could act as a sort of ensemble or regularizing effect. So he intrinsically connects these two time dimensions, which would otherwise be separate: you could input a video and then run this consensus-finding algorithm within each frame, but he says no, it's actually cool to consider them together, to do the consensus finding while you sort of watch the video. It's just not clear that you always need the same number of consensus-finding steps as you have video frames. So maybe you want to take, like, five consensus steps per video frame, or the other way around? Not sure. In any case, I think that's a pretty cool idea.
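A minimal sketch of that coupling (my own construction; `update` stands in for whatever consensus step the model uses):

```python
def settle_over_video(frames, update, steps_per_frame: int = 5, state=None):
    """Couple the two time dimensions: keep running consensus steps while
    the bottom-level input changes. For a static image, pass the same frame
    repeatedly; for video, each new frame arrives mid-settling."""
    for frame in frames:
        for _ in range(steps_per_frame):  # e.g. five consensus steps per frame
            state = update(state, frame)  # hypothetical consensus update
    return state
```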
He says things like: "If the changes are rapid, there is no time available to iteratively settle on a good set of embedding vectors for interpreting a specific frame. This means that the GLOM architecture cannot correctly interpret complicated shapes if the images are changing rapidly. Try taking an irregularly shaped potato and throwing it up in the air in such a way that it rotates at one or two cycles per second. Even if you smoothly track the potato, you cannot see what shape it is." Now, I don't have a potato, but I can give you an avocado. If you give me a second... how's that? Could you track the shape? I don't know. Probably he's correct.

All right. He also talks about whether this is biologically plausible, and I don't want to go too much into that. He discusses some restrictions, like: yeah, we still use backprop; is backprop plausible? And so on. I love this sentence: "In the long run, however, we are all dead." And then the footnote saying there are alternative facts. But yeah, he discusses whether it's biologically plausible, and how you could modify it to make it more plausible. For example, if you want to do contrastive learning, there is evidence that during sleep you do contrastive learning, like you produce the negative examples during sleep, and then during the day you collect the positive examples, and so on.
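For reference, a generic contrastive objective of the kind being alluded to (my own illustration, not the paper's exact loss) looks like this:

```python
import numpy as np

def contrastive_loss(anchor, positive, negatives, temperature: float = 0.1) -> float:
    """InfoNCE-style loss: pull the anchor toward its positive and away from
    negatives; here, say, negatives 'produced during sleep' and positives
    'collected during the day'. Purely illustrative of the general idea."""
    def sim(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    logits = np.array([sim(anchor, positive)] +
                      [sim(anchor, n) for n in negatives]) / temperature
    logits -= logits.max()  # numerical stability
    return float(-np.log(np.exp(logits[0]) / np.exp(logits).sum()))
```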
I think this is a more speculative part of the paper, but it's pretty cool to read. And lastly, he goes into the discussion. He also says that this paper is too long already.