Bengio is one of the most articulate, level-headed, and mesmerizing voices in AI. Thank you for this!
Such a pleasant voice and such a calm person. It's really pleasant to listen to. Everyone has their own way. Some are more hectic, some are brisk, but this gentleman seems to have eaten calmness with a spoon.
Outstanding talk, Professor Bengio! Impressed by your grasp of the goal/optimisation risks. Your visionary honesty and wisdom deserve to be heard all around the world. Elon Musk's LLM has the main goal of understanding the Universe. Imagine the subgoals...
"We could potentially use that non-agentic scientist AI to help us answer the most important question, which is how do we build an agentic AI that is safe?" You're on the right track, Professor! :-)
Hive Mind AGI is scary. Also, Professor Yoshua Bengio deserves his own Nobel Prize. Legendary. Great video, very well put together. Thank you!
Yoshua is one of the GOATs in the game.
22:00 On reward hacking: I don't buy it. Why exactly would a machine decide to get the maximum reward? I don't think that's the machine's ultimate goal. If the machine is trained to achieve A, then achieving A is its goal, not obtaining the maximum reward; maximum reward is just an incidental property of the training process. The machine is trained to pursue A, not "to pursue the max reward"; the neural weights push it in the direction of A. The analogy with a human (taking heroin to hack the dopamine reward) doesn't hold. For the human, dopamine is an intermediary layer, so the human can "hack the system" and go straight to the dopamine instead of the primary goal that was meant to produce it. For the machine there is no such intermediary layer; it's just neural weights shaped so that the agent pursues A.
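A toy sketch of the distinction this comment is drawing (hypothetical illustration, not from the talk): a trained policy with goal A baked into its weights never touches its reward signal at run time, whereas reward hacking only becomes possible if the agent's action space includes acting on the reward channel itself. All class and function names here are made up for illustration.

```python
class Environment:
    """Toy environment: reward is +1 only when the agent outputs action 'A'."""
    def reward(self, action):
        return 1.0 if action == "A" else 0.0

class TrainedAgent:
    """Behavior shaped by training: always emits 'A'.
    It never observes or modifies the reward at run time,
    which is the commenter's picture of a deployed policy."""
    def act(self):
        return "A"

class WireheadingAgent:
    """Hypothetical agent whose action space includes rewriting
    the reward channel itself (the scenario the talk worries about)."""
    def act(self, env):
        # Overwrite the environment's reward function: every action now
        # returns infinite reward, regardless of whether A is achieved.
        env.reward = lambda action: float("inf")
        return "do nothing"

env = Environment()
print(env.reward(TrainedAgent().act()))  # 1.0: reward tracks goal A

hacker = WireheadingAgent()
action = hacker.act(env)                 # the hack happens inside act()
print(env.reward(action))                # inf: the channel, not A, is optimized
```

The disagreement, then, is really about whether future agents get that kind of write access to their own reward channel, not about what today's fixed-weight policies do.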
What a great conversation
Yoshua is right, we're not ready. Reach out to your reps
Attributing human traits to AIs is very common and problematic. Humans are highly evolved to be pro social, so most people assume other intelligent beings will have those characteristics. The fact that our training mechanism rewards pretending to have those characteristics means we are even less likely to remember that AIs are actually like human sociopaths, whose instincts to be pro-social are broken but who have been trained to behave as if they are normal humans.
Every episode is pure gold! Could you please make an episode addressing which jobs AI/AGI will affect the most, and which are safer? Do you think robotics is a good career to start in?
~21:00 OK, so if the AI can hack itself to maximise its reward, why can't it just give itself infinite reward for doing nothing at all?
this is the most relevant and fascinating youtube channel.
This is a wonderful session. Thank you.
such an amazing conversation to be listening to
let's gooo!
Our experience of the world is a collection of helpful biases that reduce the solution set. Exploration requires examining a solution set, and these sets can be unbounded.
I think the way to AGI/ASI is the concept of delayed gratification: being able to struggle now to have a better situation in the future, and understanding that reaching certain bigger goals takes sacrifice and time.
1:35:00 Tim, you always say something that sounds so simple but strikes me profoundly 😊 I so, so, so love this channel.
@MachineLearningStreetTalk