Eliezer Yudkowsky – Why AI Will Kill Us, Aligning LLMs, Nature of Intelligence, SciFi, & Rationality


For 4 hours, I tried to come up with reasons why AI might not kill us all, and Eliezer Yudkowsky explained why I was wrong.

We also discuss his call to halt AI, why LLMs make alignment harder, what it would take to save humanity, his millions of words of sci-fi, and much more.

If you want to get to the crux of the conversation, fast forward to 2:35:00 through 3:43:54, where we debate the main reasons I still think doom is unlikely.

Transcript:
Apple Podcasts:
Spotify:

Follow me on Twitter:

Timestamps:
(0:00:00) – TIME article
(0:09:06) – Are humans aligned?
(0:37:35) – Large language models
(1:07:15) – Can AIs help with alignment?
(1:30:17) – Society’s response to AI
(1:44:42) – Predictions (or lack thereof)
(1:56:55) – Being Eliezer
(2:13:06) – Orthogonality
(2:35:00) – Could alignment be easier than we think?
(3:02:15) – What will AIs want?
(3:43:54) – Writing fiction & whether rationality helps you win

By: Dwarkesh Patel
Title: Eliezer Yudkowsky – Why AI Will Kill Us, Aligning LLMs, Nature of Intelligence, SciFi, & Rationality
Sourced From: www.youtube.com/watch?v=41SUp-TRVlg
