History: Creator/EliezerYudkowsky


Reason:
None


* ReferenceOverdosed: Eliezer loves his {{ShoutOut}}s. ''FanFic/HarryPotterAndTheMethodsOfRationality'' has [[http://tvtropes.org/pmwiki/pmwiki.php/ShoutOut/HarryPotterAndTheMethodsOfRationality at least one in most chapters]], while the first half of "The Finale of the Ultimate Meta Mega Crossover" has several per paragraph. Even ''Literature/ThreeWorldsCollide'' includes VisualNovel/FateStayNight in a list of "classic" literature.

to:

* ReferenceOverdosed: Eliezer loves his {{ShoutOut}}s. ''FanFic/HarryPotterAndTheMethodsOfRationality'' has [[ShoutOut/HarryPotterAndTheMethodsOfRationality at least one in most chapters]], while the first half of "The Finale of the Ultimate Meta Mega Crossover" has several per paragraph. Even ''Literature/ThreeWorldsCollide'' includes VisualNovel/FateStayNight in a list of "classic" literature.
Reason:
None


%%* GeniusBonus [[invoked]] [Zero Context Example. Please write up an actual example before uncommenting.]

to:

* GeniusBonus / ViewersAreGeniuses: [[invoked]] Some of the references to other fiction (see ReferenceOverdosed, below) are to rather obscure works or to concepts in university-level mathematics, science, and/or philosophy.



* ReferenceOverdosed: Eliezer loves his {{ShoutOut}}s.

to:

* ReferenceOverdosed: Eliezer loves his {{ShoutOut}}s. ''FanFic/HarryPotterAndTheMethodsOfRationality'' has [[http://tvtropes.org/pmwiki/pmwiki.php/ShoutOut/HarryPotterAndTheMethodsOfRationality at least one in most chapters]], while the first half of "The Finale of the Ultimate Meta Mega Crossover" has several per paragraph. Even ''Literature/ThreeWorldsCollide'' includes VisualNovel/FateStayNight in a list of "classic" literature.
Reason:
Commented out Zero Context Example. Please write up an actual example before uncommenting. See How To Write An Example to learn how.


* GeniusBonus [[invoked]]

to:

%%* GeniusBonus [[invoked]] [Zero Context Example. Please write up an actual example before uncommenting.]

Added: 257

Changed: 173

Reason:
None


* GeniusBonus

to:

* GeniusBonus [[invoked]]



* MindScrewdriver: "The Finale of the Ultimate Meta Mega Crossover" is this for ''Literature/PermutationCity'' by Creator/GregEgan.

to:

* LivingForeverIsAwesome: Eliezer's hatred of death appears as a recurring theme in his stories.
* MindScrewdriver: "The Finale of the Ultimate Meta Mega Crossover" is this for the ending of ''Literature/PermutationCity'' by Creator/GregEgan. On the other hand, [[ViewersAreGeniuses the explanation involves quite a bit of theoretical computer science]].
Reason:
None


* GeniusBonus: [[invoked]] Among them are some of the more obscure references.

to:

* GeniusBonus
Reason:
None

Added DiffLines:

* GeniusBonus: [[invoked]] Among them are some of the more obscure references.


Added DiffLines:

* MindScrewdriver: "The Finale of the Ultimate Meta Mega Crossover" is this for ''Literature/PermutationCity'' by Creator/GregEgan.
* ReferenceOverdosed: Eliezer loves his {{ShoutOut}}s.
Reason:
None


* GenreAdultery: Almost all of his works, whether [[FanFic/HarryPotterAndTheMethodsOfRationality fanfic]] or [[Literature/ThreeWorldsCollide original]], are highly philosophical {{Author Tract}}s. And then there's ''[[http://www.fanfiction.net/s/5731071/1/Peggy_Susie Peggy Susie]]'', which is merely [[spoiler: a ''Calvin and Hobbes'' fic]] and parody of ''Film/TheTerminator''... with no philosophical elements whatsoever.

to:

* GenreAdultery: Almost all of his works, whether [[FanFic/HarryPotterAndTheMethodsOfRationality fanfic]] or [[Literature/ThreeWorldsCollide original]], are highly philosophical {{Author Tract}}s. And then there's ''[[http://www.fanfiction.net/s/5731071/1/Peggy_Susie Peggy Susie]]'', which is merely a [[spoiler: ''Calvin and Hobbes'' fic]] and parody of ''Film/TheTerminator''... with no philosophical elements whatsoever.
Reason:


* OldShame: It's still possible to find some really awful fiction that he wrote on {{USENET}} many years ago.
* OneOfUs: Partakes from time to time in discussions on TV Tropes, usually regarding his own writing.

Changed: 36

Removed: 188

Reason:
\"many find good and entertaining\" sounds like stealth gushing. Author Tract and One Of Us is already listed in the trope list, so duplicating it in the description is kind of redundant.



Most of his fictional works are {{Author Tract}}s, albeit ones that [[RuleOfCautiousEditingJudgment many find]] good and entertaining.

Occasionally [[OneOfUs drops by our very own forums]].



* AuthorTract: As mentioned above, his theories and [[AuthorAvatar avatars]] crop up a fair bit in his fictional works.

to:

* AuthorTract: His theories and [[AuthorAvatar avatars]] crop up a fair bit in his fictional works.



* OneOfUs: Partakes from time to time in discussions here, usually regarding his own writing.

to:

* OneOfUs: Partakes from time to time in discussions on TV Tropes, usually regarding his own writing.

Changed: 1227

Removed: 7241

Reason:
Please no potholes in page quotes. Please use namespaces & italicize work titles. Trope lists on Creator pages are supposed to be *only* about a creator's work, not their life or their views and opinions. See Creator Page Guidelines. Please no Zero Context Examples. Weblinks Are Not Examples. Conversational Troping is OK, but "discussing a topic" is not the same as "troping". Please avoid gushing.


-> ''"[[TheSingularity Singularitarians]] are the {{munchkin}}s of the real world. We just ignore [[LevelGrinding all the usual dungeons]] and head straight for the [[LoopholeAbuse cycle of]] [[GameBreaker infinite wish spells.]]"''

Author of ThreeWorldsCollide and HarryPotterAndTheMethodsOfRationality, the shorter works [[{{FanficRecs/HaruhiSuzumiya}} Trust in God/The Riddle of Kyon]] and [[FanficRecs/MegaCrossover The Finale of the Ultimate Meta Mega Crossover]], and [[http://yudkowsky.net/other/fiction various other fiction]]. Helped found and is probably the most prolific poster to the blog/discussion site LessWrong, and is currently writing a book on rationality.

to:

-> ''"[[TheSingularity Singularitarians]] ''"Singularitarians are the {{munchkin}}s of the real world. We just ignore [[LevelGrinding all the usual dungeons]] dungeons and head straight for the [[LoopholeAbuse cycle of]] [[GameBreaker of infinite wish spells.]]"''

"''

Author of ''Literature/ThreeWorldsCollide'' and ''FanFic/HarryPotterAndTheMethodsOfRationality'', the shorter works ''[[{{FanficRecs/HaruhiSuzumiya}} Trust in God/The Riddle of Kyon]]'' and ''[[FanficRecs/MegaCrossover The Finale of the Ultimate Meta Mega Crossover]]'', and [[http://yudkowsky.net/other/fiction various other fiction]]. Helped found and is probably the most prolific poster to the blog/discussion site ''Blog/LessWrong'', and is currently writing a book on rationality.



!!!Tropes describing Yudkowsky or his work:
* AffablyEvil: Has tossed around "super villain" as a potential hobby more than once.
* [[{{UsefulNotes/Atheism}} Atheist]]: Doesn't mind calling religion "insanity", but believes in promoting [[http://lesswrong.com/lw/1e/raising_the_sanity_waterline/ rational thinking in general]] and letting atheism follow.
* BadassBoast: [[http://lesswrong.com/lw/gz/policy_debates_should_not_appear_onesided/ "Unfortunately the universe doesn't agree with me. We'll see which one of us is still standing when this is over."]]
* BiTheWay: Is straight, but would take [[http://lesswrong.com/lw/1ww/undiscriminating_skepticism/1r80?c=1 a pill that would make him bisexual]] because it would raise his ability to have fun.
* ChildProdigy: He was one.
* TheEndOfTheWorldAsWeKnowIt: Seeks to prevent it.
* {{Fanboy}}: Isn't one, but has them despite trying to [[http://lesswrong.com/lw/ln/resist_the_happy_death_spiral/ warn against being one]].
* GenreAdultery: Almost all of his works, whether [[FanFic/HarryPotterAndTheMethodsOfRationality fanfic]] or [[ThreeWorldsCollide original]], are highly philosophical {{Author Tract}}s. And then there's ''[[http://www.fanfiction.net/s/5731071/1/Peggy_Susie Peggy Susie]]'', which is merely [[spoiler: a ''Calvin and Hobbes'' fic]] and parody of ''TheTerminator''... with no philosophical elements whatsoever.
* HannibalLecture: In the [[http://yudkowsky.net/singularity/aibox AI-Box]] [[http://lesswrong.com/lw/up/shut_up_and_do_the_impossible/ experiment]], presumably.
* HumanPopsicle: His membership with the [[http://www.cryonics.org/ Cryonics Institute]] is his backup plan if he dies before achieving proper immortality.
* ImmortalitySeeker: Though he hopes never to die at all, in case he does, he wants to be cryonically frozen and then reanimated.
* InsufferableGenius: Can come across as this.
* ItsAllAboutMe: [[RuleOfCautiousEditingJudgment Maybe.]] But he doesn't leave much wiggle room to argue with his viewpoints when he folds the very foundations of 'rationality' into his personal philosophy and names his blog collab LessWrong.
* JewishAndNerdy
* ManipulativeBastard: His explanation of [[http://lesswrong.com/lw/up/shut_up_and_do_the_impossible/nxw how he could "win" the AI-Box experiment]].
-->Part of the ''real'' reason that I wanted to run the original AI-Box Experiment, is that I thought I had an ability that I could never test in real life. Was I really making a sacrifice for my ethics, or just overestimating my own ability? The AI-Box Experiment let me test that.
* MeasuringTheMarigolds: Criticized and averted; he wants people to be able to see [[http://lesswrong.com/lw/or/joy_in_the_merely_real/ joy in the 'merely' real world]].
* OldShame: The document [[http://lesswrong.com/lw/yd/the_thing_that_i_protect/qze Creating Friendly AI]], along with much of what he wrote before fully [[http://lesswrong.com/lw/ue/the_magnitude_of_his_own_folly/ recognizing the dangers of AI]].
** It's still possible to find some really awful fiction that he wrote on {{USENET}} many years ago.

to:

!! Tropes describing Yudkowsky's work:
* AuthorTract: As mentioned above, his theories and [[AuthorAvatar avatars]] crop up a fair bit in his fictional works.
* GenreAdultery: Almost all of his works, whether [[FanFic/HarryPotterAndTheMethodsOfRationality fanfic]] or [[Literature/ThreeWorldsCollide original]], are highly philosophical {{Author Tract}}s. And then there's ''[[http://www.fanfiction.net/s/5731071/1/Peggy_Susie Peggy Susie]]'', which is merely [[spoiler: a ''Calvin and Hobbes'' fic]] and parody of ''Film/TheTerminator''... with no philosophical elements whatsoever.
* OldShame: It's still possible to find some really awful fiction that he wrote on {{USENET}} many years ago.



* {{Protectorate}}: [[http://lesswrong.com/lw/yd/the_thing_that_i_protect/ The Thing That I Protect]].
* PoweredByAForsakenChild: [[http://lesswrong.com/lw/kn/torture_vs_dust_specks/ A rare proponent]].
* ScaleOfScientificSins: Plans to commit most of them, but backs away from [[http://lesswrong.com/lw/x7/cant_unbirth_a_child/ creating life]].
* ScienceHero: Openly uses the word "hero" to [[http://lesswrong.com/lw/1mc/normal_cryonics/ describe himself]].
* SingleIssueWonk: He has this thing about death, in that he... doesn't really see it's purpose. If you do think that death is part of the natural order of things, then he tends to bring out the soapbox a little.
* TheSingularity: He [[strike:is convinced of a coming technological singularity]] is determined to help bring about a positive singularity regardless of the [[http://lesswrong.com/lw/uk/beyond_the_reach_of_god/ uncertainty of success]].
* TalkingYourWayOut: The AI-box experiment.
* {{Transhumanism}}: He is a transhumanist.
* TrueArtIsAngsty: [[http://lesswrong.com/lw/xi/serious_stories/ Discussed/invoked/Deconstructed]]:
-->''In one sense, it's clear that we do not want to live the sort of lives that are depicted in most stories that human authors have written so far. Think of the truly great stories, the ones that have become legendary for being the very best of the best of their genre: Literature/TheIliad, RomeoAndJuliet, Film/TheGodfather, ComicBook/{{Watchmen}}, PlanescapeTorment, the second season of Series/BuffyTheVampireSlayer, or '''that''' ending in {{Tsukihime}}. Is there a single story on the list that isn't tragic? Ordinarily, we prefer pleasure to pain, joy to sadness, and life to death. Yet it seems we prefer to empathize with hurting, sad, dead characters. Or stories about happier people aren't serious, aren't artistically great enough to be worthy of praise - but then why selectively praise stories containing unhappy people? Is there some hidden benefit to us in it? [...] You simply don't optimize a story the way you optimize a real life. The best story and the best life will be produced by different criteria. [...] There is another rule of writing which states that stories have to shout. A human brain is a long way off those printed letters. Every event and feeling needs to take place at ten times natural volume in order to have any impact at all. You must not try to make your characters behave or feel realistically —especially, you must not faithfully reproduce your own past experiences—because without exaggeration, they'll be too quiet to rise from the page. Maybe all the Great Stories are tragedies because happiness can't shout loud enough to a human reader.''
* WeDoTheImpossible: [[http://lesswrong.com/lw/65/money_the_unit_of_caring/ "I specialize... in the impossible questions business."]]. See also [[http://lesswrong.com/lw/un/on_doing_the_impossible/ On Doing the Impossible]].
* TheWorldIsJustAwesome

!!!Tropes related to Yudkowsky's writing:
* AuthorTract: As mentioned above, his theories and [[AuthorAvatar avatars]] crop up a fair bit in his fictional works.
* DisobeyThisMessage: Discussed in a number of posts.
-->... if you think you would ''totally'' wear that [[http://lesswrong.com/lw/mb/lonely_dissent/ clown suit]], then don't be too proud of that either! It just means that you need to make an effort in the ''opposite'' direction to avoid dissenting too easily.
* DrugsAreBad: Averted -- Although Yudkowsky apparently [[http://lesswrong.com/lw/1ww/undiscriminating_skepticism/1r80?c=1 doesn't want]] to try mind-altering drugs, he is [[http://lesswrong.com/lw/66/rationality_common_interest_of_many_causes/ in favor]] of drug legalization.
* EatsBabies: [[http://lesswrong.com/lw/1ww/undiscriminating_skepticism/1scb?context=2 Presented without comment.]]
* HoldingOutForAHero: [[http://lesswrong.com/lw/mc/to_lead_you_must_stand_up/ To Lead, You Must Stand Up]], [[http://lesswrong.com/lw/un/on_doing_the_impossible/ On Doing the Impossible]].
* HollywoodEvolution: Something he tries to counteract in his discussions about the limitations on evolution, most notably in explaining why {{Storm}} doesn't make sense.
* LiteralGenie: What an improperly designed AI [[http://lesswrong.com/lw/ld/the_hidden_complexity_of_wishes/ would]] [[http://lesswrong.com/lw/tc/unnatural_categories/ be]].
* LivingForeverIsAwesome: He definitely [[http://yudkowsky.net/singularity/simplified wants nobody to die anymore]].
* TheMultiverse: The quantum-physics version is discussed in detail in a [[http://lesswrong.com/lw/r8/and_the_winner_is_manyworlds/ sequence of posts]]; [[http://lesswrong.com/lw/ws/for_the_people_who_are_still_alive/ For the People Who Are Still Alive]] takes a still broader view.
* TranshumanTreachery: Discussed in [[http://lesswrong.com/lw/xd/growing_up_is_hard/ Growing Up is Hard]] as one reason to develop AI before human enhancement and {{brain uploading}}.
* TheWorldIsNotReady
Reason:
None


-->...if you think you would ''totally'' wear that [[http://lesswrong.com/lw/mb/lonely_dissent/ clown suit]], then don't be too proud of that either! It just means that you need to make an effort in the ''opposite'' direction to avoid dissenting too easily.

to:

-->... if you think you would ''totally'' wear that [[http://lesswrong.com/lw/mb/lonely_dissent/ clown suit]], then don't be too proud of that either! It just means that you need to make an effort in the ''opposite'' direction to avoid dissenting too easily.
Reason:
None


* WeDoTheImpossible: [[http://lesswrong.com/lw/65/money_the_unit_of_caring/ "I specialize...in the impossible questions business."]]. See also [[http://lesswrong.com/lw/un/on_doing_the_impossible/ On Doing the Impossible]].

to:

* WeDoTheImpossible: [[http://lesswrong.com/lw/65/money_the_unit_of_caring/ "I specialize... in the impossible questions business."]]. See also [[http://lesswrong.com/lw/un/on_doing_the_impossible/ On Doing the Impossible]].
Reason:
None


-> ''"Singularitarians are the {{munchkin}}s of the real world. We just ignore [[LevelGrinding all the usual dungeons]] and head straight for the [[LoopholeAbuse cycle of]] [[GameBreaker infinite wish spells.]]"''

to:

-> ''"Singularitarians ''"[[TheSingularity Singularitarians]] are the {{munchkin}}s of the real world. We just ignore [[LevelGrinding all the usual dungeons]] and head straight for the [[LoopholeAbuse cycle of]] [[GameBreaker infinite wish spells.]]"''
Reason:
None

Added DiffLines:

-> ''"Singularitarians are the {{munchkin}}s of the real world. We just ignore [[LevelGrinding all the usual dungeons]] and head straight for the [[LoopholeAbuse cycle of]] [[GameBreaker infinite wish spells.]]"''
Reason:
None

Added: 115

Removed: 115

Reason:
None


* EatsBabies: [[http://lesswrong.com/lw/1ww/undiscriminating_skepticism/1scb?context=2 Presented without comment.]]


Added DiffLines:

* EatsBabies: [[http://lesswrong.com/lw/1ww/undiscriminating_skepticism/1scb?context=2 Presented without comment.]]
Reason:
None


-->''In one sense, it's clear that we do not want to live the sort of lives that are depicted in most stories that human authors have written so far. Think of the truly great stories, the ones that have become legendary for being the very best of the best of their genre: ''Literature/TheIliad'', ''RomeoAndJuliet'', ''Film/TheGodfather'', ''ComicBook/{{Watchmen}}'', ''PlanescapeTorment'', the second season of ''Series/BuffyTheVampireSlayer'', or '''that''' ending in ''{{Tsukihime}}''. Is there a single story on the list that isn't tragic? Ordinarily, we prefer pleasure to pain, joy to sadness, and life to death. Yet it seems we prefer to empathize with hurting, sad, dead characters. Or stories about happier people aren't serious, aren't artistically great enough to be worthy of praise - but then why selectively praise stories containing unhappy people? Is there some hidden benefit to us in it? [...] You simply don't optimize a story the way you optimize a real life. The best story and the best life will be produced by different criteria. [...] There is another rule of writing which states that stories have to shout. A human brain is a long way off those printed letters. Every event and feeling needs to take place at ten times natural volume in order to have any impact at all. You must not try to make your characters behave or feel realistically —especially, you must not faithfully reproduce your own past experiences—because without exaggeration, they'll be too quiet to rise from the page. Maybe all the Great Stories are tragedies because happiness can't shout loud enough to a human reader.''

to:

-->''In one sense, it's clear that we do not want to live the sort of lives that are depicted in most stories that human authors have written so far. Think of the truly great stories, the ones that have become legendary for being the very best of the best of their genre: Literature/TheIliad, RomeoAndJuliet, Film/TheGodfather, ComicBook/{{Watchmen}}, PlanescapeTorment, the second season of Series/BuffyTheVampireSlayer, or '''that''' ending in {{Tsukihime}}. Is there a single story on the list that isn't tragic? Ordinarily, we prefer pleasure to pain, joy to sadness, and life to death. Yet it seems we prefer to empathize with hurting, sad, dead characters. Or stories about happier people aren't serious, aren't artistically great enough to be worthy of praise - but then why selectively praise stories containing unhappy people? Is there some hidden benefit to us in it? [...] You simply don't optimize a story the way you optimize a real life. The best story and the best life will be produced by different criteria. [...] There is another rule of writing which states that stories have to shout. A human brain is a long way off those printed letters. Every event and feeling needs to take place at ten times natural volume in order to have any impact at all. You must not try to make your characters behave or feel realistically —especially, you must not faithfully reproduce your own past experiences—because without exaggeration, they'll be too quiet to rise from the page. Maybe all the Great Stories are tragedies because happiness can't shout loud enough to a human reader.''
Reason:
None


* GenreAdultery: Almost all of his works, whether [[FanFic/HarryPotterAndTheMethodsOfRationality fanfic]] or [[ThreeWorldsCollide original]], are highly philosophical {{Author Tract}}s. And then there's ''[[http://www.fanfiction.net/s/5731071/1/Peggy_Susie Peggy Susie]]'', which is merely [[spoiler: a ''CalvinAndHobbes'' fic]] and parody of ''TheTerminator''... with no philosophical elements whatsoever.

to:

* GenreAdultery: Almost all of his works, whether [[FanFic/HarryPotterAndTheMethodsOfRationality fanfic]] or [[ThreeWorldsCollide original]], are highly philosophical {{Author Tract}}s. And then there's ''[[http://www.fanfiction.net/s/5731071/1/Peggy_Susie Peggy Susie]]'', which is merely [[spoiler: a ''Calvin and Hobbes'' fic]] and parody of ''TheTerminator''... with no philosophical elements whatsoever.
Reason:
looks like a broken link...


His day job is with the [[http://http://intelligence.org/ Machine Intelligence Research Institute]], specifically working on how to make an {{AI}} that ''won't'' just [[http://singinst.org/riskintro/index.html kill us all accidentally]], and make [[AIIsACrapshoot AI less of a crapshoot]].

to:

His day job is with the [[http://intelligence.org/ Machine Intelligence Research Institute]], specifically working on how to make an {{AI}} that ''won't'' just [[http://singinst.org/riskintro/index.html kill us all accidentally]], and make [[AIIsACrapshoot AI less of a crapshoot]].
Reason:
None


** It's still possible to find some really awful fiction that he wrote on {{USENET}} many years ago. (No, I won't link to it.)

to:

** It's still possible to find some really awful fiction that he wrote on {{USENET}} many years ago. (No, I won't link to it.)



* [[ScaleOfScientificSins Scale of Scientific "Sins"]]: Plans to commit most of them, but backs away from [[http://lesswrong.com/lw/x7/cant_unbirth_a_child/ creating life]].

to:

* ScaleOfScientificSins: Plans to commit most of them, but backs away from [[http://lesswrong.com/lw/x7/cant_unbirth_a_child/ creating life]].
Reason:
(this one also seems like lies and slander from my large active Internet hatedom - let me know if there's a more standard dispute process for this sort of thing)


* {{Derailing}}: He has a habit of going down the list of tvtropes fanfic recommendations and leaving reviews on ff.net exclaiming how amazing it is that the author’s story is just as or almost as good as his own writing while dropping his story’s [[HarryPotterAndTheMethodsOfRationality name]]. He probably reads these stories but considering that in roughly 30% of these reviews he quotes lines of these stories out of context in order to draw [[BrokenAesop false parallels]] to [[ShamelessSelfPromotion his philosophies]], he may simply be skimming them.
Reason:
I don't have a near-perfect record in AI-Boxing (3 for 5) and I have no idea how anyone gets that particular dilemma out of my actual post on it.


* AffablyEvil: Has tossed around "super villain" as a potential hobby more than once, and some of his morality proofs are a little... concerning, if he can even comprehend choosing between [[BlueAndOrangeMorality damning 10 million people to get dust in their eyes, or 12 hours of torture for one person.]] The fact he has a near-perfect record in the AI-Box experiment of his own design doesn't help one bit.

to:

* AffablyEvil: Has tossed around "super villain" as a potential hobby more than once.
Reason:
Moving from YMMV.

Added DiffLines:

* AffablyEvil: Has tossed around "super villain" as a potential hobby more than once, and some of his morality proofs are a little... concerning, if he can even comprehend choosing between [[BlueAndOrangeMorality damning 10 million people to get dust in their eyes, or 12 hours of torture for one person.]] The fact he has a near-perfect record in the AI-Box experiment of his own design doesn't help one bit.


Added DiffLines:

* {{Derailing}}: He has a habit of going down the list of tvtropes fanfic recommendations and leaving reviews on ff.net exclaiming how amazing it is that the author’s story is just as or almost as good as his own writing while dropping his story’s [[HarryPotterAndTheMethodsOfRationality name]]. He probably reads these stories but considering that in roughly 30% of these reviews he quotes lines of these stories out of context in order to draw [[BrokenAesop false parallels]] to [[ShamelessSelfPromotion his philosophies]], he may simply be skimming them.


Added DiffLines:

* ItsAllAboutMe: [[RuleOfCautiousEditingJudgment Maybe.]] But he doesn't leave much wiggle room to argue with his viewpoints when he folds the very foundations of 'rationality' into his personal philosophy and names his blog collab LessWrong.


Added DiffLines:

* SingleIssueWonk: He has this thing about death, in that he... doesn't really see it's purpose. If you do think that death is part of the natural order of things, then he tends to bring out the soapbox a little.
Reason:
None


-->''In one sense, it's clear that we do not want to live the sort of lives that are depicted in most stories that human authors have written so far. Think of the truly great stories, the ones that have become legendary for being the very best of the best of their genre: ''Literature/TheIliad'', ''RomeoAndJuliet'', ''Film/TheGodfather'', ''ComicBook/{Watchmen}}'', ''PlanescapeTorment'', the second season of ''Series/BuffyTheVampireSlayer'', or '''that''' ending in ''{{Tsukihime}}''. Is there a single story on the list that isn't tragic? Ordinarily, we prefer pleasure to pain, joy to sadness, and life to death. Yet it seems we prefer to empathize with hurting, sad, dead characters. Or stories about happier people aren't serious, aren't artistically great enough to be worthy of praise - but then why selectively praise stories containing unhappy people? Is there some hidden benefit to us in it? [...] You simply don't optimize a story the way you optimize a real life. The best story and the best life will be produced by different criteria. [...] There is another rule of writing which states that stories have to shout. A human brain is a long way off those printed letters. Every event and feeling needs to take place at ten times natural volume in order to have any impact at all. You must not try to make your characters behave or feel realistically —especially, you must not faithfully reproduce your own past experiences—because without exaggeration, they'll be too quiet to rise from the page. Maybe all the Great Stories are tragedies because happiness can't shout loud enough to a human reader.''

to:

-->''In one sense, it's clear that we do not want to live the sort of lives that are depicted in most stories that human authors have written so far. Think of the truly great stories, the ones that have become legendary for being the very best of the best of their genre: ''Literature/TheIliad'', ''RomeoAndJuliet'', ''Film/TheGodfather'', ''ComicBook/{{Watchmen}}'', ''PlanescapeTorment'', the second season of ''Series/BuffyTheVampireSlayer'', or '''that''' ending in ''{{Tsukihime}}''. Is there a single story on the list that isn't tragic? Ordinarily, we prefer pleasure to pain, joy to sadness, and life to death. Yet it seems we prefer to empathize with hurting, sad, dead characters. Or stories about happier people aren't serious, aren't artistically great enough to be worthy of praise - but then why selectively praise stories containing unhappy people? Is there some hidden benefit to us in it? [...] You simply don't optimize a story the way you optimize a real life. The best story and the best life will be produced by different criteria. [...] There is another rule of writing which states that stories have to shout. A human brain is a long way off those printed letters. Every event and feeling needs to take place at ten times natural volume in order to have any impact at all. You must not try to make your characters behave or feel realistically —especially, you must not faithfully reproduce your own past experiences—because without exaggeration, they'll be too quiet to rise from the page. Maybe all the Great Stories are tragedies because happiness can't shout loud enough to a human reader.''
Reason:
They changed their name.


His day job is with the [[http://singinst.org/ Singularity Institute for Artificial Intelligence]], specifically working on how to make an {{AI}} that ''won't'' just [[http://singinst.org/riskintro/index.html kill us all accidentally]], and make [[AIIsACrapshoot AI less of a crapshoot]].

to:

His day job is with the [[http://http://intelligence.org/ Machine Intelligence Research Institute]], specifically working on how to make an {{AI}} that ''won't'' just [[http://singinst.org/riskintro/index.html kill us all accidentally]], and make [[AIIsACrapshoot AI less of a crapshoot]].
Reason:
no thanks


* PlatoIsAMoron: [[http://www.sl4.org/wiki/SoYouWantToBeASeedAIProgrammer His treatment of AI researchers with actual credentials]].
Reason:
None


* PlatoIsAMoron: [[http://www.sl4.org/wiki/SoYouWantToBeASeedAIProgrammer His treatment of AI researchers with an actual credentials]].

to:

* PlatoIsAMoron: [[http://www.sl4.org/wiki/SoYouWantToBeASeedAIProgrammer His treatment of AI researchers with actual credentials]].

Added: 234

Removed: 199

Reason:
None


->''Let me say that Eliezer may have already done more to save the world than most people in history.''
-->-- From a [[http://lesswrong.com//lw/2lr/the_importance_of_selfdoubt/2ia7?c=1 blog comment]].


Added DiffLines:

* PlatoIsAMoron: [[http://www.sl4.org/wiki/SoYouWantToBeASeedAIProgrammer His treatment of AI researchers with an actual credentials]].


Added DiffLines:

* PoweredByAForsakenChild: [[http://lesswrong.com/lw/kn/torture_vs_dust_specks/ A rare proponent]].

Removed: 336

Reason:
Dead Little Sister was renamed. Check to see if the example actually fits before readding.


* [[DeadLittleSister Dead Little Brother]]: Sort of; His little brother [[http://yudkowsky.net/other/yehuda died]] at the age of 19, but although it may have made Yudkowsky's search for immortality more personal, he was already an atheist, [[LivingForeverIsAwesome immortalist]] and an advocate for cryonics before this tragic incident.
Reason:
None

Added DiffLines:

->''Let me say that Eliezer may have already done more to save the world than most people in history.''
-->-- From a [[http://lesswrong.com//lw/2lr/the_importance_of_selfdoubt/2ia7?c=1 blog comment]].

Author of ThreeWorldsCollide and HarryPotterAndTheMethodsOfRationality, the shorter works [[{{FanficRecs/HaruhiSuzumiya}} Trust in God/The Riddle of Kyon]] and [[FanficRecs/MegaCrossover The Finale of the Ultimate Meta Mega Crossover]], and [[http://yudkowsky.net/other/fiction various other fiction]]. Helped found and is probably the most prolific poster to the blog/discussion site LessWrong, and is currently writing a book on rationality.

His day job is with the [[http://singinst.org/ Singularity Institute for Artificial Intelligence]], specifically working on how to make an {{AI}} that ''won't'' just [[http://singinst.org/riskintro/index.html kill us all accidentally]], and make [[AIIsACrapshoot AI less of a crapshoot]].

Most of his fictional works are {{Author Tract}}s, albeit ones that [[RuleOfCautiousEditingJudgment many find]] good and entertaining.

Occasionally [[OneOfUs drops by our very own forums]].
----
!!!Tropes describing Yudkowsky or his work:
* [[{{UsefulNotes/Atheism}} Atheist]]: Doesn't mind calling religion "insanity", but believes in promoting [[http://lesswrong.com/lw/1e/raising_the_sanity_waterline/ rational thinking in general]] and letting atheism follow.
* BadassBoast: [[http://lesswrong.com/lw/gz/policy_debates_should_not_appear_onesided/ "Unfortunately the universe doesn't agree with me. We'll see which one of us is still standing when this is over."]]
* BiTheWay: Is straight, but would take [[http://lesswrong.com/lw/1ww/undiscriminating_skepticism/1r80?c=1 a pill that would make him bisexual]] because it would raise his ability to have fun.
* ChildProdigy: He was one.
* [[DeadLittleSister Dead Little Brother]]: Sort of; His little brother [[http://yudkowsky.net/other/yehuda died]] at the age of 19, but although it may have made Yudkowsky's search for immortality more personal, he was already an atheist, [[LivingForeverIsAwesome immortalist]] and an advocate for cryonics before this tragic incident.
* EatsBabies: [[http://lesswrong.com/lw/1ww/undiscriminating_skepticism/1scb?context=2 Presented without comment.]]
* TheEndOfTheWorldAsWeKnowIt: Seeks to prevent it.
* {{Fanboy}}: Isn't one, but has them despite trying to [[http://lesswrong.com/lw/ln/resist_the_happy_death_spiral/ warn against being one]].
* GenreAdultery: Almost all of his works, whether [[FanFic/HarryPotterAndTheMethodsOfRationality fanfic]] or [[ThreeWorldsCollide original]], are highly philosophical {{Author Tract}}s. And then there's ''[[http://www.fanfiction.net/s/5731071/1/Peggy_Susie Peggy Susie]]'', which is merely [[spoiler: a ''CalvinAndHobbes'' fic]] and parody of ''TheTerminator''... with no philosophical elements whatsoever.
* HannibalLecture: In the [[http://yudkowsky.net/singularity/aibox AI-Box]] [[http://lesswrong.com/lw/up/shut_up_and_do_the_impossible/ experiment]], presumably.
* HumanPopsicle: His membership with the [[http://www.cryonics.org/ Cryonics Institute]] is his backup plan if he dies before achieving proper immortality.
* ImmortalitySeeker: Though he hopes never to die at all, in case he does, he wants to be cryonically frozen and then reanimated.
* InsufferableGenius: Can come across as this.
* JewishAndNerdy
* ManipulativeBastard: His explanation of [[http://lesswrong.com/lw/up/shut_up_and_do_the_impossible/nxw how he could "win" the AI-Box experiment]].
-->Part of the ''real'' reason that I wanted to run the original AI-Box Experiment, is that I thought I had an ability that I could never test in real life. Was I really making a sacrifice for my ethics, or just overestimating my own ability? The AI-Box Experiment let me test that.
* MeasuringTheMarigolds: Criticized and averted; he wants people to be able to see [[http://lesswrong.com/lw/or/joy_in_the_merely_real/ joy in the 'merely' real world]].
* OldShame: The document [[http://lesswrong.com/lw/yd/the_thing_that_i_protect/qze Creating Friendly AI]], along with much of what he wrote before fully [[http://lesswrong.com/lw/ue/the_magnitude_of_his_own_folly/ recognizing the dangers of AI]].
** It's still possible to find some really awful fiction that he wrote on {{USENET}} many years ago. (No, I won't link to it.)
* OneOfUs: Partakes from time to time in discussions here, usually regarding his own writing.
* {{Protectorate}}: [[http://lesswrong.com/lw/yd/the_thing_that_i_protect/ The Thing That I Protect]].
* [[ScaleOfScientificSins Scale of Scientific "Sins"]]: Plans to commit most of them, but backs away from [[http://lesswrong.com/lw/x7/cant_unbirth_a_child/ creating life]].
* ScienceHero: Openly uses the word "hero" to [[http://lesswrong.com/lw/1mc/normal_cryonics/ describe himself]].
* TheSingularity: He [[strike:is convinced of a coming technological singularity]] is determined to help bring about a positive singularity regardless of the [[http://lesswrong.com/lw/uk/beyond_the_reach_of_god/ uncertainty of success]].
* TalkingYourWayOut: The AI-box experiment.
* {{Transhumanism}}: He is a transhumanist.
* TrueArtIsAngsty: [[http://lesswrong.com/lw/xi/serious_stories/ Discussed/invoked/Deconstructed]]:
-->''In one sense, it's clear that we do not want to live the sort of lives that are depicted in most stories that human authors have written so far. Think of the truly great stories, the ones that have become legendary for being the very best of the best of their genre: ''Literature/TheIliad'', ''RomeoAndJuliet'', ''Film/TheGodfather'', ''ComicBook/{Watchmen}}'', ''PlanescapeTorment'', the second season of ''Series/BuffyTheVampireSlayer'', or '''that''' ending in ''{{Tsukihime}}''. Is there a single story on the list that isn't tragic? Ordinarily, we prefer pleasure to pain, joy to sadness, and life to death. Yet it seems we prefer to empathize with hurting, sad, dead characters. Or stories about happier people aren't serious, aren't artistically great enough to be worthy of praise - but then why selectively praise stories containing unhappy people? Is there some hidden benefit to us in it? [...] You simply don't optimize a story the way you optimize a real life. The best story and the best life will be produced by different criteria. [...] There is another rule of writing which states that stories have to shout. A human brain is a long way off those printed letters. Every event and feeling needs to take place at ten times natural volume in order to have any impact at all. You must not try to make your characters behave or feel realistically —especially, you must not faithfully reproduce your own past experiences—because without exaggeration, they'll be too quiet to rise from the page. Maybe all the Great Stories are tragedies because happiness can't shout loud enough to a human reader.''
* WeDoTheImpossible: [[http://lesswrong.com/lw/65/money_the_unit_of_caring/ "I specialize...in the impossible questions business."]]. See also [[http://lesswrong.com/lw/un/on_doing_the_impossible/ On Doing the Impossible]].
* TheWorldIsJustAwesome

!!!Tropes related to Yudkowsky's writing:
* AuthorTract: As mentioned above, his theories and [[AuthorAvatar avatars]] crop up a fair bit in his fictional works.
* DisobeyThisMessage: Discussed in a number of posts.
-->...if you think you would ''totally'' wear that [[http://lesswrong.com/lw/mb/lonely_dissent/ clown suit]], then don't be too proud of that either! It just means that you need to make an effort in the ''opposite'' direction to avoid dissenting too easily.
* DrugsAreBad: Averted -- Although Yudkowsky apparently [[http://lesswrong.com/lw/1ww/undiscriminating_skepticism/1r80?c=1 doesn't want]] to try mind-altering drugs, he is [[http://lesswrong.com/lw/66/rationality_common_interest_of_many_causes/ in favor]] of drug legalization.
* HoldingOutForAHero: [[http://lesswrong.com/lw/mc/to_lead_you_must_stand_up/ To Lead, You Must Stand Up]], [[http://lesswrong.com/lw/un/on_doing_the_impossible/ On Doing the Impossible]].
* HollywoodEvolution: Something he tries to counteract in his discussions about the limitations on evolution, most notably in explaining why {{Storm}} doesn't make sense.
* LiteralGenie: What an improperly designed AI [[http://lesswrong.com/lw/ld/the_hidden_complexity_of_wishes/ would]] [[http://lesswrong.com/lw/tc/unnatural_categories/ be]].
* LivingForeverIsAwesome: He definitely [[http://yudkowsky.net/singularity/simplified wants nobody to die anymore]].
* TheMultiverse: The quantum-physics version is discussed in detail in a [[http://lesswrong.com/lw/r8/and_the_winner_is_manyworlds/ sequence of posts]]; [[http://lesswrong.com/lw/ws/for_the_people_who_are_still_alive/ For the People Who Are Still Alive]] takes a still broader view.
* TranshumanTreachery: Discussed in [[http://lesswrong.com/lw/xd/growing_up_is_hard/ Growing Up is Hard]] as one reason to develop AI before human enhancement and {{brain uploading}}.
* TheWorldIsNotReady
----
