rationalism AI cultist creeps


Looks like the myriad achievements of poor HAL are ignored as he is shoe-horned into being the latest of a long line of ILX strawmen.

What About The Half That's Never Been POLLed (James Redd and the Blecchs), Saturday, 6 April 2013 21:41 (eleven years ago) link

http://singularityhub.com/about/
http://lukeprog.com/SaveTheWorld.html

Hardware and software are improving, there are no signs that we will stop this, and human biology and biases indicate that we are far below the upper limit on intelligence. Economic arguments indicate that most AIs would act to become more intelligent. Therefore, intelligence explosion is very likely. The apparent diversity and irreducibility of information about "what is good" suggests that value is complex and fragile; therefore, an AI is unlikely to have any significant overlap with human values if that is not engineered in at significant cost. Therefore, a bad AI explosion is our default future.

its deeply weird to me how much of this stuff is out there, and how much is fixated on the idea that superintelligent machines are coming soon and the big problem is making sure they don't decide to kill all humans.

Chuck E was a hero to most (s.clover), Sunday, 7 April 2013 00:32 (eleven years ago) link

* How can we identify, understand, and reduce cognitive biases?
* How can institutional innovations such as prediction markets improve information aggregation and probabilistic forecasting?
* How should an ethically-motivated agent act under conditions of profound moral uncertainty?
* How can we correct for observation selection effects in anthropic reasoning?

http://www.fhi.ox.ac.uk/research/rationality_and_wisdom

Chuck E was a hero to most (s.clover), Sunday, 7 April 2013 00:34 (eleven years ago) link

how much is fixated on the idea that superintelligent machines are coming soon and the big problem is making sure they don't decide to kill all humans.

Some of us were traumatised by Servotron at a young age, OK?

Just noise and screaming and no musical value at all. (Colonel Poo), Sunday, 7 April 2013 00:54 (eleven years ago) link

"how much of this stuff is out there" : the big ideas are made by the same few people (yudkowsky, maybe bostrom) and the evangelization is done by about a dozen younger "lesser names" (who probably were hanging out on the sl4 mailing list) on 3 or 4 of their platforms that they rename / shuffle around every few years. they were "theorizing" about friendly a.i. way back then, i doubt they made any breakthroughs since then... how could they?

Sébastien, Sunday, 7 April 2013 01:06 (eleven years ago) link

in a way the "friendly a.i." advocates are like the epicureans who 2300 years ago conceptualized the atom using only their bare eyes and their intuition: some time down the line we sort of prove them right, but back then they really had no good understanding of how it worked. who knows, in the (far) future it's possible some stuff they talk about in their conceptualization of a friendly a.i. will be seen as useful and recuperated.

Sébastien, Sunday, 7 April 2013 02:25 (eleven years ago) link

three months pass...

Some people familiar with the LessWrong memeplex have suffered serious psychological distress after contemplating basilisk-like ideas — even when they're fairly sure intellectually that it's a silly problem.[5] The notion is taken sufficiently seriously by some LessWrong posters that they try to work out how to erase evidence of themselves so a future AI can't reconstruct a copy of them to torture.[6]

http://rationalwiki.org/wiki/Roko%27s_basilisk

stefon taylor swiftboat (s.clover), Thursday, 1 August 2013 01:49 (ten years ago) link

In LessWrong's Timeless Decision Theory (TDT),[3] this is taken to be equivalent to punishment of your own actual self, not just someone else very like you — and furthermore, you might be the simulation.

sounds like a great theory to have, sooper sound

j., Thursday, 1 August 2013 02:55 (ten years ago) link

if you can believe that life is a simulation where you may be punished you might as well believe that there's a god who wants you to act a certain way

Mordy , Thursday, 1 August 2013 02:58 (ten years ago) link

and i thought these people were only slightly crazy

click here to start exploding (ledge), Thursday, 1 August 2013 10:46 (ten years ago) link

at least they seem to be nominally self-policing

j., Thursday, 1 August 2013 10:59 (ten years ago) link

i mean tdt and rokos basilisk basically turns the AI into an arbitrary and all powerful god, if it wasnt that already

max, Thursday, 1 August 2013 11:03 (ten years ago) link

A god who wants you to send all your money to a particular organisation - so Yudkowsky is the televangelist in this scenario.

Yeah it is interesting that a bunch of self proclaimed rationalists have managed to reason themselves into a belief system functionally identical to one they would decry as irrational.

click here to start exploding (ledge), Thursday, 1 August 2013 11:49 (ten years ago) link

i love it! its borgesian

max, Thursday, 1 August 2013 11:52 (ten years ago) link

yeah i think the aporia there might be the "self proclaimed" bit

phasmid beetle types (Noodle Vague), Thursday, 1 August 2013 12:10 (ten years ago) link

If I were a future AI, I'd recreate the LessWrong memeplex, except I'd call it LessRong. Why? For the lulz...

slamming on the dubstep brakes (snoball), Thursday, 1 August 2013 12:39 (ten years ago) link

Roko's basilisk sounds like the name of an Italian prog rock band.

slamming on the dubstep brakes (snoball), Thursday, 1 August 2013 12:48 (ten years ago) link

Roko's basilisk is notable for being completely banned from discussion on LessWrong, where any mention of it is deleted.[4] Eliezer Yudkowsky, founder of LessWrong, considers the basilisk would not work, but will not explain why because he does not consider open discussion of the notion of acausal trade with possible superintelligences to be provably safe.

stefon taylor swiftboat (s.clover), Thursday, 1 August 2013 12:49 (ten years ago) link

that wiki article is like half of a great ted chiang short story

max, Thursday, 1 August 2013 12:55 (ten years ago) link

Found the following beautiful sentence at the bottom of the LessWrong page:

The basilisk kerfuffle has also alienated fellow cryonicists.

click here to start exploding (ledge), Thursday, 1 August 2013 12:58 (ten years ago) link

Why a basilisk?

wombspace (abanana), Thursday, 1 August 2013 13:03 (ten years ago) link

I'm not sure you should give these guys what they want and proclaim them to be the vanguard of Hard AI proponents... I really don't think anyone who has seriously grappled with the philosophical implications of, say, the physical symbol system hypothesis, could ever proclaim any development or avenue of research to be "provably friendly."

Furthermore I think it's not very fair to suggest any and all fans, theorists or proponents of AI are as robotic in their thinking as these LessWrong people.

Kissin' Cloacas (Viceroy), Thursday, 1 August 2013 13:41 (ten years ago) link

this is the best part

[T]here is the ominous possibility that if a positive singularity does occur, the resultant singleton may have precommitted to punish all potential donors who knew about existential risks but who didn't give 100% of their disposable incomes to x-risk motivation. ... So a post-singularity world may be a world of fun and plenty for the people who are currently ignoring the problem, whilst being a living hell for a significant fraction of current existential risk reducers (say, the least generous half).

because it means that one of the things that caused "severe psychological distress" was the suggestion that posters on rationalism message boards would in the future be punished for being smarter than everyone

what a terrifying perversion of one's value system

These fools are the enemy of the true cybernetic revolution.

Banaka™ (banaka), Thursday, 1 August 2013 17:15 (ten years ago) link

ok sam harris doesn't really belong here but c'mon

http://www.samharris.org/blog/item/free-will-and-the-reality-of-love

stefon taylor swiftboat (s.clover), Thursday, 1 August 2013 19:10 (ten years ago) link

Consider the present moment from the point of view of my conscious mind: I have decided to write this blog post, and I am now writing it. I almost didn’t write it, however. In fact, I went back and forth about it: I feel that I’ve said more or less everything I have to say on the topic of free will and now worry about repeating myself. I started the post, and then set it aside. But after several more emails came in, I realized that I might be able to clarify a few points. Did I choose to be affected in this way? No. Some readers were urging me to comment on depressing developments in “the Arab Spring.” Others wanted me to write about the practice of meditation. At first I ignored all these voices and went back to working on my next book. Eventually, however, I returned to this blog post. Was that a choice? Well, in a conventional sense, yes. But my experience of making the choice did not include an awareness of its actual causes. Subjectively speaking, it is an absolute mystery to me why I am writing this.

this is sub david brooks

stefon taylor swiftboat (s.clover), Thursday, 1 August 2013 19:11 (ten years ago) link

this is not going to shock anyone but this frame of mind/crew of people trends very strongly into some supremely nasty politics

R'LIAH (goole), Thursday, 1 August 2013 19:12 (ten years ago) link

http://lesswrong.com/lw/hcy/link_more_right_launched/

R'LIAH (goole), Thursday, 1 August 2013 19:13 (ten years ago) link

ahahaa "Just so long as we don't end up with an asymmetrical effect, where the PUAs leave but the feminists stay."

stefon taylor swiftboat (s.clover), Thursday, 1 August 2013 19:32 (ten years ago) link

ah god i don't think i've seen the term "race realism" before

phasmid beetle types (Noodle Vague), Thursday, 1 August 2013 20:04 (ten years ago) link

The all-important gap between labeling yourself as a rationalist and actually using your reason; between labeling yourself as an empiricist and actually studying phenomena.

cardamon, Thursday, 1 August 2013 21:21 (ten years ago) link

first thing i do after the singularity: allow myself to get a girlfriend! (i have actually read that from one of the big kahunas in a chat years ago. screencapped it but decided not to save for luls, i'm not that kind of guy)

Sébastien, Thursday, 1 August 2013 22:58 (ten years ago) link

next sentence written in the chat was of him again : "that was dumb."

Sébastien, Thursday, 1 August 2013 23:01 (ten years ago) link

everyone will be your girlfriend after the singularity iirc

しるび (silby), Friday, 2 August 2013 01:43 (ten years ago) link

also that basilisk thing is o_O

しるび (silby), Friday, 2 August 2013 01:43 (ten years ago) link

not least because it apparently relies in part on some absurd population ethics ("total utilitarianism" they call it)

しるび (silby), Friday, 2 August 2013 01:44 (ten years ago) link

http://lesswrong.com/lw/hcy/link_more_right_launched/

― R'LIAH (goole), Thursday, August 1, 2013 7:13 PM (Yesterday)

Why did you get me down the rabbit hole of a right-wing blog?

If there are multiple cultural/ethnic identities, they need to be either assimilated into one another, be distinctive and have clear guidelines for interaction, or be separated and with separate administrative structures.

click here to start exploding (ledge), Friday, 2 August 2013 09:52 (ten years ago) link

i jumped ship at "race realism"

IIIrd Datekeeper (contenderizer), Friday, 2 August 2013 11:19 (ten years ago) link

i think they are confusing cultural/ethnic identities with member planets of the federation?

stefon taylor swiftboat (s.clover), Friday, 2 August 2013 13:06 (ten years ago) link

also from that blog

When it comes to art and music, thinkers intuitively realize that the most popular works are the most trivial and idiotic, but when it comes to politics, the uninformed opinions of the masses are placed on a pedestal. The reason for this inconsistent view of a sort of Democratic pseudo-religion that has been in place in the Anglosphere since around 1848.

stefon taylor swiftboat (s.clover), Friday, 2 August 2013 13:16 (ten years ago) link

Advocates of Democracy try to rewrite history and imply that Enlightenment principles are fundamentally incompatible with Monarchy, but this is clearly untrue. Voltaire, known as one of the greatest thinkers of the Enlightenment, had a close relationship to a number of monarchs, including Frederick the Great, and advised him regularly. It was economic and cultural flourishing brought on by absolute monarchy in France that created the conditions for the Enlightenment and the Scientific Revolution. All of this was underway well before the French Revolution.

what monarchy do you want!? do you want to join the commonwealth!?

stefon taylor swiftboat (s.clover), Friday, 2 August 2013 13:18 (ten years ago) link

For roughly 165 years (since 1848), democracy has caused social and economic mayhem worldwide. Rule-of-the-People has caused vastly increased crime (100X in the UK since 1800)

ladies and gentlemen, statistics!

click here to start exploding (ledge), Friday, 2 August 2013 13:20 (ten years ago) link

i think it's pretty obv that these people are far less smart and rational than they claim to be and after that by their own rules we're okay to ignore them

phasmid beetle types (Noodle Vague), Friday, 2 August 2013 13:22 (ten years ago) link

aaaand now we're moving into infowars territory....

Speaking for myself personally, my key motivation is not having to witness or experience global nanowar. For a grasp of the capabilities that could be invoked during such a war, I recommend the obscure volume Military Nanotechnology: Potential Applications and Preventive Arms Control.

It’s laborious for me to explain why small robots would be a major risk, because it should be self-evident. Very small robots could be made exceedingly stealthy, they could provide comprehensive surveillance of enemy activities, and could inject lethal payloads of just a few microliters. Moreover, they could self-detonate after carrying out their mission, making them untraceable.

stefon taylor swiftboat (s.clover), Friday, 2 August 2013 13:22 (ten years ago) link

uuuughhhhh ok if i HAVE TO EXPLAIN IT TO YOU

j., Friday, 2 August 2013 13:27 (ten years ago) link

lol goole you have the weirdest hobby

j., Friday, 2 August 2013 13:27 (ten years ago) link

It was economic and cultural flourishing brought on by absolute monarchy in France that created the conditions for the Enlightenment and the Scientific Revolution. All of this was underway well before the French Revolution.

Yeah but the computer this guy is typing this on is a product of capitalism proper which is only possible after the French Revolution and the death blow it dealt to the vestigial feudalism

Also questionable whether absolute monarchy 'brought on economic flourishing'? Weren't the poor in the period leading up to the Revolution having to subsist on grass and hay?

cardamon, Friday, 2 August 2013 13:45 (ten years ago) link

I'd love to be clever enough to write an algorithm that measured the ratio of pro-reason rhetoric to actual chains of reasoning in all forum posts and comment boxes on the internet

cardamon, Friday, 2 August 2013 13:48 (ten years ago) link

Which is just a fancier way of saying "the end justifies the means". But your chosen formulation of it immediately suggested the thought that consequences are open-ended, extending into all futurity, and therefore are impossible to measure

I think "the end justifies the means" is a bit more slippery - it's often used to weigh one set of consequences more heavily than another, e.g. bombing hiroshima to end the war. And, well, we're talking about human actions and human consequences, so I think it's fair to restrict it to humanly measurable ones.

ledge, Tuesday, 6 September 2022 19:12 (one year ago) link

Even human consequences extend indefinitely. Identifying an end point is an arbitrary imposition upon a ceaseless flow, the rough equivalent of ending a story with "and they all lived happily ever after".

more difficult than I look (Aimless), Tuesday, 6 September 2022 20:11 (one year ago) link

so do you never consider the consequences of your actions or do you have trouble getting up in the morning?

ledge, Tuesday, 6 September 2022 20:43 (one year ago) link

I am not engaged in a program of identifying a universal moral framework based upon the consequences of my actions when I get up in the morning, which certainly makes it easier to choose what to wear.

more difficult than I look (Aimless), Tuesday, 6 September 2022 20:47 (one year ago) link

touche!

ledge, Tuesday, 6 September 2022 21:08 (one year ago) link

This is the ideal utilitarian form. You may not like it, but this is what peak performance looks like pic.twitter.com/uHvCp2Cq7y

— MHR (@SpacedOutMatt) September 16, 2022

𝔠𝔞𝔢𝔨 (caek), Saturday, 17 September 2022 16:30 (one year ago) link

incredible

death generator (lukas), Sunday, 25 September 2022 23:20 (one year ago) link

one year passes...

Read this a few days ago. As AI burns through staggering amounts of money with no reasonable use case so far, all your fave fascist tech moguls are gonna hitch themselves to a government gravy train under a Trump administration (gift link): https://wapo.st/3wllikQ

Are you addicted to struggling with your horse? (Boring, Maryland), Sunday, 5 May 2024 14:35 (four weeks ago) link
