Maybe I'm doing them a disservice (lol) (I'm not about to start digging into the forums) but their big idea that if everyone were rational we could build a utopia doesn't seem to take into account the perfectly rational idea of perfectly rational sociopaths.
― ledge, Tuesday, 6 September 2022 07:46 (one year ago) link
God, grant me the serenity to accept the people I cannot change;
The courage to change the person I can;
And the wisdom to know: It's me.
― Sonned by a comedy podcast after a dairy network beef (bernard snowy), Tuesday, 6 September 2022 11:32 (one year ago) link
If I am not the problem, there is no solution.
― Sonned by a comedy podcast after a dairy network beef (bernard snowy), Tuesday, 6 September 2022 11:33 (one year ago) link
the existence of an infinite number of these superficially rational conclusions.
i suggest training an AI model to generate them
― ufo, Tuesday, 6 September 2022 11:35 (one year ago) link
believing that you are an entirely rational being is a greater leap of faith than anything found in any major world religion.
― link.exposing.politically (Camaraderie at Arms Length), Tuesday, 6 September 2022 12:01 (one year ago) link
"the only rational thing to do is maximize the number of insects"
What is this in reference to?
― peace, man, Tuesday, 6 September 2022 12:32 (one year ago) link
ah this is the thread for https://www.salon.com/2022/08/20/understanding-longtermism-why-this-suddenly-influential-philosophy-is-so/
― TWELVE Michelob stars?!? (seandalai), Tuesday, 6 September 2022 14:05 (one year ago) link
xp lol bernard
― Karl Malone, Tuesday, 6 September 2022 14:19 (one year ago) link
See also seandalai's link but:
(1) effective altruists are almost always utilitarians
(2) they kinda ignore negative utility
(3) so for them, the best thing to do is maximize the number of sentient beings, because more utils
but yeah per seandalai's link, they consider simulated beings just as good as actual beings, so we should aim for a future with lots of computers simulating people etc etc
― death generator (lukas), Tuesday, 6 September 2022 15:22 (one year ago) link
xp I thought the subheading would be enough to give me a grasp of how stupid and sad this is, but no it got much dumber:
Longtermism is a quasi-religious worldview, influenced by transhumanism and utilitarian ethics, which asserts that there could be so many digital people living in vast computer simulations millions or billions of years in the future that one of our most important moral obligations today is to take actions that ensure as many of these digital people come into existence as possible.
― recovering internet addict/shitposter (viborg), Tuesday, 6 September 2022 15:25 (one year ago) link
“Rationalism”. Ever since I’ve realized this is an obsession of goons on the dark enlightenment spectrum I use it against them as much as possible.
― recovering internet addict/shitposter (viborg), Tuesday, 6 September 2022 15:28 (one year ago) link
Not to be captain save-a-rationalist but I'm not sure about the overlap between effective altruism and longtermism/transhumanism. The former might typically be utilitarian adjacent but I don't think it's necessarily tied in with the latter, and isn't exclusively the domain of rationalist weirdos.
― ledge, Tuesday, 6 September 2022 15:32 (one year ago) link
AIUI longtermism is one branch of effective altruism. Yes, there are effective altruists who are more into sending deworming pills to schools in Africa and the like.
― death generator (lukas), Tuesday, 6 September 2022 15:40 (one year ago) link
They both spring from the same error though, which is that we just need to get some smart people to figure out things for the rest of us.
― death generator (lukas), Tuesday, 6 September 2022 15:41 (one year ago) link
Right but the best way out of this mess we’ve made would be a consensus based on inclusive dialogue that values actual real rationalism. I mean we do agree that the net effect of the actual Enlightenment was beneficial yeah? Or am I out of line here.
― recovering internet addict/shitposter (viborg), Tuesday, 6 September 2022 15:50 (one year ago) link
Maybe I don’t even believe that tbh.
― recovering internet addict/shitposter (viborg), Tuesday, 6 September 2022 15:51 (one year ago) link
I mean we do agree that the net effect of the actual Enlightenment was beneficial yeah?
I think if we avoid destroying the earth, yeah I'd agree with this.
Right but the best way out of this mess we’ve made would be a consensus based on inclusive dialogue that values actual real rationalism.
I had something more like "minimize human domination over other humans" in mind but this works too.
― death generator (lukas), Tuesday, 6 September 2022 15:56 (one year ago) link
So here's an effective altruist arguing that longtermism is bs, basically saying your little toy model of the future is useless: https://forum.effectivealtruism.org/posts/RRyHcupuDafFNXt6p/longtermism-and-computational-complexity
Someone makes a brilliant point in the comments: "Loved this post - reminds me a lot of intractability critiques of central economic planning, except now applied to consequentialism writ large."
Given that most EAs are kinda libertarian-leaning (hate central planning when applied to real-world economies) this is ... devastating.
― death generator (lukas), Tuesday, 6 September 2022 15:58 (one year ago) link
xps yeah I didn't realise how much the official EA organisation had been taken over: https://www.newyorker.com/magazine/2022/08/15/the-reluctant-prophet-of-effective-altruism
― ledge, Tuesday, 6 September 2022 16:10 (one year ago) link
xp that is an exceedingly rigorous formulation of what is a very obvious and common sense objection. (hence far more effective for the intended audience.)
― ledge, Tuesday, 6 September 2022 16:23 (one year ago) link
I had something more like "minimize human domination over other humans" in mind but this works too.

Right. Am I perhaps fundamentally misunderstanding rationalism? (Genuine question, I come to these kinds of threads to learn — I may not be totally out of line but I am mostly out my depth.)

My suggestion was focused on the process while yours seems more goals-oriented. Which is the problem that others seem to point out with absolute rationalism, that it has no inherent ethical framework?
― recovering internet addict/shitposter (viborg), Tuesday, 6 September 2022 16:26 (one year ago) link
Well mine is process-oriented too I think ... one of the reasons to oppose human domination over other humans is everyone has a limited view of the world, everyone sees based on their own experiences and interests, so process-wise you should avoid having people make decisions for other people, regardless of how well-meaning they might be.
I may not be totally out of line but I am mostly out my depth.
lol trust me I have a very shallow understanding of this stuff as well. My indignation, however, is bottomless.
Which is the problem that others seem to point out with absolute rationalism, that it has no inherent ethical framework?
Utilitarianism, right? (which is related to but I think not the same as consequentialism, but I don't understand the difference)
― death generator (lukas), Tuesday, 6 September 2022 16:35 (one year ago) link
Consequentialism just says that the morality of an action resides in its consequences, as opposed to how well it follows some (e.g. god given) rules or whether it's inherently virtuous (whatever that means).

Utilitarianism specifies what the consequences should be.
― ledge, Tuesday, 6 September 2022 16:48 (one year ago) link
Which is partly why utilitarianism is so tempting - consequentialism itself seems almost transparently true, and then well what could be wrong with maximising happiness?
― ledge, Tuesday, 6 September 2022 17:21 (one year ago) link
Consequentialism just says that the morality of an action resides in its consequences
Which is just a fancier way of saying "the end justifies the means". But your chosen formulation of it immediately suggested the thought that consequences are open-ended, extending into all futurity, and therefore are impossible to measure.
― more difficult than I look (Aimless), Tuesday, 6 September 2022 17:30 (one year ago) link
consequentialism itself seems almost transparently true, and then well what could be wrong with maximising happiness?
my uneducated answer here is that if you've arrived at a situation where other people are pawns in your game - even if you mean them well - something has gone wrong upstream.
obviously there are situations where you need to guess what is best for someone else, but we should try to minimize them. it shouldn't be the paradigm example of moral reasoning.
― death generator (lukas), Tuesday, 6 September 2022 18:24 (one year ago) link
btw, effective altruism has its own ilx thread.
art is a waste of time; reducing suffering is all that matters
― more difficult than I look (Aimless), Tuesday, 6 September 2022 18:38 (one year ago) link
xp yes, which is why the answer to the Enlightenment: good/bad? question differs depending where in the world you ask it
― rob, Tuesday, 6 September 2022 18:39 (one year ago) link
well what could be wrong with maximising happiness?

This was rhetorical but yes treating people as pawns is one major problem, as is the fact that happiness, or whatever your unit of utility is, is not the kind of thing that you can do calculations with. One hundred and one people who are all one percent happy is not at all a better state of affairs than one person who is one hundred percent happy. (Not that there isn't a place for e.g. quality adjusted life years calculations in certain institutional settings.)
― ledge, Tuesday, 6 September 2022 18:59 (one year ago) link
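ledge's arithmetic above can be sketched in a few lines — a purely illustrative toy (not any actual EA model, and `total_utility` is a made-up name) showing how naive utilitarian aggregation ranks 101 barely-happy people above one fully happy person:

```python
def total_utility(utilities):
    """Naive utilitarian aggregation: just sum per-person utility."""
    return sum(utilities)

many_slightly_happy = [0.01] * 101  # 101 people, each 1% happy
one_fully_happy = [1.0]             # one person, 100% happy

# The summed total "prefers" the first world (1.01 > 1.0),
# which is exactly the kind of calculation being objected to.
print(total_utility(many_slightly_happy) > total_utility(one_fully_happy))  # True
```

The sum comes out in favour of the 101-person world, which is the point of the objection: treating happiness as a summable quantity lets trivially small utilities outweigh one complete one.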
Which is just a fancier way of saying "the end justifies the means". But your chosen formulation of it immediately suggested the thought that consequences are open-ended, extending into all futurity, and therefore are impossible to measure

I think "the end justifies the means" is a bit more slippery - it's often used to weigh one set of consequences more heavily than another, e.g. bombing hiroshima to end the war. And, well, we're talking about human actions and human consequences, I think it's fair to restrict it to humanly measurable ones.
― ledge, Tuesday, 6 September 2022 19:12 (one year ago) link
Even human consequences extend indefinitely. Identifying an end point is an arbitrary imposition upon a ceaseless flow, the rough equivalent of ending a story with "and they all lived happily ever after".
― more difficult than I look (Aimless), Tuesday, 6 September 2022 20:11 (one year ago) link
so do you never consider the consequences of your actions or do you have trouble getting up in the morning?
― ledge, Tuesday, 6 September 2022 20:43 (one year ago) link
I am not engaged in a program of identifying a universal moral framework based upon the consequences of my actions when I get up in the morning, which certainly makes it easier to choose what to wear.
― more difficult than I look (Aimless), Tuesday, 6 September 2022 20:47 (one year ago) link
touche!
― ledge, Tuesday, 6 September 2022 21:08 (one year ago) link
This is the ideal utilitarian form. You may not like it, but this is what peak performance looks like pic.twitter.com/uHvCp2Cq7y — MHR (@SpacedOutMatt) September 16, 2022
― 𝔠𝔞𝔢𝔨 (caek), Saturday, 17 September 2022 16:30 (one year ago) link
incredible
― death generator (lukas), Sunday, 25 September 2022 23:20 (one year ago) link
Read this a few days ago. As AI burns through staggering amounts of money with no reasonable use case so far, all your fave fascist tech moguls are gonna hitch themselves to a government gravy train under a Trump administration (gift link): https://wapo.st/3wllikQ
― Are you addicted to struggling with your horse? (Boring, Maryland), Sunday, 5 May 2024 14:35 (four weeks ago) link