"The Happiness Code" A new approach to self-improvement is taking off in Silicon Valley: cold, hard rationality.


Battlehymn Republic
Jedi Council Member
Posts: 1824
Joined: 2004-10-27 01:34pm

"The Happiness Code" A new approach to self-improvement is taking off in Silicon Valley: cold, hard rationality.

Post by Battlehymn Republic »

The Happiness Code
A new approach to self-improvement is taking
off in Silicon Valley: cold, hard rationality.

By JENNIFER KAHN
JAN. 14, 2016

Last summer, three dozen people, mostly programmers in their 20s, gathered in a rented house in San Leandro, Calif., a sleepy suburb of San Francisco, for a lesson in ‘‘comfort-zone expansion.’’ An instructor, Michael Smith, opened the session with a brief lecture on identity, which, he observed, can seem immutable. ‘‘We think we behave in certain ways because of who we are,’’ he began. ‘‘But the opposite is also true. Experience can edit identity.’’

The goal of the ‘‘CoZE’’ exercise, Smith explained, was to ‘‘peek over the fence’’ to a new self by doing something that makes you uncomfortable and then observing the result. There was an anticipatory hush, and then the room erupted. One person gave a toast. A product manager at Dropbox broke into song. In a corner, a programmer named Brent took off his shirt, revealing a milky chest and back, then sat with his head bowed. (He would later walk around wearing a handwritten sign that read, ‘‘Please touch me.’’)

The exercise went on for an hour, and afterward, participants giddily shared their stories. One person described going onto the patio and watching everyone else through the window, in order to experience a feeling of exclusion. Another submerged his hand in a pan of leftover chicken curry, to challenge his natural fastidiousness. Unexpectedly, he enjoyed the experience. ‘‘It felt playful,’’ he said.

At the end, Smith led everyone in a group cheer. The CoZE exercise was part of a four-day workshop offered by the Center for Applied Rationality (CFAR) in Berkeley, and each of the workshop’s sessions invariably finished with participants chanting, ‘‘3-2-1 Victory!’’ — a ritual I assumed would quickly turn halfhearted. Instead, as the weekend progressed, it was performed with increasing enthusiasm. By the time CoZE rolled around, late on the second day, the group was nearly vibrating. When Smith gave the cue, everyone cheered wildly, some ecstatically thrusting both fists in the air.

As self-help workshops go, Applied Rationality’s is not especially accessible. The center’s three founders — Julia Galef, Anna Salamon and Smith — all have backgrounds in science or math or both, and their curriculum draws heavily from behavioral economics. Over the course of the weekend, I heard instructors invoke both hyperbolic discounting (a mathematical model of how people undervalue long-term rewards) and prospect theory (developed by the behavioral economists Daniel Kahneman and Amos Tversky to capture how people inaccurately weigh risky probabilities). But the premise of the workshop is simple: Our minds, cobbled together over millenniums by that lazy craftsman, evolution, are riddled with bad mental habits. We routinely procrastinate, make poor investments, waste time, fumble important decisions, avoid problems and rationalize our unproductive behaviors, like checking Facebook instead of working. These ‘‘cognitive errors’’ ripple through our lives, CFAR argues, and underpin much of our modern malaise: Because we waste time on Facebook, we end up feeling harried; when we want to eat better or get to the gym more, we don’t, but then feel frustrated and guilty.
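
(For reference, and not as the workshop's own notation: the standard hyperbolic-discounting formula, due to Mazur, values a reward of amount $A$ delayed by time $D$ at $V = A/(1 + kD)$, where $k$ is an individual discount rate. Compared with exponential discounting, $V = Ae^{-rD}$, the hyperbolic curve falls off steeply at short delays, which is why a small immediate reward can outweigh a much larger distant one.)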

Some of these problems are byproducts of our brain’s reward system. We cash checks quickly but drag our feet paying credit-card bills, no matter the financial cost, because cashing a check generates a surge of dopamine but paying a bill makes us stressed. Other mistakes are glitchier. A person who owes back taxes might avoid talking to the I.R.S. because of a lingering monkey-brain belief that avoiding bad news keeps it from being true. While such logical errors may be easy to spot in others, the group says, they’re often harder to see in ourselves. The workshop promised to give participants the tools to address these flaws, which, it hinted, are almost certainly worse than we realize. As the center’s website warns, ‘‘Careful thinking just isn’t enough to understand our minds’ hidden failures.’’

Most self-help appeals to us because it promises real change without much real effort, a sort of fad diet for the psyche. (‘‘The Four-Hour Workweek,’’ ‘‘The Life-Changing Magic of Tidying Up.’’) By the magical-thinking standards of the industry, then, CFAR’s focus on science and on tiresome levels of practice can seem almost radical. It has also generated a rare level of interest among data-driven tech people and entrepreneurs who see personal development as just another optimization problem, if a uniquely central one. Yet, while CFAR’s methods are unusual, its aspirational promise — that a better version of ourselves is within reach — is distinctly familiar. The center may emphasize the benefits that will come to those who master the techniques of rational thought, like improved motivation and a more organized inbox, but it also suggests that the real reward will be far greater, enabling users to be more intellectually dynamic and nimble. Or as Smith put it, ‘‘We’re trying to invent parkour for the mind.’’

CFAR has been offering workshops since 2012, but it doesn’t typically advertise its classes. People tend to hear about the group from co-workers (usually at tech companies) or through a blog called LessWrong, associated with the artificial-intelligence researcher Eliezer Yudkowsky, who is also the author of the popular fan-fiction novel ‘‘Harry Potter and the Methods of Rationality.’’ (Yudkowsky founded the Machine Intelligence Research Institute (MIRI), which provided the original funding for CFAR; the two groups share an office space in Berkeley.) Yudkowsky is a controversial figure. Mostly self-taught — he left school after eighth grade — he has written openly about polyamory and blogged at length about the threat of a civilization-ending A.I. Despite this, CFAR’s sessions have become popular. According to Galef, Facebook hired the group to teach a workshop, and the Thiel Fellowship invited CFAR to teach several classes at its annual meeting. Jaan Tallinn, who helped create Skype, recently began paying for math and science students to attend CFAR meetings.

This is all the more surprising given that the workshops, which cost $3,900 per person, are run like a college-dorm cram session. Participants stay on-site for the entire time (typically four days and nights), often in bargain-basement conditions. In San Leandro, the organizers packed 48 people (36 participants, plus six staff members and six volunteers) into a single house, using twin mattresses scattered on the floor as extra beds. In the kitchen, I asked Matt O’Brien, a 30-year-old product manager who develops brain-training software for Lumosity, whether he minded the close quarters. He looked briefly puzzled, then explained that he already lives with 20 housemates in a shared house in San Francisco. Looking around the chaotic kitchen, he shrugged and said, ‘‘It’s not really all that different.’’

Those constraints produced a peculiar homogeneity. Nearly all the participants were in their early- to mid-20s, with quirky bios of the Bay Area variety. (‘‘Asher is a singing, freestyle rapping, former international Quidditch All-American turned software engineer.’’) Communication styles tended toward the formal. When I excused myself from one conversation, my interlocutor said, ‘‘I will allow you to disengage,’’ then gave a courtly bow. The only older attendee, a man in his 50s who described himself as polyamorous and ‘‘part Vulcan,’’ ghosted through the workshop, padding silently around the house in shorts and a polo shirt.

If the demographics of the workshop were alarmingly narrow, there was no disputing the group’s studiousness. Over the course of four days, I heard not a single scrap of chatter about anything unrelated to rationality. Nor, so far as I could discern, did anybody ever leave the house. Not for a quick trip to the Starbucks a mile down the road. Not for a walk in the sprawling park a half-mile away. One participant, Phoenix Eliot, had recently moved into a shared house where everyone was a ‘‘practicing rationalist’’ and reported that the experience had been positive. ‘‘We haven’t really had any interpersonal problems,’’ Eliot told me. ‘‘Whereas if this were a regular house, with people who just like each other, I think there would have been a lot more issues.’’

When I first spoke to Galef, she told me that, while the group tends to attract analytical thinkers, a purely logical approach to problem-solving is not the goal. ‘‘A lot of people think that rationality means acting like Spock and ignoring things like intuition and emotion,’’ she said. ‘‘But we’ve found that that approach doesn’t actually work.’’ Instead, she said, the aim was to bring the emotional, instinctive parts of the brain (dubbed ‘‘System One’’ by Kahneman) into harmony with the more intellectual, goal-setting parts of the brain (‘‘System Two’’).

At the orientation, Galef emphasized this point. System One wasn’t something to be overcome, she said, but a wise adviser, capable of sensing problems that our conscious minds hadn’t yet registered. It also played a key role in motivation. ‘‘The prefrontal cortex is like a monkey riding an elephant,’’ she told the group. ‘‘System One is the elephant. And you’re not going to steer an elephant by telling it where it should go.’’ The challenge, Galef said, was to recognize instances in which the two systems were at war, leading to a feeling of ‘‘stuckness’’: ‘‘Things like, ‘I want to go to the gym more, but I don’t go.’ Or, ‘I want my Ph.D., but I don’t want to work on it.’ ’’ She sketched a picture of a duck facing one way and its legs and feet resolutely pointed in the opposite direction. She called these problems ‘‘software bugs.’’

Afterward, I chatted with O’Brien and Mike Plotz, a circus juggler-turned-coder, about the program’s appeal. When I asked Plotz why he thought the workshops attracted so many programmers, he glanced at O’Brien. ‘‘I think most of us are fairly analytical,’’ he began. ‘‘We like to think about how complex systems work and how they can be optimized.’’ Because of this, Plotz added, he tends to notice patterns of behavior, in himself and in others. ‘‘When you realize that people are complex systems — that we operate in complicated ways, but also sort of follow rules — you start to think about how you might tweak some of those variables.’’

Deliberately or not, CFAR’s application process also filters out many of the less committed. There is an extensive, in-person interview, conducted by an instructor. Afterward, participants are required to fill out an elaborate self-report, in which they’re asked to assess their own personality traits and behaviors. (A friend or family member is given a similar questionnaire to confirm the accuracy of the applicant’s self-assessment.) ‘‘We get a fair number of people who say, ‘I want to come to the workshop because everybody I work with is really irrational and I want to fix them,’ ’’ Anna Salamon told me. ‘‘Which is not what we are looking for.’’

Despite this rigorous vetting, Salamon acknowledged that the center’s aims are ultimately proselytic. CFAR began as a spinoff of MIRI, which Yudkowsky created in 2000, in part to study the impending threat posed by artificially intelligent machines, which, he argued, could eventually destroy humanity. (Yudkowsky’s concern was that the machines could become sentient, hide this from their human operators and then decide to eliminate us.) Over the years, Yudkowsky found that people struggled to think clearly about A.I. risk and were often dismissive of it. In 2011, Salamon, who had been working at MIRI since 2008, volunteered to figure out how to overcome that problem.

When I spoke with Salamon, she said that ‘‘global catastrophic risks’’ like sentient A.I. were often difficult to assess. There wasn’t much data from which to extrapolate; this not only made the threats harder to evaluate but also discouraged researchers from digging into the question. (Studies have shown that people are more likely to avoid thinking about problems that feel depressing or vague and are also more likely to engage in mental ‘‘discounting’’ — assuming that the risk of something bad happening is lower than it actually is.) CFAR’s original mandate was to give researchers the mental tools to overcome their unconscious assumptions. Or as Salamon put it, ‘‘We were staring at the problem of staring at the problem.’’

Like many in the community, Salamon believes that the skills of rational thought, as taught by CFAR, are important to humanity’s long-term survival, in part because they can help us confront such seemingly remote catastrophic risks, as well as more familiar ones, like poverty and climate change. ‘‘One thing that primates tend to do is to make up stories for why something we believe must be true,’’ Salamon told me. ‘‘It’s very rare that we genuinely evaluate the evidence for our beliefs.’’

It was a point of view that nearly everyone at the workshop fervently shared. As one participant told me: ‘‘Self-help is just the gateway. The real goal is: Save the world.’’

The next day’s classes began with ‘‘goal factoring,’’ taught by Michael Smith. Born in Washington State, Smith was home-schooled and raised by ‘‘immortalist’’ parents. (Immortalists believe that one of humanity’s most pressing needs is to figure out how to overcome death.) Smith, who goes by Valentine, described his father as a former ‘‘Ayn Randian objectivist’’ who believed in telepathy and named his son after the protagonist in Robert Heinlein’s science-fiction classic ‘‘Stranger in a Strange Land.’’ (In Heinlein’s book, Valentine Michael Smith is raised by Martians but returns to Earth to found a controversial cult.)

As a lecturer, Smith had a messianic quality, gazing intensely at students and moving with taut deliberation, as though perpetually engaged in a tai-chi workout. Goal factoring, Smith explained, is essentially a structured thought exercise: a way to analyze an aspiration (‘‘I want to be promoted to manager’’) by identifying the subgoals that drive it. While some of these may be obvious, others (‘‘I want to impress my ex-girlfriend’’) might be more embarrassing or less conscious. The purpose of the exercise, Smith said, was to develop a process for seeing your own motivations honestly and for spotting when they might be leading you astray. ‘‘These are blind spots,’’ Smith warned. ‘‘Blind spots that can poison your ability to keep track of what’s truly important to you.’’

To begin the factoring process, Smith asked each of us to choose a goal, list all the things we believed would come from accomplishing it and then brainstorm ways to achieve each thing. If you wanted a promotion to make more money, was there another way to get a higher salary — say, by asking for a raise or changing jobs? Finally, Smith said, we should imagine having achieved each of those subgoals. Were we satisfied? If not, that indicated the presence of a hidden motive, one that we had either overlooked or didn’t want to acknowledge.

Though the exercise didn’t strike me as especially penetrating — garden-variety introspection made punctilious — it was hugely popular. My group in the goal-factoring session included Ben Pace, a sweetly lumbering 18-year-old in a suit jacket and running shoes, who tended to balance his notepad on his knee like an old-timey newspaper reporter. Pace had flown over from Britain for the workshop, which he discovered at 15 by reading the LessWrong blog. He had applied to Oxford for the fall, and was hoping to attend. ‘‘I was feeling very worried about it,’’ he confided, ‘‘but then I goal-factored it and realized that I could get many of the same things I want from Oxford in other ways.’’

While Pace said that he had come to the workshop to practice the techniques of rationality, others had more pressing worries. During one break, I chatted with Andrew, a software developer specializing in mobile platforms who asked to be identified only by his first name to protect his privacy. Andrew acknowledged that he tended to struggle in social situations and suffered from depression and anxiety. ‘‘My brain has a lot of ridiculous social rules,’’ he told me. ‘‘I tend to be very closed off. And then there’s a switch where I’m almost completely open. It’s this binary transition.’’

Andrew said that he had initially been dubious of applied rationality, which he first heard about in a Reddit philosophy forum. Over time, though, he found that using the techniques made it easier to catch himself in the act of rationalizing a bad decision or avoiding an unpleasant task, like applying for a job. Initially, Andrew said, he assumed that he was simply afraid of rejection. But when he used aversion factoring — like goal factoring, but focused on what makes you avoid an unpleasant but important task — he made a surprising discovery. While visualizing how he would feel about applying for jobs if there were no chance of rejection, he realized that he still found the task aversive. In the end, he determined that his reluctance was rooted in a fear not of rejection but of making a bad career choice.

It was a significant insight, the kind more typically won through hours of talk therapy. And indeed, some participants reported that the techniques had genuinely changed their lives, either by helping them with mental-health issues like attention deficit or obsessive-compulsive disorder or simply by allowing them to recognize unquestioned assumptions. For a few — especially a set of high achievers for whom success hadn’t brought happiness — that process had been nearly tectonic. ‘‘For most of my life, I believed ‘If I do a good job, good things will happen,’ ’’ one person told me. ‘‘Now I ask, ‘If I do a good job, what does that mean?’ ’’

Others, though, seemed to see rationality less as a fundamental recalibration and more as a tool to be wielded. One participant, Michael Gao — who claimed that, before he turned 18, he made $10 million running a Bitcoin mine but then lost it all in the Mt. Gox collapse — seemed appalled when I suggested that the experience might have led him to value things besides accomplishment, like happiness and human connection. The problem, he clarified, was not that he had been too ambitious but that he hadn’t been ambitious enough. ‘‘I want to augment the race,’’ Gao told me earnestly, as we sat on the patio. ‘‘I want humanity to achieve great things. I want us to conquer death.’’

Given that I had already undergone a fair amount of talk therapy myself, I didn’t expect the workshop to bring me much in the way of new insights. But then, at one point, Smith cited the example of a man with a potentially cancerous mole who refuses to go see the doctor. It was part, he said, of ‘‘a broader class of mental errors’’ we’re all prone to: the belief that avoiding bad news will keep it from becoming true. While this didn’t strike me as particularly revelatory at the time, it turned out to be a stealthy insight. For an exercise the next day, I listed all the reasons I was avoiding talking with a financial planner, something I had intended to do for months. Many of them were pedestrian. Getting my financial records together would be tedious, and I was also mildly embarrassed by my income, which is on the low side. Working through the problem, though, I realized that the actual reason was humiliatingly simple: I was afraid of hearing that I needed to spend less and save more. Like mole man, I was afraid of what I might learn.

But are such realizations alone enough to create change? Fears can be stubborn and not particularly easy to argue with. When I mentioned this to Smith, he shrugged. ‘‘Hiding from the painful states of the world doesn’t prevent them from happening,’’ he said. Then, like a strict parent telling a sniffling child to shape up, he added: ‘‘The point isn’t just ‘How do I get myself to go to the doctor this time?’ It’s ‘How do I make it so that I will never be susceptible to that type of thinking error again?’ ’’

CFAR draws on the insights of behavioral economics and a growing interest in how they might be marshaled to make us happier, healthier and more fiscally responsible. For years, economists were stumped by certain consumer behaviors that seemed irrational and self-defeating, like failing to sign up for a 401K or carelessly going deep into credit-card debt. Daniel Kahneman and Amos Tversky’s prospect theory explained these quirks as a product of a seemingly inbuilt set of misperceptions, known collectively as cognitive bias.

Among other things, they found that people are typically both risk-averse and loss-averse: more likely to choose a guaranteed payout of $1,000 than to gamble on winning $1,400 when there’s a 20 percent chance they could end up with nothing. They also discovered that people tend to underestimate the chance of a low-probability event occurring, thus inadvertently exposing themselves to terrible risks. (The 2011 tsunami, for example, caught the Japanese off guard and devastated parts of northeastern Japan.)
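
(To make the first finding concrete: the gamble is worth more in expectation, $0.8 \times \$1{,}400 = \$1{,}120$ versus a sure $\$1{,}000$, yet most people take the sure thing. Prospect theory captures the loss-aversion half with an asymmetric value function, roughly $v(x) = x^{\alpha}$ for gains and $v(x) = -\lambda(-x)^{\alpha}$ for losses; Tversky and Kahneman's 1992 estimates put $\lambda$ near 2.25, meaning losses loom about twice as large as equivalent gains.)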

In the past few decades, psychologists have identified dozens of cognitive biases, including ‘‘gambler’s fallacy’’ (believing that a coin toss is more likely to come up heads if the previous five flips were tails); ‘‘anchoring’’ (the tendency to rely heavily on one piece of information — usually the first thing we learn — when making a decision); the ‘‘Ikea effect’’ (disproportionately valuing things that you’ve labored over); and ‘‘unit bias’’ (assuming that a ‘‘portion’’ is the right size, which accounts for our tendency to finish off an opened bag of cookies).
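
The gambler's fallacy, at least, is easy to check for yourself. The short simulation below (an illustration of the statistics, not anything CFAR hands out) runs a long sequence of fair coin flips and measures how often heads follows five consecutive tails; the rate stays near 50 percent, because the coin has no memory.

```python
import random

def heads_rate_after_tails_streak(flips=1_000_000, streak=5):
    """Estimate P(heads) on a flip that immediately follows `streak` tails."""
    tails_run = 0     # length of the current run of consecutive tails
    heads_after = 0   # heads seen right after a qualifying streak
    total_after = 0   # flips seen right after a qualifying streak
    for _ in range(flips):
        heads = random.random() < 0.5
        if tails_run >= streak:        # this flip follows `streak` tails in a row
            total_after += 1
            heads_after += heads
        tails_run = 0 if heads else tails_run + 1
    return heads_after / total_after

print(heads_rate_after_tails_streak())  # prints roughly 0.5
```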

More surprising was the degree to which these biases turned out to drive our behavior, in ways both quotidian (what we choose to buy) and dire (the mortgage collapse that led to the 2008 financial crisis). Since then, a welter of strategies has emerged for exploiting these same mechanisms to spur better long-term choices; some of these are already influencing public policy and public health. Governments have begun encouraging companies, for example, to make enrollment in an I.R.A. the default choice, rather than requiring people to opt in, or asking supermarkets not to put racks of candy right near the registers. Last year, President Obama established a Social and Behavioral Sciences Team at the White House; based on its findings, he recently ordered federal agencies to use behavioral-economics strategies to improve participation in their programs.

What makes CFAR novel is its effort to use those same principles to fix personal problems: to break frustrating habits, recognize self-defeating cycles and relentlessly interrogate our own wishful inclinations and avoidant instincts. Galef described ‘‘propagating urges’’ — a mental exercise designed to make long-term goals feel more viscerally rewarding — as an extension of operant conditioning, in which an experimenter who hopes to increase a certain behavior in an animal will reward incremental steps toward that behavior. Goal factoring and aversion factoring, she added, came out of behavioral economics, as well as research on a cognitive bias known as ‘‘introspection illusion’’: thinking we understand our motives or fears when we actually don’t. (That illusion is why the factoring process begins with listing every reason you’re either avoiding something or pursuing a goal, and then uses a second round of thought experiments to ferret out hidden factors.)

Figuring out how to translate behavioral-economics insights into a curriculum involved years of trial and error. Salamon recruited Galef, a former science journalist, in 2011, and later hired Smith, then a graduate student in math education at San Diego State. (Smith first met Yudkowsky at a conference dedicated to cryonics, in which a deceased person’s body is stored in a supercooled vat, to be resuscitated in a more advanced future.) In early 2012, the group began offering free classes to test its approach and quickly learned that almost none of it worked. Participants complained that the lectures were abstract and confusing and that some points seemed obvious while others simply felt wrong. A session on Bayes’s Theorem was especially unpopular, Salamon recalled, adding, ‘‘People visibly suffered through it.’’
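
(For reference: Bayes’s Theorem is the standard rule for updating belief in a hypothesis $H$ given evidence $E$, $P(H \mid E) = P(E \mid H)\,P(H)/P(E)$. The formula is central to LessWrong-style rationality, whatever form the lecture took.)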

The group also discovered a deeper problem: No one was very motivated to make his or her thinking more accurate. What people did want, Salamon recalled, was help with their personal problems. Some were constantly late to things. Others felt trapped by their own unproductive habits. Nearly everyone wanted help managing their email, eating better and improving their relationships. ‘‘Relatively early on,’’ Salamon said, ‘‘we realized that we had to disguise the epistemic rationality content as productivity advice or relationship advice.’’

In the end, the group built a curriculum largely from existing research into human behavior, but the goal of Applied Rationality remained the same: to provide tools, not advice. ‘‘Unlike a lot of self-help programs, we don’t advocate particular things that people should do,’’ Galef told me. ‘‘We just encourage them to look at the models that are driving their choices and try to examine those models rationally.’’ She shrugged. ‘‘People are already making predictions, whether or not they’re aware of it. They’re already saying ‘I’ll be miserable if I leave this relationship’ or ‘I won’t be able to make any difference in this big global problem because I’m just one person.’ So a lot of what we do is just trying to make people more aware of those predictions and to question whether they’re actually accurate.’’

At the San Leandro workshop, that approach seemed to have paid off. Participants sat raptly through the lectures, despite the intense pace: 80-minute sessions, held back to back, for nine hours, with additional sessions after dinner. Galef later said that this immersive structure was deliberate — a way to ‘‘accelerate the absorption of unfamiliar concepts’’ — but I found it overwhelming. There were sessions on developing an ‘‘inner simulator’’ to help visualize the possible outcome of a decision; one on ‘‘focused grit,’’ in which participants had to brainstorm solutions to seemingly intractable personal problems within a five-minute time limit; another on ‘‘trigger-action planning,’’ which used associative cues, or TAPs, to spur the development of productive habits, like ‘‘The minute I walk through my front door, I will change into my gym clothes.’’

The TAPs session was led by Salamon, a thin, muppety woman with a corona of brown hair. As a graduate student, Salamon studied the philosophy of science, and her lectures often seemed to take the wry view of humans as only marginally more evolved chimps. Involuntary TAPs already drive much of our behavior, she said — ‘‘Like: See open bag of Cheetos. Put in hand’’ — but could also be made intentional and productive.

My partner for the TAPs exercise, a soft-spoken engineer who works at Google, told me that he had tried TAPs before, but with limited success. ‘‘For a while, I was trying to drink more water, so I set up a TAP to drink a glass of water the minute I got to work,’’ he said. ‘‘It worked for a few weeks, but then I stopped. I started just wanting to get to work.’’ Now, he said, he was considering changing his TAP, to cue himself to drink water when he wanted a break. ‘‘Maybe it’ll help me stop reading Reddit,’’ he added.

If TAPs felt slightly gimmicky, like rat-maze training for adults, other techniques seemed more profound. My favorite was propagating urges, the one that focused on motivating yourself to reach long-term goals. What makes things like weight loss or learning to play the violin difficult, Salamon said, is that they often conflict with System One-driven urges (wanting a nap, craving a cookie). And because long-term goals typically require sticking it out through a series of unpleasant intermediate steps (eating less, practicing the violin), it can be easy to lose the original motivation. ‘‘Things we feel rewarded by, we do automatically,’’ Salamon added. ‘‘When I want Thai food, I’ll drive there, look at the menu, go inside, order. I don’t have to convince myself to take those steps. But in other cases, the connection is lost.’’

The solution, she said, is finding a way to make long-term goals feel more like short-term urges, especially because our brains are wired to associate actions and rewards that follow closely in time. (To discourage bad habits, conversely, you should stretch out the time between an action and its reward. ‘‘If you want to stop reading stuff online instead of working, have the pages load more slowly,’’ Salamon advised.) Because of this powerful association, small but immediate negative experiences can have disproportionate impact: the aversive moment of getting into a cold swimming pool can overwhelm the delayed rewards of doing morning laps. To override that resistance, she said, you need to associate the activity with a powerful feeling of reward, one with a stronger neurochemical kick than the virtuous goals (‘‘being healthier’’) that we normally aspire to. The next step is to come up with a mental image that vividly captures that feeling and that you can summon in moments of weakness. ‘‘It has to be a very sticky image,’’ Salamon said. ‘‘If it isn’t, you won’t experience that gut-level surge of motivation.’’ She told a story about how Smith overcame his aversion to doing push-ups, which made him feel unpleasantly hot and sweaty, by tapping into his obsession with longevity: now he pictured the heat from the exercise as a fire that burned away cell-damaging free radicals.

What made propagating urges so compelling, at least for me, was that it cut to the heart of a fundamental internal struggle: the clash between the shortsighted impulses that drive our daily behavior (checking email until it becomes ‘‘too late’’ to go to the gym) and the long-term aspirations that might make us genuinely happier if we could only persuade the petulant toddler in our minds to get onboard.

In a practice session, I paired up with Brian Raszap, a programmer at Amazon with a gentle smile and empathic manner. The aim of the exercise was to troubleshoot a long-term goal that we had each been struggling with and then create a new, sticky image to use as motivation. Raszap went first. He explained that he and some co-workers go to a Brazilian jujitsu class during lunch, usually once or twice a week. ‘‘When I go, I love it,’’ Raszap told me. ‘‘I feel so good. But half the time, I don’t go.’’

We talked through the problem for a while, then I asked Raszap to describe the feeling that he got from the class. He brightened. After working out, he told me, he is very relaxed, filled with a deeply pleasurable lassitude. When I asked if he could tap into that for motivation, Raszap nodded. ‘‘Maybe that would work,’’ he said. ‘‘I usually think about wanting to get better at jujitsu. But maybe instead, I can think about feeling really good this afternoon.’’

Many of CFAR’s techniques resemble a kind of self-directed version of psychotherapy’s holy trinity: learning to notice behaviors and assumptions that we’re often barely conscious of; feeling around to understand the roots of those behaviors; and then using those insights to create change. But there was something unsettling about how CFAR focused on superficial fixes while overlooking potentially deeper issues. While talking with Raszap, I began by asking why, if he truly wanted to go, he often skipped the jujitsu class. Raszap listed practical obstacles: Sometimes he doesn’t want the interruption; sometimes he just has a lot to do. But he also said that even the idea of attending the class more regularly makes him feel anxious. ‘‘It’s a feeling of not doing enough,’’ Raszap told me. Perversely, the workout only heightened his fear of failing, of missing the next class. This was coupled with a claustrophobic sense of obligation, what Raszap called ‘‘a fear of foreverness’’ — ‘‘Like, if I go today, I’ll have to keep going forever.’’

When I told Raszap that these last anxieties sounded like the sort of thing that might benefit more from psychotherapy than from behavior-modification techniques, he agreed. ‘‘I do have a good therapist, and we do talk about this,’’ he told me. ‘‘But it’s a different approach. Therapy is more about grand life narratives. Applied rationality is more practical, like, ‘What if you went to jujitsu in the evening, rather than at lunch?’ ’’

Yet applied rationality doesn’t typically acknowledge this gap. Proponents of rationality tend to talk about the brain as a kind of second-rate computer, jammed full of old legacy software but possible to reprogram if you can master the code. The reality, though, is almost certainly more complex. We often can’t see our biggest blind spots clearly or recognize their influence without outside help.

Several weeks after the workshop, I asked Salamon whether CFAR was intended to be a kind of D.I.Y. therapy, because that seemed to be how some participants were using it. She demurred, saying that the instructors have occasionally recommended counseling to participants who exhibit truly alarming behaviors and beliefs. But she considered therapy-grade problems to be relatively rare. ‘‘Ninety percent of the time, when people aren’t remembering to fill out their expense forms, there’s nothing deep there,’’ Salamon said. Even when a participant does have a deep-seated issue, she added, the techniques can still be effective. ‘‘You just have to give things a bit more space,’’ she said. ‘‘And not expect that they’ll yield to hacks.’’

Shortly before the CoZE exercise began on Saturday, I skipped the group dinner to hide in my room. After two days in Rationality House, I was feeling strung out, overwhelmed by the relentless interaction and confounded by the workshop’s obfuscatory jargon. ‘‘Garfield errors’’ were shorthand for taking the wrong steps to achieve a goal, based on a story about an aspiring comedian who practiced his craft by watching Garfield cartoons. ‘‘Hamming problems’’ signified particularly knotty or deep issues. (The name was a reference, Salamon explained, to the Bell Labs mathematician Richard Hamming, who was known for ambushing his peers by asking what the most important problem in their field was and why they weren’t working on it.)

And while some exercises seemed useful, other parts of the workshop — the lack of privacy or downtime, the groupthink, the subtle insistence that behaving otherwise was both irrational and an affront to ‘‘science’’ — felt creepy, even cultish. In the days before the workshop, I repeatedly asked whether I could sleep at home, because I lived just a 15-minute drive away. Galef was emphatic that I should not. ‘‘People really get much more out of the workshop when they stay on-site,’’ she wrote. ‘‘This is a strong trend ... and the size of the effect is quite marked.’’

As it turns out, I wasn’t the only one to find the workshop disorienting. One afternoon, I sat on the front steps with Richard Hua, a programmer at Microsoft who was also new to CFAR. Since the workshop began, Hua told me, he had sensed ‘‘a lot of interesting manipulation going on.’’

‘‘There’s something about being in there that feels hypnotic to me,’’ he added. ‘‘I wouldn’t say it’s a social pressure, exactly, but you kind of feel obliged to think like the people around you.’’ Another woman, who recently left her software job in Portland, Ore., to volunteer with CFAR, said her commitment to rationality had already led to difficulties with her family and friends. (When she mentioned this, Smith proposed that she make new friends — ones from the rationalist community.)

But there was also the fact that the vibe was just a little strange, what with the underlying interest in polyamory and cryonics, along with the widespread concern that the apocalypse, in the form of a civilization-destroying artificial intelligence, was imminent. When I asked why a group of rationalists would disproportionately share such views, people tended to cite the mind-expanding powers of rational thought. ‘‘This community is much more open to actually evaluating weird ideas,’’ Andrew told me. ‘‘They’re willing to put in the effort to explore the question, rather than saying: ‘Oh, this is outside my window. Bye.’ ’’ But the real reason, many acknowledged, was CFAR’s connection to Yudkowsky. Compulsive and rather grandiose, Yudkowsky is known for proclaiming the imminence of the A.I. apocalypse (‘‘I wouldn’t be surprised if tomorrow was the Final Dawn, the last sunrise before the earth and sun are reshaped into computing elements’’) and his own role as savior (‘‘I think my efforts could spell the difference between life and death for most of humanity’’).

When I asked Galef and Smith whether they worried that the group’s association with Yudkowsky might be off-putting, they seemed genuinely mystified. Galef said the group designed its own curriculum, without consulting Yudkowsky, and also worked hard to remain ‘‘value neutral,’’ emphasizing the techniques of rational thought rather than focusing on MIRI. Smith was more direct. Yudkowsky, he said, is ‘‘entangled in our origins.’’ Then he shrugged. Newton was a jerk, he pointed out, ‘‘but that doesn’t affect physics.’’

As the workshop drew to a close, the fear of falling back into old mental habits seemed to haunt participants. ‘‘I think that if I actually did these things, my life would be measurably better,’’ Hua told me. ‘‘But I can already predict that I’m going to slack off after the workshop ends. There’s a very big mental load around tackling these problems.’’

To keep people on track, CFAR holds online practice sessions for 10 weeks after a workshop and also assigns ‘‘accountability buddies’’ to encourage participation. The center is debating whether to develop an online version of its workshops that anyone can access. At the same time, it is also considering whether it would be ‘‘higher impact’’ to focus on teaching rationality to a small group of influential people, like policy makers, scientists and tech titans. ‘‘When I think about the things that have caused human society to advance, many of them seem to stem from new and better ways of thinking,’’ Galef added. ‘‘And while the self-help function of the workshops is great, I wouldn’t be devoting my life to this if that was all that I thought we were doing.’’

I hadn’t planned to practice the techniques myself, but in the weeks after the workshop ended, I found myself using them often. I began to notice when I was avoiding work — ‘‘finishing’’ a section of the newspaper (unit bias!) or doing other unproductive foot-dragging — and then rationalizing the lost time as mental ‘‘preparation.’’ I also found myself experimenting more and noting the results: working in a library rather than a coffee shop (more effective); signing up and paying for spin classes in advance (ditto); going to a museum on the weekend rather than doing something outdoors (so-so). Against all odds, the workshop had cracked open a mental window: Instead of merely muddling through, I began to consider how my habits might be changed. And while it was hard to tell whether this shift was because of the techniques themselves or simply because I had spent four days focusing intensely on those habits, the effect was the same. Instead of feeling stuck in familiar ruts, I felt productive, open and willing to try new things. I even felt a bit happier.

When I emailed some of the other participants, most reported a similar experience. Mike Plotz, the juggler turned coder, told me that he had recently done ‘‘a flurry of goal-factoring.’’ Among other things, he wanted to understand why he spent so much time checking Facebook every morning before work. Plotz said that he knew the Facebook habit wasn’t helping him and that he often ended up running late and feeling harried. After goal-factoring the problem, Plotz said, he realized that what he really wanted was autonomy: the feeling of being able to choose what he did each morning. Now, he said, rather than passively resisting work through Facebook, he gets up an hour earlier and does whatever he wants. ‘‘This morning I got up, made coffee and listened to ‘Moby-Dick,’ ’’ Plotz said when we spoke. ‘‘So I’d say that, so far, it’s going well.’’

I asked Plotz if he could tell whether the changes he made were due to the applied-rationality techniques or simply the product of a more active, problem-solving mind-set. ‘‘In some ways, I think the techniques are that: a way to kick you into a more productive state of mind,’’ he told me. But he also noted that they supplied a framework, a strategy for working through the questions that such a mind-set might raise. ‘‘It’s one thing to notice your thoughts and behaviors,’’ Plotz said. ‘‘Turning that into a technique that actually lets you accomplish stuff? That’s hard.’’
Has anyone followed Less Wrong? Read Yudkowsky's writings? It's quite interesting to see his school of human rationality being examined by the mainstream media.
Ziggy Stardust
Sith Devotee
Posts: 3114
Joined: 2006-09-10 10:16pm
Location: Research Triangle, NC

Re: "The Happiness Code" A new approach to self-improvement is taking off in Silicon Valley: cold, hard rationality.

Post by Ziggy Stardust »

So telling people that procrastinating is bad counts as a "school of human rationality", now?

I don't particularly feel like researching this guy and institution in more detail, but from the article it just sounds like a bunch of common sense precepts mixed with pop-sci psychology being sold for almost $4,000 a pop. I mean, like any self-help thing there will always be people that can benefit from that (because people are, by and large, really fucking stupid, even when they're well-educated), but it still reeks of a money-grabbing scheme more than anything else.
K. A. Pital
Glamorous Commie
Posts: 20813
Joined: 2003-02-26 11:39am
Location: Elysium

Re: "The Happiness Code" A new approach to self-improvement is taking off in Silicon Valley: cold, hard rationality.

Post by K. A. Pital »

Ziggy Stardust wrote:So telling people that procrastinating is bad counts as a "school of human rationality", now?
$4k for that? :lol: They would be better served by a life situation where there is no time for procrastination left.

As for LessWrong, I have no idea where Eliezer stands personally, but his flock seems to be a bunch of misogynistic racist shithead losers who can't get laid, otherwise known as "Dark Enlightenment". :lol:
http://techcrunch.com/2013/11/22/geeks-for-monarchy/

I don't want to play Captain Obvious, but it looks like - and therefore quite likely is - just another way of killing time for rich yuppie motherfuckers. Fuck them.
There, there are churches, rubble, mosques and police stations; there, borders, unaffordable prices and cold quips
There, swamps, threats, snipers with rifles, papers, nighttime queues and undocumented migrants
Here, encounters, struggles, synchronized steps, colors, unauthorized gatherings,
Migratory birds, networks, information, squares for everyone, crazy with passion...

...Tranquility is important, but freedom is everything!
Assalti Frontali
Terralthra
Requiescat in Pace
Posts: 4741
Joined: 2007-10-05 09:55pm
Location: San Francisco, California, United States

Re: "The Happiness Code" A new approach to self-improvement is taking off in Silicon Valley: cold, hard rationality.

Post by Terralthra »

LessWrong is pretty explicitly anti-reactionary. The #2 highest-karma poster on LW is the author of this, after all.
Adam Reynolds
Jedi Council Member
Posts: 2354
Joined: 2004-03-27 04:51am

Re: "The Happiness Code" A new approach to self-improvement is taking off in Silicon Valley: cold, hard rationality.

Post by Adam Reynolds »

Julia Galef did give a somewhat interesting talk at Skepticon about the idea of The Straw Vulcan and the false portrayal of rationality via Spock, but I don't see myself ever paying $4,000 for a session of seeing their ideas presented when it really is a mix of obvious ideas and popularized behavioral economics. Just like almost every other field, real behavioral economics is extremely complex, and distilling the ideas down to a level at which they can be understood in an afternoon is problematic.

On some level I wonder if this is any different than fire walking or quantum happiness (Deepak Chopra's version) for a different audience: disguising self-help sessions with the trappings of rationality and behavioral economics. While the idea of encouraging rationality is commendable, I suspect it will be impossible, especially since no rational person would spend $4,000 for a session when they could get the same information from a $15 book. Even though almost all of the dozens of self-help books one could find are mostly bullshit, at least they are cheap bullshit that makes one feel better while commuting to an overpaid job so as to afford an overpriced Bay Area house.
K. A. Pital wrote:I don't want to play Captain Obvious, but it looks like - and therefore quite likely is - just another way of killing time for rich yuppie motherfuckers. Fuck them.
The problem with self-help crap in America is that Americans are increasingly realizing that they aren't attaining "The American Dream" and are not willing to recognize that there are deep structural problems that are not being addressed.

Though these are also likely the same people that spent $70 on a car wash during the California drought because it used plant-based cleaning products and thus avoided wasting water. Meanwhile, a normal car wash down the street cost $10 and recycled its water anyway. Wonderfully rational.
Simon_Jester
Emperor's Hand
Posts: 30165
Joined: 2009-05-23 07:29pm

Re: "The Happiness Code" A new approach to self-improvement is taking off in Silicon Valley: cold, hard rationality.

Post by Simon_Jester »

Ziggy Stardust wrote:So telling people that procrastinating is bad counts as a "school of human rationality", now?
Trying to figure out how to stop procrastinating is a difficult enough exercise that being a very rational person might be a requirement.

I wouldn't pay four thousand dollars for something like this even if I had four thousand spare dollars kicking around, but I know there are a lot of people who are profoundly frustrated by the persistent bad choices in their lives.

Four thousand dollars is a lot of money, but for some people who feel like they've lost control of their lives due to "grr, stupid brain!" moments, it might well be worth it just to step back and take a clear look at what is going on to make their brain so 'stupid' and how they might combat it.
Ziggy Stardust wrote:I don't particularly feel like researching this guy and institution in more detail, but from the article it just sounds like a bunch of common sense precepts mixed with pop-sci psychology being sold for almost $4,000 a pop. I mean, like any self-help thing there will always be people that can benefit from that (because people are, by and large, really fucking stupid, even when they're well-educated), but it still reeks of a money-grabbing scheme more than anything else.
I actually wouldn't be surprised if it is to a large extent a money-grabbing scheme, although having seen some of the people behind it talking, I also suspect they sincerely think they're giving good value for the money. I also suspect they sincerely think they're spending the money for a good cause, namely preventing the robot uprising, which is at least more of a threat than Xenu disintegrating our brains or whatever.

Also, I'll say this. If an adult wants actual training in the logic of good decision-making, they'd probably have to take at least a couple of college courses to get what they need, and $4,000 is on the same order of magnitude as what those courses would cost. It's a lot more expensive than reading a few dozen books... but most people who read books don't learn much from them.

If they can teach what they purport to teach, then they're only overcharging in the sense that all adult education is exorbitantly overpriced.
K. A. Pital wrote:$4k for that? :lol: They would be better served by a life situation where there is no time for procrastination left...
I would think it desirable, if we could, you know, fix the world rather than drowning everyone in the shit?

I've read enough of the website on which the people behind this air their ideas that I think the narrative goes something like this...

One starts out frustrated by how stupid people are, by how we manage to systematically fail to create an efficient technological paradise for all, even in nations that explicitly set out with this very goal in mind.* Time and time again, we make the same dumb mistakes, we ignore things we ought to pay attention to, we fall into traps, we pledge allegiance to sheer blinding ignorance and folly.

One thinks "maybe, the world can be fixed, but only if people stop being so stupid."

The question then is "how do we teach people to stop being so stupid? We are but a handful, and the world is vast!"

This is the evolution of their attempt to answer that question.
_______________________________

*E.g. the USSR; that was supposed to be paradise for the people, not just the elite, dammit! That was an explicit design goal! Didn't happen. And whatever was good about it, by the end of the 1980s the economy was so bankrupt that the country faced a choice between total economic collapse and being held together by pure military force; the government didn't take the pure-force road, and the economy collapsed as a result, with the government itself disintegrating in the process. Why? What went wrong? Imagine someone who never personally experienced the USSR trying to ask themselves this question, and asking it without stopping at a stupid childish answer like "Communism sucks, capitalism is awesome!"
K. A. Pital wrote:As for LessWrong, I have no idea where Eliezer stands personally, but his flock seems to be a bunch of misogynistic racist shithead losers who can't get laid, otherwise known as "Dark Enlightenment". :lol:
http://techcrunch.com/2013/11/22/geeks-for-monarchy/
Uh... no. I think you jumped to a few more conclusions than you should have, there.

Adam Reynolds wrote:Julia Galef did give a somewhat interesting talk at Skepticon about the idea of The Straw Vulcan and the false portrayal of rationality via Spock...
Eh, to be quite honest, Spock as Spock was portrayed pretty damn well. He was nearly always right or nearly right about the situation, he kept his head while others around him were freaking out, and his advice and reasoning were almost always helpful. The list of problems where you would think "you know, Spock being here would help" was a lot longer than the list of problems where you would think "you know, Spock would be unhelpful in this situation."

The problem is then how everyone else took the idea of a highly logical character, and turned it into parodies that were less functional, often a vehicle for pre-existing stereotypes against 'nerds' and people who 'think too much.'

Contrast Spock (who is actually logical) to, oh, Sheldon Cooper from The Big Bang Theory. Sheldon is very intelligent, but he's a big walking bag of neuroses and there are entire categories of factual information (such as human behavior) where he exhibits tremendous, willful ignorance.

Spock is logical. Sheldon is parodic-logical. And consequently, Spock is generally a helpful guy you want around, while Sheldon often isn't.
Adam Reynolds wrote:but I don't see myself ever paying $4,000 for a session of seeing their ideas presented when it really is a mix of obvious ideas and popularized behavioral economics. Just like almost every other field, real behavioral economics is extremely complex, and distilling the ideas down to a level at which they can be understood in an afternoon is problematic.
See, that's the problem that I think these guys are at least trying to solve, even if they're overcharging for their perceived solution.

We KNOW how not to be stupid people. We have massive libraries full of philosophy and science about how to make good decisions. And yet, as a practical matter... the overwhelming majority of this information is simply not being used. We know so much, but 90% or more of us don't use what the human race already knows, and most of the remaining minority only use it occasionally, and we all suffer as a result.
Adam Reynolds wrote:On some level I wonder if this is any different than fire walking or quantum happiness (Deepak Chopra's version) for a different audience: disguising self-help sessions with the trappings of rationality and behavioral economics. While the idea of encouraging rationality is commendable, I suspect it will be impossible, especially since no rational person would spend $4,000 for a session when they could get the same information from a $15 book.
This, this last sentence, is probably the crux of the problem. :D

Although, again, the thesis of these guys seems to be that there are a lot of people who aspire to be sensible in their daily lives and who are not. That their problem is not blocked-up emotional mojo or a crisis of confidence or some such; it's that they act in ways that are not reasonable, not logical, not sensible. And that spending some time just bathing in a basic study of how sense and logic can affect your decision-making might, in that situation, help.

Remember the part of the article with the analogy of the monkey riding an elephant. The part of your brain that actually does sensible things is like a monkey riding an elephant-sized chunk of Stone Age biases. That part, I think, is probably true.

Their target audience seems to be people whose monkey is frustrated with inability to control their elephant, and with failure to understand their elephant...

And who have four thousand dollars burning a hole in their pocket. ;)
K. A. Pital wrote:I don't want to play Captain Obvious, but it looks like - and therefore quite likely is - just another way of killing time for rich yuppie motherfuckers. Fuck them.
Adam Reynolds wrote:The problem with self-help crap in America is that Americans are increasingly realizing that they aren't attaining "The American Dream" and are not willing to recognize that there are deep structural problems that are not being addressed.

Though these are also likely the same people that spent $70 on a car wash during the California drought because it used plant-based cleaning products and thus avoided wasting water. Meanwhile, a normal car wash down the street cost $10 and recycled its water anyway. Wonderfully rational.
So naturally, someone decides to market a four thousand dollar course on how to stop spending money on such dumb things!

I find it hilarious, because while it actually makes sense on some level, the result is worryingly similar to any other New Age self-help thing. Although that raises the question...

If some individual really DID stumble upon the true secrets of human enlightenment, truths that we could use to bring about a Golden Age... Exactly how would they go about promulgating those truths? They might be forced into a model like this almost by default. It's not like they could go to CNN and convince them to give air time to spread these ideas to the masses or anything.
This space dedicated to Vasily Arkhipov
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs

Re: "The Happiness Code" A new approach to self-improvement is taking off in Silicon Valley: cold, hard rationality.

Post by Starglider »

$4K is not a lot if you are in the phase of your life where you have started earning a Silicon Valley salary but haven't yet built up any serious commitments, e.g. a mortgage or family. I mean, it's a two-week overseas holiday. Not a big deal for the target market. Which is not to say I personally would spend any money, or more importantly time, on it, but I'm sure it's helpful for some individuals. The use of formal-logic terminology for pop-sci things is vaguely annoying, in that anything that is practical for humans to do on a day-to-day basis is not very close to the actual logic, but I guess as long as they're teaching people to overcome problematic cognitive biases, it's OK.

As for the 'dark enlightenment' guys, I have not been paying close attention to this community of late, but from what I gather they're a small and fairly insignificant minority. They just get a disproportionate amount of press coverage (i.e. any at all) because they're a great source of liberal-trolling one-liners.
User avatar
K. A. Pital
Glamorous Commie
Posts: 20813
Joined: 2003-02-26 11:39am
Location: Elysium

Re: "The Happiness Code" A new approach to self-improvement is taking off in Silicon Valley: cold, hard rationality.

Post by K. A. Pital »

Starglider wrote:$4K is not a lot if you are in the phase of your life where you have started earning a Silicon Valley salary but haven't yet built up any serious commitments, e.g. mortgage or family.
Usually $4K is either 100% or 50% of a person's savings (First Worlders suck at saving and have ridiculous savings norms, as we found out already). Ergo, it is for the rich only.
I mean, it's a two-week overseas holiday.
Surely you are kidding? A month of well-planned overseas holiday (any destination) for a normal person is $2k, unless he or she has kids. That is half your figure, for twice the time.
There, there are churches, rubble, mosques and police stations; there, borders, unaffordable prices and cold quips
There, swamps, threats, snipers with rifles, documents, night queues and clandestine migrants
Here, meetings, struggles, synchronized steps, colours, unauthorized huddles,
Migratory birds, networks, information, squares full of everyone's likes, mad with passions...

...Tranquility is important, but freedom is everything!
Assalti Frontali
Grumman
Jedi Council Member
Posts: 2488
Joined: 2011-12-10 09:13am

Re: "The Happiness Code" A new approach to self-improvement is taking off in Silicon Valley: cold, hard rationality.

Post by Grumman »

It's also worth remembering it's only $4k if it works. If it doesn't work it's $4k this month, $4k next month for some other self-help guru, $4k the month after that....
User avatar
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs
Contact:

Re: "The Happiness Code" A new approach to self-improvement is taking off in Silicon Valley: cold, hard rationality.

Post by Starglider »

K. A. Pital wrote:Usually $4K is either 100% or 50% of a person's savings (First Worlders suck at saving and have ridiculous savings norms, as we found out already). Ergo, it is for the rich only.
The average software engineer's salary in Silicon Valley is $134K. The younger end of that group is representative of the target market (and in fact probably is a significant chunk of the target market). This is pretty much in the impulse-purchase category for plenty of people I know in that 'upper middle class income but no family commitments yet' bracket. I mean, these are the guys who donate $5K to a gaming Kickstarter because they were feeling particularly geeky that month.
I mean, it's a two-week overseas holiday.
Surely you are kidding? A month of well-planned overseas holiday (any destination) for a normal person is $2k, unless he or she has kids. That is half your figure, for twice the time.
I don't mean 'the cost of Stas humblebragging his frugalisms'; I mean the cost that middle-class California residents pay for their typical four-point-five-star European or Asian getaways. If they are middle-middle-class rather than upper-middle-class they might have to finance such outlays, but that is fully normalised, acceptable and even expected behaviour in modern America.

Anyway, the point is that they are not expecting cult-like devotion and the sacrifice of all savings for their pep talk. The target audience is people for whom $4K is a month or two's disposable income. In the local context this is 'well off' but not exactly 'rich'; Silicon Valley has plenty of actually rich people earning seven figures, for whom a sports car or a $100K charity donation is an impulse decision.
User avatar
K. A. Pital
Glamorous Commie
Posts: 20813
Joined: 2003-02-26 11:39am
Location: Elysium

Re: "The Happiness Code" A new approach to self-improvement is taking off in Silicon Valley: cold, hard rationality.

Post by K. A. Pital »

$134k is quite rich, as it implies over $10k a month (so $4k is less than a month of work). I don't exactly see where I made an error by calling it just another paid time-killing service for rich yuppies. People who donate $5k to Kickstarter game campaigns very likely are empathy-lacking sociopaths, too, in which case no amount of counseling (even very expensive) is gonna help them actually feel empathy; the most they can hope for is the behaviour of a highly active sociopath who mimics empathy because he was taught that is the way people behave. I think that distinguishing between the 1% and the 0.1% is also not very important in that particular case. They are all in the ultra-rich or at least very rich bracket by the standards of the world. The target demographic is clearly the very rich and the rich. Working-class people also do not "typically" choose five-star getaways; only spoilt rich people do.

I will get to Simon's idea that these are just well-meaning dudes who think they can "teach" the world logic a bit later.
There, there are churches, rubble, mosques and police stations; there, borders, unaffordable prices and cold quips
There, swamps, threats, snipers with rifles, documents, night queues and clandestine migrants
Here, meetings, struggles, synchronized steps, colours, unauthorized huddles,
Migratory birds, networks, information, squares full of everyone's likes, mad with passions...

...Tranquility is important, but freedom is everything!
Assalti Frontali
Channel72
Jedi Council Member
Posts: 2068
Joined: 2010-02-03 05:28pm
Location: New York

Re: "The Happiness Code" A new approach to self-improvement is taking off in Silicon Valley: cold, hard rationality.

Post by Channel72 »

Software engineers in the US make on average around $90K-$100K, depending on location, but many who specialize in certain niches can make anywhere from $150K to $200K per year. Software engineers working in the financial sector in particular can make upwards of $300K, depending on various factors.

$4K is a lot by the world's standards, but people living in San Francisco/Silicon Valley Bay Area likely don't see it that way. I wouldn't say that throwing around money at useless shit makes them sociopathic, just oblivious.

Regardless, I likely wouldn't pay $4.00 for a self-help book. You can read all the Less-Wrong stuff online if you care to. I don't really know much about this group other than that they're associated with AI (I've used many AI libraries but have never seen any software actually contributed by "Future of Humanity Institute" - do these guys actually write any code, or do they just spend all day blogging??). But I also don't know anything about this "dark enlightenment" thing... the only thing I recall reading on Less-Wrong was a layman's introduction to Bayesian logic and some articles about selection bias or other cognitive biases, which I thought were pretty decent.
Simon_Jester
Emperor's Hand
Posts: 30165
Joined: 2009-05-23 07:29pm

Re: "The Happiness Code" A new approach to self-improvement is taking off in Silicon Valley: cold, hard rationality.

Post by Simon_Jester »

K. A. Pital wrote:$134k is quite rich, as it implies over $10k a month (so $4k is less than a month of work). I don't exactly see where I made an error by calling it just another paid time-killing service for rich yuppies. People who donate $5k to Kickstarter game campaigns very likely are empathy-lacking sociopaths, too...
Do you know, the guy who's chiefly behind this whole group has actually complained about that very thing on his unreadably-hypertextified website... :D

Specifically, he has observed/complained that people spend their money in ways that don't make sense from any utilitarian point of view.

The philosophy underlying your constant scorn of everyone who makes more than, oh, $30,000 a year or whatever, and yet does not spend all the surplus money on... some vaguely defined something other than personal self-gratification... That is an extreme form of the utilitarian position that every cent of your money should be spent on what objectively maximizes utility for the world.

Except, perhaps unsurprisingly given that he is able to convince people to give him money and you (so far as I know) are not, he has a good enough grasp of human nature to make an observation you have not.

People who do this thing you despise... they are not broken.

They are in fact working pretty much the way they evolved to work, in an environment very different from the one they evolved to work in. They're often quite generous to those personally close to them, then they spend their resources sustaining their own existence, with the (considerable) perceived surplus going to whatever trips their "this is a good idea" impulses on a given day. Which is pretty much how humans are adapted to live, except we lived in conditions of much less surplus, so that the counterproductive effect of "just spend the surplus however you feel like" was less obvious.

If people who do this are "sociopaths" who are 'incapable of empathy,' then humans are sociopaths by default, in which case there is no reason for the '-pathic' in 'sociopathic.' It would make no sense to do so, any more than it makes sense to call a cow deformed for having four legs instead of two.
in which case no amount of counseling (even very expensive) is gonna help them actually feel empathy; the most they can hope for is the behaviour of a highly active sociopath who mimics empathy because he was taught that is the way people behave. I think that distinguishing between the 1% and the 0.1% is also not very important in that particular case. They are all in the ultra-rich or at least very rich bracket by the standards of the world. The target demographic is clearly the very rich and the rich. Working-class people also do not "typically" choose five-star getaways; only spoilt rich people do.
I think you've got this interesting thing going where it is an axiom in your mind that "no one who is rich, is anything but a sociopath."

That's really not true.

Even if we were to grant the basic premise that the self-identified capitalists are sociopaths by default, which I suspect is where you started from...

There are a lot of places and times in society where a person who has empathy can in fact end up with a large income. An income that is three or even four times higher than the minimum required to thrive in a developed society, and one or two orders of magnitude higher than the minimum required to survive in an undeveloped society.

In particular, this includes people with special skills that are hard to obtain (physicians, engineers, software programmers of certain types).

Calling such people "sociopaths" and 'incapable of empathy' is objectively incorrect. They use money in ways that are not consistent with your hyper-logical and hyper-globalized perspective of how money should most efficiently be spent. But if 'empathy' is a tool they do not possess, then virtually no one on the Earth possesses it. Because virtually no one uses money the way you seem to want to demand that all middle-class people in the developed world use it.
I will get to Simon's idea that these are just well-meaning dudes who think they can "teach" the world logic a bit later.
I've been reading their blog posts on and off for five years, Stas; I'm not just making this up.
Channel72 wrote:Software engineers in the US make on average around $90K-$100K, depending on location, but many who specialize in certain niches can make anywhere from $150K to $200K per year. Software engineers working in the financial sector in particular can make upwards of $300K, depending on various factors.

$4K is a lot by the world's standards, but people living in San Francisco/Silicon Valley Bay Area likely don't see it that way. I wouldn't say that throwing around money at useless shit makes them sociopathic, just oblivious.
I would not be surprised if one of the secret "I wish they'd learn to do this" goals of the seminar operators is "I wish people were less oblivious about spending their money."

Although since they're convinced they can save the world, they've probably convinced themselves that at least they will devote the $4000 to a useful cause.

Rationality, or rationalization?
Regardless, I likely wouldn't pay $4.00 for a self-help book. You can read all the Less-Wrong stuff online if you care to.
Yes. If they're selling anything, it's socialization. There's a big psychological difference between what you learn by personally experiencing something, and what you learn by reading a book (or a massively hypertextified blog). Whether what they're selling is remotely worth the money to the people buying it... dunno. I can actually imagine the answer being 'yes' for them, but there are no guarantees.
I don't really know much about this group other than that they're associated with AI (I've used many AI libraries but have never seen any software actually contributed by "Future of Humanity Institute" - do these guys actually write any code, or do they just spend all day blogging??).
:D

Do you know, that's an interesting question that I would very much like to know the answer to. I gather that their basic thesis is that developing AIs of human or greater intelligence is folly given the current state of AI research, the reason being that we (by definition) cannot predict what such a clever machine will do and have no means of ensuring that the machine wants what we want it to want.

This leads to nightmare scenarios like a computer programmed to "make people smile" which ends up taking over the world's resources, breeding the maximum possible number of humans, and trapping us all in pens with powerful drugs and lots of little manipulator arms to make us smile.

[An oversimplification, but I'm fitting this into a few paragraphs here]

So they're not trying to write smarter AIs, and indeed would probably see that as counterproductive. If they're going to do anything it'd be very abstract, theoretical research into developing ways to make an artificial intelligence seek out the goals we would actually want it to seek.
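
To put the worry in code-shaped terms, here is a deliberately silly sketch of my own - every name and number below is invented, and it has nothing to do with their actual research - showing the gap between the objective you write down and the objective you meant:

# Toy mis-specified objective: the optimizer does exactly what we *said*
# (maximize measured smiles), not what we *meant*. All values are invented.
plans = {
    "tell better jokes":     {"smiles": 3,  "humans_ok": True},
    "drug everyone's water": {"smiles": 10, "humans_ok": False},
}

def smile_score(plan_name):
    # The objective we literally wrote down: measured smiles, nothing else.
    return plans[plan_name]["smiles"]

best = max(plans, key=smile_score)
print(best)  # "drug everyone's water" - exactly what we said, not what we meant

The humans_ok field is sitting right there, but the objective never consults it; as I understand their thesis, closing that gap is the entire research program.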
But I also don't know anything about this "dark enlightenment" thing... the only thing I recall reading on Less-Wrong was a layman's introduction to Bayesian logic and some articles about selection bias or other cognitive biases, which I thought were pretty decent.
Doubt you could find any of the "dark enlightenment" stuff except by following links from links from Less Wrong. There may be some fundamental similarities between the two crowds, like "we are just plain so much smarter than The Herd that we can afford to casually reject axioms The Herd takes for granted." But they're pretty incompatible in a lot of ways.

The Less Wrong crowd tend to be somewhat anti-political rather than prescribing a specific political organization or set of political views. Or at least that is my perception. I could be wrong.
This space dedicated to Vasily Arkhipov
User avatar
K. A. Pital
Glamorous Commie
Posts: 20813
Joined: 2003-02-26 11:39am
Location: Elysium

Re: "The Happiness Code" A new approach to self-improvement is taking off in Silicon Valley: cold, hard rationality.

Post by K. A. Pital »

Simon_Jester wrote:Specifically, he has observed/complained that people spend their money in ways that don't make sense from any utilitarian point of view.
That certainly will not be helped by a $4k self-help course that is meant to teach you how to behave outside your comfort zone. I have seen a great many people who made decisions that sucked from a long-term investment or self-development point of view, and they kept making them despite spending obscene cash on counselling.
Simon_Jester wrote:The philosophy underlying your constant scorn of everyone who makes more than, oh, $30,000 a year or whatever, and yet does not spend all the surplus money on... some vaguely defined something other than personal self-gratification... That is an extreme form of the utilitarian position that every cent of your money should be spent on what objectively maximizes utility for the world.
I only have that scorn because people who make more than $30k are supposed to deserve this somehow (in reality, everything is determined by the concentration of capital and thus even completely undeserving people can exist in this paradise world). But it does not seem like they really are smart. This is especially evident when it comes to long-term planning.
Simon_Jester wrote:Except, perhaps unsurprisingly given that he is able to convince people to give him money and you (so far as I know) are not, he has a good enough grasp of human nature to make an observation you have not.
I fully agree that the person who can make richies give out 4 thousand for essentially nothing is smart. Most people in the luxury services/goods market are smart, and have a good grasp of psychology. I mean, if I were a rich US yuppie with a corresponding background that did not include questionable anti-government activities that could easily ruin a person's life forever, I would quite likely enjoy basically scamming my less intelligent brethren by offering them feelgood services with a "science" veneer.
People who do this thing you despise... they are not broken. ... If people who do this are "sociopaths" who are 'incapable of empathy,' then humans are sociopaths by default, in which case there is no reason for the '-pathic' in 'sociopathic.' It would make no sense to do so, any more than it makes sense to call a cow deformed for having four legs instead of two.
Maybe you are also lacking empathy, which is why you consider this normal. I know I do, and that I suffer from sociopathy, but I also know this is not normal, and I try to maintain my highly integrated sociopath persona to the best of my ability. But it is true that in the modern world, knowing full well just how much even a tiny income buff can change for someone halfway across the world, people choosing to just waste it instead are, indeed, sociopathic. I know that a great many normal people around me seek a way to donate the surplus if it is generated on a constant basis, and they seek to donate it in a maximum-impact fashion. These are not highly skilled engineers who generally know a lot about the world; these are people whose erudition is severely lacking, and yet they spend effort to find a way to help others. They certainly do it before wasting a good chunk of their salary on overpriced luxury services, unless they are, indeed, rich - at which point it seems like a tiny expenditure to them.
I think you've got this interesting thing going where it is an axiom in your mind that "no one who is rich, is anything but a sociopath." That's really not true. Even if we were to grant the basic premise that the self-identified capitalists are sociopaths by default, which I suspect is where you started from... There are a lot of places and times in society where a person who has empathy can in fact end up with a large income. An income that is three or even four times higher than the minimum required to thrive in a developed society, and one or two orders of magnitude higher than the minimum required to survive in an undeveloped society. In particular, this includes people with special skills that are hard to obtain (physicians, engineers, software programmers of certain types). Calling such people "sociopaths" and 'incapable of empathy' is objectively incorrect.
I know I am a sociopath because my behaviour patterns are different from those of others around me, who are more emotional and have responses more in line with "the norm". The normal people are also more tolerant of mass-media consumption; they enjoy narratives and do not really dwell much on the facts; they experience a lot of emotion even when seeing remote things happen to remote people. I have also seen that engineers, and especially capitalists, have an even more muted emotional and empathic response. This led me to the observation that these people are indeed, perhaps due to trade or class requirements, more prone to sociopathy.
I've been reading their blog posts on and off for five years, Stas; I'm not just making this up.
I do not think you are. I just think that people who sell luxury services to a niche market are hardly something globally relevant. I also think that their goals are, given the price tag, first and foremost to make money. There are a lot of charities and social organizations that offer good counselling and further education for free or at minimal cost (especially in the First World). Some nations even have educational leave mandated by law. It is usually cheap relative to income. And these are professional services, too, with professors offering courses in psychology, politics, sociology, behavioural interaction and so on and so forth.
There, there are churches, rubble, mosques and police stations; there, borders, unaffordable prices and cold quips
There, swamps, threats, snipers with rifles, documents, night queues and clandestine migrants
Here, meetings, struggles, synchronized steps, colours, unauthorized huddles,
Migratory birds, networks, information, squares full of everyone's likes, mad with passions...

...Tranquility is important, but freedom is everything!
Assalti Frontali
Simon_Jester
Emperor's Hand
Posts: 30165
Joined: 2009-05-23 07:29pm

Re: "The Happiness Code" A new approach to self-improvement is taking off in Silicon Valley: cold, hard rationality.

Post by Simon_Jester »

K. A. Pital wrote:
Simon_Jester wrote:Specifically, he has observed/complained that people spend their money in ways that don't make sense from any utilitarian point of view.
That certainly will not be helped by a $4k self-help course that is meant to teach you how to behave outside your comfort zone. I have seen a great many people who made decisions that sucked from a long-term investment or self-development point of view, and they kept making them despite spending obscene cash on counselling.
The stated goal of the overall program is to help people overcome persistent irrational habits by teaching them to think about, and analyze, their own decision-making processes in ways most people are not accustomed to doing.

If they can achieve the stated goal, then it actually might, over the long run, help people spend money in more utilitarian-sensible ways, to a level that might even justify a four-digit expense as an opening investment. With this particular group, I believe they are actually trying to accomplish the stated goal. Whether or not they can achieve it, is another question.

The comfort-zone thing was one of numerous separate activities; I cannot speak to how it fits into the overall program. It may be something that makes sense in a planned context, or it may be them doing something irrelevant and dumb that turns out to be a waste of the $50-100 people are paying for each hour of the activity.
Simon_Jester wrote:The philosophy underlying your constant scorn of everyone who makes more than, oh, $30,000 a year or whatever, and yet does not spend all the surplus money on... some vaguely defined something other than personal self-gratification... That is an extreme form of the utilitarian position that every cent of your money should be spent on what objectively maximizes utility for the world.
I only have that scorn because people who make more than $30k are supposed to deserve this somehow (in reality, everything is determined by the concentration of capital and thus even completely undeserving people can exist in this paradise world). But it does not seem like they really are smart. This is especially evident when it comes to long-term planning.
And because of this, I question whether your notion of "smart" matches what "smart" even looks like.

I also get very frustrated by the way you use this scorn regardless of who you are speaking to, instead of reserving it for people who actually believe that only virtuous, deserving people make huge amounts of money. It would be understandable if you did it the latter way. Doing it to people who don't make that assumption just makes you tiresome and repetitive to talk to.

It also sometimes leads you to the opposite extreme: assuming that because undeserving individuals can exist in a paradise, the individuals who exist in a paradise are undeserving, whenever they happen to show character defects that would be shared by virtually everyone in the world if they and their ancestors had lived in a similar paradise for a few generations.
Simon_Jester wrote:Except, perhaps unsurprisingly given that he is able to convince people to give him money and you (so far as I know) are not, he has a good enough grasp of human nature to make an observation you have not.
I fully agree that the person who can make richies give out 4 thousand for essentially nothing is smart. Most people in the luxury services/goods market are smart, and have a good grasp of psychology. I mean, if I were a rich US yuppie with a corresponding background that did not include questionable anti-government activities that could easily ruin a person's life forever, I would quite likely enjoy basically scamming my less intelligent brethren by offering them feelgood services with a "science" veneer.
This particular insight does not have direct value when it comes to scamming people. It is, nonetheless, relevant.

I think your prejudices, however honestly you may have come by them, are making it hard for you to understand the motives and psychology of middle-class and lower-upper-class people in the developed world.

You perceive such people, and you perceive them as having "sociopathy" in the sense of some kind of sickness or insanity. And you do this when what you are actually looking at is people with a human-normal amount of empathy, who do not automatically react as an organized bloc in the most logically efficient way to address all types of human suffering.
This space dedicated to Vasily Arkhipov
User avatar
K. A. Pital
Glamorous Commie
Posts: 20813
Joined: 2003-02-26 11:39am
Location: Elysium

Re: "The Happiness Code" A new approach to self-improvement is taking off in Silicon Valley: cold, hard rationality.

Post by K. A. Pital »

I think I have made an important point about the nature of this program: it is overpriced, and its target audience is the rich. Whether or not they are true sociopaths is a tangent. Even if I am interested in discussing this, it is not the core argument.

I consider that this particular overpriced luxury service targets the rich who have a poor grasp of sound investment decisions, and that subscribing to it is itself irrational - just like many other overpriced counselling services. In some ways even open and well-exposed scams like Scientology are a similar waste of money, only on a far greater scale.

There are good public counselling services offered for free; there are free or cheap group-interaction courses. People who are intelligent enough to earn at least $4k a month should know about that. If they don't, they must be in some sort of information vacuum or a very selective information flow (which itself raises questions); and if they know but still opt for the overpriced course, they are clearly exhibiting the very irrationality they want to defeat by subscribing to it.

I hope that is clear.

As for my comments on sociopathy, you missed the key observation that there are a lot of people who act irrationally, but do so in a manner that is inconsistent with sociopathy (for example, they seek to donate a huge fraction of their money to charities even when their own livelihood does not appear fully secured, and they spend a lot of time figuring out how charities work and which countries/people the assistance would go to, etc.), and they do so when their own incomes are quite average and do not allow spending $4k lightly. So, far from being an organized bloc that acts to optimally extinguish suffering, they act with errors and mistakes, but their errors are in choosing different ways of charitable or political spending (some believe charity is counterproductive to political or industrial reform, in which case they seek to fund the latter two). Their errors are not in spending tons of money on self-development courses for themselves. Which I think separates them from people like the socially maladapted engineers who opt for this course and prefer to waste money on themselves or some silly Kickstarter campaigns.
There, there are churches, rubble, mosques and police stations; there, borders, unaffordable prices and cold quips
There, swamps, threats, snipers with rifles, documents, night queues and clandestine migrants
Here, meetings, struggles, synchronized steps, colours, unauthorized huddles,
Migratory birds, networks, information, squares full of everyone's likes, mad with passions...

...Tranquility is important, but freedom is everything!
Assalti Frontali
User avatar
K. A. Pital
Glamorous Commie
Posts: 20813
Joined: 2003-02-26 11:39am
Location: Elysium

Re: "The Happiness Code" A new approach to self-improvement is taking off in Silicon Valley: cold, hard rationality.

Post by K. A. Pital »

Truth be told, after reading more about LessWrong, I also remain quite uncertain that they are progressive or anti-reactionary.

It seems to have attracted some "polyamory" types, and it used to have neoreactionary members - I think because maladapted individuals tend to be drawn in by something that is apparently apolitical but promises intellectual elitism ("I am smarter than the dumb average!") and ways to attract women ("polyamory" that appeals to young asocial males is basically polygyny born of sexual frustration, and in the worst case crypto-misogyny).

"How I hacked myself to be polyamorous" :lol:

Indeed, my suspicions were correct. What was only a gut feeling - that this is an elitist rich-kid club - is now almost certainly true.
There, there are churches, rubble, mosques and police stations; there, borders, unaffordable prices and cold quips
There, swamps, threats, snipers with rifles, documents, night queues and clandestine migrants
Here, meetings, struggles, synchronized steps, colours, unauthorized huddles,
Migratory birds, networks, information, squares full of everyone's likes, mad with passions...

...Tranquility is important, but freedom is everything!
Assalti Frontali
Simon_Jester
Emperor's Hand
Posts: 30165
Joined: 2009-05-23 07:29pm

Re: "The Happiness Code" A new approach to self-improvement is taking off in Silicon Valley: cold, hard rationality.

Post by Simon_Jester »

I would not be surprised if the site has attracted pathetic and creepy members over the past few years; I paid more attention to them in 2010-12 or so, and was probably not noticing everything that went on there at that time either.

The actual blogger, Yudkowsky, definitely seems to have been running up his creepiness/cultist score over the past half-decade, as his blog attracted more and more attention from people more and more inclined to foolishness or to cult-like reverence of his ideas.

That would include, by the way, most of the polyamory stuff, which I never really paid attention to. Sexual freedom isn't a bad thing and there's no reason to condemn a group or movement purely because it practices said freedom, but it certainly can attract creeps and loonies.

On the one hand, a lot of what Yudkowsky originally posted on the blog stood on its own merits. It was not idiotic crap like Dianetics.

And yet, I would honestly not be surprised to see that LessWrong and its associated organizations* are devolving into something which is for all practical purposes a lot like Scientology. It's an interesting look at the dynamics of cult formation. You start with a charismatic individual who sincerely believes they're onto something**. They start talking about it, and say some interesting things, interesting to someone anyway, which may or may not be true.

Other people join the discussion. Some are capable of functioning on the same level as the original charismatic, and participate in the discussion of spiritual/intellectual affairs. Others are deeply damaged or flawed individuals who find some sort of strength or meaning or other value in the teachings of the charismatic. What happens after that is unpredictable. The charismatic may be a cynical manipulator in it for the money. Or they may go insane from the adulation of the damaged people. Or they may have been insane to begin with. Or the damaged people may somehow end up "running the asylum" after the charismatic leaves. Sometimes the cult commits mass suicide, sometimes it peters out after a lot of people waste a lot of time and money, sometimes the cult becomes a major world religion.

At the moment I'm guessing the LessWrong crowd either sees Yudkowsky go crazy (continue to go crazy?), or sees the lunatics running the asylum.

On a side note... Interesting take on the site itself here:
http://rationalwiki.org/wiki/LessWrong
_________________

*(i.e., those which are run largely by people who frequent that website)
**I honestly think Yudkowsky was onto some things, and was at least doing a decent job of explaining some other things that were not original to him.
This space dedicated to Vasily Arkhipov
ndryden
Redshirt
Posts: 9
Joined: 2013-11-09 11:47am

Re: "The Happiness Code" A new approach to self-improvement is taking off in Silicon Valley: cold, hard rationality.

Post by ndryden »

Channel72 wrote:I don't really know much about this group other than that they're associated with AI (I've used many AI libraries but have never seen any software actually contributed by "Future of Humanity Institute" - do these guys actually write any code, or do they just spend all day blogging??).
That would be because, as far as I can tell, they don't really write code, but you can take a look at their publications. The recent ones seem decent, and include a couple of top AI/ML conferences or their associated workshops (like AAAI and NIPS). The relevant papers all seem more focused on theory. The publications from MIRI follow a similar trend.

At this point, both seem like fairly typical academic groups trying to do basic research that just happen to have a lot of popular press and online presence. They're not writing software.
User avatar
K. A. Pital
Glamorous Commie
Posts: 20813
Joined: 2003-02-26 11:39am
Location: Elysium

Re: "The Happiness Code" A new approach to self-improvement is taking off in Silicon Valley: cold, hard rationality.

Post by K. A. Pital »

Simon_Jester wrote:I paid more attention to them in 2010-12 or so, and was probably not noticing everything that went on there at that time either.
I paid almost zero attention to them as a collective until now. I've read some of Yudkowsky's writings (found them through ways entirely unrelated to the community); I found his fanfiction amusing, while his non-fiction seemed a bit self-indulgent (at the time I had no idea to what extent).
Simon_Jester wrote:Sexual freedom isn't a bad thing and there's no reason to condemn a group or movement purely because it practices said freedom, but it certainly can attract creeps and loonies.
I'm condemning the lunatic component and the yuppie component. I'm definitely not saying sexual freedom is a bad thing - but cultural insulation and inherent elitism combined with such values typically only reinforce maladaptation. They can't cure it...

Even the OP itself is quite damning, if you notice that it describes peer pressure, self-imposed isolation and non-stop guided interaction that makes everyone behave "as others do" in positive terms. We rarely extend that courtesy when discussing churches or cults, after all.
There, there are churches, rubble, mosques and police stations; there, borders, unaffordable prices and cold quips
There, swamps, threats, snipers with rifles, documents, night queues and clandestine migrants
Here, meetings, struggles, synchronized steps, colours, unauthorized huddles,
Migratory birds, networks, information, squares full of everyone's likes, mad with passions...

...Tranquility is important, but freedom is everything!
Assalti Frontali
Simon_Jester
Emperor's Hand
Posts: 30165
Joined: 2009-05-23 07:29pm

Re: "The Happiness Code" A new approach to self-improvement is taking off in Silicon Valley: cold, hard rationality.

Post by Simon_Jester »

The only thing this really has going for it that a lot of cults don't is a nominally sensible starting goal: teach people to recognize their own cognitive biases and live better/smarter/more productive lives by doing so.

Aside from that, yep, cult. A bit more sympathetic to me than the average cult because I feel like they're actually... trying... to be sane-ish. But cult.
This space dedicated to Vasily Arkhipov
Channel72
Jedi Council Member
Posts: 2068
Joined: 2010-02-03 05:28pm
Location: New York

Re: "The Happiness Code" A new approach to self-improvement is taking off in Silicon Valley: cold, hard rationality.

Post by Channel72 »

ndryden wrote:At this point, both seem like fairly typical academic groups trying to do basic research that just happen to have a lot of popular press and online presence. They're not writing software.
Yeah, and that's why I don't find them very interesting. I don't see any of their ideas actually being put into use, commercially or via contributions to open source. They can blather all they want about singularities and basilisks or whatever the fuck, but they're not bringing us any closer to having an actual proto-AI to actually experiment with, and test our hypotheses against. I'd rather listen to Peter Norvig.

Meanwhile nobody is really even close to making any kind of general AI, because it's really fucking hard and nobody is sure how to synthesize all the various AI research into a single piece of software that suddenly wakes up and starts trying to take over the world or whatever. Instead we have highly-specific approximations of certain subsets of general intelligence, like "fuzzy pattern recognition" shit like classifiers (neural nets, SVMs, etc.) and various "NLP" algorithms, which are basically just straightforward probabilistic/statistical approaches. Basically AI has mostly become a Google Map-Reduce Bayesian/Big Data thing that is more like statistical analysis than anything like actual human intelligence, because nobody actually knows how to implement a cerebral cortex in Python. (But it probably involves Numpy or something.)
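
To make that concrete, here is basically the entire shape of the thing - a toy spam filter with data I just made up, though the scikit-learn calls themselves are real:

# Word counting plus Bayes' rule: statistics, not a mind.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

train_texts  = ["free money click now", "meeting at noon tomorrow",
                "win a free prize today", "lunch with the team"]
train_labels = ["spam", "ham", "spam", "ham"]

vectorizer = CountVectorizer()                 # text -> word-count vectors
X = vectorizer.fit_transform(train_texts)
model = MultinomialNB().fit(X, train_labels)   # per-class word frequencies

print(model.predict(vectorizer.transform(["free lunch tomorrow"])))

Nothing in there understands anything; it counts words and applies Bayes' rule, which is my whole point.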
Grumman
Jedi Council Member
Posts: 2488
Joined: 2011-12-10 09:13am

Re: "The Happiness Code" A new approach to self-improvement is taking off in Silicon Valley: cold, hard rationality.

Post by Grumman »

Channel72 wrote:
ndryden wrote:At this point, both seem like fairly typical academic groups trying to do basic research that just happen to have a lot of popular press and online presence. They're not writing software.
Yeah, and that's why I don't find them very interesting. I don't see any of their ideas actually being put into use, commercially or via contributions to open source. They can blather all they want about singularities and basilisks or whatever the fuck, but they're not bringing us any closer to having an actual proto-AI to actually experiment with, and test our hypotheses against. I'd rather listen to Peter Norvig.
I'd also argue that the singularities and basilisks and so on are evidence of an irrational fear of AI. A more sensible stance is to not use software without a human in the loop - either the software must be designed by humans or the software must play an advisory role to a human, or both. That's not because you're worried about unleashing a hypermalevolent AI or Simon's human happiness farm, but just for the same reason you wouldn't go eating random weeds out of your garden without making sure they're not bad for you.
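
The "advisory role" shape is trivial to sketch, by the way (toy code; recommend() is a stand-in for whatever model or heuristic you like):

def recommend(sensor_reading):
    """Stand-in for any model: it only ever *suggests* an action."""
    return "vent coolant" if sensor_reading > 100.0 else "do nothing"

def advisory_loop(sensor_reading):
    suggestion = recommend(sensor_reading)
    answer = input("Suggested action: %s. Approve? [y/N] " % suggestion)
    # Nothing is ever executed unless a human explicitly says yes.
    return suggestion if answer.strip().lower() == "y" else "no action (vetoed)"

print(advisory_loop(117.3))

However clever recommend() gets, the veto stays with the human.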
Simon_Jester
Emperor's Hand
Posts: 30165
Joined: 2009-05-23 07:29pm

Re: "The Happiness Code" A new approach to self-improvement is taking off in Silicon Valley: cold, hard rationality.

Post by Simon_Jester »

Channel72 wrote:
ndryden wrote:At this point, both seem like fairly typical academic groups trying to do basic research that just happen to have a lot of popular press and online presence. They're not writing software.
Yeah, and that's why I don't find them very interesting. I don't see any of their ideas actually being put into use, commercially or via contributions to open source. They can blather all they want about singularities and basilisks or whatever the fuck, but they're not bringing us any closer to having an actual proto-AI to actually experiment with, and test our hypotheses against. I'd rather listen to Peter Norvig.
So... because they're worried about what the technology will do rather than about how to invent it faster, they're boring?

That strikes me as rather... short-sighted. If there is a potentially world-altering technology, I'd think you would want at least a little theoretical work being done in directions that help ensure it alters the world in non-horrible ways.
Meanwhile nobody is really even close to making any kind of general AI, because it's really fucking hard and nobody is sure how to synthesize all the various AI research into a single piece of software that suddenly wakes up and starts trying to take over the world or whatever. Instead we have highly-specific approximations of certain subsets of general intelligence, like "fuzzy pattern recognition" shit like classifiers (neural nets, SVMs, etc.) and various "NLP" algorithms, which are basically just straightforward probabilistic/statistical approaches. Basically AI has mostly become a Google Map-Reduce Bayesian/Big Data thing that is more like statistical analysis than anything like actual human intelligence, because nobody actually knows how to implement a cerebral cortex in Python. (But it probably involves Numpy or something.)
There's AI in the sense of code that crunches numbers and statistically analyzes data.

And there's AI in the sense of self-modifying or 'learning' software.

But people do continue to work on self-modifying software, and we're approaching the level of computer technology at which it becomes possible to build computers that can do as much computation as the human brain (if only by brute force emulation of brains).

So having someone out there doing research into "if you create a self-modifying program with a goal set, how does it react to those goals, how can we predict what it will do, or monitor its evolution" seems wise to me.
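
Even a toy version of that question shows why you would want to watch the trajectory rather than just read the initial code. A sketch I made up for illustration - the "goal set" is one line, the "self-modification" is one mutable number:

import random

def reward(x):
    return -(x - 3.0) ** 2    # the fixed goal: drive x toward 3

x = 0.0                       # the program's one piece of mutable behaviour
for step in range(20):
    candidate = x + random.uniform(-1.0, 1.0)  # propose a change to itself
    if reward(candidate) > reward(x):          # keep it only if it serves the goal
        x = candidate
    print("step %2d: x = %.3f, reward = %.3f" % (step, x, reward(x)))  # audit trail

Scale the mutable part up from one number to a few billion parameters and "just read the source" stops being an answer; hence the interest in monitoring and prediction.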
This space dedicated to Vasily Arkhipov
Channel72
Jedi Council Member
Posts: 2068
Joined: 2010-02-03 05:28pm
Location: New York

Re: "The Happiness Code" A new approach to self-improvement is taking off in Silicon Valley: cold, hard rationality.

Post by Channel72 »

Simon_Jester wrote:So... because they're worried about what the technology will do rather than about how to invent it faster, they're boring?

That strikes me as rather... short-sighted. If there is a potentially world-altering technology, I'd think you would want at least a little theoretical work being done in directions that help ensure it alters the world in non-horrible ways.
Most of what I've seen from them is hardly a rigorous theoretical framework, but more like quasi-religious tenets about what they think a super-intelligent AI would do. But perhaps I haven't read enough of Yudkowsky.
And there's AI in the sense of self-modifying or 'learning' software.

But people do continue to work on self-modifying software, and we're approaching the level of computer technology at which it becomes possible to build computers that can do as much computation as the human brain (if only by brute force emulation of brains).
Learning and self-modifying code are very different things. There are few applications of self-modifying code in practical use outside of JITs. Machine-learning software typically just mutates data or runtime state, not code. (Back in the day LISP made it fashionable to write self-modifying programs by blurring the line between code and data, but these days self-modifying code is pretty rare in actual real-world usage.)
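
To illustrate the distinction with toy Python (made up for this post, nothing to do with any real system):

# "Learning": the function text never changes, only the weight (state) does.
weight = 0.0

def predict(x):
    return weight * x

def learn(x, target, lr=0.1):
    global weight
    weight += lr * (target - predict(x)) * x   # gradient step mutates *data*

for _ in range(50):
    learn(2.0, 6.0)
print(predict(2.0))   # ~6.0: behaviour changed, code didn't

# Self-modifying: the program rewrites its own instructions - LISP-style
# code-as-data, faked here with exec(). Outside JITs you rarely see this.
source = "def double(n): return n * 2"
exec(source)
source = source.replace("* 2", "* 3")
exec(source)          # same name, new code
print(double(5))      # 15: the *code* changed at runtime

The first half is what "machine learning" actually does all day; the second half is the thing that sounds scary and almost never ships.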