Discussion about this post

Based If True

America will be the first to sell you something that actually gives you weight loss, but it will almost certainly shorten your life as much as, if not more than, obesity does.

Fager 132

What I really like about this stack is that every single issue makes me go look up things I’ve never heard of before. Like “GLP-1 drugs.” I also read the ResearchGate paper about PAM and neural decoding, minus the part with math symbols. I know the authors’ intended audience is people in AI/computer science, not English majors, but the argot is really irritating and by the time I finished the paper I was in no mood to grant them or anyone else in the field the benefit of the doubt. They are *not* going to all that effort to help the handicapped.

Yes, I sound like the 21st Century equivalent of the people who sneered at the idea of flying machines. I get it. But if the last four years haven’t been Exhibit A for the basket case that is human morality, then what else has it been? The fact that the EvolutionaryScale scientists used to work for Meta, where their former co-workers are still busy deleting my complimentary comments about people’s cat pictures—or their AI is doing it for them—just makes it worse. So does the fact that the researchers spent just four sentences acknowledging (without really addressing) “concerns about mental privacy and the potential misuse of technology.” Considering what I know about how Meta views its users’ privacy, and considering what I know about the government’s involvement in social media (Murthy, anyone?), the fact that *for now* they’re relying on “full and constant subject cooperation” doesn’t help. And really, given Murthy’s thousands of pages of proof that Meta’s in bed with the intelligence services, the research its once and future scientists are doing should be setting off the world’s loudest warning sirens. Tech employees are in the dictionary under “fungible” for their ability to transition from Silicon Valley to government jobs and back.

The writers devote one line at the end of the paper to mentioning the technology’s use in *neuro*prosthetics. For a while I thought they were working on technology that could be applied to artificial limbs a paraplegic could control by thinking about them. The AI would “learn” the brain’s signals for “walk now” and make walking as automatic for a paraplegic as it is for anyone else. Instead, what I got out of the whole article is that as soon as they figure out how to reliably reproduce what someone sees, they’ll reverse-engineer that into their “brain-computer interfaces” to do exactly what they say it can’t do now: “reliably apply it to other subjects” and “reconstruct” internally generated “imagery and dreams.” They want to pry into our brains.

Sure, maybe the research can “revolutionize their understanding of how complex information is processed and interpreted.” But so could asking the subject. And anyway, what the hell have biologists been doing sticking electrodes in monkey brains for the last hundred years? These guys have been trying to “revolutionize their understanding of how complex information is processed” for at least that long, so if they don’t know by now they either never will or they’re being disingenuous about how they're going to use that knowledge. Put me down for B. Too many tech billionaires and “globalists” are on record promoting “trans-humanism,” however they define it. Their real motive for wanting a window into people’s brains is malignant. Combine that with everything they outright promote about social credit, digital currency, and the literal criminalization of opinions (thought crimes) already practiced in Europe, and you have a techno-fascist’s wet dream.

The risks posed by AI can be mitigated only by advancements in morality. Leaving aside meteors and volcanoes, the risks facing humanity are almost 100% man-made. The enemy is us.

