America will be the first to sell you something that actually gives you weight loss, but it will most certainly shorten your life just as much as obesity, if not more.
Drugs have risks and sometimes tradeoffs! There are no long-term studies for GLP-1s. However, they seem to be powerful for facilitating weight loss, which is a huge benefit to many. I think optimizing dosage will be important, and these oral formulations may enable smaller daily dosing rather than a bolus weekly dose.
Oh I’m still super interested in following your quest for discovering a weight loss pill with no downsides. I think my biggest fear is some people will abuse it out of insecurity. Regulations will probably be very necessary.
What I really like about this stack is that every single issue makes me go look up things I’ve never heard of before. Like “GLP-1 drugs.” I also read the ResearchGate paper about PAM and neural decoding, minus the part with math symbols. I know the authors’ intended audience is people in AI/computer science, not English majors, but the argot is really irritating and by the time I finished the paper I was in no mood to grant them or anyone else in the field the benefit of the doubt. They are *not* going to all that effort to help the handicapped.
Yes, I sound like the 21st Century equivalent of the people who sneered at the idea of flying machines. I get it. But if the last four years haven’t been Exhibit A for the basket case that is human morality, then what else has it been? The fact that the EvolutionaryScale scientists used to work for Meta, where their former co-workers are still busy deleting my complimentary comments about people’s cat pictures—or their AI is doing it for them—just makes it worse. So does the fact that the researchers spent just four sentences acknowledging (without really addressing) “concerns about mental privacy and the potential misuse of technology.” Considering what I know about how Meta views its users’ privacy, and considering what I know about the government’s involvement in social media (Murthy, anyone?), the fact that *for now* they’re relying on “full and constant subject cooperation” doesn’t help. And really, given Murthy’s thousands of pages of proof that Meta’s in bed with the intelligence services, the research its once and future scientists are doing should be setting off the world’s loudest warning sirens. Tech employees are in the dictionary under “fungible” for their ability to transition from Silicon Valley to government jobs and back.
The writers devote one line at the end of the paper to mentioning the technology’s use in *neuro* prosthetics. For a while I thought they were working on technology that could be applied to artificial limbs a paraplegic could control by thinking about it. The AI would “learn” the brain’s signals for “walk now” and make walking as automatic for a paraplegic as it is for anyone else. Instead, what I got out of the whole article is that as soon as they figure out how to reliably reproduce what someone sees, they’ll reverse-engineer that into their “brain-computer interfaces” to do exactly what they say it can’t do now: “reliably apply it to other subjects” and “reconstruct” internally-generated “imagery and dreams.” They want to pry into our brains.
Sure, maybe the research can “revolutionize their understanding of how complex information is processed and interpreted.” But so could asking the subject. And anyway, what the hell have biologists been doing sticking electrodes in monkey brains for the last hundred years? These guys have been trying to “revolutionize their understanding of how complex information is processed” for at least that long, so if they don’t know by now they either never will or they’re being disingenuous about how they're going to use that knowledge. Put me down for B. Too many tech billionaires and “globalists” are on record promoting “trans-humanism,” however they define it. Their real motive for wanting a window into people’s brains is malignant. Combine that with everything they outright promote about social credit, digital currency, and the literal criminalization of opinions (thought crimes) already practiced in Europe, and you have a techno-fascist’s wet dream.
The risks posed by AI can be mitigated only by advancements in morality. Leaving aside meteors and volcanoes, the risks facing humanity are almost 100% man-made. The enemy is us.
I'm sure you will hear more about GLP-1 drugs as time goes on. They will become the best-selling pharmaceutical in 2024. It seems they likely have many effects beyond appetite suppression, even improving general discipline and long-term planning. Peptides as drugs (short polypeptide sequences) are really making a splash. I'll try to keep new and interesting things coming your way.
As for the mind-reading AI - I think you have a great take on this. Perhaps an important distinction between flying machines and mind-reading AI is that one enables liberty/freedom while the other is clearly authoritarian. The more I read about it, the closer my view comes to yours, i.e., these tools are more dangerous than helpful. I can't help but worry that learning to read the brain will similarly enable the ability to write to users' brains. Now that is a truly scary thought. I think this may be worth a writeup on why this technology should not be pursued.
Humanity certainly has work to do, but I would challenge the premise that morality is the key to building safe artificial general intelligence. It's perfectly possible for a virtuous people to build an entity whose goals diverge from our own. What happens if those goals come into competition with ours? Even if they don't come into conflict, this is a centralizing technology that will prevent ordinary individuals from providing value to the economy. My belief is that advancing the human mind is a way to keep us in the driver's seat, even if this may seem naive.
Just to be clear: I realized since leaving this comment that I conflated two different items in this article. The Meta scientists aren't doing the PAM research. Other than not publicly outing myself as an idiot on that count I wouldn't change anything in my comment, though.