Austin Starks: "PhD-level intelligence? Bitch please."
"Another annoyance, OpenAI quietly killed off their older models the moment GPT‑5 launched. Without warning or transition, they yanked GPT‑4.5, O3, O4-Mini; all the previously available models vanished from ChatGPT overnight. People who relied on those for specific workflows were left high and dry."
AI is starting to look like a test the way a wife-beating husband is a test: How much hatred and disrespect can a company show for its paying customers before the customers realize they're being abused and walk away?
“I can change him...”
On a serious note, they’ve been fairly helpful for me with some medical issues.
Lol.
I know you asked for legitimate ways that people are using AI in their labs and businesses, not more negativity, but negativity is my only skill. I'm just really suspicious about (and annoyed by) the massive disconnect between the hype and the real-life utility, and nothing I've seen written about it anywhere has done anything but fuel that suspicion.
Of particular interest to me would be results from a medical or health perspective. The article mentioned that the people evaluating the medical scenarios sometimes disagreed, and their judgments were debatable at times. I wonder what results could be obtained by running those same scenarios on different AI platforms, followed by an AI evaluation of the comparative results (a sort of debate among the platforms).