- 19 Posts
- 24 Comments
StopTech@lemmy.todayOPMto
Stop Tech@lemmy.today•"Cancel ChatGPT" movement goes mainstream after OpenAI closes deal with U.S. Department of War — as Anthropic refuses to surveil American citizensEnglish
1 · 3 hours ago
The “Cancel ChatGPT movement” doesn’t appear to be mentioned in the article, but other outlets say hashtags like #CancelChatGPT are trending on X.
I think you’re right that stock trading has enabled a lot of harm and perhaps shouldn’t have been allowed, at least not on a large scale beyond a single town or county. Paper certificates standing in for money may have been a bad idea too. Even the use of a common currency like gold may have been a net negative. I think a barter system has advantages over a common currency in that it requires people to work together and form communities.
I understand that lethality makes a pathogen less effective at spreading. But this will be far less true for artificial pathogens specifically designed to spread undetected until they suddenly activate and kill. Staying alert, staying indoors, and washing your hands definitely won’t save you if the pathogen is sufficiently well designed; that didn’t even prevent most people from getting COVID. I was imagining a pathogen that can infect plants and animals as well as humans, so even a person stranded on a remote island would catch it. And noticing the disease won’t matter if there’s no cure, which is to be expected if the pathogen comes out of nowhere and appears totally harmless at first, until people suddenly die all at once. Especially if it’s also designed to survive an immune system that has been vaccinated with a deactivated version.
Even if it is impossible for a pathogen to do all that because we are able to immediately develop a 100% effective vaccine for any pathogen we discover (which is very unrealistic imo), we’d have to mass-inoculate people every time some psycho releases a new potentially dangerous pathogen. We wouldn’t have time to test these vaccines for safety, and no doubt there would be some adverse health effects from injecting so many vaccines. People would also have to put a great deal of trust in whoever is making and providing these vaccines (probably the government), as a malicious entity could use the excuse of a new pathogen to persuade or coerce people into taking harmful substances. These could be substances that reduce fertility, and in the future such substances could probably be used to alter behavior or even deliver nanoparticles that can be controlled remotely to deliver electric shocks or biological changes. There are just so many ways for this to go horribly wrong that I don’t think it can possibly end well if pathogen modification becomes achievable by individuals or small groups (using AI or other means).
And pathogen modification is just one of the ways we’re at risk of going extinct. There are also the other ones I mentioned, ones I didn’t mention (like mirror life), and probably a lot more we haven’t thought of. When developing atomic weapons there was a concern that the detonation could ignite the atmosphere. It turned out nuclear weapons don’t ignite the atmosphere, but maybe some other technology could, or could cause oxygen depletion some other way. Or maybe there’s a way to generate so much ionizing electromagnetic radiation that it damages all DNA on earth to the point where our fertility drops and we go extinct in three generations.
But even if we just stick to the ways we already know about, it’s almost certain that we will soon have technology capable of killing everyone that nobody would be able to defend against. The only protection, therefore, is to limit its availability. But some technologies are very hard to limit the availability of, such as AI, which any intelligent person with access to AI research papers and the ability to write computer programs could build. And why limit its availability to governments and big corporations that can abuse it to subjugate the public (and based on experience and incentives, will abuse it)? Surely it’s much better to limit the availability to nobody. Hence the project of Stop Tech.
We’ve seen it many times. TV, teflon, social media, cars, AI…
as a biologist: the “100% extermination virus” is impossible
Have you heard of Clarke’s three laws? Specifically the first one?
no known ways
So there could be a way we don’t know of yet? Isn’t that what science would discover for us? What law of the universe prevents such a thing being possible? Why couldn’t we program a virus to have a long incubation period once we can use DNA/RNA like we use programming languages?
The rest of your comment seems to ignore what I already covered in my essay. Yes it’s about access, but you either have wide availability and we all die or narrow availability and totalitarianism. Materials and equipment costs also go down with improvements in production and once AI is able to design its own equipment from first principles it may be possible to have AI robots build all the equipment from raw materials.
All this reads like a fearful response to change
If the change we’re talking about is humans being replaced as the dominant species on the planet and the invention of weapons that can kill us all, I’d say to be unafraid is completely irrational. It’s wishful thinking to say it will work out despite all the trends and incentives saying it won’t.
Agreed but I haven’t heard of that
Have you seen Gattaca?
I wouldn’t trust those funds myself. Plenty of oil companies say they’re all about reducing CO2, and as I remember, ESG ratings were playing favorites rather than reflecting carbon emissions. Even companies that are trying to reduce emissions can still be invading people’s privacy, lobbying (bribing) for bad legislation, and doing other evil things.
It is reality. And unfortunately any “ethical” funds usually just focus on avoiding oil companies or military companies but are just fine with AI companies, surveillance companies, eugenics companies and so on. Nobody agrees on what is ethical I’m afraid. One man’s unethical practice is another man’s unethical-to-avoid practice.
I think they will give us the cancer cure, which may even be cheap, but it will come with lots of other downsides for society and for your individual physical and mental health. Technology is like black magic that solves the problem you asked it to solve but gives you a thousand new issues that end up being worse than the original situation.
Genetic engineering every little detail could become dirt cheap, but it would still be terrible for humanity: it would remove diversity, we’d be messing with forces we don’t understand in ways that could lead to diseases or greater population-wide susceptibilities, and the government would also like to have its say in how your baby is made so that they turn out a good little order-follower.
As well as the military contractors, insurance companies, big food, big media, big think tanks and consultancy, etc
StopTech@lemmy.todayOPtoNeo-Luddites@lemmy.today•But AI causing human extinction is just corporate PR, right?
1 · 1 day ago
This is 100% true
No, you appear to be misremembering something you read. The NSA was allegedly concerned Furbys could record sensitive conversations, and they were banned from Fort Meade. The idea that they recorded sound was incorrect, but the concern wasn’t about Furbys learning or having artificial intelligence. Besides, bringing this up is a distraction from the verifiable fact that computers can already identify targets in real-time camera feeds and make decisions on whether to pursue and shoot them. You’re in denial, my friend.
StopTech@lemmy.todayOPtoNeo-Luddites@lemmy.today•But AI causing human extinction is just corporate PR, right?
1 · 1 day ago
Someone didn’t read the news about the Pentagon threatening Anthropic because they want to use AI for fully autonomous weapons
StopTech@lemmy.todayOPtoNeo-Luddites@lemmy.today•But AI causing human extinction is just corporate PR, right?
12 · 2 days ago
People do talk about writing things that “the compiler can understand” so it’s nothing new. Also I think you meant to say regular expressions understand strings, not patterns - or that regular expression engines understand patterns.
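To make the distinction concrete, here’s a minimal Python sketch (my own illustration, not from the thread): a regular-expression engine compiles the pattern into a matcher, and that matcher is then applied to strings.

```python
import re

# The engine "understands" the pattern: it compiles it into a matcher.
pattern = re.compile(r"\d{4}-\d{2}-\d{2}")  # an ISO-style date pattern

# The strings are just input that the compiled matcher is applied to.
match = pattern.search("released on 2024-06-01")
print(match.group())                    # -> 2024-06-01
print(pattern.search("no date here"))   # -> None (pattern not found)
```

The same relationship holds for compilers: the compiler understands the language’s grammar; source files are merely input it processes.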
StopTech@lemmy.todayOPtoNeo-Luddites@lemmy.today•But AI causing human extinction is just corporate PR, right?
16 · 2 days ago
This depends on the definition of understanding. If by understanding you mean mental processing then obviously AI can never do that because it has no mind, it only simulates the behaviors of a mind. But if instead understanding is understood (pun intended) to mean the process of extracting accurate information from something and responding to it in a rational way, then yes AIs do understand lots of things.
StopTech@lemmy.todayOPtoNeo-Luddites@lemmy.today•But AI causing human extinction is just corporate PR, right?
34 · 2 days ago
Arguably if you give AI access to the nuclear launch system then it can cause human extinction “by itself”. Every “by itself” extinction scenario requires some pre-existing circumstances so this has a right to qualify as one of those scenarios.
Unlike before, we now have general-purpose AIs that can understand all types of scenarios and make decisions in them. This means they can cause extinction with less human guidance. And there’s no strong reason to doubt AI could become as intelligent and autonomous as humans, probably within a decade or two. Then it’s pretty much bye-bye, humans.