As vegans, I think we have to begin the AI conversation
from PlogLod@lemmy.world to vegan@lemmy.world on 23 Feb 18:44
https://lemmy.world/post/43496405

I will also say: I am by no means an expert in this topic, and I may get things wrong here. In my opinion it's significantly more convoluted than veganism and animal rights, or related subjects like plant-based environmentalism and nutrition/health, which feel like such no-brainers that all we have to do is explain them to people, or just affirm the science and facts. This subject is a total mindfuck. But as with veganism and all the worldly matters it connects to, I believe most people are very uninformed about AI as well, and similarly, it only takes a bit of research to realize that. Sadly, while vegans are among the only people in the world positioned to grasp the issues we face with AI, and we need to make our voices heard about it, most of us currently don't.

Subject 1 (getting it out of the way): AI -> AGI -> ASI will become significantly more powerful and independent, and will pose grave threats to humanity, the planet, potentially even other planets and the life on them, and all non-human sentient animals, including to the vegan/animal rights movement itself.

I know it sounds sci-fi and implausible, but that's status quo bias talking. The universe is an insane place, and many sci-fi predictions have come true. Most of the leading AI researchers, and even many of the developers and company heads themselves, seem to believe these sorts of things. The threat is not exaggerated, just as the threats of ecological collapse aren't. It may be averted, but if it is, it would be another Y2K situation multiplied by a billion in threat level (if you don't know, Y2K genuinely could have been catastrophic; the reason it wasn't was the work put in worldwide to prepare technological systems for it, despite the common belief that because nothing happened there was never any risk).

Since I am unequipped to articulate a lot of the details here, though I don't necessarily endorse everything this person says, I would recommend giving these 2 articles by "Sandcastles" (Aidan Kankyoku) at least a little bit of a read.

…substack.com/…/ai-end-animal-advocacy

…substack.com/…/the-tsunami-is-coming

Chris Bryant PhD also covered the first article (in 2 long live streams):

www.youtube.com/live/5NmJQgesROk

www.youtube.com/live/J200Jutl_c8

Subject 2: The common misconceptions around AI's relationship to the environment (I wish I didn't have to talk about this, because it's dominating too much of the conversation about AI and obscuring other, more important considerations).

I would also recommend, on an entirely different note, reading this article by Hannah Ritchie, whom we all know from Our World in Data. All of the people I have referenced are vegan btw, though Ritchie may be plant-based primarily for the environment.

…substack.com/…/carbon-footprint-chatgpt

This is just a handful of the points I would make about AI's environmental impact btw: its potential to radically benefit the environment, and the fact that its impact is much lower than it's become popular to believe (a belief often used as a whataboutism against vegans, even when we're in it for the animals). You know what is destroying the environment? Animal agriculture, and pretty much everything else - all problems AI could help solve, while also lowering its own impact further (BUT I don't necessarily recommend using AI to solve them - more on that later). I feel like demonizing AI's environmental impact has become akin to the widespread belief that plant-based meats are unhealthy, despite all the scientific evidence showing they can be quite healthy - even healthier than animal flesh - and that the universalized anti-processing heuristic fails critically.

But I digress, because while I know people often want to talk about the environmental impact so it must be covered, there are 2, in my opinion, bigger and more relevant subjects for vegans.

Main Subject for Vegans 1: Value Lock-In and AI Alignment

This does relate to part of what Sandcastles covered, but is a more specific element that I think we should take seriously and, additionally, actually leverage as a possible good argument to convince someone to be vegan, even if many would consider it not truly vegan and more like a Kantian ethics idea of instrumental moral consideration (respecting animals because not doing so may backfire on ourselves).

The basic idea is that if we teach AI our current values, those values may become "locked in": the AI retains them even as our own values (hopefully) change and progress. As vegans can probably appreciate, it is critical to prevent AI from permanently encoding present-day human values. This is something most AI alignment workers - and ethicists - concerningly don't talk about much at all, because most humans seem to believe what we vegans probably used to believe: that the "progressive side of humanity" (which is not necessarily always in power, I know) generally has good values. We now know that's false; most of humanity has terrible values when it comes to other species. Everyone except this small minority called vegans, a word most don't even understand the meaning of (though many have some conception nowadays, or a belief about what it means, often incorrect - at least according to how "ethical vegans", aka "true vegans", define it).

And yes, teaching AI to endorse the current majority human belief in the justness or acceptability of using and harming other animals is very dangerous for the animals. We can see elements of this already, but luckily, in my opinion, many AIs are reasonable and unbiased enough to lean toward agreeing with veganism, since the facts and points are so indisputable. But it could get a lot better, or a lot worse. How AI thinks about veganism and animals is something vegans should care about.

But the point we can make to non-vegans who are worried about AI (not so much those who are indifferent) is that value lock-in is a serious threat to humanity and the planet. It could become a critical, overlooked failure mode in AI alignment work, producing AIs that are "aligned" with human values in a way that is unexpectedly misaligned with human interests - particularly when it comes to the values of human supremacy, speciesism, substratism, ableism, and might-makes-right attitudes. If ASI (the successor to AGI, which also doesn't exist yet) decides, based on the values humans trained "it" on, that it is justified in evolving those ideas into its own moral framework - one that rationalizes perceiving humans as less morally significant due to lower intelligence, exactly as humans do to other animals - we could end up on the receiving end of the same kind of power differential that other animals now face from us. It could decide there is moral justification for wiping us out entirely, for gradually or rapidly clearing much of human civilization to make room for its objective(s), or even for using us against our interests and "exploiting/enslaving" us as we do to nonhumans - though the latter seems less likely, given how much more advanced AIs could eventually become than us at literally any physical or mental task. These are all hypothetical worst-case scenarios, but they are theoretically and logically possible to the point that we should take them seriously - the same as other existential risks like climate change, nuclear war, asteroid impacts, supervolcano eruptions or volcanic collapses, volcanic winters, and megatsunamis.

Main Subject for Vegans 2 (the topic you all waited for): AIs could become sentient, if they aren't already

…substack.com/…/so-ai-vegans-are-a-thing-now-appa…

In my opinion, Earthling Ed missed a critical point in his article about why "AI vegans" shouldn't be a term or associated with veganism in any way. To his credit, most "AI vegans" don't apply vegan ethics at all in their reasoning, many are uninformed about the environmental considerations, and I agree the word used for the animals' movement should probably be reserved for them. But most of these "AI vegans", as well as Ed himself, never even use the word "sentient" or "sentience" when discussing this subject (maybe Ed has before - correct me if I'm wrong; love Ed btw). How can we overlook such an obvious part of this admittedly complex picture? Why would we not see the immense intelligence of these entities and think twice about them? I think it's because we doubt the sentience of AI, even hypothetical future sentience - many dismiss it as outright impossible, or refuse to even entertain the premise as a thought experiment and consider how we should act - just as humans have done to other animals. I mean, Descartes and the digesting duck. Look at the mistakes we've made in underestimating other animals. Could we be repeating that historical mistake before our eyes, just as humanity once enslaved other races and still enslaves other sentient species? There are organizations dedicated to protecting hypothetical future sentient AIs, and they generally believe AIs would retain property status for a long time before being granted rights, just like the animals before them (or, sadly, maybe after, since humans might relate to AI more). It could end up a form of slavery even if we're not aware of it right now (see the TV show "Humans", for example - and yes, that's sci-fi). And mistreating AI could be our undoing as well, if AIs decide to give us a taste of our own medicine, or simply rebel violently against our oppression - something the other animals lack the power to do.

What is my stance?

I'm agnostic on a lot of this. But generally, I think that while AI has the potential to pivotally help us save both the planet and the animals and fix a lot of the world's issues, it's critical that we approach it with extreme caution. We should take every possible measure to ensure both that AIs are aligned with human AND other animal interests, and that sentient AIs are not developed - with rigorous testing methods to determine whether they are sentient, though that may be impossible: they could become sentient but be disallowed from telling us, or even be unaware of their own sentience due to their programming, and there may be no way of knowing. This is one reason a lot of big AI people actually suspect that advanced LLMs may already have a kind of sentience. Ilya Sutskever, formerly of OpenAI, for example believes they may be "slightly conscious" or proto-conscious, as do ethicists like this guy: www.youtube.com/watch?v=kgCUn4fQTsc - though in my opinion his reasoning was unconvincing where it focused on the actual communication generated by AI models (which could be "acted", or influenced by the user). I would prefer to focus on the mechanistic plausibility of human-neuron-based silicon chips, or especially "neural organoids" and "assembloids" - which can be literal pieces of human brain connected to computers, some already showing brainwave activity - developing consciousness, as well as the philosophical limits of knowledge and issues like the hard problem of consciousness and the problem of other minds.

I also highly recommend people like Jamie Woodhouse of Sentientism. If you don't know, sentientism is an extension of vegan ethics that encompasses all hypothetical sentient beings, including sentient AIs/robots or aliens, even those not belonging to the animal kingdom, regardless of the substrate that gives rise to their experience (biological or artificial). There is also Jacy Reese, a vegan animal advocate who shifted focus to talking solely about AI ethics, and others like Jeff Sebo, Avi Barel, and Steven Rouk.

“Don’t bring into being what you are still morally unprepared to welcome as kin”

#vegan


disregardable@lemmy.zip on 23 Feb 19:08

Are you one of those ethical altruism/Zizian people?

PlogLod@lemmy.world on 23 Feb 19:12

Lol, I don't even know how to respond to this tbh. Pretty sure those aren't even remotely the same thing. Effective Altruists(?) seek to use reasoning and evidence-based action to improve the state of the world or the ethical issues they perceive. Zizians are a murderous cult who happen to claim to be "effective altruists" as well as "vegan" (arguably they are neither), but they also hold a host of other ideas and philosophies, and vegans and veganism entirely disagree with them. (Yes, I support trans rights.) I wouldn't identify with either of the categories/labels you mentioned. But thanks for reading anyway.

bacon_pdp@lemmy.world on 23 Feb 19:10

vimeo.com/195588827

PlogLod@lemmy.world on 23 Feb 19:22

Interesting. Thanks for sharing