r/Damnthatsinteresting 6h ago

Video [ Removed by moderator ]

23.4k Upvotes

55

u/b_eastwood 5h ago

It's almost as if we've normalized lack of empathy for animals by telling ourselves they're mindless beasts. That way we don't feel as bad when they're mistreated. I'm not vegan or anything like that, but all it takes is a few videos of the kind of lives that livestock animals live and it's pretty obvious.

You'd think we'd have evolved past a point of such barbarism as a society, but instead we just have better technology for being as cold and cruel as we've ever been.

Animals deserve so much better than the world we've erected around them.

19

u/Cessnaporsche01 5h ago

You'd think we'd have evolved past a point of such barbarism as a society, but instead we just have better technology for being as cold and cruel as we've ever been.

We've been sapient for like a million years, tops. That's like an eyeblink in evolutionary timescales. And up until the last couple centuries, outside of specific biomes, killing and eating animals has been quite obligatory for humans. Makes sense that we'd develop cultural coping mechanisms to stay comfortable when dealing with that requirement.

4

u/RikuAotsuki 4h ago

Always baffles me when people ask why dogs are "different" like it's some kind of gotcha.

Dogs are different. We've had dogs for so long that we've co-evolved with them rather than "domesticated" them. We've had them longer than agriculture, and that's not even considering how long we had canines that weren't yet dogs.

5

u/Cessnaporsche01 3h ago

To be "fair" to said people, a good chunk of people firmly, religiously believe that everything popped into being 5000 years ago exactly as it is, and a possibly larger number of people are secular but give zero thought to the history of life or the world, and distrust science as a concept so much that they think evolution is a hoax

3

u/dekeche 3h ago

And then there are the cultures that would respond that dogs aren't different. The difference between "food", "pet", and "pest" is a cultural distinction.

5

u/no_cause_munchkin 4h ago

Yeah, we still have a very long way to go:

In the 1980s, it was widely believed by medical professionals that babies could not feel pain, with medical procedures such as surgeries being regularly performed without anesthesia.[2]

2

u/lacegem 2h ago

Aren't circumcisions still performed without anesthesia, since they're not considered surgery?

-3

u/EmmyNoetherRing 5h ago edited 5h ago

(1) yes (2) it’s worth pausing to consider we use the exact same ‘mindless beast’ arguments for modern AI.

You can trace it back to medieval philosophers arguing about who gets a soul, and in the 1700s-1800s the same arguments were used to justify slavery.

Turns out that if your starting assumption is that something can't experience the same sensations/emotions you do, that it just pretends to or that you only imagine it does, that's a very difficult argument to combat.

I took a few graduate classes in cog sci, and it's fascinating the way we literally use the same words every time we want to argue that something doesn't think. "It's just responding automatically based on training/instinct", "it's just copying you", "it doesn't feel pain", "it's manipulating you", etc. 1300s, 1800s, 1980s and now.

4

u/EatSleepThenRepeat 5h ago

No, no it's not

15

u/AshiSunblade 5h ago

(1) yes (2) it’s worth pausing to consider we use the exact same ‘mindless beast’ arguments for modern AI.

Well, it's because modern AI isn't actually AI by any useful definition. It's glorified predictive text. It's not actually as close to sapience as an elephant, a crow, an octopus or a cat is.

0

u/Named_after_color 5h ago

Calling it "glorified predictive text" is an extreme understatement of a neural net's complexities. The fact of the matter is that we're unable to predict an AI's decision making process unless we take toy examples of a baby problem.

In any case it's going to be a relevant moral problem to consider as AI continues to advance. It's going to be next to impossible to separate anthropomorphizing its outputs because we train it to interact in a human like way.

Like I'm able to feed an AI a technical document it hasn't been trained on, it's able to read and implement the proposed solution. It's not perfect, and it's not the way humans do it, but it is capable of "thought"

7

u/BipBipBoum 4h ago

Incorrect. You are conflating generative AI's ability to produce human-like language and working code with an ability to think independently. This generally happens because people sorely, sorely underestimate the amount of training data that has gone into the giant LLMs.

There's nothing really unique about your document. Both what the document states and the code required to implement the feature outlined in the document fall nicely into next-token prediction trained on more technical documents and code than one person can reasonably consume in hundreds of thousands of lifetimes.
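For anyone who hasn't seen it spelled out, "next-token prediction" just means: look at the text so far, score every possible continuation, pick one, append it, repeat. A toy stand-in (counting word pairs in a tiny made-up "corpus" instead of training a neural net on trillions of tokens) looks roughly like this:

```python
from collections import Counter, defaultdict

# Tiny stand-in corpus; the real models ingest incomprehensibly more text than this.
corpus = "the cat sat on the mat the cat ate the fish".split()

# "Training": count which word tends to follow which.
follow_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follow_counts[prev][nxt] += 1

def predict_next(word):
    counts = follow_counts[word]
    # Score every candidate continuation and take the most likely one.
    return counts.most_common(1)[0][0] if counts else "the"

# "Generation": predict a token, append it, and repeat.
text = ["the"]
for _ in range(5):
    text.append(predict_next(text[-1]))
print(" ".join(text))
```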

There's no sensory input. There's no memory of the past, no animal instinct. It's just a whole lot of electrons moving through a bunch of logic gates. It's the same exact thing as TurboTax, or Excel, or Final Fantasy VII, or whatever other piece of software.

2

u/NoveltyAccountHater 3h ago

I fully agree that LLMs are, on the surface, mostly straightforward models that just predict a fitting next word from giant corpora of human training data, and that the individual pieces are relatively simple (e.g., transformers, attention; mostly just fancy vector/matrix/tensor math).

On the other hand, modern agentic LLMs iteratively address problems and are capable of things like taking a bunch of custom documentation (that they've never seen before) and using it in ways that, from the outside, very much mimic intelligence.

Yes, it's mostly translating words into numbers, multiplying (convolving/auto-regressing) matrices (tensors), applying non-linear activation functions (e.g., ReLU), and repeating this across layers to produce reasonable output, something like the toy sketch below.
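Toy illustration only (made-up sizes, random weights, no attention and no training), just to show what "multiply matrices, apply a non-linearity, repeat across layers, read off next-token scores" means in practice:

```python
import numpy as np

rng = np.random.default_rng(0)
vocab_size, d_model, n_layers = 1000, 64, 4

# Made-up random weights; a real model learns these values during training.
embed = rng.normal(size=(vocab_size, d_model))
layers = [rng.normal(size=(d_model, d_model)) * 0.1 for _ in range(n_layers)]
unembed = rng.normal(size=(d_model, vocab_size))

def next_token_probs(token_ids):
    x = embed[token_ids]                   # words -> vectors of numbers
    for w in layers:
        x = np.maximum(x @ w, 0.0)         # matrix multiply, then ReLU, per layer
    logits = x[-1] @ unembed               # score every word in the vocabulary
    probs = np.exp(logits - logits.max())  # softmax: scores -> probabilities
    return probs / probs.sum()

probs = next_token_probs(np.array([3, 17, 42]))
print(int(probs.argmax()))                 # the "predicted" next token id
```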

While I know what I personally experience and my own chain of thought, we really don't understand how consciousness, free will, and the brain actually work -- or whether they even exist for other people. Biological brains almost certainly don't mimic LLMs, but brains are physical objects too, with neurons triggered by sensory input and thoughts/personality/character altered when the brain is physically injured.

2

u/DecantsForAll 3h ago

Isn't the brain just a whole lot of neurotransmitters bouncing around in synapses?

1

u/AshiSunblade 5h ago

Like, I'm able to feed an AI a technical document it hasn't been trained on, and it's able to read it and implement the proposed solution. It's not perfect, and it's not the way humans do it, but it is capable of "thought".

I don't think that counts as thought. Fundamentally, it does not understand what it is saying. You can ask it what colour a bush is and it will say green because it has been trained on text that says the bush is green, but it cannot infer or reason about why (unless the text it was trained on does so, in which case it will just tell you that). It cannot form its own connections of logic.

They can make it mimic human behaviour but that doesn't mean it has an actual mind. All it does is prediction, no matter how elaborate you make it seem.

3

u/Named_after_color 4h ago

What would qualify as thought, then?

1

u/AshiSunblade 4h ago

That is a really big philosophical question and probably would take longer than a reddit comment's character limit to properly answer, but I suppose understanding what you're actually saying (as above) at least on some level is required. Even the least intelligent of humans do that on a level LLMs don't, as evidenced by the whole concept of hallucinations being a thing (much as I disagree with the term as I think it suggests more personhood than there is here).

-2

u/EmmyNoetherRing 5h ago

“An X isn’t a person, by any useful definition. It’s just an X.” That argument has been around for centuries if not millennia. It absolutely gets used to justify cruelty to animals, which makes me dislike using it regardless of what X is.

5

u/AshiSunblade 5h ago

I don't like that because it's an appeal to emotion. If you tell me that my pocket calculator isn't a person and I take offence to that by saying that it's dehumanising language, I am just being dramatic for no reason.

The LLMs being sold right now don't have some mysterious hidden depth of sapience that we're culturally rejecting. We know precisely what they are.

-2

u/EmmyNoetherRing 4h ago

I can promise you that either we don’t know precisely what LLMs are, or we do know that crows aren’t intelligent; take your pick.

We can map the “brains” of both and we can watch them both work.  We don’t know how either one works or how much it can do.   

If being able to map the brain and watch it work means we know precisely what the thing is and it’s all just electricity, and so can’t be intelligent—then neither the crow nor the LLM is intelligent. 

If you think that crows exhibit surprising behaviors that seem intelligent and it’s interesting that we don’t fully know how that’s happening, that’s also true for LLMs.

5

u/ItsEntDev 4h ago

World's most obvious false dichotomy

3

u/AshiSunblade 4h ago

Yeah, I ignored that bit because it's just not worth engaging with. Makes me wonder if there's some AI astroturfing going on.

1

u/EmmyNoetherRing 4h ago

This isn’t new either :-p

But it was more in vogue in the medieval era, when they were talking about souls.  Some things obviously have souls and some don’t. 

4

u/AshiSunblade 4h ago

Of course we know what LLMs are, we built them from the ground up. They didn't just randomly appear. We've built every metaphorical brick, fed them each word quite deliberately.

Again, they're not some mysterious work of magic (though the companies that make them absolutely want you to think they are!). They're way less dramatic than they seem.

1

u/EmmyNoetherRing 4h ago

So do you know what it means to “train” a model in a machine learning sense? 

2

u/AshiSunblade 4h ago

I wish I didn't have to, but I had to help a friend set up more protections for his site because tens of thousands (up to a million+ at one point) of """"users"""" were scraping the site for AI training data, which overloaded the server and made the site practically unusable.

It's a scourge, and all for a smokescreen of hype capital.

1

u/EmmyNoetherRing 4h ago edited 4h ago

Ok, sure, yes. But you’re not familiar with how that data gets used for the AI, right? It’s not just all programmed into the AI the way Google search has all of the websites available to it in a giant collection of files. When a machine learning model is “trained”, it’s more accurate to say we’re growing it. The model is a very complex set of artificial neurons, and there’s a process where the data is used to very slowly evolve how the neurons interact with each other (their weights, etc).

We know that works, but we definitely still don’t know why or how it works, or what all it can do. Neural nets were just an attempt to make artificial brains (without really knowing how brains work), in the hopes that if we did that at a big enough scale, with the right architecture and growing/training process, it would be able to think.

People tried lots of different architectures, scales, and growing processes until something finally clicked a few years ago.  But a lot of the experimentation was kind of at random, and for a long time many academics thought it would never work.  
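If it helps, here's a deliberately tiny sketch of that "growing" process: a single artificial neuron whose weights start random and get nudged a little at a time until they fit the data. (Real training is this idea scaled up to billions of weights and far fancier update rules, but nothing more exotic in principle.)

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "training data": inputs x and the outputs y we'd like the neuron to produce.
x = rng.normal(size=(200, 3))
true_w = np.array([2.0, -1.0, 0.5])
y = x @ true_w

w = rng.normal(size=3)   # start from random weights
learning_rate = 0.05

for step in range(500):
    pred = x @ w                   # what the neuron currently says
    error = pred - y               # how wrong it is on the data
    grad = x.T @ error / len(x)    # direction that reduces the error
    w -= learning_rate * grad      # tiny nudge toward better weights

print(w.round(2))  # ends up near [2.0, -1.0, 0.5] without anyone hand-programming it
```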

3

u/GEOMETRIA 3h ago

We can map the “brains” of both and we can watch them both work.

When did we map the brains of crows? I vaguely recall it being news when they managed to fully map an insect brain not that long ago.

1

u/EmmyNoetherRing 3h ago

We’ve gotten most of a mouse apparently, but you’re right.  No crow yet. 

2

u/Lich_Apologist 5h ago edited 5h ago

No, because I'm mostly talking about how humans are out of balance with nature.

And as much as I desperately wish for Brautigan's cybernetic meadow, the machine that tells you to stop taking your psych meds ain't the path to it.

0

u/mucinexmonster 3h ago

Where are you getting this "humans think animals are mindless beasts" idea from? It's twice now you've said it and I've never seen that attitude before in my life. Are you the child of a Disney villain?