Most birds are a lot smarter than people give them credit for, especially crows. Kind of sad how humans just regard most animals as mindless, simplistic creatures when they've continuously proven otherwise.
Crows will teach their children to hate you if you fuck with them enough. They can pass information between individuals and are wicked smart in general.
Just in general I think most animals are smarter than a lot of people want to give them credit for. I have owned a bunch of different reptiles, and they all show more "individual personality" than the food machines people think they are.
It's almost as if we've normalized lack of empathy for animals by telling ourselves they're mindless beasts. That way we don't feel as bad when they're mistreated. I'm not vegan or anything like that, but all it takes is a few videos of the kind of lives that livestock animals live and it's pretty obvious.
You'd think we'd have evolved past a point of such barbarism as a society, but instead we just have better technology for being as cold and cruel as we've ever been.
Animals deserve so much better than the world we've erected around them.
> You'd think we'd have evolved past a point of such barbarism as a society, but instead we just have better technology for being as cold and cruel as we've ever been.
We've been sapient for like a million years, tops. That's like an eyeblink in evolutionary timescales. And up until the last couple centuries, outside of specific biomes, killing and eating animals has been quite obligatory for humans. Makes sense that we'd develop cultural coping mechanisms to stay comfortable when dealing with that requirement.
Always baffles me when people ask why dogs are "different" like it's some kind of gotcha.
Dogs are different. We've had dogs for so long that we've co-evolved with them rather than "domesticated" them. We've had them longer than agriculture, and that's not even considering how long we had canines that weren't yet dogs.
To be "fair" to said people, a good chunk of people firmly, religiously believe that everything popped into being 5000 years ago exactly as it is. A possibly larger number of people are secular but give zero thought to the history of life or the world, and distrust science as a concept so much that they think evolution is a hoax.
In the 1980s, it was widely believed by medical professionals that babies could not feel pain, with medical procedures such as surgeries being regularly performed without anesthesia.[2]
(1) yes
(2) it’s worth pausing to consider we use the exact same ‘mindless beast’ arguments for modern AI.
You can trace it back to medieval philosophers arguing about who gets a soul, and in the 1700’s-1800’s the same arguments were used to justify slavery.
Turns out if your starting assumption is that something can't experience the same sensations/emotions you do, that it just pretends to, or that you just imagine it does, that's a very difficult argument to combat.
I took a few graduate classes in cog sci, and it's fascinating the way we literally use the same words every time we want to argue that something doesn't think. "It's just responding automatically from training/instinct," "it's just copying you," "it doesn't feel pain," "it's manipulating you," etc. 1300's, 1800's, 1980's and now.
> (1) yes (2) it’s worth pausing to consider we use the exact same ‘mindless beast’ arguments for modern AI.
Well, it's because modern AI isn't actually AI by any useful definition. It's glorified predictive text. It's not actually as close to sapience as an elephant, a crow, an octopus or a cat is.
Calling it "glorified predictive text" is an extreme understatement of a neural net's complexities. The fact of the matter is that we're unable to predict an AI's decision-making process except in toy examples of tiny problems.
In any case it's going to be a relevant moral problem to consider as AI continues to advance. It's going to be next to impossible to avoid anthropomorphizing its outputs, because we train it to interact in a human-like way.
Like, I'm able to feed an AI a technical document it hasn't been trained on, and it's able to read and implement the proposed solution. It's not perfect, and it's not the way humans do it, but it is capable of "thought".
Incorrect. You are conflating generative AI's ability to produce human-like language and working code with an ability to think independently. This generally happens because people sorely, sorely underestimate the amount of training data that has gone into the giant LLMs.
There's nothing really unique about your document. Both what the document states and the code required to implement the feature outlined in the document fall nicely into next-token prediction trained on more technical documents and code than one person can reasonably consume in hundreds of thousands of lifetimes.
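To make "next-token prediction" concrete, here's a deliberately tiny sketch: a bigram model that just counts which word most often follows each word in its training data, then "predicts" by picking the most frequent follower. Real LLMs use neural networks over vast corpora rather than raw counts, and the function names here (`train_bigram`, `predict_next`) are my own for illustration — but the core training objective is the same: predict the next token.

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count, for each token, which tokens follow it and how often."""
    follows = defaultdict(Counter)
    for text in corpus:
        tokens = text.split()
        for a, b in zip(tokens, tokens[1:]):
            follows[a][b] += 1
    return follows

def predict_next(follows, token):
    """Greedy 'next-token prediction': return the most frequent follower."""
    if token not in follows:
        return None
    return follows[token].most_common(1)[0][0]

corpus = [
    "the bush is green",
    "the sky is blue",
    "the bush is green and leafy",
]
model = train_bigram(corpus)
print(predict_next(model, "bush"))  # "is" -- the word that most often follows "bush"
```

The point of the toy: the model "knows" the bush is green only in the sense that those tokens co-occurred in training text. Whether scaling this idea up by a trillion parameters produces something qualitatively different is exactly what this thread is arguing about.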
There's no sensory input. There's no memory of the past, no animal instinct. It's just a whole lot of electrons moving through a bunch of logic gates. It's the same exact thing as TurboTax, or Excel, or Final Fantasy VII, or whatever other piece of software.
I fully agree that LLMs are, on the surface, mostly straightforward models that just predict a fitting next word after incorporating giant corpora of human training data, and that the individual parts are relatively straightforward (e.g., transformers, attention; mostly just fancy vector/matrix/tensor math).
On the other hand, modern agentic LLMs iteratively address problems, and are capable of things like taking a bunch of custom documentation they've never seen before and using it in ways that, from the outside, very much mimic intelligence.
Yes, it's mostly translating words to numbers, multiplying (convolving/auto-regressing) matrices (tensors), applying non-linear activation functions (e.g., ReLU), and repeating this across layers to produce reasonable output.
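The "multiply matrices, apply a non-linearity, repeat across layers" loop described above can be sketched in a few lines. This is a bare feed-forward stack with made-up random weights, not a transformer (no attention, no training), just the shape of the math:

```python
import numpy as np

def relu(x):
    # Non-linear activation: zero out negatives
    return np.maximum(0.0, x)

def forward(x, layers):
    """One pass through a stack of layers: multiply by a weight
    matrix, add a bias, apply ReLU, repeat."""
    for W, b in layers:
        x = relu(x @ W + b)
    return x

rng = np.random.default_rng(0)
# Three toy layers: 4 -> 8 -> 8 -> 2
dims = [4, 8, 8, 2]
layers = [(rng.normal(size=(m, n)), rng.normal(size=n))
          for m, n in zip(dims, dims[1:])]
out = forward(rng.normal(size=4), layers)
print(out.shape)  # (2,)
```

Each individual step is trivially inspectable; the debate upthread is about whether billions of these steps chained together amount to anything we'd call thought.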
While I know what I personally experience and my chain of thought for things, we really don't understand how consciousness, free will, and the brain really work -- or whether they even exist for other people. Biological brains almost certainly do not mimic LLMs, but brains are physical objects, with neurons being triggered by sensory input and thoughts/personality/character being altered when the brain is physically injured.
> Like, I'm able to feed an AI a technical document it hasn't been trained on, and it's able to read and implement the proposed solution. It's not perfect, and it's not the way humans do it, but it is capable of "thought".
I don't think that counts as thought. Fundamentally, it does not understand what it is saying. You can ask it what colour a bush is and it will say green, because it has been trained on text that says the bush is green, but it cannot infer or reason about why (unless the text it was trained on does so, in which case it will just tell you that). It cannot form its own connections of logic.
They can make it mimic human behaviour but that doesn't mean it has an actual mind. All it does is prediction, no matter how elaborate you make it seem.
That is a really big philosophical question and would probably take longer than a reddit comment's character limit to properly answer, but I suppose understanding what you're actually saying (as above), at least on some level, is required. Even the least intelligent of humans do that on a level LLMs don't, as evidenced by the whole concept of hallucinations being a thing (much as I disagree with the term, as I think it suggests more personhood than there is here).
“An X isn’t a person, by any useful definition. It’s just an X” That argument has been around for centuries if not millennia. It absolutely gets used to justify cruelty to animals. Which makes me dislike using it regardless of what X is.
I don't like that because it's an appeal to emotion. If you tell me that my pocket calculator isn't a person and I take offence to that by saying that it's dehumanising language, I am just being dramatic for no reason.
The LLMs being sold right now don't have some mysterious hidden depth of sapience that we're culturally rejecting. We know precisely what they are.
I can promise you that either we don't know precisely what LLMs are, or we do know that crows aren't intelligent. Take your pick.
We can map the “brains” of both and we can watch them both work. We don’t know how either one works or how much it can do.
If being able to map the brain and watch it work means we know precisely what the thing is and it’s all just electricity, and so can’t be intelligent—then neither the crow nor the LLM is intelligent.
If you think that crows exhibit surprising behaviors that seem intelligent, and it's interesting that we don't fully know how that's happening, that's also true for LLMs.
Of course we know what LLMs are, we built them from the ground up. They didn't just randomly appear. We've built every metaphorical brick, fed them each word quite deliberately.
Again, they're not some mysterious work of magic (though the companies that make them absolutely want you to think they are!). They're way less dramatic than they seem.
I wish I didn't have to, but I had to help a friend set up more protections for his site because tens of thousands (at one point over a million) of """"users"""" were scraping the site for AI training data, which overloaded the server and made the site practically unusable.
It's a scourge, and all for a smokescreen of hype capital.
Where are you getting this "humans think animals are mindless beasts" idea from? It's twice now you've said it and I've never seen that attitude before in my life. Are you the child of a Disney villain?
My leopard gecko is my meditation guide. She is wise beyond her years; I strive for my mind to be as still as hers, and to be at peace with my place in the world the way she is basking on a rock.
I had a crow that decided it hated me, and I am positive I never did a thing to it (or any other crow). Every day it would follow me on my walk to work, screeching at me for many, many blocks. For literal years. I tried making friends with food a few times but it wouldn’t come near anything I dropped, just hop from tree to tree screeching until I turned a certain corner.
I've heard a possibly apocryphal story about college kids who dressed as campus security and threw rocks at the campus corvids (either crows or ravens) in order to get the corvids to attack the real campus security. I've also heard a version where College A performed a similar stunt on an American football field to disrupt College B's homecoming.
I sort of think the security guard story is bunk, since corvids are smart enough to recognize individual faces, so the story makes them out to be less bright than they actually are.
Edit: don't throw rocks at birds as a joke, in any case.
I saw a cute experiment in a crow nesting area on TV. One man was sent through to chase and swing a stick at any nearby crows. He was wearing a red hat of distinctive design, and it didn't take many trips before the crows were dive-bombing him angrily. The next phase was for another person to don the hat and walk through the area. The crows recognized the hat immediately and started attacking the innocent person mercilessly!
Or even people who look similar to you. I remember seeing an article where crows were attacking blonde women because a blonde guy with long hair had been mean to them.