• 0 Posts
  • 6 Comments
Joined 3 months ago
Cake day: March 31st, 2025

  • I think there is a substantial difference though. Meat processing is done in a measured, considered way for a benefit (meat) that cannot be obtained without killing the animal. It is done in isolated facilities away from people who find the process disturbing. Just because people find something gross doesn’t mean it shouldn’t be done - we have sewage maintenance done out of the public eye too - but it does maybe mean it should be done where people don’t have to see it. The only benefit this man gets from killing the animal is some sort of “revenge”. But revenge is in principle completely contradictory to meat processing, where animals are seen as less capable of higher order experiences and therefore more acceptable to kill. To seek revenge, you would need to be assigning more higher order experience to the seagull than we typically see it as having. You have to see the seagull as selfish, stealing, criminal, rude, etc., even though in reality a more reasonable person understands that it’s just an animal looking for food. Meat processing is not done out of some emotional vendetta against the animals; rather, it is precisely its cold detachment that makes it acceptable. Can you imagine if we killed the same number of chickens every day, not to eat them, but just because we hate them? That would be much more horrifying! Because it would mean we think chickens are having complex enough inner experiences to warrant hatred, yet still we kill them.

    Meat processing maybe isn’t great, but it’s still much better than this seagull killer. It isn’t impulsive, it isn’t disproportionate in response to the situation, and it acknowledges and conceals its own horrors, thereby paying respect to important social codes. The actions of this man, though, disregarded the well-being of children and others around him in an impulsive and disproportionate response - your average meat-eater is indeed better than that, I think. When I have a craving for some meat, I don’t drag a calf down to the nearest playground, cut it in half and spray blood over the children, and proceed to mock the calf’s weakness and inferiority as I beat it to tenderize it before consumption. I just want some food, dude. But what’s this guy’s beef? It’s not beef, and it’s not even seagull meat, but rather some frightening notion of swift and decisive revenge, which reveals that he is just waiting for any excuse to get away with brutalizing things around him.




  • Sorry, I can see why my original post was confusing, but I think you’ve misunderstood me. I’m not claiming that I know the way humans reason. In fact, you and I are in total agreement that it is unscientific to assume hypotheses without evidence. This is exactly what I am saying is the mistake in the statement “AI doesn’t actually reason, it just follows patterns”. That statement is unscientific if we don’t know whether “actually reasoning” consists of following patterns or something else. As far as I know, the jury is out on the fundamental nature of how human reasoning works. It’s my personal, subjective feeling that human reasoning works by following patterns. But I’m not saying “AI does actually reason like humans because it follows patterns like we do”. Again, I see how what I said could have come off that way. What I mean more precisely is:

    It’s not clear whether AI’s pattern-following techniques are the same as human reasoning, because we aren’t clear on how human reasoning works. My intuition tells me that humans doing pattern following is just as valid an initial guess as humans not doing pattern following, so shouldn’t we have studies to back up whichever way we lean?

    I think you and I are in agreement; we’re upholding the same principle, just in different directions.


  • But for something like solving a Towers of Hanoi puzzle, which is what this study is about, we’re not looking for emotional judgements - we’re trying to evaluate logical reasoning capabilities. A sociopath would be just as capable of solving logic puzzles as a non-sociopath. In fact, simple computer programs do a great job of solving these puzzles (a minimal solver is sketched at the end of this comment), and they certainly have nothing like emotions. So I’m not sure that emotions have much relevance to the topic of AI or human reasoning and problem solving, at least not to this particular aspect of it.

    As for analogizing LLMs to sociopaths, I think that’s a bit odd too. The reason why we (stereotypically) find sociopathy concerning is that a person has their own desires which, in combination with a disinterest in others’ feelings, incentivize them to be deceitful or harmful in some scenarios. But LLMs are largely designed specifically to be servile, having no will or desires of their own. If people find it concerning that LLMs imitate emotions, then I think we’re giving them far too much credit as sentient autonomous beings - and this is coming from someone who thinks they think in the same way we do! They think like we do, IMO, but they lack a lot of the other subsystems that are necessary for an entity to function in a way that can be considered autonomous/having free will/desires of its own choosing, etc.
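
    To back up the point above about simple programs, here is a minimal sketch of the kind of trivial, emotionless code that solves Towers of Hanoi. It is purely illustrative; the function name and peg labels are mine, not anything from the study being discussed.

    def hanoi(n, source, target, spare):
        # Move n discs from `source` to `target`, using `spare` as scratch space.
        if n == 0:
            return
        hanoi(n - 1, source, spare, target)            # clear the top n-1 discs out of the way
        print(f"move disc {n}: {source} -> {target}")  # move the largest remaining disc
        hanoi(n - 1, spare, target, source)            # stack the n-1 discs back on top

    hanoi(3, "A", "C", "B")  # prints the optimal 7-move solution for a 3-disc puzzle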