26-Jan-2012, 07:42 PM
(Archiving continued from previous post)
UN Wrote:I am still not getting how inter-subjectivity is possible without empathy.
Priyabrata Mahapatro Wrote:Here is an earlier conversation that happened in one of the FB threads regarding 'tear-jerking emotions' and 'moral reasoning'. I'm pasting the question that I asked and the response that Arvind gave.
"Can tear-jerking emotions in a person, especially if they are triggered by all kinds of social injustice irrespective of group and are not temporary, be a base for moral reasoning?
Also, would reasoning on such a premise be rational or irrational?"
//Can tear-jerking emotions in a person, especially if they are triggered by all kinds of social injustice irrespective of group and are not temporary, be a base for moral reasoning?... Also, would reasoning on such a premise be rational or irrational?//
If a particular state of affairs causes 'tear-jerking emotions' in many of us, then it is evidently related to some aspect of human well-being. Concern for and maximization of human well-being is foundational to at least one major school of moral reasoning, namely utilitarianism. If the 'tear-jerking emotions' are shared and experienced by more or less everyone, then such considerations can be treated as foundational in an 'intersubjective moral system'. As long as we clearly acknowledge and explicitly state the intersubjective premises on which we base later moral decisions, such an approach cannot be dismissed as irrational.
Priyabrata Mahapatro Wrote:^ Apologies for the digression but I felt it necessary.
Bobby Krishna Wrote:It looks like you guys are going to put me off meat completely!
Kanad Kanhere Wrote:UN - I am not sure why you are concluding that there is an absence of empathy in the example that I have given. They might have empathy but not choose it as a standard for deciding their morality. Maybe they are a bunch of compulsive gamblers who understand the perils but enjoy gambling so much that they can't keep away from it.
In any case, as I said before, it was just a hypothetical case to understand more about moral premises.
UN Wrote:Kanad - You began with a premise where aliens intersubjectively "decide" to treat people unequally in society, to which I said that the power of empathy enables us to see the inequities.
You replied saying that the aliens may not have empathy... and my response was that in such a case they may not agree on such a premise in the first place.
I feel your premise is logically incoherent. How can intersubjectivity exist without empathy? If they do have empathy, why would they not see the problem in their ethics?
If you are going to say that they understand that someone else is suffering and yet they keep doing it, then they are not ethical when looked at from intersubjective morals that promote equal opportunities.
It also depends on what they would feel if someone acted in a way that denied them everything. What would they do? Would they accept that as a legitimate way of life, or would they protest?
Kanad Kanhere Wrote:Priyabrata Mahapatro - That wasn't a digression at all (at least IMO). It's spot on with what I was trying to put forth when I said that emotions are equally important.
Kanad Kanhere Wrote:(Interestingly, Facebook didn't post my last comment: I didn't know the quality was that bad, although I do agree it was written in a lot of haste.)
UN - My bad. Let me give this another try.
Firstly, let's consider what ethics is. Ethics is coming up with "what is right and what is wrong", basically the philosophy of morality. Now consider that there is just one rational agent. In that case, the agent can decide what is right and wrong simply by analyzing his long-term goals and then evaluating his actions based on whether they lead him towards or away from those goals. Just like in any logic system, the agent has to come up with moral priors using instrumental rationality, and then apply logic to evaluate his actions. The complications start when there are multiple agents. These agents might have conflicting wants, and then deciding priors becomes tricky. In such a case, priors have to be decided intersubjectively, i.e. by consensus (the weak definition of intersubjectivity). In the strong sense, shared ideas and notions, being subjectively true across most rational agents, become the base for priors. E.g. "hypocrisy is wrong" would be an intersubjective truth for humans.
The main point is that "betterment of everybody" is not a necessary prior in ethics, which is probably what you are presuming, and which is what I was trying to clarify with that example of aliens. Now, trying to rationalize why the aliens wouldn't choose to improve everybody's life might be a tricky business, because we are humans and can only think in our own way. The aliens might have empathy or might not. Even if they have it, they might choose not to act on it, and so on. These are just rationalizations from our angle.
Kanad Kanhere Wrote:I just went through this thread again, and strongly feel that Arvind's comments were, as usual, brilliant and need to be preserved. Probably we can create a doc, or archive this on nirmukta.net. I can do the latter; the former I'm not sure about (because of my pathetic language skills). So, any suggestions or takers?