
Graduation Research Notes 2: Fairness Definitions in Machine Learning

Author: 木子木夕 | Published 2018-12-13 18:03

    This section summarizes fairness definitions and approaches from the week 1 references, as well as the references listed in the paper <Fairness Definitions Explained>. The content of this section will be updated whenever new findings/thoughts appear.

    Note: Since the definitions are collected from different sources, the examples/case analyses are not consistent across the following definitions.

    1. Naïve approach – Fairness through blindness/Fairness through unawareness --- Naive

    Concept: A classifier satisfies this definition if no sensitive attributes are explicitly used in the training/decision-making process. [Ref5, def4.2] This naive approach to fairness comes from the intuition that an algorithm cannot discriminate if it simply does not look at protected attributes such as race, color, religion, gender, disability, or family status. (Ref1 and Ref3 – Data as a social mirror – also mention this concept.)

    Representation: X: other attributes except sensitive attributes
    d: prediction result (0 or 1 for binary classification)
    Xi = Xj -> di = dj
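
    As a minimal sketch of this approach (the dataset, column names, and model below are purely illustrative assumptions), one simply drops the sensitive columns before training:

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical shopping-site dataset; column names are for illustration only.
df = pd.DataFrame({
    "gender":      ["m", "f", "f", "m", "f", "m"],   # sensitive attribute
    "income":      [40, 55, 62, 38, 47, 70],
    "n_purchases": [3, 12, 9, 2, 15, 4],
    "label":       [0, 1, 1, 0, 1, 1],
})

sensitive = ["gender"]
X = df.drop(columns=sensitive + ["label"])   # train only on non-sensitive attributes
y = df["label"]

clf = LogisticRegression().fit(X, y)
# Two applicants with identical non-sensitive attributes (X_i = X_j)
# necessarily receive the same prediction (d_i = d_j).
```

    Note that, as the problem below points out, the remaining columns (e.g. n_purchases) may still encode the dropped attribute.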

    The problem of this approach is that “it fails due to the existence of redundant encodings. There are almost always ways of predicting unknown protected attributes from other seemingly innocuous features.” For example, if one deals with users of a shopping website, one can still predict a user’s gender from their shopping history (a user who always buys clothes is more likely to be female) even though the gender information is removed.

    Example: It has been proposed that gender-related features can be removed together with the gender feature itself. However, many gender-related features are not obvious to researchers. Identifying gender-related features is, again, subjective work, and the choice of which features to drop can itself be biased.

    One experiment: In the paper <Fairness Definitions Explained>, the authors removed all features related to sensitive attributes and trained on this new dataset. The classifier is fair if the classification outcomes are the same for applicants i and j who have the same attributes.

    Conclusion: 1) Moritz’s claim on this concept: “In fact, when the protected attribute is correlated with a particular classification outcome, this is precisely what we should expect. There is no principled way to tell at which point such a correlation is worrisome and in what cases it is acceptable.” Therefore, one cannot simply remove sensitive attributes, because they carry important information. 2) If one removed all features related to sensitive attributes, the accuracy would drop, as expected. Removing features / training with unawareness is not a good approach.

    2. Demographic Parity[Ref1]/Group Fairness[Ref5.def3.1.1] --- Not good

    Concept: Demographic parity requires that a decision—such as accepting or denying a loan application—be independent of the protected attribute. [Ref1] Equivalently, a classifier satisfies this definition if subjects in both the protected and unprotected groups have an equal probability of being assigned to the positive predicted class. [Ref5.def3.1]

    Representation: G: gender attribute (m: male; f: female) (Let gender be sensitive feature)
    d: prediction result(0 or 1 for binary classification)
    P(d=1|G=m) = P(d=1|G=f) [Ref5]
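
    A minimal sketch of checking this definition from model outputs, using the 100-male / 10-female scenario discussed in problem 1 below (the function name is my own):

```python
import numpy as np

def demographic_parity_gap(d_pred, group):
    """Absolute difference between the positive-prediction rates of the two groups."""
    d_pred, group = np.asarray(d_pred), np.asarray(group)
    p_m = d_pred[group == "m"].mean()   # P(d=1 | G=m)
    p_f = d_pred[group == "f"].mean()   # P(d=1 | G=f)
    return abs(p_m - p_f)

# 90/100 males and 9/10 females predicted positive -> gap = 0.0,
# yet only 9 women actually receive a positive result.
d = np.array([1] * 90 + [0] * 10 + [1] * 9 + [0] * 1)
g = np.array(["m"] * 100 + ["f"] * 10)
print(demographic_parity_gap(d, g))   # 0.0
```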

    Problem 1: Demographic parity doesn’t ensure fairness [Ref1] / The sample size disparity [Ref3]

    Let me take an example to illustrate this: if there are 100 male applicants and 10 female applicants, and 90 male applicants are predicted as positive (d = 1) while 9 female applicants are predicted as positive (d = 1), then the fractions of applicants predicted as positive are the same (90/100 = 9/10 = 90%). However, only 9 female applicants receive a positive result, compared to 90 male applicants.

    The above scenario can arise naturally when there is less training data available about a minority group. Therefore, as Moritz claimed, “I would argue that there’s a general tendency for automated decisions to favor those who belong to the statistically dominant groups.”

    Problem 2: Demographic parity cripples machine learning [Ref1] / The competition between accuracy and fairness

    In real situations, “the target variable Y usually has some positive or negative correlation with membership in the protected group A. This isn’t by itself a cause for concern as interests naturally vary from one group to another.” This statement tells us that in many cases it is reasonable that P(d=1|G=m) != P(d=1|G=f), since different results are expected from different groups. Thus simply adopting demographic parity as a general measure of fairness is misaligned with the fundamental goal of achieving higher prediction accuracy. The general trend is that as the classifier learns to be fair (i.e., to fit minority groups), its accuracy decreases.

    To conclude, demographic parity is not suitable as a general measure of fairness because demographic differences are expected and reasonable. Simply imposing this definition does not improve fairness in any real, logical sense, and it also hurts accuracy.

    Example of problem 2: When predicting medical conditions, the gender attribute is important to consider. Heart failure occurs more often in men than in women, so a difference between the two groups’ prediction outputs is to be expected. It is neither realistic nor desirable to prevent all correlation between the predicted outcome and group membership.

    One experiment: In the paper <Fairness Definitions Explained>, they calculated the probability of a male applicant being predicted as 1 and the probability of a female applicant being predicted as 1, and checked whether these two probabilities are similar within a reasonable range.

    Conclusion: This definition does not ensure an equal spread of positive results across groups (as explained in problem 1), and it cannot be applied to many real cases since different outcomes across groups are expected.

    3. Conditional statistical parity [Ref5.def3.1.2] -- An extension of group fairness (definition 2)

    Concept: The principle is similar to definition 2, but the sample groups are shrunk by conditioning on a set of legitimate factors L (filtering out samples that do not satisfy L).

    Representation: G: gender attribute (m: male; f: female) (Let gender be sensitive feature)
    d: prediction result(0 or 1 for binary classification)
    L: legitimate factors, a subset of non-sensitive attributes.
    P(d=1|L=l, G=m) = P(d=1|L=l, G=f) [Ref5]
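
    A sketch of the same check performed within strata of the legitimate factors (the DataFrame column names d, G, and L are placeholders):

```python
import pandas as pd

def conditional_parity_gaps(df, pred_col="d", group_col="G", legit_cols=("L",)):
    """Per-stratum gap |P(d=1 | L=l, G=m) - P(d=1 | L=l, G=f)|."""
    gaps = {}
    for l_value, stratum in df.groupby(list(legit_cols)):
        rates = stratum.groupby(group_col)[pred_col].mean()
        if {"m", "f"} <= set(rates.index):        # skip strata missing a group
            gaps[l_value] = abs(rates["m"] - rates["f"])
    return gaps
```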

    Problems: The same as the problems of the group fairness definition (definition 2).

    One experiment: In the paper <Fairness Definitions Explained>, the legitimate factors are chosen to be credit amount, credit history, employment, and age.

    When should this definition be applied? / Why do we need the legitimate factors L?

    4. Confusion Matrix [Ref5. Def3.2]

    Concept: A confusion matrix is a table with two rows and two columns that reports the number of false positives (FP), false negatives (FN), true positives (TP), and true negatives (TN). This is a specific table layout that allows visualization of the performance of an algorithm. With four basic metrics (TP, FP, FN, TN), 8 more metrics can be developed.

    [Figure: confusion matrix]

    For more detailed meaning of these 12 metrics, please check section3 – Statistical Metrics in paper <Fairness Definitions Explained> page 2 and page 3.
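
    As a small sketch (the function and variable names are my own), the four basic counts and a few of the derived rates can be computed for one group and then compared across groups:

```python
import numpy as np

def group_confusion_metrics(y_true, d_pred, group, g):
    """TP/FP/FN/TN and TPR/FPR/PPV for the subjects whose sensitive attribute equals g."""
    y, d, grp = np.asarray(y_true), np.asarray(d_pred), np.asarray(group)
    y, d = y[grp == g], d[grp == g]
    tp = int(np.sum((d == 1) & (y == 1)))
    fp = int(np.sum((d == 1) & (y == 0)))
    fn = int(np.sum((d == 0) & (y == 1)))
    tn = int(np.sum((d == 0) & (y == 0)))
    return {
        "TP": tp, "FP": fp, "FN": fn, "TN": tn,
        "TPR": tp / (tp + fn) if tp + fn else float("nan"),
        "FPR": fp / (fp + tn) if fp + tn else float("nan"),
        "PPV": tp / (tp + fp) if tp + fp else float("nan"),
    }
```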

    There are 7 fairness definitions and measurements defined with the help of the confusion matrix, which are explained in section 3.2 of the paper <Fairness Definitions Explained>. However, only a few of these definitions are worth analyzing here.

    Reference: Moritz Hardt, Eric Price, and Nati Srebro. 2016. Equality of Opportunity in Supervised Learning. In Advances in Neural Information Processing Systems

    4.1 Equalized odds (Equal TPR and FPR)

    Concept: Individuals in either the positive label group or the negative label group should have a similar probability of a positive prediction, regardless of their sensitive feature.

    Representation: d: prediction result (0 or 1 for binary classification)
    G: gender attribute (m: male; f: female) (Let gender be sensitive feature)
    Y: True label (0 or 1 for binary classification)
    P(d = 1 |Y = y, G = f) = P(d = 1 |Y = y, G = m) y = 0 or 1
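
    A sketch of checking this condition directly from labels and predictions; the gap at y = 1 compares TPRs and the gap at y = 0 compares FPRs (in practice “equal” is relaxed to “gaps close to zero”):

```python
import numpy as np

def equalized_odds_gaps(y_true, d_pred, group):
    """Gaps |P(d=1 | Y=y, G=f) - P(d=1 | Y=y, G=m)| for y = 0 and y = 1."""
    y, d, g = (np.asarray(a) for a in (y_true, d_pred, group))
    gaps = {}
    for y_value in (0, 1):
        rate_f = d[(g == "f") & (y == y_value)].mean()
        rate_m = d[(g == "m") & (y == y_value)].mean()
        gaps[y_value] = abs(rate_f - rate_m)
    return gaps   # equalized odds holds (approximately) if both gaps are near zero
```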

    4.2 Equalized opportunity (Equal TPR)

    Concept: We are often interested in the positive group (Y = 1), such as ‘not defaulting on a loan’, ‘receiving a promotion’, or ‘being admitted’. This is a relaxation of equalized odds that restricts the condition to the positive outcome group. A classifier is fair under equalized opportunity if the fractions of individuals in the positive group who are predicted positive are the same, regardless of their sensitive feature. “This approach is the idea that individuals who qualify for a desirable outcome should have an equal chance of being correctly classified for this outcome.” [Moritz]

    Representation: d: prediction result (0 or 1 for binary classification)
    G: gender attribute (m: male; f: female) (Let gender be sensitive feature)
    Y: True label (0 or 1 for binary classification)
    P(d = 1 | Y = 1, G = f) = P(d = 1 | Y = 1, G = m)
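
    The y = 1 gap in the sketch above is exactly this condition. As a rough illustration of how it can be enforced in a post-processing manner (see the advantages below), one can choose a per-group score threshold; the target-TPR heuristic here is my own simplification, not the exact procedure of the referenced paper:

```python
import numpy as np

def equal_opportunity_thresholds(scores, y_true, group, target_tpr=0.8):
    """For each group, pick the score threshold whose TPR is closest to target_tpr."""
    s, y, g = (np.asarray(a) for a in (scores, y_true, group))
    thresholds = {}
    for value in np.unique(g):
        pos_scores = s[(g == value) & (y == 1)]   # qualified individuals of this group
        candidates = np.unique(s[g == value])
        tprs = np.array([(pos_scores >= t).mean() for t in candidates])
        thresholds[value] = candidates[np.argmin(np.abs(tprs - target_tpr))]
    return thresholds   # predict d = 1 when score >= thresholds[group]
```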

    Conclusion: The above definitions, equalized odds and equalized opportunity (especially equalized opportunity), are very important concepts in model fairness. Their advantages and disadvantages are mentioned in the conclusion of the paper <Equality of Opportunity in Supervised Learning>.

    Advantages: 1. This measure is applied in a post-processing manner. Therefore, it is simpler and more efficient than pre-processing measures, such as fairness through unawareness, where sensitive attributes must be removed before training. This property also helps preserve privacy.

    2. From the representation above, a better classifier (better accuracy and better fairness under equalized odds/opportunity) can be built by collecting features that better capture the target, regardless of their correlation with the protected attribute. Thus it is fully aligned with the central goal: building higher-accuracy classifiers.

    3. This measure avoids the conceptual shortcomings of demographic parity.

    Disadvantages: 1. Labeled data is not always available. Thus, this measure only applies to supervised learning. However, the broad success of supervised learning demonstrates that this requirement is met in many important applications.

    2. Labeled data is not always reliable. The measurement of the target variable might itself be unreliable or biased. Thus, this measure is only as trustworthy as the labels themselves.

    5. Causal Discrimination [Ref5.def4.1]

    Reference: Sainyam Galhotra, Yuriy Brun, and Alexandra Meliou. 2017. Fairness Testing: Testing Software for Discrimination. In Proc. of ESEC/FSE’17.

    Concept: A classifier is fair under causal discrimination if it predicts the same classification results for any two subjects who have exactly the same non-sensitive attributes X (and may differ in their sensitive attributes).

    Motivation: The work of <Fairness Testing: Testing Software for Discrimination> is motivated by the limitations of group discrimination.

    Group discrimination: Fairness is satisfied with respect to an input characteristic (input feature) when the distribution of outputs for each group is similar. For example, if 40% of purple people are classified with a positive outcome, and 40% of green people are classified with a positive outcome, then the result is fair under this notion. However, if the 40% of purple people are chosen randomly while the 40% of green people are chosen from those with the most savings, this fairness notion cannot detect the problem.

    To address the limitations of group discrimination above, Sainyam Galhotra et al. suggested causal discrimination, which says that to be fair with respect to a set of characteristics (sensitive features), the classifier must produce the same output for every two individuals who differ only in those sensitive features. For example, if two people are identical except for the race feature (one green, one purple), switch the race feature and see whether the outcome changes with the change of the sensitive feature. If the two outcomes are the same, then the classifier is fair under causal discrimination.

    Representation: X: other attributes except sensitive attributes
    G: gender attribute (m: male; f: female)
    d: prediction result (0 or 1 for binary classification)
    ( Xf = Xm ^ Gf != Gm ) -> df = dm
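
    A sketch of this test for a trained classifier clf that accepts a pandas DataFrame (the column name gender and its value encoding are assumptions; clf could be, e.g., a pipeline that encodes it):

```python
import pandas as pd

def causal_discrimination_violations(clf, X, sensitive_col="gender", values=("m", "f")):
    """Flip the sensitive attribute of every subject and count how many predictions change."""
    X_flipped = X.copy()
    X_flipped[sensitive_col] = X[sensitive_col].map({values[0]: values[1],
                                                     values[1]: values[0]})
    d_original = clf.predict(X)
    d_flipped = clf.predict(X_flipped)
    return int((d_original != d_flipped).sum())   # 0 means no causal discrimination found
```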

    6. Fairness through awareness [Ref5.def4.3]

    Reference: Cynthia Dwork, Moritz Hardt, Toniann Pitassi, Omer Reingold, and Richard Zemel. 2012. Fairness Through Awareness. In Proceedings of the 3rd Innovations in Theoretical Computer Science Conference.

    Concept: This is an individual-based fairness notion. The central principle of this definition is that “two individuals who are similar with respect to a particular task should be classified similarly” [Fairness Through Awareness]. The similarity between individuals is defined by a distance metric. The choice of distance metric is assumed to be ‘public and open to discussion and continual refinement’. And of course, the choice of distance metric is essential to the result of fairness through awareness.

    Representation: This fairness condition is formalized as a Lipschitz condition. In their approach, a classifier is a randomized mapping from individuals to distributions over outcomes. The Lipschitz mapping is defined below:

    Lipschitz mapping: D(M(x), M(y)) <= d(x, y) for every pair of individuals x and y, where M maps each individual to a distribution over outcomes, D is a distance between output distributions, and d is the task-specific similarity metric between individuals.

    With the help of the Lipschitz mapping, the classifier is fair if the distance between M(x) and M(y) is no greater than the individual distance d(x, y).
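
    As a sketch, the condition can be checked empirically on a finite sample, here using total variation distance between output distributions as the distance D; the task-specific individual metric d(x, y) must be supplied, which is exactly the hard part discussed above:

```python
import numpy as np

def lipschitz_violations(prob_outputs, pairwise_d):
    """Count pairs (i, j) with D(M(x_i), M(x_j)) > d(x_i, x_j).

    prob_outputs: (n, k) array, row i is M(x_i), a distribution over k outcomes.
    pairwise_d:   (n, n) array of task-specific individual distances d(x_i, x_j).
    """
    n = prob_outputs.shape[0]
    violations = 0
    for i in range(n):
        for j in range(i + 1, n):
            tv = 0.5 * np.abs(prob_outputs[i] - prob_outputs[j]).sum()  # total variation
            if tv > pairwise_d[i, j]:
                violations += 1
    return violations   # the mapping is fair (Lipschitz) on this sample if this is 0
```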

    Advantage: This definition gives an individual-based fairness metric, which fills the gap left by group fairness. Moreover, fairness through awareness can be extended to measure group fairness by placing conditions on the similarity metric.

    Related open questions: Three open questions are stated in the paper <Fairness Through Awareness>; see page 19 of the paper for details. I want to note one thing from the first question here. It would be straightforward if both individuals i and j were from the same group; however, it can happen that i and j are from different groups (one from the protected group, the other from the unprotected group). When the second situation appears, “we may need human insight and domain information” to set the distance metric.
