fairgrad.torch.cross_entropy
- class fairgrad.torch.cross_entropy.CrossEntropyLoss(weight=None, size_average=None, ignore_index=-100, reduce=None, reduction='mean', y_train=None, s_train=None, fairness_measure=None, epsilon=0.0, fairness_rate=0.01)
This is an extension of the CrossEntropyLoss provided by PyTorch; please check the PyTorch documentation for background on the cross entropy loss. Here, we augment the cross entropy loss to enforce fairness. The exact algorithm can be found in the FairGrad paper <https://arxiv.org/abs/2206.10923>.
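As a rough intuition only (a conceptual sketch under my own simplifying assumptions, not the library's implementation), FairGrad can be thought of as maintaining one weight per (class, sensitive-group) cell and nudging each weight in the direction of its current fairness violation at step size `fairness_rate`; the names `update_group_weights` and `violations` below are hypothetical:

```python
import numpy as np

def update_group_weights(weights, violations, fairness_rate=0.01):
    """Conceptual sketch only: shift each (class, group) weight in the
    direction of its measured fairness violation, then clip at zero so
    the weights stay non-negative."""
    return np.clip(weights + fairness_rate * violations, 0.0, None)

# Four hypothetical (class, group) cells with small signed violations.
w = update_group_weights(np.ones(4), np.array([0.1, -0.1, 0.0, 0.2]))
print(w)  # [1.001 0.999 1.    1.002]
```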
- Parameters
weight (Tensor, optional) – a manual rescaling weight given to each class. If given, has to be a Tensor of size C
size_average (bool, optional) – Deprecated (see reduction). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the field size_average is set to False, the losses are instead summed for each minibatch. Ignored when reduce is False. Default: True
ignore_index (int, optional) – Specifies a target value that is ignored and does not contribute to the input gradient. When size_average is True, the loss is averaged over non-ignored targets.
reduce (bool, optional) – Deprecated (see reduction). By default, the losses are averaged or summed over observations for each minibatch depending on size_average. When reduce is False, returns a loss per batch element instead and ignores size_average. Default: True
reduction (string, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the weighted mean of the output is taken, 'sum': the output will be summed. Note: size_average and reduce are in the process of being deprecated, and in the meantime, specifying either of those two args will override reduction. Default: 'mean'
y_train (np.ndarray[int], Tensor, optional) – Labels corresponding to all training examples.
s_train (np.ndarray[int], Tensor, optional) – Sensitive attribute corresponding to all training examples. If there are 2 sensitive attributes, each of them binary (for instance, gender: male/female, and age: above 45/below 45), then the total number of unique sensitive attribute values is 4.
fairness_measure (string) – Currently we support “equal_odds”, “equal_opportunity”, and “accuracy_parity”.
epsilon (float, optional) – The slack which is allowed for the final fairness level.
fairness_rate (float, optional) – Parameter which intertwines the current fairness weights with the sum of previous fairness rates.
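As an illustration of the s_train encoding described above (variable names and encoding scheme are mine, not prescribed by the library), two binary attributes can be combined into a single integer code with four unique values:

```python
import numpy as np

# Hypothetical encoding: combine two binary sensitive attributes into
# one integer code per example, yielding 4 unique group values.
gender = np.array([0, 1, 0, 1])   # 0 = male, 1 = female
age = np.array([0, 0, 1, 1])      # 0 = below 45, 1 = above 45
s_train = gender * 2 + age        # values in {0, 1, 2, 3}
print(np.unique(s_train))         # [0 1 2 3]
```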
Examples:
>>> import torch
>>> from fairgrad.torch.cross_entropy import CrossEntropyLoss
>>> input = torch.randn(10, 5, requires_grad=True)
>>> target = torch.empty(10, dtype=torch.long).random_(2)
>>> s = torch.empty(10, dtype=torch.long).random_(2)  # protected attribute
>>> loss = CrossEntropyLoss(y_train=target, s_train=s, fairness_measure='equal_odds')
>>> output = loss(input, target, s, mode='train')
>>> output.backward()
- forward(input, target, s=None, mode='train')
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
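To illustrate why calling the module instance matters (a generic PyTorch sketch, independent of fairgrad's internals): forward hooks registered on a module run when the instance is called, but not when forward is invoked directly.

```python
import torch
import torch.nn as nn

lin = nn.Linear(3, 2)
calls = []
lin.register_forward_hook(lambda module, inputs, output: calls.append(1))

x = torch.randn(1, 3)
lin(x)             # calling the instance runs the registered hook
lin.forward(x)     # calling forward() directly silently skips it
print(len(calls))  # 1: only the instance call triggered the hook
```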