
Custom weights normalization in PlackettLuce model #179

@iT-Drake

Description


I'm using the PlackettLuce model to calculate player ratings. In each match I measure each player's performance and compute normalized weights. When I pass those weights to the rate method of the model, they go through a second normalization:

if weights:
    weights = [_normalize(team_weights, 1, 2) for team_weights in weights]

The value boundaries of the _normalize method mean the player with the highest weight on a team always gets twice the mu gain of the lowest-weighted player, no matter how good their performance actually was. At the same time, the player with the worst performance still receives the standard mu gain, equal to what they would receive if I didn't specify weights at all.
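For context, _normalize appears to be a min-max rescale into the target range. The sketch below is my reading of that behavior, not the library's actual code; the degenerate-case handling is an assumption. It shows why the endpoints of any weight vector always land on 1 and 2, regardless of how narrow the original spread was:

```python
def normalize(vector, target_min, target_max):
    """Min-max rescale a list into [target_min, target_max].

    Sketch of the assumed behavior of openskill's _normalize helper.
    """
    source_min, source_max = min(vector), max(vector)
    if source_max == source_min:
        # Assumption: all-equal weights collapse to the upper bound.
        return [target_max] * len(vector)
    scale = (target_max - target_min) / (source_max - source_min)
    return [target_min + (v - source_min) * scale for v in vector]

# Both weight vectors from the examples below get identical endpoints:
print(normalize([0.5, 1, 2], 1, 2))      # lowest -> 1.0, highest -> 2.0
print(normalize([0.95, 1, 1.05], 1, 2))  # lowest -> 1.0, highest -> 2.0
```

The middle player's rescaled weight still differs between the two vectors, which matches the observation below that only the first and third players' rating changes are identical across the two examples.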

Example:

from openskill.models import PlackettLuce

model = PlackettLuce()

team1 = [model.rating(name=str(index)) for index in range(1, 4)]
team2 = [model.rating(name=str(index)) for index in range(1, 4)]

teams = [team1, team2]
ranks = [1, 2]

# Example 1
weights = [[0.5, 1, 2], [1, 1, 1]]

new_ratings = model.rate(teams=teams, ranks=ranks, weights=weights)

print("Team 1:")
for old, new in zip(team1, new_ratings[0]):
    print(f"mu: {new.mu - old.mu}, sigma: {new.sigma - old.sigma}")
    # mu: 1.634389124660622, sigma: -0.10918353120293567
    # mu: 2.179185499547497, sigma: -0.14604295978000614
    # mu: 3.268778249321244, sigma: -0.22026418245867596

print("Team 2:")
for old, new in zip(team2, new_ratings[1]):
    print(f"mu: {new.mu - old.mu}, sigma: {new.sigma - old.sigma}")
    # mu: -1.634389124660622, sigma: -0.10918353120293567
    # mu: -1.634389124660622, sigma: -0.10918353120293567
    # mu: -1.634389124660622, sigma: -0.10918353120293567

# Example 2
weights = [[0.95, 1, 1.05], [1, 1, 1]]

new_ratings = model.rate(teams=teams, ranks=ranks, weights=weights)

print("Team 1:")
for old, new in zip(team1, new_ratings[0]):
    print(f"mu: {new.mu - old.mu}, sigma: {new.sigma - old.sigma}")
    # mu: 1.634389124660622, sigma: -0.10918353120293567
    # mu: 2.451583686990933, sigma: -0.16453504304744015
    # mu: 3.268778249321244, sigma: -0.22026418245867596

print("Team 2:")
for old, new in zip(team2, new_ratings[1]):
    print(f"mu: {new.mu - old.mu}, sigma: {new.sigma - old.sigma}")
    # mu: -1.634389124660622, sigma: -0.10918353120293567
    # mu: -1.634389124660622, sigma: -0.10918353120293567
    # mu: -1.634389124660622, sigma: -0.10918353120293567

You can see that the rating change is the same in both examples for the first and third players on team1. You could argue that, within the match, the 3rd player on team1 had the best performance, but in the second example their performance is barely above the global mean.

Possible Solutions

Having a way to customize the boundaries (e.g. 0.8 - 1.2 instead of 1.0 - 2.0) would provide a more predictable mu change. For example, store these values (1 and 2) as class instance variables so they can be changed.

Another option would be to allow disabling the second normalization when the weights are already pre-normalized.
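One possible shape for the first option, sketched as a standalone class. The attribute names weight_min and weight_max are invented for illustration; nothing like this exists in openskill today:

```python
class WeightedModel:
    """Hypothetical sketch: weight normalization bounds as configurable
    instance attributes instead of the hardcoded 1 and 2."""

    def __init__(self, weight_min=1.0, weight_max=2.0):
        self.weight_min = weight_min
        self.weight_max = weight_max

    def normalize_weights(self, weights):
        # Min-max rescale into [weight_min, weight_max].
        lo, hi = min(weights), max(weights)
        if hi == lo:
            return [self.weight_max] * len(weights)
        scale = (self.weight_max - self.weight_min) / (hi - lo)
        return [self.weight_min + (w - lo) * scale for w in weights]

# A narrower band shrinks the spread between best and worst players:
model = WeightedModel(weight_min=0.8, weight_max=1.2)
print(model.normalize_weights([0.5, 1, 2]))  # endpoints 0.8 and 1.2
```

With bounds like 0.8 - 1.2 the top weight is 1.5x the bottom rather than 2x, so the gap between the best and worst mu changes on a team becomes correspondingly smaller.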

Alternatives

No response
