About the project
Team
Let's kick things off by saying a big hello from Copenhagen, Denmark! We're Wojtek, Tomek, Kacper, and Bartek - four friends who aren't just teammates at work but also in the thrilling arenas of League of Legends. In the professional realm, we immerse ourselves in data science, data engineering, and web development, crafting projects with curiosity and zeal. While we might occasionally get a little too carried away in our gaming sessions, we bring that same passion and camaraderie to every project we tackle! 🚀🎮

Wojciech Ciok
@Rambeige (EUNE)

Tomasz Nawrocki
@TheGedoba (EUNE)

Kacper Żyła
@Ender5224 (EUNE)

Bartłomiej Granat
@QLH Top (EUNE)
Testing guide
- Global ranking is always visible on the website. You can select to see the top 20, 50, or 100 teams at once.
- To see the tournament-specific ranking scroll down the page to the Tournament Ranking section. Select any tournament available either by typing in the name, or selecting from the dropdown of all tournaments. Next, you can select the ranking before each stage of the tournament if it is available.
- To see the current ranking including only custom-chosen teams scroll down to the bottom of the page and type in or select from the dropdown a number of team names and browse their ranking.
- Additionally, our solution is available as an API under /api path as specified in the submission requirements.
Submission Video
Architecture
Let's dive into our project's methodology, where quality and user experience take the center stage! We believe that the best solutions are not only sturdy and reliable but also easy and enjoyable to use 🤓.
“Anything Worth Doing Is Worth Doing Right.”
~ Camille
In the spirit of the hackathon, our goal was to design a solution that makes the most of AWS services. The architecture of the solution is presented in the diagram below. All code used in this project is available in this private GitHub repository.

1. Streamlined Data Processing
We begin with raw data, which is processed using an AWS Glue job. By focusing only on the data we need and saving it in a dedicated S3 bucket, we've managed to reduce data sizes significantly—by about 30 times! This makes further calculations much faster 🔥.
2. Analyzing with Precision
Next up, another AWS Glue job takes the stage to analyze the data 📈, calculating Elo rankings for each team and creating objects for every tournament. At each stage, teams' Elo, based on prior match data, is saved. Dive into the "Methodology" section for specifics on how we calculate the rankings. The analyzed data finds its home in two DynamoDB tables: "global-rankings" and "tournaments".
3. Effortless Data Fetching
The data stored in DynamoDB is smoothly fetched by serverless functions, crafted as API routes in the Next.js framework. Below, we outline the endpoints that have been implemented in this solution.
Endpoint | Description |
---|---|
/api/global_rankings?number_of_teams=[number] | Returns current global ranking. |
/api/tournament_rankings/[id]?stage=[stage slug] | Returns the ranking of teams before a specific stage of a tournament. |
/api/team_rankings?team_ids=[id]&team_ids=[id]&... | Returns the ranked list of the provided teams. |
/api/teams | Returns names of all teams. |
/api/tournaments | Returns a list of all tournaments and their stages. |
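As an illustration, a client could assemble the query strings for these endpoints like this (a minimal Python sketch; the base URL is a placeholder, not our actual deployment):

```python
from urllib.parse import urlencode

# Placeholder base URL -- substitute the deployed Amplify domain.
BASE = "https://example.amplifyapp.com/api"

def global_rankings_url(number_of_teams):
    """Query string for the current global ranking (top N teams)."""
    return f"{BASE}/global_rankings?{urlencode({'number_of_teams': number_of_teams})}"

def team_rankings_url(team_ids):
    """team_ids is repeated once per team, as the endpoint expects."""
    return f"{BASE}/team_rankings?{urlencode([('team_ids', t) for t in team_ids])}"
```

Note that `team_ids` appears once per team in the query string, which is why the second helper passes a list of pairs to `urlencode`.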
4. Seamless Web Hosting
To round things off, our Next.js web app 🌐 is hosted securely and reliably in AWS Amplify. We've aimed for a setup that is not only efficient and reliable but also straightforward and easy to understand.
Methodology
Our solution is based on the Elo rating system, variants of which are used in, for instance, FIFA's global rankings. During development, we brainstormed numerous ways to enhance the system. In this discussion, we'll go over each implemented idea in detail and, ultimately, present the final configuration. We will also explain our decisions, the challenges we encountered, and how we addressed them.
To effectively manage historical tournament results at specific stages, we have implemented a system to capture snapshots of outcomes for each tournament and corresponding stage. These precomputed results are stored to facilitate efficient front-end delivery, ensuring a seamless and expedient user experience.
To establish an up-to-date global ranking, it's imperative to focus on active teams. Consequently, for the global ranking feature, we exclude teams that have not participated in a game within the past two years.
Ranking requirements
Oracle Lens Elo Ranking
In order to generate a ranking, we needed a method of comparing teams. We implemented a customized Elo method based on the revised FIFA/Coca-Cola World Ranking. This method adjusts a team's existing points based on the match result and its importance.
Originally, it considers only whether the result is a win, a loss, or a tie (possible in football).
We have taken the following parameters into account.
The formula for calculating the new rating of the team is as follows:
R = c_r + i * s * t_m * (W - W_e)
Where:
R is the new rating of the team
c_r is the current rating of the team
i is the importance of the match
s is the stomp multiplier
t_m is the time multiplier
W is the result of the match
W_e is the expected value of the match
- Importance - calculated based on the stakes, with higher importance for international tournaments and even greater importance if it’s a knockout stage of an international tournament.
- Stomp multiplier - calculated based on the quantiles of gold difference per minute.
- Time multiplier - calculated based on the time difference between the tournament we're calculating for and the match being considered. This is necessary to handle historical data gracefully. As a result, when calculating Elo for a tournament in 2018, matches from 2014 will carry less weight than matches from 2017.
- Result of the match - calculated based on the result of the series. We treat a 3-0 stomp and a 3-2 result differently. The multiplier is higher for a greater difference in games within the series and lower for a smaller difference.
- Expected value is calculated in the same way as in the revised FIFA/Coca-Cola World Ranking. Essentially, we expect a team with higher Elo to be more likely to win, so it loses more Elo when defeated by a lower-rated opponent than it would against an equally rated one.
- We do not deduct Elo for the knockout stages of international tournaments. Doing so would decrease the Elo of, for example, the 1st team in a national league, whereas the 2nd team, which did not advance to MSI or Worlds, would stay at the same level.
- We've also implemented individual Elo ratings for players, which contribute to the composite Elo rating of the entire team. You can find more information about this in the individual elo section.
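The update rule above can be sketched in a few lines (a minimal sketch; the logistic expected-score formula with a 600-point scale follows the revised FIFA approach and is an assumption about the exact constant used):

```python
def expected_score(rating_a, rating_b, scale=600.0):
    """W_e: logistic expected score of team A against team B.
    The 600-point scale constant follows the revised FIFA formula (assumed)."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / scale))

def updated_rating(c_r, i, s, t_m, w, w_e):
    """R = c_r + i * s * t_m * (W - W_e)"""
    return c_r + i * s * t_m * (w - w_e)
```

For example, two equally rated teams have an expected score of 0.5 each, so the winner gains half of the combined multiplier `i * s * t_m`.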
The simplified flow of calculation:
- Iterate over matches in chronological order.
- Obtain information about match stakes (importance and whether to deduct Elo on loss), as well as the time multiplier.
- Retrieve the current ratings of Team A and Team B.
- Get the result of the match and the stomp multiplier.
- Calculate the new ratings for both teams.
- Assign the new ratings, including the distribution of points to individual players.
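The steps above can be sketched as a single chronological pass over the match list (a simplified sketch; the field names, base rating, and scale constant are illustrative, not our exact pipeline):

```python
from collections import defaultdict

def run_elo(matches, base_rating=1200.0):
    """matches: dicts with keys date, team_a, team_b, importance, stomp,
    time_mult, result_a (1 if team_a won, else 0), deduct_on_loss."""
    ratings = defaultdict(lambda: base_rating)
    for m in sorted(matches, key=lambda m: m["date"]):
        a, b = m["team_a"], m["team_b"]
        # Expected score for team_a, as in the revised FIFA formula.
        expected_a = 1.0 / (1.0 + 10 ** ((ratings[b] - ratings[a]) / 600))
        delta = m["importance"] * m["stomp"] * m["time_mult"] * (m["result_a"] - expected_a)
        winner, loser = (a, b) if m["result_a"] == 1 else (b, a)
        ratings[winner] += abs(delta)
        # No Elo is deducted in knockout stages of international tournaments.
        if m["deduct_on_loss"]:
            ratings[loser] -= abs(delta)
    return dict(ratings)
```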
The primary challenge of this approach is the "cold start." We know that assigning the same Elo to all teams on their first match is not ideal, as a new team could join leagues at different competitive levels or be composed of experienced players.
Base Elo Per League
Early in the development process, while testing our solution, we realized the need to implement a mechanism that would amplify the differences in Elo ratings across various leagues. This discrepancy would naturally emerge if there were more frequent and regular matches between teams from different leagues.
To address this issue, we introduced a mechanism that involves assigning dynamic base Elo ratings to teams. When determining a team's base Elo, we analyze all international matches played by teams within a specific league. Teams joining a league with a strong international performance history will receive a higher initial Elo rating compared to those entering a weaker league.
To elaborate, we calculate Elo ratings for each year, taking into account all historical international matches up to that point. Then, we group teams' Elo ratings by their respective leagues. This results in an annual Elo ratings dictionary for each league, which is used to look up the base Elo corresponding to a team's league and the year of its first match.
A team we see for the first time while calculating the base needs an initial value. However, in the competitive scene there are big discrepancies between teams from the main leagues, which take part in international tournaments, and those from the smaller national ones. Because of that, we decided to start these two groups from different initial Elo values. All in all, this successfully amplifies the differences in Elo ratings across leagues.
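A minimal sketch of this base-Elo lookup (the dictionary structure and function name are illustrative; the two starting values are the ones chosen by our final grid search):

```python
HIGH_TIER_START, LOW_TIER_START = 1400, 1000  # values from our final grid search

def base_elo(league, year, league_elo_by_year, major_leagues):
    """league_elo_by_year: {year: {league: mean Elo of the league's teams,
    computed from all international matches up to that year}}.
    Falls back to the tier's starting value when the league has no history."""
    fallback = HIGH_TIER_START if league in major_leagues else LOW_TIER_START
    return league_elo_by_year.get(year, {}).get(league, fallback)
```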
Individual Elo
Another approach to deal with the cold starts is individual elo. Given that a team's roster may change periodically, we devised an inventive system for individual Elo ratings. The core idea is straightforward: the composite Elo rating of the team should match the team's Elo without the application of the individual Elo method. The composite Elo is determined by averaging six components, including the Elo ratings of five individual players and the coach/chemistry Elo. By employing this method, any substitution of a player with a higher Elo rating is accurately reflected in the team's composite Elo.
So, what makes this approach innovative?
After a match finishes, we don't distribute the Elo change to individual players uniformly. Instead, we distribute it inversely proportionally to each component's contribution to the composite Elo.
This ensures that even if the individual Elo ratings differ at the beginning, the players' ratings converge the longer the team plays together. Another assumption we enforced is that the sum of the individual components' Elo changes (across both teams) is 0, the same as in the basic Elo system.
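One plausible inverse-proportional distribution scheme looks like this (a sketch under the assumptions stated in its comments; the exact weighting in our pipeline may differ in details):

```python
def distribute_delta(components, team_delta):
    """components: Elo per component (5 players + a coach/chemistry slot).
    The team's composite Elo is the mean of the six components; to move that
    mean by team_delta, the summed component changes must equal
    len(components) * team_delta. Each component's share is inversely
    proportional to its current rating, so lower-rated components move more
    and ratings converge the longer the roster plays together."""
    inverse = {name: 1.0 / elo for name, elo in components.items()}
    total = sum(inverse.values())
    n = len(components)
    return {
        name: elo + n * team_delta * inverse[name] / total
        for name, elo in components.items()
    }
```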
In this case, we had some issues with the mapping data, as it was crucial for determining which players actually played in a given game. The data in tournaments.json also included subs and sometimes contained outdated information about a team's roster. For matches where individual players were missing, we substituted the missing components with the coach/chemistry component.
In the end, we decided not to use individual Elo, as it only helped with small national leagues. We attribute this limitation to the absence of complete mapping data, with the LPL, one of the most significant leagues, being particularly affected.
Stomp Metric
Another issue to consider when developing an Elo system is tracking not only which team won a game but also by how much. For example, the FIFA Elo rating uses goal difference to distinguish between even matches and those that ended with large advantages. To further improve our Elo system, we decided to enhance it with game statistics. After considering different metrics, we came up with a "stomp metric" measuring how big the difference was between two teams in a specific match.
To both include game-specific data and keep it consistent with the Elo methodology, we reduced the difference between two teams in a match to a single number used as a multiplier during the standard Elo calculation. We decided that the best approximation of how much one team won over the other is the gold difference between them at the end of the game. We define our "stomp metric" for each match in the following way:
- For each game, calculate the gold difference of the winning team divided by the game length (in minutes).
- Clip all differences lower than 0 to 0, indicating no "stomp".
- Convert each gold-difference-per-minute value into a number from 0 to 0.5 based on its quantile within a static list of 100 historic gold-difference-per-minute values (the list does not change over time); the quantile is divided by two to land in the 0 to 0.5 range.
- For a "best of 1" match, the "stomp multiplier" is created by adding 1 to the result of the previous step, so the Elo gain or loss is always multiplied by a number from 1 to 1.5, keeping the multipliers consistent in the system.
- For a "best of 3" or "best of 5" match, we add the per-game "stomp metric" values for games the winning team won and subtract those from the games it lost. To account for edge cases where this would exceed the desired range, we clip the "stomp multiplier" to stay within 1 to 1.5.
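The steps above can be sketched as follows (representing the static list as sorted quantile edges is an assumption about how it is stored):

```python
import bisect

def stomp_value(gold_diff_per_min, quantile_edges):
    """Map one game's gold difference per minute to the [0, 0.5] range.
    quantile_edges: static ascending list of 100 historic gold-diff/min values."""
    gd = max(gold_diff_per_min, 0)  # clip: a negative difference means no stomp
    quantile = bisect.bisect_right(quantile_edges, gd) / len(quantile_edges)
    return quantile / 2  # halve the quantile to land in [0, 0.5]

def stomp_multiplier(game_values, winner_won_game):
    """Bo1/Bo3/Bo5: add the winner's per-game stomp values, subtract those
    from its lost games, then clip so Elo changes are multiplied by 1 to 1.5."""
    signed = sum(v if won else -v for v, won in zip(game_values, winner_won_game))
    return min(max(1.0 + signed, 1.0), 1.5)
```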
Our approach is both simple in methodology and effective in distinguishing close matches from "stomps". In the evaluation process, most of the metrics described later in this document improved. An issue with the method may occur when a team wins games in its national league by major differences but does not achieve success internationally; its national-league games could then result in a higher-than-natural Elo increase. One way of dealing with this problem is the multiplier range shrinking implemented in our system. As the method brought game-specific data into our system and improved evaluation metrics, we used it in our final Elo calculation.
Year-Weighted Elo
A common problem when creating an Elo-based system is that teams with great past achievements that stop playing, or cease to exist, remain at the top of the rankings, since they can no longer lose points. In the case of the League of Legends Global Power Rankings, it is especially important to keep the rankings up to date while keeping past results consistent. We solved this with a simple year-based weight on Elo changes.
For each year from 2011 to 2023, we multiply the change in Elo by a factor from a fixed range of 0.3 to 1.5, assigning the smallest value to 2011 and increasing it by 0.1 each year. In this problem, the absolute Elo value is not important, only the ranking itself. Although Elo changes are smaller in the past, the correct order of teams is preserved, making it possible to get relevant results for tournaments from the very beginning of LoL esports. At the same time, the most recent games cause bigger Elo changes, making the current rankings more responsive to occurring events.
Including the year-based multiplier in our Elo system improved evaluation metrics across almost all considered tournaments and the global ranking itself. An issue this approach might cause is a bigger discrepancy in ratings for teams that recently had an extraordinarily good or bad run, resulting in larger shifts in their Elo than in the standard implementation.
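The year weight itself is a one-liner (a sketch; the cap simply guards against years past 2023):

```python
def year_weight(year, first_year=2011, base=0.3, step=0.1, cap=1.5):
    """Weight on Elo changes: 0.3 for 2011, growing by 0.1 per year up to 1.5."""
    return min(base + step * (year - first_year), cap)
```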
Evaluation metric and Final Model
All these different parameters and features give us a lot of possible combinations of the system setup. With that we needed a method allowing us to compare two models to each other.
To do that we decided to use the Kendall tau rank distance. This method counts the number of pairwise disagreements between two rankings. The larger the Kendall tau distance, the more dissimilar they are.
Given two rankings, a pair of elements is said to be concordant if they have the same relative order in both rankings, and discordant if they have opposite orders. The Kendall tau distance is then calculated as the number of discordant pairs.
As the tournaments and rankings contain different numbers of teams we decided to use the normalized version of the Kendall tau distance to account for that. With that in mind the final formula for it can be defined as:
Kn = 2d / (n(n-1))
Where:
Kn is the normalized Kendall tau distance
d is the number of discordant pairs
n is the number of teams in the ranking
For example given two rankings:
Ranking 1: MAD, T1, G2, JDG
Ranking 2: T1, MAD, JDG, G2
The discordant pairs would be (MAD, T1) and (G2, JDG), as the order of these teams has changed between the two rankings. All other pairs (MAD, G2), (MAD, JDG), (T1, G2), and (T1, JDG) remain in the same relative order.
With the number of teams equal to 4, the Kendall tau distance is:
Kn = 2*2 / (4*(4-1)) = 1/3
Meaning 1/3 of the total number of pairs is in an incorrect order between the two rankings.
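The metric can be computed directly (a minimal sketch reproducing the worked example above):

```python
from itertools import combinations

def kendall_tau_distance(ranking_a, ranking_b):
    """Normalized Kendall tau distance: Kn = 2d / (n(n-1)),
    the fraction of team pairs whose relative order disagrees."""
    position = {team: i for i, team in enumerate(ranking_b)}
    discordant = sum(
        1 for x, y in combinations(ranking_a, 2) if position[x] > position[y]
    )
    n = len(ranking_a)
    return 2 * discordant / (n * (n - 1))
```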
Another challenge in evaluating our methods was the fact that there is no objective ground truth for the global power ranking of LoL esports teams. To circumvent that, we picked a handful of international tournaments and used their final results as the "true" power ranking to compare, via the Kendall tau metric, with the ranking produced by our method. The tournaments we picked were:
- Worlds 2022
- Worlds 2021
- Worlds 2020
- MSI 2023
- MSI 2022
- Ultraliga Summer 2022
With that we could calculate numerical performance for each of the different setups and compare them to each other for every one of these tournaments. With the significant number of possible combinations we decided to use a grid search method to find the optimal values of the parameters. The values that we tested included:
- Starting elo of the lower tier leagues
- Starting elo of the higher tier leagues
- Importance of the assigned base elo value
- International tournament importance multiplier
- Including individual elo
- Including year weights in elo
- Including stomp metric
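A grid search over such a parameter space can be sketched as follows (the grid values here are illustrative, not our full search space; `evaluate` stands in for running the whole Elo pipeline and averaging Kendall tau distances over the test tournaments):

```python
from itertools import product

# Illustrative grid -- example values, not our full 2800-combination search space.
grid = {
    "low_tier_start": [800, 1000, 1200],
    "high_tier_start": [1300, 1400, 1500],
    "base_elo_weight": [0.25, 0.5, 0.75],
    "use_individual_elo": [False, True],
    "use_year_weights": [False, True],
    "use_stomp": [False, True],
}

def grid_search(evaluate, grid):
    """evaluate: params dict -> mean Kendall tau distance on the test
    tournaments (lower is better). Returns the best parameter combination."""
    keys = list(grid)
    return min(
        (dict(zip(keys, values)) for values in product(*grid.values())),
        key=evaluate,
    )
```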
In total, we evaluated over 2800 combinations of these parameters to identify the ones that resulted in the best performance on the tested tournaments. In the figure we present the distributions of Kendall tau distance values for different combinations of our parameters. As we can see, depending on the chosen parameters, the discrepancies in the results can be very high. After carefully evaluating each of the parameters and improvements, separately and in combination, we decided on the final Elo system to use.

Boxplots showing aggregated Kendall tau distance values for over 2800 different setups of the elo system
Ultimately, the best performing model incorporates the stomp metric and year-weighted elo, with the numerical parameters:
- Starting elo of the lower tier leagues: 1000
- Starting elo of the higher tier leagues: 1400
- Importance of the assigned base elo value: 0.5
The final performance of the chosen model on the tournaments picked for evaluation is:
Tournament | Ranking distance before | Ranking distance after |
---|---|---|
Worlds 2022 | 0.173 | 0.105 |
Worlds 2021 | 0.225 | 0.095 |
Worlds 2020 | 0.190 | 0.086 |
MSI 2023 | 0.205 | 0.205 |
MSI 2022 | 0.290 | 0.254 |
Ultraliga Summer 2022 | 0.244 | 0.155 |
These results show that, for big international tournaments, a ranking created by our model has at most 30% of pair orders incorrect compared to the tournament's final result. And for Worlds 2022, the tournament with the most available data, the model achieves as low as 17% incorrect pairs.
The difference between the distance before and after the tournaments (with the exception of MSI 2023) shows how well the model adjusts to the new match results.
In the figure below we show an example of how Elo evolved through time for selected teams. The shaded areas around the lines indicate dates with more than one Elo update. The plot shows both how Elo varies more thanks to the time multiplier and how the specific weights for Worlds resulted in a big Elo spike for DRX, followed by a steady decline once they started losing games in their national league afterwards.

Elo evolution through time for selected teams
Thank you for considering our submission!
