https://docs.google.com/document/d/1B2d ... =drive_web

I went ahead and calculated averages to determine what the winning scores would be under this new average. Obviously this would affect all scores.
How I did it-
I divided each era into yearly segments, with the exception of 2014+, given the lack of LDCs in the new era. I only included the 29 main LDCs. The yearly segments follow WITBLO, and they largely account for population refreshes: the retirement of old members and the rise of new ones with new standards. This also gives a spread of time over which the designer itself evolved and we theoretically became more or less impressed by what we saw.
Every year, I took the average of each LDC's top 5 scores to get at the meat of the memorable entrants. If fewer than 5 medaled, I just went with the ones mentioned in the Hall here. This captures the memorable entrants without being skewed by outlying bad entrants, or by LDCs with less spread due to fewer entrants.
From there, I took all the LDC averages of a year and averaged them down into a single yearly average.
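To make the averaging concrete, here's a small Python sketch of those two steps. The scores, LDC names, and Hall fallback data are made up for illustration; only the top-5-with-fallback rule and the yearly averaging come from what's described above.

def ldc_average(scores, hall_scores=None):
    """Average an LDC's top 5 scores.

    If fewer than 5 entrants medaled, fall back to the Hall-mentioned
    entrants instead (passed in as hall_scores).
    """
    if len(scores) >= 5:
        top = sorted(scores, reverse=True)[:5]
    else:
        top = hall_scores if hall_scores else scores
    return sum(top) / len(top)

# Hypothetical data: each LDC in a given year maps to its medaling scores.
year_ldcs = {
    "LDC 1": [19, 18, 17.5, 17, 16, 14],
    "LDC 2": [18, 17, 16.5],          # fewer than 5 medaled
}
hall = {"LDC 2": [18, 17, 16.5]}      # Hall-mentioned entrants as fallback

ldc_averages = {
    name: ldc_average(scores, hall.get(name))
    for name, scores in year_ldcs.items()
}

# Average the LDC averages down to one yearly figure.
year_average = sum(ldc_averages.values()) / len(ldc_averages)
print(ldc_averages, year_average)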
Then I took the average of all 29 LDCs and compared it to the year average. From there on out, I used percentages to work out how much of each score to keep, which gives the updated score. For example, the 18 from the 2nd LDC kept 97.08%, which gives 17.47. I did this for all 31 winners.
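And a minimal sketch of that rescaling step, assuming the percentage is the ratio of the all-time average to that year's average (the actual averages behind the 97.08% aren't given here, so the factor is plugged in directly):

def adjustment_factor(overall_average, year_average):
    """Ratio of the all-time average to a given year's average."""
    return overall_average / year_average

def adjusted_score(score, factor):
    """Scale a raw winning score by its year's adjustment factor."""
    return round(score * factor, 2)

# The worked example above: the 18 from the 2nd LDC at a 97.08% factor.
print(adjusted_score(18, 0.9708))  # 17.47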
Let me know what you think of the data! Is it fair? Can it be improved? Is it indicative? The pattern seems very up and down, but very distinct.