Complaint from a scout: "I get no feedback on whether or not I evaluate well. I guess my reports are alright because my contract keeps getting renewed." He would also like to see reports from other scouts on the same players he writes up.
I have heard similar complaints from other scouts, and I do think there are two sides to this. The scouts simply want to know how well their superiors think they evaluate players at both the amateur and pro levels. I think anyone would like to know how they are performing, especially if they are doing a good job. The same scout who told me this said that more than anything, he just wants some new lingo. He feels like his reports get repetitive at times, and when he hears a new term he'll think to himself, "that's the word I've been looking for" to describe a player.
From the organization's perspective, it makes a lot of sense not to share reports among the scouts for a few reasons. First and foremost, I think teams want to avoid confirmation bias. If two scouts see the same player and have drastically different opinions, that would be important to know. I imagine there would also be fear that a scout leaves the organization and takes that proprietary knowledge with him, potentially using that information in future trades. The deeper reason for minimal or no feedback may simply be that there is no system in place to make those evaluations or distribute reports.
Possible Solutions: After the season, give scouts an overall grade that is a mixture of objectivity and subjectivity. Scouts would ideally get continuous feedback on the verbiage in their reports throughout the season, much like Step 4 of the coach evaluation system I wrote about, so their end-of-year evaluation is not a surprise. But at the end of the season, the scouting director or whoever is calling the shots should give a grade (I'm thinking 20-80 scale) on the quality of the writing. Not looking for any poetry, but mainly making sure the words used line up with the grades on the 20-80 scale for that player's tools.
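To make that "words line up with the grades" idea concrete, here is a minimal sketch of what an automated check could look like. The vocabulary-to-grade-band mapping below is invented purely for illustration; a scouting department would plug in its own terms and its own bands on the 20-80 scale.

```python
# Hypothetical sketch: flag tools where the report's descriptive word
# doesn't match the numeric 20-80 grade. The bands here are assumptions.
GRADE_BANDS = {
    "double-plus": (70, 80),
    "plus": (60, 69),
    "above-average": (55, 59),
    "average": (45, 54),
    "fringy": (40, 44),
    "below-average": (30, 39),
}

def flag_mismatches(report_terms: dict[str, str], tool_grades: dict[str, int]) -> list[str]:
    """Return tools where the word used doesn't fit the grade that was put on the tool."""
    flags = []
    for tool, term in report_terms.items():
        low, high = GRADE_BANDS.get(term, (20, 80))  # unknown terms are never flagged
        grade = tool_grades.get(tool)
        if grade is not None and not (low <= grade <= high):
            flags.append(f"{tool}: '{term}' vs. grade {grade}")
    return flags

# e.g. a report that calls a 50 arm "plus" gets flagged; "average" on a 50 hit tool does not
print(flag_mismatches({"arm": "plus", "hit": "average"}, {"arm": 50, "hit": 50}))
```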
The other contributors to the evaluation would include player performance from the season prior. This feedback would ideally be much more objective, combining both in-game stats and "process data" to give the player an updated grade on the 20-80 scale. The "process data" would consist of any additional data not available through games, such as bat sensors, strength gains, development of a pitch, etc. For players outside the organization that the scout turned in reports on, it would be difficult to capture a "process grade," but I still believe game performance grades would be better than nothing. The scout would then get grades on each player, with a corresponding overall grade, updated yearly, that takes all of his reports into account. Ideally, teams would retain their scouts at a high rate and grade them on all of the years they have turned in reports.
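Here is a rough sketch of what that yearly accuracy piece could look like. The 50/50 blend of game and process grades, the field names, and the "average miss" metric are all my own assumptions, just to show the mechanics of comparing a scout's projection to a player's updated grade.

```python
# Hypothetical sketch of the yearly scout accuracy update. Weights and the
# accuracy metric are assumptions, not a prescription.
from dataclasses import dataclass
from statistics import mean

@dataclass
class PlayerYear:
    name: str
    game_grade: float            # 20-80 grade derived from in-game stats
    process_grade: float | None  # 20-80 grade from bat sensors, strength gains, etc.
    scout_projection: float      # 20-80 grade the scout turned in previously

def updated_grade(p: PlayerYear) -> float:
    """Blend game performance with process data when the latter exists."""
    if p.process_grade is None:
        # Players outside the organization: no process data, use game performance alone
        return p.game_grade
    return 0.5 * p.game_grade + 0.5 * p.process_grade

def scout_accuracy(players: list[PlayerYear]) -> float:
    """Average absolute miss between projection and updated grade (lower is better)."""
    return mean(abs(p.scout_projection - updated_grade(p)) for p in players)

reports = [
    PlayerYear("Player A", game_grade=55, process_grade=60, scout_projection=50),
    PlayerYear("Player B", game_grade=45, process_grade=None, scout_projection=60),
]
print(f"Average projection miss: {scout_accuracy(reports):.1f} grade points")
```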
While I don't think it would be a great idea to let scouts directly see their peers' reports, I do think they could get comparison feedback if multiple reports were turned in on a player. For example, a scout's grades could be labeled "aggressive," "conservative," or "neutral" relative to the other reports on those players. Only time would tell whether an "aggressive" grade turns out to be right, but knowing this would at least give a scout something to think about as far as giving a player too much or too little credit in a certain area.
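And a quick sketch of the comparison feedback itself: without showing a scout anyone else's report, you could still tell him whether his grade ran aggressive, conservative, or neutral against the group. The five-point threshold on the 20-80 scale is an arbitrary assumption.

```python
# Hypothetical sketch: label a scout's grade relative to peer consensus
# without exposing the other reports. The threshold is assumed.
from statistics import mean

def compare_to_peers(scout_grade: float, peer_grades: list[float],
                     threshold: float = 5.0) -> str:
    """Return 'aggressive', 'conservative', or 'neutral' vs. the other scouts' average."""
    consensus = mean(peer_grades)
    if scout_grade - consensus > threshold:
        return "aggressive"
    if consensus - scout_grade > threshold:
        return "conservative"
    return "neutral"

# e.g. a scout who put a 60 on a player the other scouts graded 50 and 45
print(compare_to_peers(60, [50, 45]))  # -> "aggressive"
```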
Recap
The scout performance evaluation would include subjective and objective feedback. Scouts would be given updated player grades on a yearly basis to compare against where they had projected each player to be. They would also be given feedback on how they viewed that player in comparison to other scouts, and a list of words that were used to describe that player to add to the ol' scouting word bank.