Give yourself some credit: you have a testable hypothesis. We could conceivably monitor the brain activity of Diablo 2 and 3 players and see whether it matches your predicted behavior. Because the results could either confirm or contradict your hypothesis, they could genuinely support or falsify your theory.
OK, so suppose you measure metabolic activity in (say) the nucleus accumbens, with fairly sensitive equipment, for a number of different players (experienced or naive? D2 veterans or not?) as they somehow play this commercial computer game with their heads stabilized, for fairly short periods of time. You aggregate the data across individuals, losing a ton of information.
Now what is the hypothesis - that D2 will drive significantly more metabolic activity than D3 because some blogger thinks it is a better game?
This would tell us nothing of any scientific interest whatsoever. (Not to say you couldn't make a poster, or even get grants for such rubbish with the right connections.)
If there is a testable hypothesis in here, it is so bizarrely specific that it has no practical value and no value in distinguishing among meaningful theories of how the brain works.
The theory isn't about the brain itself, but about the enjoyment cycle of Diablo 2 and Diablo 3. The hypothesis is clearly stated in his post: it's the graphs he drew for the brain activity of Diablo 2 and 3. He is predicting a very specific reward-frustration cycle for each game.
A testable hypothesis whose result will either support or falsify a theory is the very definition of science.
Note that this theory has nothing to do with whether or not one likes the experience. Someone may well prefer the experience with more frustration, for whatever reason. The theory is not "This is why Diablo 2 is better than Diablo 3," but an explanation of why many people may feel less satisfaction playing Diablo 2 than Diablo 3.