"A study shows that it takes less effort for the brain to register predictable as compared to unpredictable images."The article frames this in the context of the scientific method, however, I see things in terms of micro-architecture (design of microprocessors).
In an advanced microprocessor, there is a lot of what is called "speculation". This includes the branch predictor, caches (which can be seen as a form of "reuse prediction"), and other structures such as the load-hit predictor.
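To make the branch-predictor part concrete, here is a minimal sketch of the classic 2-bit saturating-counter scheme found in textbooks (an illustrative toy, not any particular CPU's actual design):

```python
# Toy 2-bit saturating-counter branch predictor.
# States 0-1 predict "not taken", states 2-3 predict "taken".
def predict_accuracy(outcomes):
    """Return the fraction of branch outcomes predicted correctly."""
    state = 2  # start weakly "taken"
    correct = 0
    for taken in outcomes:
        prediction = state >= 2
        if prediction == taken:
            correct += 1
        # Saturating update: nudge the counter toward the observed outcome.
        state = min(3, state + 1) if taken else max(0, state - 1)
    return correct / len(outcomes)

# A predictable loop branch (taken 99 times, then not taken once) is almost
# always predicted correctly; an alternating pattern defeats this predictor.
loop_branch = [True] * 99 + [False]
alternating = [i % 2 == 0 for i in range(100)]
print(predict_accuracy(loop_branch))  # 0.99
print(predict_accuracy(alternating))  # 0.5 - no better than a coin flip
```

This is the sense in which correct predictions let everything proceed at full speed: the predictable pattern is essentially free, while the unpredictable one forces constant corrections.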
When the predictions are correct, everything runs at maximum performance (which often means maximum power draw). When a prediction is wrong, power consumption often drops while the mis-speculated work is thrown away and the structures are reset.
"If it is wrong, massive responses are required to find out why it is wrong and to come up with better predictions."It's interesting that there is a nearly opposite effect in the brain. It's possible this is an interface issue. If a processor's caches suddenly stop being effective (due to "mispredicting" the access pattern), the memory bus will go from nearly idle to wildly active.
This property has been exploited in cryptography as a side channel. With a naive comparison that stops at the first mismatch, a wrong password takes less power to reject than a correct one takes to accept; it can even take varying amounts of power depending on how far off the password is.
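A minimal sketch of the leaky comparison (the function and secret are hypothetical; the counted comparisons stand in for time or power consumed):

```python
# Hypothetical early-exit password check - the kind that leaks information.
def naive_check(guess, secret):
    """Return (match, work), where work counts character comparisons made."""
    work = 0
    for g, s in zip(guess, secret):
        work += 1
        if g != s:
            return False, work  # bail out at the first mismatch
    return guess == secret, work

secret = "hunter2"
print(naive_check("xunter2", secret))  # (False, 1) - rejected immediately
print(naive_check("huntex2", secret))  # (False, 6) - longer matching prefix
print(naive_check("hunter2", secret))  # (True, 7)  - full comparison
```

An attacker who can measure the work done learns how many leading characters are correct and can recover the secret one character at a time; this is why real implementations use constant-time comparisons (e.g. Python's `hmac.compare_digest`).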