The first date of the current Peer Review training tour was in Cardiff. I have already mentioned how voluble the crowd was there. In particular, some delegates asserted very strongly that local firms were getting good results by comprehensively doctoring files before submission.
This is a topic we discuss regularly with firms. It is not something we advise, nor do we think much can practically be achieved if a file contains chronologically irretrievable errors.
We are, however, now in possession of evidence which would seem to support the above contentions. Not that we will be changing our advice.
What is the evidence please?
A sight of files arguably “below competent”, or “threshold competent” at best, from a firm with a “competent plus” peer reviewer.
That is not “evidence” of file tampering; it simply illustrates that if you took two batches of files from the same firm you could get two completely different results. It raises for me a very important point about sample sizes and selection. No different to a costs compliance audit in that respect.
There is a bit more to it than the above, the detail of which I’d rather not put up here for obvious reasons. We have also now had a second and, to our minds, more conclusive example, mentioned above, but I am under the same reporting constraints (happy to speak on the phone in confidence).
I think the PR sample size is adequate if the methodology of looking for systemic problems, rather than single bad files, is consistently employed. The same ground for appeal ostensibly exists regarding CCA, albeit only in non-published internal guidance. Nonetheless, I share your concerns about the seriousness of these decisions being made on small numbers of files.