Ian Lynch's take on the BECTA fiasco

I have recently read an eye-opening email from Ian Lynch about what happened in the UK with BECTA.

I have received his permission to republish his thoughts here. I think his email speaks volumes about what happened.

Ian Lynch's email

Fundamentally, I'm not complaining that we were not successful in the tender - I have no idea how strong the winning bid was. I'm complaining that the tender process adopted was broken. This is despite the fact that 130 MPs signed an Early Day Motion in Parliament last year censuring BECTA for procurement frameworks that block out Open Source.

  • The tender was not fit for purpose.

Evidence: The tender document was not written specifically for this project. It made references to a research project that the e-mail feedback to questions indicated were irrelevant. It is clear that the document was a hurried adaptation of another tender document originating in 2005.

The mark scheme is generic and does not adequately reflect the title of the project. It is quite possible to arrive at combinations of marks that produce anomalous results in relation to the project title, e.g. winning the tender with no experience in either schools or open source.

(I'm Chief Assessor at an OFQUAL-accredited Awarding Body, so I do know about assessment.)

  • Vital information might not have been equally available to all bidders.

Evidence: The details of how marks were allocated and their weighting were not available in the tender specification. Any company that had previously tendered for a similar BECTA project and kept a copy of that feedback and guidance would be at an unfair advantage over those who had not had this information. This provides a barrier to entry for new companies and fuels accusations of cronyism. (I do not aim this at the winner; it is not their fault they won a bid. BECTA should have foreseen this potential risk and eliminated it.) All bidders should have been provided with the detailed mark scheme and guidance on the allocation of marks from the outset. (Delay the process a week if necessary.) Knowing what weight is given to different parts of the spec matters when less than 10 percentage points separate several candidates in the scoring. Tenders should not be about who can best guess what the procedures require; they should be about who is best qualified to do the work.

Look at this link and ask how there can be any doubt about meeting project timescales when, at the outset, the bid provides more than an order of magnitude more than the tender requires.

The same goes for value for money. We scored only half marks, yet the project deliverables and a lot more are being made available on the day the project starts, together with an additional 60k in committed private-sector sponsorship gathered in a short space of time and a commitment for more to follow. Why would people motivated to do that not make best use of any further funding?

  • However well a pre-conceived process is followed, the outcome is what matters. We say to children taking tests: is the answer sensible? If not, what is wrong with your process?

Here are some suggestions for improvement.

  1. Ensure tender documents are prepared by someone knowledgeable in the field at which the tender is targeted, i.e. school open source communities and open source. (I would have done so if asked, and withdrawn from the bidding process, so it's not that such people don't exist - I am known to BECTA.)

  2. Ensure the required deliverable outcomes reflect currently available provision. Schoolforge UK/TLM's starting points are so far in advance of the targets set in the spec that it shows a complete lack of understanding of where things stand. (Incidentally, given that, it's a mystery how we didn't score full marks on ability to deliver, since we have already delivered to that level with no public funding.)

  3. Ensure that the structure is more specific, with better guidance, to reduce the need for repetition. I found the separation of timescales from targets strange; it seems much more sensible to integrate them, since a target isn't a target without an end point.

  4. Provide the mark scheme and ensure that how to achieve the marks is clear and transparent. This is simply good assessment practice. I can provide training.

  5. Move away from simplistic numeric scoring and use "Essential" and "Desirable" to eliminate bids with key omissions, fine-tuning with numbers only in areas where that detail is meaningful (see the sketch after this list). In a tender entitled Schools Open Source Project, extensive experience of working in schools and in open source communities should both have been essential, and working on projects involving both, desirable, to reflect the title. I took this to be what was meant by "similar projects". Obviously the tender evaluators didn't interpret "similar" in the same way. That scope for ambiguity is a serious weakness in such an important area. Graduated scores can then be applied in those essential areas once the elimination has taken place.

  6. Phasing of funding is not a value-for-money issue; it is a technical mechanism to reduce risk. If you simply state the preferred method and ask the respondents to say whether they agree, it avoids the tenderer trying to second-guess what specific procedures are required and is much fairer. For example, in the SF bid, and again at interview, it was stated that the bidders were prepared to do anything BECTA wanted and that we would not pay anyone until work was complete, keeping money in escrow to be released only on BECTA's authorisation if that is what BECTA wanted. It is a mystery why that scored only a third of the marks on that aspect, since any possibility of money being lost or paid out for substandard work was eliminated.

  7. Consider the difference between validity and accuracy in assessment. You might well apply a numeric scoring system accurately, but if the criteria behind the numbers don't accurately reflect the title of the tender, the entire exercise becomes invalid. If the weighting of the numbers allows a technical issue, like the phasing of funds or specific processes that might or might not be practically important, to outweigh extensive experience in the subject matter of the tender, use a different method.
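
To make suggestion 5 concrete, here is a minimal sketch in Python of the two-stage approach: bids missing any essential criterion are eliminated outright, and numeric weighting is applied only to the survivors. The criteria, weights, and scores below are hypothetical, chosen purely to illustrate the mechanism - they are not BECTA's actual scheme or the figures from any bid.

```python
# Illustrative two-stage tender evaluation: eliminate on "Essential"
# criteria first, then rank surviving bids on weighted "Desirable" scores.
# All criteria names and weights are hypothetical.

ESSENTIAL = ["schools_experience", "open_source_experience"]

DESIRABLE_WEIGHTS = {
    "combined_schools_oss_projects": 3,  # projects involving both, per the title
    "value_for_money": 2,
    "risk_management": 1,  # phasing of funding sits here, not as a gate
}

def evaluate(bids):
    """Return surviving bids ranked by weighted desirable score.

    Each bid is a dict: {"name": str, "essential": {criterion: bool},
    "desirable": {criterion: score 0-10}}.
    """
    # Stage 1: elimination on essential criteria.
    survivors = [
        b for b in bids
        if all(b["essential"].get(c, False) for c in ESSENTIAL)
    ]

    # Stage 2: graduated numeric scoring of the survivors only.
    def weighted_score(bid):
        return sum(
            weight * bid["desirable"].get(criterion, 0)
            for criterion, weight in DESIRABLE_WEIGHTS.items()
        )

    return sorted(survivors, key=weighted_score, reverse=True)

# A bid with no schools experience is eliminated at stage 1, however
# well it scores on the generic criteria.
bids = [
    {"name": "Bid A",
     "essential": {"schools_experience": True, "open_source_experience": True},
     "desirable": {"combined_schools_oss_projects": 8, "value_for_money": 7,
                   "risk_management": 6}},
    {"name": "Bid B",
     "essential": {"schools_experience": False, "open_source_experience": True},
     "desirable": {"combined_schools_oss_projects": 2, "value_for_money": 9,
                   "risk_management": 10}},
]
for bid in evaluate(bids):
    print(bid["name"])  # only Bid A survives the essential gate
```

The point of the sketch is that no amount of marks on secondary criteria can compensate for a missing essential one, which directly addresses the anomaly described under suggestion 7.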

"Open Source projects demonstrate that some forms of control can be counter-productive and the key is to optimise the balance between control and freedom to motivate productivity. What matters is the overall outcome in terms of value for money." (quoted from the bid) This is why a bid about open source should look a the wider issue of quality assurance rather than simply control - this suggestion seemed to lose marks. Assessors need to get up to date with quality management in relation to open source projects.

License

Verbatim copying and distribution of this entire article are permitted worldwide, without royalty, in any medium, provided this notice is preserved.