There’s been quite a bit of online discussion around the writing and marking of proposals for JISC’s recent 12/08 call, including how Twitter can help you prepare a bid and how it was used (and perhaps abused) during the marking process. Andy Powell has vented his frustration about some aspects of the process (and about people who can’t stay within the page limits!). (Updated to add: I also intended to mention Lorna Campbell’s post, written earlier in the process before marking had begun – there’s lots of good advice there about writing a proposal.)

Rating a JISC bid

The marking process isn’t a secret – it’s set out on the JISC website, along with some concise guidance on what makes a good bid and examples of past winning bids. This advice is reinforced at the town meetings that accompany large funding rounds, so none of us has any excuse for not knowing what to do. Yet we continue to see bids that don’t provide the information requested, or that fail to demonstrate how they meet the requirements of the call. (I will readily admit that I’ve been guilty of writing bids like this as well.) More openness about the process can’t hurt, although it may not help. So I’ll say a little about the way we (or rather I) mark, and then speculate a bit about things we might want to do to improve it. If you’ve marked JISC bids yourself, you probably want to skip the next bit and go straight to the Idle Thoughts section.

Marking

Markers – at least those outside the JISC Executive – get anywhere between 1 and 10 bids to look at, and something like 2 weeks in which to do the marking. As well as the bids, they’ll get some guidance for markers and the original text of the call. They’ll also see a log of all bids received, which, as well as assigning each bid a unique ID, tells them how much each strand is over-subscribed (that is, how much is being asked for against how much is available). And they’ll have access to a closed email list for clarifying issues with other markers and the Executive. In my experience these lists are not much used these days, although they can prove useful for clarifying ambiguities in the call or the marking criteria. Some markers are probably using Twitter for this now, which is unfortunate, as it means that not all of them will be aware of the discussions.

At an early stage, markers are meant to double-check that they don’t have any conflicts of interest in the bids assigned to them. A bid in which your own institution is a partner definitely constitutes such a conflict, and the Executive usually spot these in advance. But they arise for other reasons, and every marking episode I’ve been involved with has had at least one person realising that they have a conflict, often very late in the process.

Then we begin the process of reading, evaluating, and assigning marks. I suspect everyone has a different approach to this, but the outputs are the same. We score each criterion on a 5-point scale that’s really a 10-point scale (because it allows half marks). The criteria include such things as “Appropriateness to the call” and “value”; the lowest rating implies that the bid fails that criterion in a way that just can’t be fixed, whereas the highest implies that it significantly exceeds expectations in a number of ways. Markers also make comments on each criterion to explain the thinking behind their marking. These comments will form the bulk of the feedback that you will receive if you ask for it.
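
By way of illustration, here’s a minimal sketch of what a single marker’s output for one bid boils down to. This is just my own illustration in Python, not a JISC artefact; the bid ID, criterion names, scores and comments are all invented.

```python
# Hypothetical sketch of one marker's output for one bid.
# Criterion names, scores and comments below are invented examples.
from dataclasses import dataclass, field

VALID_SCORES = {x / 2 for x in range(2, 11)}   # 1.0, 1.5, ..., 5.0 (half marks allowed)

@dataclass
class CriterionMark:
    score: float   # 1 = fails in a way that can't be fixed, 5 = significantly exceeds expectations
    comment: str   # comments form the bulk of any feedback sent to bidders

@dataclass
class BidMarking:
    bid_id: str
    criteria: dict = field(default_factory=dict)   # criterion name -> CriterionMark

    def mark(self, name: str, score: float, comment: str) -> None:
        if score not in VALID_SCORES:
            raise ValueError(f"{score} is not a half-mark score between 1 and 5")
        self.criteria[name] = CriterionMark(score, comment)

marking = BidMarking("bid-042")
marking.mark("Appropriateness to the call", 4.0, "Addresses the strand directly.")
marking.mark("Value for money", 2.5, "Budget needs clearer justification.")
```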

It’s worth noting that you can (and should) ask for this feedback even if your bid is successful. Sometimes your programme manager will offer it to you unasked. Few bids are perfect, and most bids contain something of value. The feedback can tell you what you need to improve but it will usually also tell you what you did well. Both are good to know!

As well as the criteria-based marking, we are also asked to rate the bid overall as A, B or C. As the picture above shows, this means that we strongly recommend funding, weakly recommend it, or do not recommend funding. These marks are the most significant ones once all the bids are considered at the evaluation stage.

The marking process is now all done via the web, although some find this frustratingly slow if they have to copy their marking information from some other source to the web forms.

The Evaluation Panel

The evaluation process usually involves a face-to-face meeting of all the markers, or just those from the Executive. The exact conduct of the meeting will depend on the size and complexity of the call, the number of bids and the number of projects to be funded. Every bid will have been marked by at least 3 people. Typically, one that scores AAA will be approved without further discussion, and one that scores CCC is likely to be rejected without significant discussion, although the panel will make sure that there’s sufficient information in the comments to provide feedback for the CCC bid.
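
To make the triage at this stage concrete, here’s a minimal sketch of how three overall grades per bid might be sorted into piles. Again, this is my own illustration rather than anything the panel actually runs, and the bid IDs and grades are invented.

```python
# Hypothetical triage of overall A/B/C grades from three markers.
def triage(grades):
    """Return 'approve', 'reject' or 'discuss' for a tuple of three overall grades."""
    if all(g == "A" for g in grades):
        return "approve"   # AAA: strongly recommended by all, funded without further discussion
    if all(g == "C" for g in grades):
        return "reject"    # CCC: not recommended by anyone; comments still feed back to the bidder
    return "discuss"       # everything else goes to the evaluation panel for discussion

bids = {"bid-01": ("A", "A", "A"), "bid-02": ("A", "B", "C"), "bid-03": ("C", "C", "C")}
for bid_id, grades in bids.items():
    print(bid_id, triage(grades))
```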

What happens next depends to a great extent on the degree of competition, the quality of the proposals and the way the evaluation panel chair chooses to work. If there are many more proposals than can be funded, it’s not unusual to try to pick off further outliers – ones that stand out from the rest as particularly strong or weak – before examining the rest in detail. The marks in each criterion will often come into play here, either to choose between two bids with equal recommendations, or to compare (say) an ABC with a BBB. Markers may be asked to justify or clarify their comments, and opinions do change as a result of discussion at this stage. The Executive will also want to bring other considerations into the process – either to ensure that a range of different types of project are funded, or to ensure that funding goes to a range of institutions. Similarly, bids are sometimes approved subject to (agreed) changes: a score of 3 or below on any criterion implies that the marker sees problems that can, and should, be corrected before funding is approved. If one institution receives funding for a number of related projects, it may be asked to look for economies of scale between them.

When there’s little or even no competition (that is, the number of bids is less than or equal to the number of projects desired), the evaluation has a different focus. It will still be necessary to eliminate projects that are too weak to receive funding, even if that means some funding goes unspent. (Sometimes that funding can be reallocated to another stream where it can be better spent.) For those that can be funded, any concerns the markers had need to be turned into guidance for the programme managers or for the projects themselves. The budget may need to be made clearer, the dissemination plan improved, or the project may need to take account of the work of a related project, for instance.

Idle Thoughts

So that’s the process, at least the bit of it I see. What might change, and what else might we want to know? One area that interests me is inter-marker variance, which can take two forms. Some markers are harsher than others and tend to assign lower marks, while others are more generous. If the markers all agree about the relative ranking of the bids then it’s possible to correct for the variations in absolute scores, but at present this isn’t done (there’s a sketch of one way it might work below). I did a brief and very unscientific experiment with bids marked by RPAG some years ago which showed significant inter-marker variability of this sort, although on that occasion I don’t think it had a significant impact on which bids were funded. There’s also the more interesting variance where the markers disagree about the relative merits of bids. We see quite a bit of this – rankings like ABC, AAC or ACC do appear – and evaluation panels will usually devote more time to understanding why such variation occurs.

One thing we know nothing about is intra-marker variation – whether the same marker, given the same bid, will mark it the same way twice. In some fields, such as radiography, studies have shown significant variations of this type as well as inter-observer variation. This has led to pressure in some areas for increased use of machine assessment of X-rays, since it’s repeatable even if it’s wrong.
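
Returning to the first kind of inter-marker variance: here’s a minimal sketch of how harsh and generous markers could be put on a common footing by standardising each marker’s scores before comparing bids. It’s purely illustrative – the marker names, bids and scores are invented, and as noted above JISC doesn’t currently apply any correction like this.

```python
# Hypothetical correction for harsh/generous markers: standardise each marker's
# scores (z-scores) so only their *relative* judgements remain, then average per bid.
from collections import defaultdict
from statistics import mean, pstdev

# Invented raw scores out of 5, keyed by (marker, bid).
raw = {
    ("marker1", "bid-A"): 4.5, ("marker1", "bid-B"): 3.5, ("marker1", "bid-C"): 4.0,
    ("marker2", "bid-A"): 3.0, ("marker2", "bid-B"): 2.0, ("marker2", "bid-C"): 2.5,
}

by_marker = defaultdict(dict)
for (marker, bid), score in raw.items():
    by_marker[marker][bid] = score

adjusted = defaultdict(list)
for marker, scores in by_marker.items():
    mu, sigma = mean(scores.values()), pstdev(scores.values())
    for bid, score in scores.items():
        z = 0.0 if sigma == 0 else (score - mu) / sigma
        adjusted[bid].append(z)

# marker1 and marker2 differ by a full point in absolute terms but agree on the
# ranking, so the standardised averages put the bids in the same order for both.
for bid, zs in sorted(adjusted.items()):
    print(bid, round(mean(zs), 2))
```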

There’s some interesting research that could be done on some of these areas, although I suspect it will be some time before we see automated marking of bids!

There’s always scope for using rules to improve consistency between markers. Andy Powell was looking for guidance on what to do with bids that are over the page limit, for instance. I think JISC have got clearer about this over time, but I’m wary of being over-prescriptive. It could be left to the marker’s discretion, as it is now. At the other extreme, such bids could be rejected before a marker ever sees them. Or they could be truncated at the page limit, so that the marking is done only on the material within the limit. (For some bids, the material lost would not be significant – for others it would be crucial.)

And although the web-based process is a great improvement on its predecessor in many ways, it isn’t ideal if you aren’t always online. Something that allowed offline completion and online submission would be welcomed by some. Some parts of JISC are also experimenting with web-based bid submission. I’ve not had direct experience of this, but it would be fascinating to hear from those who have.

I’m also interested to hear about perceptions of the process from the authors of bids, or from those who have considered writing bids but decided, for whatever reason, not to. What could be better? What’s already good and shouldn’t change? What barriers to bidding do people perceive? Could JISC commission work to improve the bidding process, the evaluation process, or both?

7 Comments

  1. According to a paper by Robert Williams, describing automated essay marking using the Bayesian-based Text Categorization Technique:

    When all the criteria for assessment were used the proportion of essays graded exactly the same as human graders was 0.60 and scores adjacent (a score one grade on either side) was 1.00. [...] The system performed remarkably well.

    Couldn’t the JISC employ this kind of technology (that it’s no doubt funding research into elsewhere, directly or indirectly)?

    (Of course they might then need Turnitin too, to make sure we’re not cheating…)

  2. Interesting paper, Richard – perhaps the future is closer than I think. However, that paper looks at marking of material where a model answer exists, so the automated systems are effectively measuring how far the candidate document is from the ‘ideal’ document in some measure space. There isn’t a model response to a research funding call, although there are almost certainly some aspects of a bid (such as the budget) where models do exist and where automated assessment might well be practical, as well as fairer.

  3. Another interesting approach to aspects of this process is being piloted by Joss Winn and Tony Hirst: http://writetoreply.org/jiscri. It uses CommentPress – yet another exciting possibility from the rich seam that is the WordPress stable (see also P2 and BuddyPress – not to mention my imminent post for JISC-PoWR about using a WordPress blog to archive WordPress blogs!).

  4. Tony Hirst and I are planning on making a bid to #jiscri to adapt what we’re doing on WriteToReply to assist and develop the process of JISC funding calls, bidding, marking and reporting. WriteToReply is pretty bare-bones right now, but with a little funding it has the potential to offer a place to announce and host Calls, invite comment and discussion (i.e. Town Meeting events), form interest groups and partnerships around bids, draft bids (even submit bids??) and finally mark bids. Each aspect of this process could be anywhere between open and private (I favour open at every stage).

    We’ve not joined the two yet, but are thinking about how WriteToReply and the http://learninglab.lincoln.ac.uk/jiscri social networking platform might be combined effectively to provide a community space focused around announcing, discussing, working on and marking calls and bids.

    I think everything you’ve noted above could be accommodated by our proposal, although I would hope that we might also effect change in the Call process, too, during the requirements gathering and implementation of the service.

  5. Hi Joss

    Love your JISCRI network, congrats on taking the initiative to knock this up. The whole JISC community as one big BuddyPress network – why not? (It’s even orange by default!)

    Hope to check it out more next week, after an offline weekend’s cleared my head! (After all, real people don’t have time for social media ;)

  6. Joss, the stuff you’ve been doing at writetoreply is really great, and the community owes you thanks for doing it; we also stand to learn a good deal from it. I look forward to you getting a bid together. The hard part can be having a good idea, and you’ve done that. The other thing you need to do is to convince the evaluators that you can see the idea through to completion. You’ve already got some good evidence there as well.

  7. Just to add some more food for thought on this. Yesterday I attended an excellent Executive Briefing titled “Costing and funding digitisation projects” [CILIP]. As one of the key speakers, Alastair Dunning (JISC Digitisation Programme) advised attendees on what funding bodies are looking for (which applies generally beyond digitisation projects). He highlighted that ‘Peer-reviewers who do the marking often interpret criteria differently’, that ‘you can be unlucky with a good bid; expect to fail before succeeding’, and that ‘winning teams show evidence of broader strategic thinking’. Funding bodies are increasingly looking for things like: ‘evidence of impact and use’, i.e. testimonies of potential users (‘not just a build it and they will come approach’); ‘linking up with others’, e.g. economies of scale; ‘strong leadership’; ‘considered approach to IPR’; ‘workflows and standards’; ‘integrating web 2.0’; and dealing well with practical matters (e.g. proper preparation / answer the specifics of the bid / be proactive on partners and strategies / other sources of funding / don’t over-emphasize the tools and technology in your bid, focus more on users and outputs).
