The SHARD online training course has gone live today. It’s called “Data Preservation for Historians”, and is being made available by the Institute of Historical Research through their HistorySPOT learning and training platform.

The SHARD (preServAtion of Research Data) project delivered its final report to the JISC last week. We have built a set of digital preservation training modules pitched at researchers and those with non-specialist knowledge of digital archives. We believe that embedding awareness of digital preservation issues into existing methods of data creation will help avoid data loss, and we offer practical, jargon-free solutions designed to empower researchers.

SHARD was one of four JISC-funded projects on the theme of Preserving Research Data. The SHARD project addresses the simple fact that most material being produced by researchers is now in digital form. Research data is also produced in a variety of formats, and maintaining access to valuable resources in the short and long term is essential to enable reuse and sharing over time. It might seem that there is already a lot of information about digital preservation available, but much of this advice is aimed at digital preservation practitioners rather than at researchers. The SHARD team, led by ULCC’s Patricia Sleeman, recognised the importance of developing training materials free from technical language and aimed directly at researchers.

For further information about the project, see our SHARD blog.

Successful Institution reaches Stage 5

This year I devised a tailored version of the AIDA assessment toolkit, which I hope will become something fit to be applied to the management of research data. This has made AIDA into something better, but my wider task is to contribute to an all-purpose Integrated Data Management Planning toolkit which is being developed by the DCC, and which will incorporate parts of other measurement toolkits such as DRAMBORA and DAF, both of which have been used much more widely than AIDA. The original AIDA was targeted at “all digital assets in a University”, which, now I come to think of it, is fairly ambitious. Encouragingly, the DCC project manager tells me “The IDMP toolkit is planning to take forward much of the overall structure of AIDA as we think it is extremely useful as a way to present the practical recommendations we’ll glean from the legacy data.” This suggests to me that the three-legged model, and the five stages of development (both devised by Anne Kenney at Cornell University), are proving their integrity and soundness.
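
By way of illustration only – a minimal sketch in Python, with the leg names taken from my reading of the Cornell three-legged model (Organisation, Technology, Resources) and all element names invented, not AIDA’s actual schema – this is roughly how the legs and stages combine into an assessment matrix:

```python
# A minimal sketch of an AIDA-style assessment matrix: three legs x five stages.
# Leg names follow the Cornell three-legged model referred to above; the element
# names are hypothetical placeholders, not AIDA's actual elements.

from dataclasses import dataclass, field
from typing import Optional

LEGS = ("Organisation", "Technology", "Resources")
STAGES = (1, 2, 3, 4, 5)  # Stage 5 = capability fully institutionalised

@dataclass
class Element:
    """One assessable element within a leg, scored against the five stages."""
    name: str
    leg: str
    stage: Optional[int] = None  # 1-5 once assessed, None until then

@dataclass
class Assessment:
    unit: str                          # e.g. "University" or "Department of History"
    elements: list = field(default_factory=list)

    def add_element(self, name: str, leg: str) -> None:
        if leg not in LEGS:
            raise ValueError(f"Unknown leg: {leg}")
        self.elements.append(Element(name, leg))

    def score(self, name: str, stage: int) -> None:
        if stage not in STAGES:
            raise ValueError(f"Stage must be 1-5, got {stage}")
        next(e for e in self.elements if e.name == name).stage = stage

# Hypothetical usage:
audit = Assessment("University")
audit.add_element("Preservation policy", "Organisation")
audit.add_element("Managed storage", "Technology")
audit.score("Preservation policy", 3)
```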

I took my results to a Workshop in Bristol on November 3rd to give a presentation to the numerous Project Managers who are taking part in the JISC Managing Research Data programme. Besides my ally Dr Takeda at the IDMB Project, who has supported me since January, it seems one or two others had tried out AIDA, or at least looked at it, and generally found it helpful or potentially helpful. My graphical expression of the five stages – to which I have added more layers of “semantic meaning” – seemed to go down well with Chris Rusbridge. When designing this new version, I layered in a lot of detail from numerous sources, not least of them existing questionnaires and published guidance on managing research data; when you combine that with the original AIDA elements, which included organisation-wide surveys based on Trusted Digital Repository models and digital preservation capabilities, you get quite a complex matrix. One project immediately spotted that, in its current form, AIDA would take a very long time to complete.

Many useful things came up in discussion: (1) if you undertake an AIDA, who is going to complete the assessment? I’ve been clear from the outset that no one person can do it all, and that you’d need to farm out bits of it or work collaboratively, but so far it’s been tested by records managers, some of whom have a good rapport with their IT managers. As regards research data in a University, who is best placed to help, and how many of them are needed? Perhaps the Finance department, the sysadmins, the people who run the IT procurement programme, and the people who design and implement policies for the University. Plus, of course, the assessments from the Researchers themselves need to be taken into account. I think this is certainly going to help us model the all-purpose IDMP tool, if we can be quite clear about who is responsible for providing answers, and evidence, for each element in each of the three legs. This would potentially translate into a wide range of user types who can log in to use the tool and perform the assessment.
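
As a rough sketch of what “being clear about who is responsible for each element” might look like, here is a hypothetical mapping of elements to the roles who would answer them – every element and role name below is illustrative, not anything AIDA or the DCC has specified:

```python
# A hypothetical mapping from assessment elements to the roles best placed to
# answer them. Element and role names are illustrative only.

ELEMENT_OWNERS = {
    ("Organisation", "Preservation policy"): "University policy officers",
    ("Organisation", "Funding commitment"):  "Finance department",
    ("Technology",   "Managed storage"):     "Systems administrators",
    ("Technology",   "Procurement process"): "IT procurement team",
    ("Resources",    "Data documentation"):  "Researchers",
}

def roles_for_leg(leg: str) -> set:
    """Return every role needed to complete one leg of the assessment."""
    return {role for (l, _), role in ELEMENT_OWNERS.items() if l == leg}

print(roles_for_leg("Organisation"))
# e.g. {'University policy officers', 'Finance department'}
```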

Lesson (2) – the numerical scoring method I am currently aiming at may not be the best one for some users. Blueprint started to count up their responses and think about calculating averages, but then decided instead to go for a qualitative approach rather than a quantitative one. The reason for this is that the actual difference between the Five Stages has (in their experience) an error bar of 50%. So their results (though I haven’t seen them) are presumably more in the way of a prose narrative describing how well things are working, rather than a numerical score marking and grading the outcome. Where this leaves the Five Stages scale, I’m not sure, but it could probably still work as an indicator.
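
For what it’s worth, the quantitative route Blueprint started down is trivial to compute; the sketch below (with made-up stage scores for a single leg) shows why an average can look more precise than the underlying judgements really justify:

```python
# A sketch of the quantitative approach: averaging assessed stages for one leg.
# The scores are invented; the point is that the average carries more apparent
# precision than stage judgements with large error bars can support.

from statistics import mean

organisation_stages = [2, 3, 3, 4, 2]   # hypothetical per-element assessments

average_stage = mean(organisation_stages)    # 2.8
indicative_stage = round(average_stage)      # nearest whole stage: 3

print(f"Organisation leg: average {average_stage:.1f}, indicative Stage {indicative_stage}")
```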

(3). My AIDA structure has a two-level split that allows assessment of the entire University and (underneath that) a Department, School, Research Group or Project. To this level, I may need to add another unit called ‘Centre’. Apparently a Centre in a University is a bit like a Department, except it specialises in a particular strand of research. When it comes to the actual research data the funding streams are different and more complex. This is good for the researchers, but it also makes it much harder to pin down ownership of the data, and who is ultimately responsible for it.
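
Sketched as a data structure – purely hypothetically, since the real toolkit is a document rather than code – the two-level split plus a possible ‘Centre’ type might look like this:

```python
# A hypothetical sketch of the assessment-unit hierarchy: the University at the
# top, with Departments, Schools, Research Groups, Projects and (perhaps) Centres
# beneath it. All names are illustrative.

from enum import Enum
from typing import Optional

class UnitType(Enum):
    UNIVERSITY = "University"
    DEPARTMENT = "Department"
    SCHOOL = "School"
    RESEARCH_GROUP = "Research Group"
    PROJECT = "Project"
    CENTRE = "Centre"   # like a Department, but specialising in one strand of research

class Unit:
    def __init__(self, name: str, kind: UnitType, parent: Optional["Unit"] = None):
        self.name = name
        self.kind = kind
        self.parent = parent   # None only for the University itself

uni = Unit("Example University", UnitType.UNIVERSITY)
centre = Unit("Centre for Something Specific", UnitType.CENTRE, parent=uni)
```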

The theme of the day in Bristol was costs, benefits and sustainability. These are areas I think AIDA can help with in a basic way, but I also think they are better expressed through the matrix which Neil Beagrie is developing within his Keeping Research Data Safe (KRDS) framework. I took notes at one of the workshops where this matrix was discussed, and learned a lot more about research data in many contexts; my impressions might make another interesting post.

Last Monday (2009-11-23) saw DPC members travel to Edinburgh for a board meeting and for the annual general meeting of the company. We elected a new chair – Richard Ovenden – and offered our thanks to Bruno Longmore for the effective leadership he has offered as acting chair following the departure of Ronald Milne for New Zealand earlier this year.

We had a brief preview of the new DPC website, which promises to be a much more effective mechanism for the membership to engage with each other and the wider world, and confirmed recommendations emerging from a planning day earlier in November which should keep the DPC busy (and financially secure) for a few years to come.

Finally, we had an entertaining and thought-provoking talk from Professor Michael Anderson. Professor Anderson touched on many issues relating to digital preservation from his research career, past and present. He mourned the loss of Scottish census microdata from 1951 and 1961, painstakingly copied to magnetic tape from round-holed punch cards for 1951 and standard cards for 1961, which had to be destroyed when ONS realised the potential for inadvertent disclosure of personal information.