Appendix 7
Benchmarking of Research
Performance target 13 in the development contract reads:
During 2006, comparison parameters will be identified which ITU commits to compiling and publishing annually from 2007.
In 2006, a proposal for IT benchmarking was prepared in collaboration with Aalborg University (Computer Science), the University of Aarhus (Information and Media Studies and Computer Science) and DTU (parts of the Department of Informatics and Mathematical Modelling). In the early summer of 2006, the heads of these departments set up a working group (terms of reference in Appendix A), which presented a proposal in December 2006 (”Proposal for benchmarking criteria in computer science and other IT-departments”). The proposal contained analyses of a number of both short-term and long-term options for establishing usable comparison parameters. The long-term proposals require further analysis and administrative procedures, whereas the short-term ones could all be implemented in 2007, based on administrative processes that either already existed or could easily be established.
Based on the working group's proposal, Xxxxxx Xxxxxxxxxx drew up a concrete proposal for benchmarking in 2007; see Appendix B. This proposal has since been accepted by both Aarhus and DTU (Aalborg has been delayed due to long-term illness). The proposal can be summarized in a table of the following form (the individual parameters are defined in Appendix B).
Name of parameter | IT University | Institution Y | Institution Z |
Number of faculty | 42 | | |
STÅ production | 521 | | |
Number of refereed publications | 127 | | |
Amount of external funding (spent) | 17 mil. DKK | | |
Volume of active Ph.D. students | 39 | | |
Average duration of Ph.D. study | 46 months | | |
Number of intl. Ph.D. students | 16 | | |
Self evaluation of dissemination | ITU has appeared 176 times in newspapers, and our researchers have contributed 138 public lectures and other public appearances. We consider this to be very good. | | |
Self evaluation of reputation | ITU had 43 applications for the 6 Ph.D. scholarships announced in the fall of 2006. This is acceptable, but a small drop compared with previous years. There were 7 applicants for a full professorship. This is an acceptable number, but a bit too low. Two of our research groups work closely with Turing Award winners. In summary, we believe that ITU has an acceptable reputation, but our ambition should be to improve it. | | |
Volume of external student projects | 50 % of all MSc theses and projects | | |
No. of ext. funded Ph.D. students | 16 | | |
The figures are a first estimate based on, among other sources, the IT University's annual report. They may change before the overall IT benchmarking for 2007 is completed.
The staff at the IT University have been kept informed about the development of the benchmarking proposal through the research group leaders, who also helped appoint Xxxxx Xxxxxxxxxx as ITU's representative in the working group. All staff were invited to a joint meeting on 8 March 2007, where Xxxxxx Xxxxxxxxxx presented the proposal. About 40 staff members attended. There was support for the proposal, and several ideas emerged for how the model can be further developed and improved.
Conclusion
The research output at the IT University is solid. For 2006, the production of Ph.D. graduates and the dissemination activities deserve particular mention.
The executive board expects ITU to rank well on almost all parameters in the final benchmarking. The most important exception will probably be external research funding, where we expect the University of Aarhus to be particularly strong.
18 April 2007/JSt
Appendix A
The heads of department of IMM at DTU, DAIMI and IMV at the University of Aarhus, Computer Science at Aalborg University and the IT University have decided to set up a working group on "Benchmarking", which during the fall of 2006 is to present a concrete proposal for a model that can be used from 2007.
The "Benchmarking" model shall define a small number of key parameters characterizing the IT research at each of the participating departments. The model may optionally be supplemented with a few key figures on teaching, e.g. STÅ (student full-time equivalents) or the number of graduates produced.
In addition to defining the key parameters, the working group shall propose measurement methods for each of the participating departments, e.g. how quantitative figures can be computed from existing administrative data. These measurement methods may vary from department to department, but the working group must agree that the resulting key parameters are comparable. For example, it is not necessary that all participants use the same publication database, as long as there is agreement that the counting methods make the figures comparable. Emphasis should be placed on keeping the measurement methods so simple that neither researchers nor administrative staff are burdened with substantial extra work in compiling the parameters.
Appendix B
Proposal for IT-benchmarking
Version JSt: April 18, 2007
This note is a concrete proposal for an IT-benchmarking to run for the first time in 2007. It is based on the report “Proposal for benchmarking criteria in computer science and other IT-departments”, written by a working group with representatives from:
• Department of Information and Media Studies, University of Aarhus.
• Department of Computer Science, University of Aarhus.
• Informatics and Mathematical Modelling, DTU.
• Department of Computer Science, Aalborg University.
• The IT University of Copenhagen.
The terms “the report” and “the working group” are used in the following to refer to this report and the group who wrote it.
The IT University has committed itself to starting benchmarking in 2007 and invites other institutions, in particular those that participated in the working group, to join.
The working group proposed that the IT-benchmarking be based on the following seven dimensions:
- Publications
- External funding
- PhD-program
- Dissemination
- Reputation
- Interdisciplinarity
- Innovation and collaboration with non-academia
For each of these dimensions there is a chapter in the report providing some background and some concrete proposals for possible parameters (indicators) that can be used to describe that particular dimension.
The report has many interesting suggestions; some of these can be implemented rather easily using existing administrative procedures (or minor modifications of them), while others require further work, both in terms of definitions and supporting administrative procedures. Below is a concrete proposal for the 2007 benchmarking, using parameters that can be collected with only minor extensions of existing procedures. The proposal makes many references to the report; to fully understand the justification for the proposed parameters, one should read the report.
It is also proposed to continue the development of the IT-benchmarking during 2007 and define additional parameters to extend the benchmarking in the coming years.
Size
For some of the parameters, the working group suggests that the parameter be related to the size of the unit (department, group, …) participating in the benchmarking. Obviously the size of the unit influences all quantitative parameters, and it also has to be taken into account when interpreting all the other parameters. It is therefore suggested to include a separate parameter indicating the size of the participating units. It is proposed to use the number of faculty/researchers as the size parameter; see Appendix A for a precise definition of this parameter.
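As an illustration only (not part of the proposal), the size adjustment discussed above amounts to dividing each quantitative parameter by the number of faculty. The sketch below assumes plain per-capita division and uses the ITU example figures from the table earlier in this document; the helper name `per_faculty` is invented for the example.

```python
def per_faculty(parameters: dict, faculty: int) -> dict:
    """Relate each quantitative parameter to unit size by dividing
    it by the number of faculty (a simple per-capita adjustment)."""
    return {name: value / faculty for name, value in parameters.items()}

# ITU example figures from the benchmarking table (2006).
itu = {
    "STAA production": 521,
    "Refereed publications": 127,
    "External funding (mil. DKK)": 17,
    "Active Ph.D. students": 39,
}

for name, value in per_faculty(itu, faculty=42).items():
    print(f"{name}: {value:.2f} per faculty member")
```

Whether a linear per-capita adjustment is the right interpretation is itself a judgment call; the working group only notes that size must be taken into account, not how.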
The working group did not consider teaching in their report; however, a single number indicating the amount of teaching done by the unit might be valuable. It is proposed to give a single number, “STÅ production”: the number of STÅ reported from the unit at the October deadline of the previous year (e.g. October 1, 2006 for the 2007 benchmarking).
Proposed parameters for IT-benchmarking in 2007
Publications
The report strongly suggests that publications in conferences with high international prestige be considered as important as journal publications. There are no administrative procedures in place for classifying publications as suggested in the report (into three classes). It is, therefore, suggested to use just one class of publications for 2007, consisting of journal papers, refereed conference papers, books and book chapters. This number can easily be found by adding the figures already reported to Rektorkollegiet by all universities (section J of the report to Rektorkollegiet: the sum of J.1.1 and J.1.3).
At the IT University we only count publications that appear in a particular year (i.e. excluding papers that have been accepted but not yet published). It is not particularly important that everybody does this in the same way, but it is important that the same procedure is used every year, to avoid counting some publications twice (or not at all).
Name of proposed parameter: Number of refereed publications
For the future extension of the benchmarking, it is proposed to follow the suggestion of classifying publications into a small number of levels, e.g. 3.
External funding
The working group suggests that this parameter is measured as the actual accounted spending of externally granted funds, including money from Danish and international funding sources and all private funding. This number is already part of the financial reporting.
Name of proposed parameter: Amount of external funding (spent)
For the future extension of the benchmarking, it should be considered to separate the funding into a small number of sources: e.g. national public funding, private national funding and international funding.
PhD-program
The working group suggests three quantitative parameters: volume, health, and international dimension. The group also suggested a fourth parameter, namely a list of places where the candidates get jobs. The report gives a detailed definition of these parameters. It is suggested that the three quantitative measures are used for 2007, and that work is started on collecting the employment data.
Name of proposed parameters: Volume of active Ph.D. students, Average duration of Ph.D. study and Number of international Ph.D. students.
For the future extension of the benchmarking, it is proposed that the employment of Ph.D. candidates is included. Either as a complete list or aggregated into a few groups such as: Danish universities, international research labs, industry and others.
Dissemination
The report suggests that this parameter be estimated using a self evaluation divided into statements about dissemination to international, national and local audiences. The report does not define how the self evaluation should be done. As a starting point, it is suggested that the self evaluation be based on the quantitative information already reported to Rektorkollegiet (section J of the report to Rektorkollegiet: the sum of J.2.1 and J.2.2). As an example, the self evaluation could read something like:
Name of proposed parameter: Self evaluation of dissemination

Department xxx has appeared 231 times in newspapers, and our researchers have contributed 74 public lectures and other public appearances. Of the 231 appearances, 79 were in large international newspapers. We consider the national dissemination to be acceptable and recognize there is room for improvement. However, the number of international articles about our research has been exceptionally good this year.
For the future extension of the benchmarking, criteria for the self evaluation could be defined.
Reputation
The report suggests that this parameter be estimated using a self evaluation divided into statements about the number of applicants for positions, prestigious collaborators and visitors. It is suggested to follow the report and require a short self evaluation, for example something like:
Name of proposed parameter: Self evaluation of reputation

Department xxx has had 54 applications for its 5 Ph.D. scholarships and an average of 7.4 applicants for faculty positions. Two of our research groups work closely with Turing Award winners; however, this year we have had only 13 research visitors. In our view, our reputation must be very good when we have so many good applicants for the positions we announce; this is confirmed by our prestigious collaborators.
For the future extension of the benchmarking, criteria for the self evaluation could be defined.
Interdisciplinarity
It is proposed to work further with the suggestions made by the working group before including a parameter on this aspect.
Innovation and collaboration with non-academia
The working group suggests further work to establish routines for collecting information about innovation and collaboration. In the meantime, it is suggested to use two simple parameters.
1. The number of formalized external student projects (co-supervision, thesis agreements and students working part-time on their thesis with collaborators)
2. The number of externally funded Ph.D.s (fractional funding included)
It is suggested to estimate the number of formalized external student projects by simply counting the projects for which there is a written agreement with an external collaborator. This parameter requires a more precise definition and estimation procedure.
Name of proposed parameters: Volume of external student projects and Number of externally funded Ph.D. students.
For the future extension of the benchmarking, parameters for indicating innovation would be desirable. The working group has some suggestions that can be used as the basis for further work.
Summary
The parameters proposed above can be summarized in a table like:
Name of parameter | Institution X | Institution Y | Institution Z |
Number of faculty | | | |
STÅ production | | | |
Number of refereed publications | | | |
Amount of external funding (spent) | | | |
Volume of active Ph.D. students | | | |
Average duration of Ph.D. study | | | |
Number of intl. Ph.D. students | | | |
Self evaluation of dissemination | | | |
Self evaluation of reputation | | | |
Volume of external student projects | | | |
No. of ext. funded Ph.D. students | | | |
Appendix A: Definition of faculty
The number of faculty includes all professors (full, associate and assistant), docents, amanuenses and postdocs, regardless of their funding, who have contributed to the teaching and research reported in the benchmarking parameters. One may, therefore, decide to include or exclude a subunit in the benchmarking by including (or excluding) its faculty and all their contributions. It is proposed that each unit participating in the benchmarking make a list of the faculty included.
The number of faculty should not include part-time teaching staff, assistants, programmers, administrative staff or management.
9 November 2006/JSt