Community of Practice Archives | DORA
San Francisco Declaration on Research Assessment (DORA)
https://sfdora.org/category/community-of-practice/

Updates from the Asia-Pacific Funder Discussion Group 18/09/24
https://sfdora.org/2024/10/28/updates-from-the-asia-pacific-funder-discussion-group-18-09-24/ | Mon, 28 Oct 2024

Nineteen representatives from seven research funder organisations participated in the last quarterly Asia-Pacific Funder Discussion Group hosted by DORA.

New DORA staff members Liz Allen and Janet Catterall were introduced to the group. The position of DORA Program Manager is currently vacant.

DORA updates to the group included the upcoming implementation guide and guidance document, anticipated for early next year, and the imminent release of three toolkits, which emerged from the workshops DORA held in May 2024 with the Elizabeth Blackwell Institute/MoreBrains project. These workshops sought to find ways to increase equality, diversity, inclusion, and transparency in funding applications. The toolkits will focus on simplifying funding call structures, changing application processes to reduce the likelihood of bias in outcomes (e.g., recognising a broader range of research outputs, narrative CV formats, etc.), and improving training for reviewers and evaluators. The team is also producing three case studies about the funding application process.

Participants then engaged in a roundtable discussion in which members shared their work from the past year, including collaborations, asked questions of the group, and suggested topics they would like the group to cover in the future. Common themes that emerged from these discussions included:

  • Trialling new selection processes, fellowships, and panels to foster greater inclusivity of underrepresented communities, particularly Aboriginal and Torres Strait Islander, Māori, and Pasifika applicants and assessors, and exploring ways to engage with these communities;
  • Experimenting with the inclusion of a narrative CV option in grant applications;
  • Exploring alternative metrics for research assessment and new KPIs;
  • Investigating different models that support strategic discretion in decision making to further equity and fairness;
  • Condensing and simplifying the application process; and
  • Fostering a greater understanding of how artificial intelligence tools can be or are being utilised by applicants and assessors.

The utility of artificial intelligence was further discussed in terms of the assessment process itself: can these technologies be used for the initial screen? To find new peer reviewers? To summarise panel results? How can an agency build such tools into the review process? Funders reported that AI tools had already been trialled for assigning peer reviewers and for aligning applications with assessors. Confidentiality is a major consideration, so it is recommended that only the title and keywords of an application, and not the abstract, be used in prompts.
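To make that confidentiality guidance concrete, here is a minimal Python sketch of how a reviewer-matching prompt could be restricted to an application's title and keywords. The function and field names are hypothetical; this illustrates the guidance discussed, not any funder's actual tooling.

```python
# Minimal sketch of confidentiality-aware prompt construction for
# AI-assisted reviewer matching. All names here are hypothetical.

def build_reviewer_matching_prompt(application: dict) -> str:
    """Build a prompt using only the title and keywords.

    The abstract and any other confidential fields are deliberately
    excluded, per the guidance discussed at the meeting.
    """
    disclosed = {k: v for k, v in application.items() if k in {"title", "keywords"}}
    return (
        "Suggest areas of reviewer expertise relevant to a grant application "
        f"titled '{disclosed['title']}' with keywords: {', '.join(disclosed['keywords'])}."
    )

application = {
    "title": "Coral reef resilience under ocean acidification",
    "keywords": ["marine ecology", "climate change", "coral reefs"],
    "abstract": "CONFIDENTIAL - never sent to the AI service",
}
print(build_reviewer_matching_prompt(application))
```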

The last quarterly meeting for 2024 will feature a presentation from Global Research Council RRA Working Group members Joanne Looyen (MBIE) and Anh-Khoi Trinh (NSERC).

Call for member presentations for 2025

A Quality Assessment Framework for Research Design, Planning, and Evaluation: Updates from the Sustainability Research Effectiveness Program in Canada
https://sfdora.org/2023/09/06/a-quality-assessment-framework-for-research-design-planning-and-evaluation-updates-from-the-sustainability-research-effectiveness-program-in-canada/ | Wed, 06 Sep 2023

Each quarter, DORA holds a Community of Practice (CoP) meeting for National and International Initiatives working to advance responsible research assessment reform. This CoP is a space for initiatives to learn from each other, make connections with like-minded organizations, and collaborate on projects or topics of common interest. Meeting agendas are shaped by participants. If you lead an initiative, coalition, or organization working to improve research assessment and are interested in joining the group, please find more information here.

Socially impactful research is multi-faceted, requiring transdisciplinary approaches that involve a wide range of actors, approaches, and pathways to influence change. However, addressing the epistemological and methodological variability of transdisciplinary research (TDR) has been challenging, and there is a need for standards and methods to define and assess quality in TDR. To address this, Brian Belcher and his colleagues from the Sustainability Research Effectiveness Program at Royal Roads University, Canada, conducted a systematic literature review that provides several definitions and ways of assessing research quality in a transdisciplinary context. During our May CoP meeting, Brian Belcher, Rachel Claus, and Rachel Davel presented excerpts of their study to highlight how their Transdisciplinary Research Quality Assessment Framework (QAF) has advanced and been refined through testing.

The Linear Model of Research Impact assumes that the production and dissemination of new knowledge will eventually reach those who find it useful, leading to uptake, scaling, and, ultimately, impact for society at large. In reality, there are multiple pathways to research influence. Researchers aiming for social impact take on a more active role in relationship-building through partnerships and networking, knowledge co-generation, capacity-building (particularly to do and use research), public discourse and policy processes, lobbying and negotiation, and more. Transdisciplinary approaches facilitate and generate value through these processes to support greater research impact. Yet, Belcher noted that there are limits to a project's influence as it progresses through the research process. When a project passes the boundary from its sphere of control (i.e., activities and outputs) into its sphere of influence (i.e., the actors the project works with and through to influence changes in behaviour), more external processes and influences come into play that can affect the realization of the project's intended outcomes and, ultimately, its social, economic, and environmental impacts.

As background to the team's latest revisions, the QAF originated in a systematic review of the literature to determine the most appropriate principles and criteria for defining and assessing TDR quality. The review identified dominant themes that should be considered in TDR evaluation: engagement with the problem context, collaboration and inclusion of stakeholders, a heightened need for explicit communication and reflection, integration of epistemologies and methods, recognition of diverse outputs, a focus on outcomes and impact, and reflexivity and adaptation throughout the process. These principles and criteria were then structured into a comprehensive assessment framework. The QAF was published in 2016, organized around four principles: Relevance, Credibility, Legitimacy, and Effectiveness.

Over the past several years, the team has conducted a series of case study evaluations of completed research projects where they applied the QAF to learn lessons about how TDR qualities can support the realization of outcomes. Through this testing, the team also identified several aspects of the QAF that could be improved. Some of the main changes include revisions to the principle and criteria names and definitions to support clarity, the addition of missing criteria, the removal of overlap in criteria definitions that led to double-counting, and replacement of the former rubric with practical guidance. The revised principles are defined as:

  • Relevance: the appropriateness of the problem framing, research objectives, and approach for intended users and audiences;
  • Credibility: the rigor of the research design and process to produce dependable and defensible conclusions;
  • Legitimacy: the perceived fairness and representativeness of the research process; and
  • Positioning for use: the degree to which research is likely to be taken up and used to contribute to outcomes and impacts.

A key emphasis of the latest version of the QAF is to assess and score each criterion against the project’s purpose. QAF scores can be visualized in spidergrams to support further analyses, such as identification of TDR qualities that are present or absent in projects (Image 1) as well as comparisons between projects (Image 2).

For example, Project A has a strong socially relevant problem but scores lower on the criteria pertaining to research objectives, communication, and explicit theory of change (Image 1). Project A satisfied most of the positioning for use criteria and relatively fewer of the credibility and legitimacy criteria.

Image 1: Spidergrams used to visualize how a project scores against the QAF’s principles of relevance, credibility, legitimacy, and positioning for use

This visualization can also be used to compare projects. For instance, Project A might be strong in addressing a socially relevant research problem, but Project B is much stronger in terms of proposing 1) relevant research objectives, 2) clearly defined problem context and 3) engagement with the problem context (Image 2).
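As a rough illustration of this kind of spidergram (the images themselves are not reproduced here), the following Python sketch plots hypothetical criterion scores for two projects on matplotlib's polar axes. The criterion names and the 0–2 scale are illustrative placeholders, not the QAF's published criteria or rubric.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical criterion scores for two projects; real QAF scoring
# uses the framework's published criteria and guidance.
criteria = ["Socially relevant problem", "Research objectives",
            "Problem context defined", "Engagement with context",
            "Communication", "Theory of change"]
project_a = [2.0, 1.0, 1.5, 1.5, 1.0, 0.5]
project_b = [1.5, 2.0, 2.0, 2.0, 1.0, 1.0]

# One angle per criterion; repeat the first point to close the polygon.
angles = np.linspace(0, 2 * np.pi, len(criteria), endpoint=False).tolist()
angles += angles[:1]

fig, ax = plt.subplots(subplot_kw={"polar": True})
for label, scores in [("Project A", project_a), ("Project B", project_b)]:
    values = scores + scores[:1]
    ax.plot(angles, values, label=label)
    ax.fill(angles, values, alpha=0.1)

ax.set_xticks(angles[:-1])
ax.set_xticklabels(criteria, fontsize=8)
ax.set_ylim(0, 2)
ax.legend(loc="upper right", bbox_to_anchor=(1.3, 1.1))
plt.tight_layout()
plt.show()
```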

When asked how this framework could be more impactful, Belcher said that involving stakeholders in the research process would help them understand how the research contributes to change as well as their role within the change process.

In addition to ex-post evaluation, the QAF enables a structured assessment of project proposals ex-ante to identify and allocate funding to projects that are relevant, credible, legitimate, and well positioned for use. It can also be used to appraise the weaknesses in a proposal from a transdisciplinary perspective. Belcher also mentioned that the QAF has been applied to some projects within a single traditional discipline that had several strong elements and contributed to social change processes. Additionally, some Canadian and international organizations have incorporated these principles and criteria into their research proposal guidelines. These include (1) the Pacific Institute for Climate Solutions (a collaboration of four universities in British Columbia), (2) the Natural Sciences and Engineering Research Council of Canada (NSERC; a Canadian federal funding agency), and (3) CGIAR (an international agricultural research group aiming to deliver science for food security).

Image 2: Comparison of two projects’ QAF scores for the Relevance principle

Conclusion

Belcher highlighted that applying the QAF in research evaluation encourages projects to build in transdisciplinary qualities, which in turn supports more impactful outcomes. However, more widespread application and testing is needed to build the evidence base, which is possible when the framework is shared with other users. Expanded application would enable users to identify challenges and opportunities to inform revisions.

A copy of the presentation can be accessed on the Sustainability Research Effectiveness team’s website.


Queen Saikia is DORA’s Policy Associate


References

  • Belcher, B. M., Rasmussen, K. E., Kemshaw, M. R., & Zornes, D. A. (2016). Defining and assessing research quality in a transdisciplinary context. Research Evaluation. https://doi.org/10.1093/reseval/rvv025
  • Cash, D., Clark, W., Alcock, F., Dickson, N., Eckley, N., & Jager, J. (2002). Salience, Credibility, Legitimacy and Boundaries: Linking Research, Assessment and Decision Making. KSG Working Paper Series RWP02-046.
  • Guthrie, S., Wamae, W., Diepeveen, S., Wooding, S., & Grant, J. (2013). Measuring Research: A Guide to Research Evaluation Frameworks and Tools. RAND Europe.
  • Ofir, Z., Schwandt, T., Duggan, C., & McLean, R. (2016). Research Quality Plus: A Holistic Approach to Evaluating Research. IDRC, Canada.
  • Stokes, D. (1997). Pasteur's Quadrant: Basic Science and Technological Innovation. Brookings Institution Press.


Evaluation of researchers in action: Updates from UKRI and a discussion on the utility of CRediT
https://sfdora.org/2023/08/08/evaluation-of-researchers-in-action-updates-from-ukri-and-a-discussion-on-the-utility-of-credit/ | Tue, 08 Aug 2023

Each quarter, DORA holds two Community of Practice (CoP) meetings for research funding organizations. One meeting takes place for organizations in the Asia-Pacific time zone and the other meeting is targeted to organizations in Africa, the Americas, and Europe. If you are employed by a public or private research funder and interested in joining the Funder CoP, please find more information here.

Funding organizations play a key role in setting the tone for evaluation standards and practices. In recent years, an increasing number of funders have shifted their evaluation practices away from an outsized focus on quantitative metrics (e.g., h-index, journal impact factors) as proxy measures of quality and towards more holistic means of evaluating applicants. At one of DORA's March Funder Community of Practice (CoP) meetings, we heard how UK Research and Innovation (UKRI) has implemented narrative-style CVs for choosing promising research and innovation talent. At the second March Funder CoP meeting, we held a discussion with Alex Holcombe, co-creator of the tenzing tool, about how the movement to acknowledge authors for the broad range of roles they play in a research project could also be applied to help funders in decision-making processes.

During the first meeting, we heard from Hilary Noone, Tripti Rana Magar, and Hilary Marshall, who discussed the implementation of narrative-style CVs at funding organizations in the UK, including Cancer Research UK and the National Institute for Health Research (NIHR). "Traditional" CVs can overlook the broad range of a researcher's achievements and scholarly work, such as contributions to team science, mentorship, and "non-traditional" research outputs. Magar said that the concept of narrative CVs emerged around 2014 from the Nuffield Council on Bioethics with the goal of better recognizing and understanding the holistic contributions of researchers. The Royal Society subsequently co-created the Résumé for Researchers (R4R). In 2021, a more flexible version of the R4R was released: the Résumé for Research and Innovation (R4RI), which has been implemented by several major funders in the UK. Magar highlighted that the R4RI reduces emphasis on metrics and focuses on the quality of contributions, provides space for applicants to list their full range of activities, and reduces bureaucracy by encouraging the adoption of a single framework.

The Joint Funders Group (JFG) is a group of 54 funders (both UK-based and international) that supports the wider adoption of R4RI-like narrative CVs. The speakers also mentioned the Alternative Uses Group (AUG), another group hosted by UKRI, which complements the efforts of the JFG by co-creating resources for recruitment, promotions, and professional accreditation with the support of funders and universities. The two groups work together to accelerate cultural change and implement the R4RI-like narrative CV. Additionally, resources relating to the R4RI-like CV are freely available in the Résumé Resources library, which includes several CV templates, training packages, starter guides for reviewers and applicants, and guidance on promotion practices.

The speakers closed their presentation by introducing the Shared Evaluation Framework, a resource that the JFG and AUG have developed to support adoption of the R4RI-like CV. In addition to creating the Framework, UKRI is developing an Evidence Platform that allows users of the R4RI-like CV to anonymously upload and share data relating to its adoption. The Evidence Platform will help funders and academic institutions monitor the adoption, impact, and results of using the R4RI-like CV format. The JFG and AUG are also developing training packages for applicants, reviewers, and staff, and are working with professional bodies to see if they could offer complementary modules on specific areas/themes. If anyone is interested in testing the training packages, making recommendations, or volunteering to be part of the professional bodies workshop, please contact culture@ukri.org.

During the second meeting, Alex Holcombe from the University of Sydney presented tenzing, a free and open-source tool created to support the adoption of the Contributor Roles Taxonomy (CRediT). CRediT was introduced in 2012 to more accurately and thoroughly assign credit to authors for the specific roles they played on a particular research project. The CRediT system has been adopted by eLife, The BMJ, and MDPI, and it offers a means to recognize scholarly contributions with greater granularity.

The landscape of research is expanding and demands a wide range of diverse technical inputs and ideas. Given this, there is a need for more granular methods of recognizing contributions, or "who did what," on a particular project. Increased granularity in the recognition of author contributions can provide better insight into an author's expertise and a mechanism to recognize and reward a broader range of contribution types (e.g., insight from non-academic experts). On the call, we discussed how CRediT could be implemented by research funders to better recognize and reward different contributions in grant applications and awards, especially given that Wellcome was one of the founding organizations that created CRediT. Holcombe suggested that CRediT could give funders an easier way to assess whether a proposed research project falls within a researcher's skill set.
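To illustrate the granularity of a "who did what" record, here is a minimal Python sketch that validates author contributions against the 14 roles of the published CRediT taxonomy. The data structure and helper function are illustrative assumptions; they do not represent tenzing's actual interface or any funder's system.

```python
# The 14 roles of the published CRediT taxonomy.
CREDIT_ROLES = {
    "Conceptualization", "Data curation", "Formal analysis",
    "Funding acquisition", "Investigation", "Methodology",
    "Project administration", "Resources", "Software", "Supervision",
    "Validation", "Visualization", "Writing - original draft",
    "Writing - review & editing",
}

def record_contributions(contributions: dict[str, set[str]]) -> dict[str, set[str]]:
    """Validate a mapping of author name -> set of CRediT roles."""
    for author, roles in contributions.items():
        unknown = roles - CREDIT_ROLES
        if unknown:
            raise ValueError(f"{author}: unrecognized roles {unknown}")
    return contributions

project = record_contributions({
    "A. Author": {"Conceptualization", "Methodology", "Writing - original draft"},
    "B. Author": {"Software", "Formal analysis", "Visualization"},
    "C. Author": {"Supervision", "Funding acquisition", "Writing - review & editing"},
})

# Answer a "who did what" query: every contributor who wrote software.
print([name for name, roles in project.items() if "Software" in roles])
```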

Tenzing was launched in 2020 to help researchers record their roles in a project from the start using the CRediT system. It is particularly helpful at the time of manuscript submission to a journal, when compiling contribution information for every author engaged in the project can otherwise be laborious. However, there is no one-size-fits-all approach: although the application of these tools seems clear for researchers and publishers, there is still much to learn about how CRediT could be implemented by funders directly.

During both meetings, the speakers discussed how different organizations are working to implement culture change by improving the recognition of the wide variety of outcomes and outputs generated by researchers.

Queen Saikia is DORA’s Policy Associate

Empowering fair evaluation and collective impact: efforts to drive change from INORMS and CoARA
https://sfdora.org/2023/07/18/empowering-fair-evaluation-and-collective-impact-inorms-and-coaras-efforts-to-drive-change/ | Tue, 18 Jul 2023

Each quarter, DORA holds a Community of Practice (CoP) meeting for National and International Initiatives working to advance responsible research assessment reform. This CoP is a space for initiatives to learn from each other, make connections with like-minded organizations, and collaborate on projects or topics of common interest. Meeting agendas are shaped by participants. If you lead an initiative, coalition, or organization working to improve research assessment and are interested in joining the group, please find more information here.

Status quo of research evaluations

Research evaluation practices have the power to affect academic careers and reputations, positively or negatively, with far-reaching implications for funding, awards, career advancement, and other prospects. Although the way research is evaluated plays a critical part in academic lives, traditional quantitative approaches to research assessment are increasingly recognized as inappropriate. The indicators presently in use are overly reliant on narrow publication-based quantitative metrics that fail to capture the full range of research contributions while limiting equity and innovation. They do not consider factors such as the quality of the research, the rigor of the methodology, the impact on society, or openness and interoperability. Such metrics can disadvantage those working in less mainstream or interdisciplinary fields, or those who come from less-privileged demographic backgrounds. Relying solely on such indicators can result in an incomplete and potentially unfair assessment of a researcher's contributions to their field.

With the goal of supporting more responsible research assessment processes, guidelines and recommendations such as the Declaration on Research Assessment (DORA), the Leiden Manifesto for Research Metrics, the Metric Tide, and individual university guidelines emerged to set standards for responsible assessment. In 2018, the International Network of Research Management Societies (INORMS), a collective of research management associations and societies worldwide, set out to build a structured framework to support the shift towards more responsible approaches to assessment. INORMS has various initiatives, projects, toolkits, and guidelines to promote evaluation practices that prioritize fairness, openness, inclusivity, transparency, and innovation. Two of its key contributions are 1) the SCOPE Framework for Research Evaluation and 2) the More Than Our Rank (MTOR) initiative.

During the first quarterly DORA National and International Initiatives discussion group call of 2023, Elizabeth Gadd, Chair of the INORMS Research Evaluation Group (REG), provided updates on the work of INORMS, MTOR, and the Coalition for Advancing Research Assessment (CoARA), where she serves as Vice-Chair.

SCOPE Framework: a structured approach for evaluations

In 2019, the first version of the SCOPE Framework was released by the INORMS Research Evaluation Working Group. This framework was developed to be used as a practical guide to successfully implement responsible research assessment principles and to facilitate the use of more thoughtful and appropriate metrics for evaluations in institutions and organizations.

In 2021, INORMS REG published an updated version of the SCOPE framework that included a five-step process to be followed by organizations during evaluations: Start with what is valued, consider Context, explore Options for measuring, Probe deeply, and Evaluate the evaluation. Each of these five steps has been elaborated on in the practical guide for easy adoption. The operating principles behind this five-stage process are:

  1. Evaluate only where necessary
  2. Evaluate with the evaluated
  3. Draw on evaluation expertise

To help research leaders and practitioners drive robust evaluation processes in their institutions, the working group has publicly shared several resources online. The guidebook also presents details of the process of change through multiple case studies to learn from. Use-case examples mentioned by Gadd in her presentation included the joint UK higher education funding bodies, which made deliberate efforts to redesign the Research Excellence Framework (REF), and Emerald Publishing, which is consciously ensuring more diversity in its editorial board. Additionally, there are SCOPE workshops for institutional research administrative leaders to learn how to systemically adopt the framework.

Some of the strengths of the SCOPE framework are its holistic step-by-step approach, flexibility, and adaptability to different disciplinary and institutional contexts. The framework can be customized to reflect the specific goals and priorities of different stakeholders and can be used to evaluate research at various levels of the research evaluation food chain.

More Than Our Rank (MTOR) Initiative: an opportunity to recalibrate university rankings

Because evaluation processes can also profoundly impact the “reputation” and funding of academic organizations, INORMS launched the MTOR initiative, which is closely linked to the SCOPE framework and seeks to provide institutions with a means by which they can surface all their activities and achievements not captured by the global university rankings.

Gadd published "University rankings need a rethink" in 2020, which highlighted the key findings from the REG's work evaluating ranking agencies against community-designed criteria. They found that most "flagship" university rankings barely incorporated open access, equality, diversity, sustainability, or other society-focused agendas into their criteria for global rankings. This work sparked many discussions on how the current university ranking system may be inadequate and harmful because it does not "meet community's expectations of responsibility and fairness". A change was therefore needed to bring a sense of accountability to rankers and to determine what matters most for each university.

The INORMS REG believes that any institution, even a top-ranked one, has more to offer than what can be captured by the parameters used in current ranking systems. This was the foundational drive behind the MTOR initiative, launched in October 2022, which encourages institutions to declare, in a narrative way, their unique missions, activities, contributions to society, teaching, and more, and to explain why they are more than what university rankings capture. Gadd emphasized that signatory institutions are not required to boycott rankings altogether.

The REG has also provided guidelines on its website for Higher Education Institutions (HEIs) to learn how to participate in the MTOR movement, and for individuals from the community to encourage their universities to become part of the MTOR initiative. Loughborough University, Keele University, Izmir Institute of Technology, and Queensland University of Technology (QUT) are some of the early adopters of MTOR. However, it was also discussed that one major challenge for institutions wishing to participate in the MTOR movement is their financial dependence, in the current system, on global rankings for funding.

Finally, Gadd shared updates from CoARA, a network bringing together stakeholders in the global research community (research funders, universities, research centers, learned societies, etc.) to enable systemic-level reform toward responsible and effective research assessment practices.

CoARA: a common direction for reforming research assessment

After its international, iterative creation, facilitated initially by the European University Association (EUA), Science Europe, and the European Commission, the Agreement on Reforming Research Assessment was published in July 2022. Organizations willing to publicly commit to improving their research assessment can sign the Agreement and are then eligible to join the Coalition and be actively involved in its decision-making processes. The Agreement, which built on the progress made by earlier responsible research assessment guidelines and principles (DORA, the Leiden Manifesto, the Hong Kong Principles, etc.), consists of four core commitments: 1) recognizing the diversity of contributions during assessments; 2) basing research assessment primarily on qualitative peer review rather than quantitative indicators; 3) avoiding inappropriate uses of journal- and publication-based metrics; and 4) avoiding the use of university rankings during assessments. These four core commitments are accompanied by six supporting commitments related to building and sharing new knowledge, tools, and resources, and raising awareness within the community. Within five years of becoming a signatory, organizations must demonstrate the changes made to reform research assessment at their institutions.

As a newly established association, CoARA held its first General Assembly meeting on 1 December 2022, at which point the secretariat role was handed over to the European Science Foundation – Science Connect (ESF-SC). CoARA opened its first call for Working Groups and National Chapters in March 2023, and will announce further General Assembly meetings and other activities, such as webinars and conferences, to strengthen the network and initiate dialogue between CoARA stakeholders and relevant evaluation initiatives and communities of practice. Gadd's talk was followed by discussions on the Working Groups' probable focus areas, including peer review, responsible metrics, and funding disparities.

United, collaborative efforts from the research community, including individuals, universities, funders, and initiatives, are vital to push forward and evolve responsible research assessment at a systemic level.

Sudeepa Nandi is DORA’s Policy Associate

Projeto Métricas/Fapesp: A Collaborative Roadmap for DORA Implementation in Brazil
https://sfdora.org/2023/01/24/projeto-metricas-brazil/ | Tue, 24 Jan 2023

Each quarter, DORA holds a Community of Practice (CoP) meeting for National and International Initiatives working to advance responsible research assessment reform. This CoP is a space for initiatives to learn from each other, make connections with like-minded organizations, and collaborate on projects or topics of common interest. Meeting agendas are shaped by participants. If you lead an initiative, coalition, or organization working to improve research assessment and are interested in joining the group, please find more information here.

The scope for change in evaluation systems in Brazil

In order to make science work for national betterment, a country's research institutions need to align themselves with societal impact, environmental responsibility, open science, and responsible research practices. Often, these are the values that academic institutions espouse. However, a narrow overreliance on inappropriate metrics as proxy measures of quality can fail to recognize and reward open practices, rigor, reproducibility, locally relevant research, and more. Toward the goal of creating and supporting more responsible evaluation systems in Brazil, the São Paulo Research Foundation (FAPESP) founded and funded Projeto Métricas, a multi-institutional project involving the six major public universities in São Paulo. The aim of Projeto Métricas is to boost awareness of the appropriate use of bibliometrics and to improve university governance and science communication. The project offers a number of awareness programs, workshops, and courses for professionals and researchers across Brazil.

On November 8, 2022, Justin Axel-Berg and Jacques Marcovitch presented the work of Projeto Métricas at DORA's National and International Initiatives for Research Assessment Community of Practice meeting. On the call, Axel-Berg highlighted that although a few departments and institutions are improving their assessment practices responsibly, the majority of evaluation methods in Brazil are still highly orthodox. Broadly, they are quantitative and restrictive, both at the level of the Council (i.e., the national governing body of research and research funding agencies) for institutional-level evaluation and in institutions' hiring and promotion policies. In light of this, Projeto Métricas is currently working on monitoring and identifying the gaps in research assessment practices at public universities, and on mapping the impacts of ongoing reform efforts. As part of its project for the DORA Community Engagement Grant program, Projeto Métricas held a workshop and conducted a survey to inform the development of a roadmap for implementing DORA's responsible research assessment principles at Brazilian institutions. Axel-Berg pointed out during the discussion that, although the numbers of individual and organizational DORA signatories in Brazil are comparable to those in Europe and the UK, only two research-focused Brazilian organizations have signed so far: the University of São Paulo (USP) and the State University of Campinas (UNICAMP). To gain a broader understanding of the obstacles to committing to responsible research evaluation practices and to identify areas for intervention, Projeto Métricas took a three-step approach. The details of their work can be found online.

Course of action

In their first DORA Community Engagement Workshop, they conducted an exploratory quantitative and qualitative survey based on the five pillars of the SCOPE rubric. Respondents included individual DORA signers from USP and UNICAMP across a diverse range of academic career stages, which allowed Projeto Métricas to gain a holistic understanding of the challenges to implementing responsible evaluation practices. Survey results suggested that the major barriers were: i) institutional resistance to change, ii) lack of training of evaluators, iii) a dearth of societal engagement and consultation in evaluations, and iv) senior colleagues being unaware of advancements in responsible research. The workshop was followed by a public event that was open to the wider Brazilian research community, during which the key points from the preliminary survey were discussed to identify the main action areas and possible ways to resolve the issues. Lastly, a consolidated report with the key findings from the workshop, the survey, and the public event was evaluated by field experts within Brazil, who helped narrow the results down to three concrete target areas of action:

  1. Awareness of responsible evaluation practices: The community needs to be made more aware of responsible evaluation practices and initiatives (e.g., DORA, Leiden, etc.) that are working towards reform. Informing and empowering the community through discussions about impact-driven and responsible assessment would be the first step to understanding, interpreting, and reflecting on what performance indicators are useful and what indicators are being misused in Brazil. Reforming the traditional, fixed, and limited evaluation models that are currently in use will require a collective broadening of mindsets. Therefore, awareness programs should actively engage community members at various career stages. It is also important for universities and research institutes to incorporate community ideas in developing frameworks that are suitable for a wide variety of evaluations, including career advancement, recruitment, grants, etc. while aligning with personal, institutional, local, and national values.
  2. Training and capacity building of evaluators and participants: workshops, training programs, debates, and discussions should be conducted on meaningful and conscious evaluation tools, for both evaluators and those being evaluated.
  3. Planning and execution of new evaluation practices: evaluation methods must be flexible and accommodate dynamic evolution in the academic system. Developing a well-structured plan of action for short-, medium-, and long-term goals, with consistent monitoring, is essential to the successful implementation of newer practices, and would give universities and institutions agility into the future.

Underlying challenges and possibilities in the Brazilian context

Axel-Berg elaborated on some underlying features that are unique to Brazil and hinder its progress toward responsible research assessment reform. The Brazilian higher education system is highly heterogeneous in terms of the nature of its institutions, fields of study, and more, which complicates the process of developing uniform evaluation policies. Compounding this, cultural factors contribute to resistance to changing mindsets and a lack of participation from government bodies and academic faculty and staff. Because universities and academic institutions are bound by strict federal laws, there is currently not much room for change other than on an individual basis. Another unique challenge that Axel-Berg highlighted was the language and digitalization gaps in disseminating scholarly knowledge from Brazilian researchers. Strategies for overcoming some of these issues were discussed during the meeting, and a number of suggestions were made, including: involving learned societies in the reform process to facilitate more wide-ranging communication in a context- and discipline-specific manner; enhancing training on structuring, substantiating, and popularizing narrative CVs; retaining staff to support a strong institutional memory; and promoting institutional autonomy to make change easier.

Projeto Métricas is taking on these challenges one step at a time, and its immediate goal is to provide guidance and assistance at the individual level. Axel-Berg concluded his presentation by reiterating their long-term goals to bring about a more holistic change by incorporating not only Open Science and research integrity but also society-centric metrics and environmental responsibility as parameters into evaluations.

Sudeepa Nandi is DORA’s Policy Associate

Research assessment reform in action: Updates from research funders in Canada and New Zealand
https://sfdora.org/2022/08/18/research-assessment-reform-in-action-updates-from-research-funders-in-canada-and-new-zealand/ | Thu, 18 Aug 2022

Each quarter, DORA holds two Community of Practice (CoP) meetings for research funding organizations. One meeting takes place for organizations in the Asia-Pacific time zone and the other meeting is targeted to organizations in Africa, the Americas, and Europe. If you are employed by a public or private research funder and interested in joining the Funder CoP, please find more information here.

Research funding organizations are often thought of as leaders in the movement toward more responsible research evaluation practices. Often, the perception of "excellence" in research culture is filtered through the lens of who and what type of work receives funding. However, when a narrow set of indicators (e.g., journal impact factor (JIF), h-index) is used to determine who receives funding, the result can be a subsequent narrowing of academia's perceptions of research excellence. This places funders in the unique position of being able to help "set the tone" for research culture through their own efforts to reduce reliance on flawed proxy measures of quality and implement a more holistic approach to the evaluation of researchers for funding opportunities. Whether funders are seeking inspiration from their peers or insight on iterative policy development, the ability to come together to discuss reform activity is critical for achieving widespread culture change. At DORA's June Funder Community of Practice (CoP) meetings, we heard how DORA is being implemented by the Natural Sciences and Engineering Research Council of Canada (NSERC) and the New Zealand Ministry of Business, Innovation and Employment (MBIE).

During the first meeting, Alison Janidlo, Nathalí Rosado Ferrari, and Brenda MacMurray presented the work that NSERC has done to implement DORA. NSERC is a national funding agency that supports researchers and students through grants and scholarships; applications are evaluated through peer review. In alignment with its existing good practices for research assessment (e.g., encouraging applicants to include outputs beyond journal articles and asking peer reviewers to focus on the quality and impact of outputs), NSERC signed DORA in 2019. Building on the previous version of its guidelines, NSERC developed a new assessment guidelines document that is more explicitly aligned with DORA's principles. During the call, Janidlo, Rosado Ferrari, and MacMurray described how the development of these new guidelines was informed by a literature review and by stakeholder engagement through committees, focus groups, and conference presentations. Key features of the guidelines include the promotion of NSERC's support for research excellence, the explicit incorporation of DORA's principles, and the emphasis that proxy measures of quality (e.g., JIF) must be avoided when evaluating research proposals.

Janidlo, Rosado Ferrari, and MacMurray also described what is arguably one of the most important facets of implementing policy change: reflection on what worked and on lessons learned over the two-year development process of the new guidelines. According to them, a key lesson was that it would have been useful for their community to have more clarity about the next steps for implementation at the time of the guidelines' launch. The agency will also need to address the tricky question of how to track the impact of the new guidelines on funding decisions. In terms of successful strategies, the speakers highlighted that the new guidelines were developed iteratively and in consultation with a broad range of stakeholders within the NSERC community, allowing for greater buy-in. Additional successful strategies included listing forms of contributions and indicators of quality and impact in alphabetical order (to de-emphasize proxy measures of quality and impact); the language inclusivity of the new document (reflecting Canada's official English and French bilingualism, the diversity of the postsecondary community, and the technical jargon used in the natural sciences and engineering domain); the use of the existing NSERC DORA Implementation Working Group to liaise with stakeholders; and NSERC's efforts to build awareness about DORA in both formal and informal meetings with stakeholders. To the final point, the NSERC team created a DORA email address for community members to send DORA-related questions and inquiries. The new guidelines were released on May 6, 2022.

During the second meeting, Joanne Looyen and Farzana Masouleh presented the work that New Zealand's MBIE has done to implement DORA. MBIE is a funding organization that offers contestable funds for a range of research. Looyen and Masouleh discussed the stewardship role that research funders hold and their responsibility to support greater diversity in the research system. To fulfill this role, they emphasized, funders may need to rethink their existing guidelines for assessing research "excellence". Here, Looyen and Masouleh pointed out that DORA's principles can be applied broadly to support fairer and more consistent evaluation of funding proposals. To this end, MBIE is currently working with the Health Research Council and the Royal Society of New Zealand to develop a narrative-style CV for New Zealand researchers. The expectation is that researchers will be able to use this CV to apply for funds from any of the three agencies. This is important because interoperability of CV formats between organizations is a concern often voiced by researchers. The speakers highlighted the specific benefits of using a more narrative-style CV for public funding applications, such as broadening the definition of "excellence" for Māori and Pasifika researchers and better capturing the depth of their work. MBIE currently has two ongoing funding calls in which it is introducing narrative CVs: the Health Research Council trial for the Ngā Kanohi Kitea (NKK) Community Fund, and the annual contestable Endeavour Fund. For example, NKK Community Fund applicants must submit a narrative-style CV that includes experience relevant to the proposed activity, contributions to knowledge generation, and contributions to iwi, hapū, or community.

Key considerations discussed included the importance of designing a narrative CV template that works with and for the New Zealand context. This means testing iterations of the template with research staff from New Zealand universities and research institutions, incorporating stakeholder feedback, and conducting a survey of applicants and assessors to gather data about their experiences with the new template. Looyen and Masouleh concluded their presentation by returning the focus to one of the key drivers of MBIE's reform efforts: increasing understanding of how research can contribute to the aspirations of Māori organizations and deliver benefits for New Zealand.

During both meetings, Funder CoP members discussed the actions that their peers have taken, and are in the process of taking, to implement a more holistic approach to the evaluation of researcher contributions. One of the key takeaways, exemplified by both NSERC and MBIE, is that there is no "one-size-fits-all" approach. Although examples of change are critical for drawing inspiration and demonstrating feasibility, ultimately the best policies and practices are those that fulfill the context-specific needs and goals of each funding organization's community.

Haley Hazlett is DORA’s Program Manager.

Changing the narrative: considering common principles for the use of narrative CVs in grant evaluation
https://sfdora.org/2022/06/06/changing-the-narrative-considering-common-principles-for-the-use-of-narrative-cvs-in-grant-evaluation/ | Mon, 06 Jun 2022

Narrative CVs have emerged as a potential avenue for recognizing the broad range of a researcher's scholarly contributions. They can be used as a tool to move toward a "quality over quantity" mindset in career evaluation, help reduce the emphasis on journal-based indicators, and better accommodate non-linear research career paths. To examine the merits and drawbacks of narrative CVs, DORA and the Funding Organisations for Gender Equality Community of Practice (FORGEN CoP) hosted a joint workshop in September 2021 for research funders. Based on the discussion from this initial workshop, a short report was released in December 2021 that summarized the key takeaways and recommended actions on the adoption of narrative CVs for grant evaluation. A key recommended action of the report was to create a shared definition of what a narrative CV is and to align on what objectives it hopes to achieve.

With this in mind, the February 2022 DORA funder discussion was dedicated to understanding the goals of using narrative CVs for grant evaluation, such as creating space for researchers to demonstrate broader contributions to research and documenting different talents. During the meeting, participants brainstormed on a range of topics: what a shared definition of a narrative CV might look like, objectives for its use in grant evaluation, and the value of a shared approach to monitor its effectiveness.

Toward a shared definition

Over the course of the discussion it became clear that more work is needed to develop a common narrative CV definition that accounts for differences across organizations and funding opportunities. One of the key challenges to developing a common definition was the idea that different organizations might have different understandings about the potential value that using narrative CVs might add to their assessment processes. Importantly, organizations may have distinct visions for where and how to use narrative CVs in their assessment processes. Participants agreed that a definition should be sufficiently flexible to adapt to various local contexts and cater to funders’ diverse needs and goals.

While considering how to address the challenges of creating a shared definition, one participant suggested the need to disentangle three key elements: what is a narrative CV, how might it be used, and how might it be implemented. Addressing the first element could include the creation of a flexible shared definition. From there, funders could address the second element and identify the expectations for use (e.g., what they expect a narrative CV to tell them that a bulleted CV cannot for a specific funding opportunity). Addressing the final element, implementation, could look like creating a practical and concrete plan to incorporate a narrative CV into their practices.

Because there are many types of narrative CV formats, participants also considered whether the conversation should be limited to formats that are completely narrative-based, such as the Résumé for Researchers, or include hybrid-style CVs that feature "narrative elements." Again, participants highlighted that building such flexibility into a shared definition could help address some of the challenges outlined above.

Brainstorming common objectives

To help build a foundation for effective use, participants were also asked to consider the objectives of a narrative CV. One participant noted that the use of narrative CVs might need to be dependent on the type of funding opportunity. An important foundational consideration might be “what is the most appropriate tool to achieve the goals of a specific funding opportunity.”

During this discussion, it was proposed that the general purpose of a CV is to document an individual’s professional trajectory. Some participants considered that a completely narrative CV may not offer a clear career timeline. One participant stressed that the goal of using alternative methods like narrative CVs for grant evaluation is ultimately to fund better research. Others added that another common objective of a narrative CV would be to allow applicants to provide evidence of a wider range of contributions, and give applicants the ability to explain why certain achievements are relevant to a specific proposal. We also heard that the power of using narrative CVs in grant evaluation lies in the idea that they can minimize the role journal prestige plays in the assessment of a candidate’s profile.

Finally, some participants expressed concern about a common misconception of narrative CVs: that they are burdensome to create and review, and that they do not convey the same scientific rigor as quantitative indicators. Other participants highlighted the importance of standardization as a mechanism to reduce the burden on applicants and reviewers. These discussions highlighted the value of clear communication and training to ensure that both writers and reviewers of narrative CVs understand what they are, how to create them, and how they should be used. A recent example of an initiative to inform approaches to the use of narrative CVs is the UK Research and Innovation Joint Funders Group's Résumé Resources library.

Value of collaboration to monitor effectiveness

There was strong consensus from the group that, where relevant, a collaborative approach to monitoring effectiveness would be valuable for funding organizations. For such an approach to work, it is important to take into consideration whether funders are using narrative CVs for grant evaluation in the same way. Building on that point, more than one type of investigation would be needed to accommodate the different uses of narrative CVs in varied types of funding opportunities.

There was also interest in collecting data across organizations with the aim of creating a longitudinal dataset on narrative CV use, which could help funders learn from the compound effect created by implementation across organizations. This dataset could include intra- or interorganizational qualitative and quantitative studies to monitor the use and effectiveness of narrative CVs.

Final takeaways

There is still work to be done to align on a shared definition, common objectives, and parameters for monitoring the effectiveness of narrative CVs. However, the discussions on this call demonstrated that interest is building in using narrative CVs as a qualitative method for research assessment, at least in part due to their potential to support a research culture that emphasizes meaningful research achievements over the use of flawed proxy measures of quality. Moving forward, fostering open and deep discussions with the community about narrative CVs and their implementation will be a key step toward the iterative development and use of narrative CVs as a tool for responsible research assessment.

DORA’s funder discussion group is a community of practice that meets virtually every quarter to discuss policies and topics related to fair and responsible research assessment. If you are a public or private funder of research interested in joining the group, please reach out to DORA’s Program Director, Anna Hatch (info@sfdora.org). Organizations do not need to be DORA signatories to participate.

Amanda Akemi is DORA’s Policy Intern.

Cross-funder action to improve the assessment of researchers for grant funding
https://sfdora.org/2022/01/19/cross-funder-action-to-improve-the-assessment-of-researchers-for-grant-funding/
Wed, 19 Jan 2022

Funding organizations signal what is valued within research ecosystems by defining the criteria upon which proposals and applicants are assessed. In doing so, they help to shape the culture of research through the projects and people they support. Focusing on a limited set of quantitative criteria favors a narrow view of research and disincentivizes creativity and innovation. To counter this, many funding organizations are shifting to a more holistic interpretation of research outputs and achievements, recognized in the grant evaluation process through narrative CV formats that change what is visible and valued within the research ecosystem. As communities of practice in research and innovation funding, the San Francisco Declaration on Research Assessment (DORA) funders group and the Funding Organisations for Gender Equality Community of Practice (FORGEN CoP) partnered to organize a workshop on optimizing the use of narrative CVs, an emerging method used by research funders to widen the range of research outputs that can be recognized in research assessment.

DORA’s funder discussion group was launched in March 2020 to enable communication about research assessment reform. By learning from each other, funding organizations can accelerate the development of new policies and practices that lead to positive changes in research culture.

The focus on implementing narrative CV formats for grant funding grew organically within the discussion group. In 2019, the Royal Society in the United Kingdom introduced the Résumé for Researchers as an alternative to the traditional CV to recognize the diversity of research contributions through a concise, structured narrative. At the same time, a handful of research funders began to experiment with the use of narrative CVs in grant evaluation. During the first meeting of DORA’s funders group, the Dutch Research Council and the Swiss National Science Foundation shared information about their pilot projects to implement narrative CVs. At subsequent meetings in 2020 and 2021, the Health Research Board Ireland, Luxembourg National Research Fund, and Science Foundation Ireland also shared updates on their implementation of narrative CVs.

FORGEN CoP, led by Science Foundation Ireland, was established in October 2019 and aims to share knowledge and best practice on gender equality in research and innovation funding. Mitigating gender bias within the grant evaluation process was identified as a primary area of focus.

Inequalities in grant evaluation processes are more prevalent when assessments focus on the researcher rather than the research. With the conversation shifting to the assessment of people, the discussion naturally led to the use of narrative CVs and their assessment. Narrative CVs can be used as a method to move away from assessing productivity (quantity over time) toward research achievements, focusing on the quality and relevance of those achievements in relation to the research proposed. They may also support researchers with non-linear career paths or career breaks by removing the focus on productivity, which can be affected by periods of leave for caregiving responsibilities. During subsequent meetings, members shared their experiences implementing narrative CVs and their assessment methods. In a special FORGEN CoP seminar on how funders could mitigate the long-term gendered impacts of the COVID-19 pandemic, narrative CVs were highlighted as a method to address potential gender inequalities and/or the reduction in productivity that some researchers experienced throughout the pandemic.

In September 2021, DORA and FORGEN CoP joined forces to address current knowledge gaps in the use of narrative CVs in grant evaluation and to improve their usefulness as a tool for responsible research assessment. We jointly organized a workshop, facilitated by Dr. Claartje Vinkenburg, convened as two events to enable global attendance. More than 120 participants from over 40 funding organizations and 22 countries joined us to explore strategies to mitigate the potential biases of narrative CVs and to monitor their effectiveness in the grant funding process. During these events, we heard from organizations implementing narrative CVs and from experts on the influence of gender bias and language on research assessment. Researchers spoke to us about decision-making and process optimization for grant funding. In breakout sessions, we identified useful areas of alignment for funding organizations and the studies needed to gather evidence to improve the implementation of narrative CVs.

We are delighted to share a short report that captures our learnings from the workshop and identifies a course of action to improve narrative CVs as a tool for assessment. The workshop and report provide a foundation for the optimization of narrative CV formats upon which we plan to build.

We compiled a list of resources leading up to and during the workshop to help us understand the opportunities and challenges of using narrative CVs for grant funding, such as how bias might influence their evaluation (see below for the list of resources). As narrative CVs can be used for purposes other than grant funding, including the hiring and promotion of research and academic staff, we foresee that this report and the collated resources will also be useful to the wider academic community.

Narrative CVs have been the focus of other international funders’ fora in tandem with the work of DORA and FORGEN CoP. The Swiss National Science Foundation and the Open Researcher and Contributor ID (ORCID) organization established the CV Harmonization Group (H-group) in 2019 to improve academic CVs as a tool for responsible research assessment. In 2021, UK Research and Innovation (UKRI) launched a Joint Funders Group, an international community of practice of research funders, to develop a shared approach for the adoption of the Résumé for Researchers, which UKRI also plans to implement.

Research assessment reform requires collective action, which is why DORA and FORGEN CoP have now partnered with the Swiss National Science Foundation and UKRI to build on this work. We are excited to announce that a second workshop for research funders is planned for February 2022 that aims to identify shared objectives for the use of narrative CVs in grant funding and determine how funding organizations can work collaboratively to monitor their effectiveness. If you work at a public or private research funding organization and would like to participate, please email info@sfdora.org.

Narrative CVs reduce emphasis on journal-based indicators, allow for the recognition of a variety of research contributions, and help to shift the focus from metricized research productivity to research achievements. Through the activities already completed and those planned, we aim to develop a path to collective action on the optimization and application of narrative CVs as a tool for responsible research assessment.

Anna Hatch is the DORA Program Director (info@sfdora.org).

Rochelle Fritch is the leader of FORGEN CoP and Scientific Programme Manager at Science Foundation Ireland (diversity@sfi.ie).

Resources


Narrative CVs: Supporting applicants and review panels to value the range of contributions to research
Elizabeth Adams, Tanita Casci, Miles Padgett, and Jane Alfred

Measuring the invisible: Development and multi-industry validation of the Gender Bias Scale for Women Leaders
Amy B. Diehl, Amber L. Stephenson, Leanne M. Dzubinski, and David C. Wang

Quality over quantity: How the Dutch Research Council is giving researchers the opportunity to showcase diverse types of talent
Kasper Gossink-Melenhorst

Getting on the same page: The effect of normative feedback interventions on structured interview ratings
Christopher J. Hartwell and Michael A. Campion

The Structured Employment Interview: Narrative and Quantitative Review of the Research Literature
Julia Levashina, Christopher J. Hartwell, Frederick P. Morgeson, and Michael A. Campion

The predictive utility of word familiarity for online engagements and funding
David M. Markowitz and Hillary C. Shulman

What Words Are Worth: National Science Foundation Grant Abstracts Indicate Award Funding
David M. Markowitz

Engaging Gatekeepers, Optimizing Decision Making, and Mitigating Bias: Design Specifications for Systemic Diversity Interventions
Claartje J. Vinkenburg

Selling science: optimizing the research funding evaluation and decision process
Claartje J. Vinkenburg, Carolin Ossenkop, and Hélène Schiffbaenker

Are gender gaps due to evaluations of the applicant or the science? A natural experiment at a national funding agency
Holly O. Witteman, Michael Hendricks, Sharon Straus, and Cara Tannenbaum

Studying grant decision-making: a linguistic analysis of review reports
Peter van den Besselaar, Ulf Sandström, and Hélène Schiffbaenker

Gender, Race, and Grant Reviews: Translating and Responding to Research Feedback
Monica Biernat, Molly Carnes, Amarette Filut, and Anna Kaatz

When Performance Trumps Gender Bias: Joint vs. Separate Evaluation
Iris Bohnet, Alexandra van Geen, and Max Bazerman

Initial investigation into computer scoring of candidate essays for personnel selection
Michael C. Campion, Michael A. Campion, Emily D. Campion, and Matthew H. Reider

How Gender Bias Corrupts Performance Reviews, and What to Do About It
Paolo Cecchi-Dimeglio

Inside the Black Box of Organizational Life: The Gendered Language of Performance Assessment
Shelley J. Correll, Katherine R. Weisshaar, Alison T. Wynn, and JoAnne Delfine Wehner

The Gender Gap In Self-Promotion
Christine L. Exley and Judd B. Kessler

How Do You Evaluate Performance During a Pandemic?
Lori Nishiura Mackenzie, JoAnne Wehner, and Sofia Kennedy

Why Most Performance Evaluations Are Biased, and How to Fix Them
Lori Nishiura Mackenzie, JoAnne Wehner, and Shelley J. Correll

The Language of Gender Bias in Performance Reviews
Nadra Nittle

Exploring the performance gap in EU Framework Programmes between EU13 and EU15 Member States
Gianluca Quaglio, Sophie Millar, Michal Pazour, Vladimir Albrecht, Tomas Vondrak, Marek Kwiek, and Klaus Schuch

How Stereotypes Impair Women’s Careers in Science
Ernesto Reuben, Paolo Sapienza, and Luigi Zingales

Grant Peer Review: Improving Inter-Rater Reliability with Training
David N. Sattler, Patrick E. McKnight, Linda Naney, and Randy Mathis

Navigating system biases in decision-making
https://sfdora.org/2021/09/16/navigating-system-biases-in-decision-making/
Thu, 16 Sep 2021

Bias influences the decisions that impact academic careers, from peer review and publication to hiring and promotion. With these ongoing and systemic issues in mind, DORA’s June and July funder discussions focused on navigating system biases in decision-making. Ruth Schmidt, Associate Professor at the Institute of Design of the Illinois Institute of Technology, gave a presentation on how funders might address systems biases that affect funding decisions.

Address cognitive biases by reforming the system

Biases influence decisions throughout all phases of the funding and assessment process: attracting applicants, entry into the applicant pool, judging applicants, representation at the leadership level, and engaging in new perspectives. According to Schmidt, the way an opportunity is presented or framed can prevent potential applicants from seeing it as a viable option. For example, specific language choices used in job solicitations—such as “killer sales instinct” or “superstar”—may prevent women from applying. Another example is “Status Quo Bias,” in which inherently biased “quality” indicators can become entrenched, leaving little room for new or innovative means of assessing applicants.

Schmidt noted that traditional approaches to addressing these issues, like implicit bias training at the individual level, are generally ineffective. She emphasized, instead, the need to create new conditions and systems for decision-making as a way to reduce bias more effectively. Addressing bias at a systems level lessens the reliance on individuals to reduce bias. In practice, creating new systems might look like revising institutional processes for assessment. At each phase of the funding process, there are opportunities for systems-level improvements:

  1. Framing: Attracting applicants
  2. Submission: Entry into the applicant pool
  3. Review: Judging applicants
  4. Selection: Representation at the leadership level
  5. Advocacy: Engaging in new perspectives
(Image credit: Ruth Schmidt, Illinois Institute of Technology)

Commonly occurring systems-level biases

By understanding the existing challenges stemming from biases, funders can address them appropriately. To that end, Schmidt outlined commonly occurring systems-level biases for funders to keep in mind when considering reform:

1. Question the use of quantitative metrics and indicators

Quantitative indicators are viewed as a method to make quick, easy comparisons or seemingly credible determinations about value and are therefore appealing in situations where time and resources are scarce. Even though numbers carry biases, quantitative indicators seem more objective and create a sense of clarity about relative worth. In light of this, funders should be deliberate and informed when relying on quantitative indicators. According to Schmidt, potential ways to reduce the focus on quantitative indicators when assessing applicants could include explicitly asking for narrative content, inserting optionality into the application, or designating advocates among the reviewers to take pro/con positions regardless of a candidate’s bibliometric history.

2. Resist normalizing risk aversion

Schmidt discussed the challenge that risk aversion presents when considering whether to reform assessment practices to address biases. Perceived risks might include the fear that reforming practices will take too much time and effort, or that new practices will turn out to be ineffective. Risk aversion is amplified when decision-makers themselves are rewarded for adhering to old practices and norms. Along these lines, when attempting to manage risk, it is easy to overemphasize the reliability of past data that may not apply to current situations. Older, more biased ways of assessing funding applications, for example, may be perceived as more credible or legitimate simply because they are familiar. Here Schmidt pointed out that it is always easier not to change than to change. It may therefore be necessary to “lower the activation energy” required to implement change, starting with smaller, less risky, and more achievable changes that build toward larger, riskier ones.

3. Avoid neglecting a portfolio view

Schmidt highlighted the benefits of taking a portfolio view toward assessment: looking at patterns across a portfolio captures a deeper, more holistic picture of a candidate’s qualifications. Cluster hires, for example, can help build a critical mass of talent, taking the pressure off any individual to be the sole representative. Incorporating a portfolio view can reveal positive and negative aspects of an application that are obscured when traits are assessed individually, and can also help assessors recognize when a tendency to look for qualified applicants may inadvertently select people all cut from the same mold.

4. Avoid prioritizing “shiny objects”

When assessing quality, it is easy to overestimate institutional affiliations or pedigree, past awards, and accolades as measures of current worth. Historical success can reinforce the notion that association equals credibility, an assumption that can distract reviewers from taking a deeper, more holistic look at applicants. One way to remove these proxy substitutes for worth may be blinded applications (e.g., concealing applicant names, the institutions where degrees were obtained, and journal names), which can help reviewers evaluate attributes more accurately, without the bias of name recognition and prestige. Importantly, in fields of research that are smaller or more specialized, the efficacy of blinded applications may be limited by the level of familiarity within the small group.

5. Question entrenched narratives about what makes sense and who belongs

Schmidt concluded by discussing the importance of recognizing and understanding implicit narratives that an organization, institution, or funder might unintentionally convey. For example, a wall of portraits of school deans who are homogeneous in terms of race and gender presents an implicit narrative about who succeeds, the school environment, and acceptance demographics.

Along similar lines, personal experiences can also introduce biases. For example, it is more comfortable to approve of a particular applicant if they look similar to the reviewer, remind the reviewer of themselves at a younger age, or have a backstory similar to the reviewer’s. This bias, a form of “anchoring,” unintentionally contributes to homogeneity. Anchoring can be minimized by purposefully creating teams of reviewers with diverse perspectives across seniority, expertise, experience, race, and gender. Additionally, it is important simply to recognize, and begin to question, implicit narratives hidden in plain sight.

DORA’s funder discussion group is a community of practice that meets virtually every quarter to discuss policies and topics related to fair and responsible research assessment. If you are a public or private funder of research interested in joining the group, please reach out to DORA’s Program Director, Anna Hatch (info@sfdora.org). Organizations do not need to be DORA signatories to participate.

Haley Hazlett is DORA’s Program Manager.

Updates on Recognizing and Supporting Indigenous Research from New Zealand and Australia
https://sfdora.org/2021/04/27/updates-on-recognizing-and-supporting-indigenous-research-from-new-zealand-and-australia/
Tue, 27 Apr 2021

“When quantitative measures have an outsized impact on how people are rewarded, it can increase the temptation to focus on a narrow set of activities and reduce investment in other meaningful, but less rewarded, achievements.” As this statement from DORA’s Unintended Cognitive & Systems Biases brief suggests, using traditional bibliometric standards of “quality” to inform research assessment can have a deleterious effect on how, and what types of, research are valued.

The 2019 review of the Australian and New Zealand Standard Research Classification (ANZSRC) found a lack of visibility, and therefore recognition, of Aboriginal, Torres Strait Islander, Māori, and Pacific Peoples research, which impeded policy development. It also reported a reduced ability for people in Indigenous communities to access Indigenous-focused research and data. The review resulted in a new classification system with divisions for Indigenous research, created in consultation with Indigenous research communities in Australia and New Zealand. These findings point to the importance of ongoing efforts in Australia and New Zealand to recognize and support Indigenous research, which were discussed at DORA’s second Asia-Pacific Funder Meeting on March 8, 2021. At this meeting, representatives from some of the public and private funders of research in New Zealand and Australia exchanged information about new and ongoing Indigenous research policies and funding opportunities.

Research that specifically holds value for local and Indigenous populations may have a lower citation impact than global or regional baselines, as found in the 2020 study The state of Aboriginal, Torres Strait Islander, Māori and Pacific Peoples research. In the current funding climate, lower citation impact translates to greater difficulty garnering research funding. To address this challenge, the Australian National Health and Medical Research Council (NHMRC) discussed its reforms to incorporate research impact into its new track record framework and to redefine ‘quality’ by asking researchers to include their best publications with an explanation of why those publications are impactful. NHMRC emphasized the importance of including different fields of research in the assessment and funding process.

In addition to efforts to move away from bibliometric indicators, the need for current and ongoing mechanisms to better acknowledge Indigenous researchers and research was also discussed. An example at the institutional level is the Carumba Institute at Queensland University of Technology (QUT). The Carumba Institute is specifically dedicated to prioritizing Indigenous Australian research and developing Indigenous Australian researchers, and it supports the QUT-published International Journal of Critical Indigenous Studies. Research conducted within the Carumba Institute includes the development of Indigenous people’s education, academic professional development, and organizational leadership. At a national level, NHMRC’s Road Map 3 helps chart the direction for Indigenous health and medical research investment. NHMRC also recently called for submissions of research priorities in Aboriginal and Torres Strait Islander health. Additionally, improving the health of Aboriginal and Torres Strait Islander peoples is a Strategic Priority in the NHMRC Corporate Plan 2020–2021. Key actions of this plan include establishing a National Network for Aboriginal and Torres Strait Islander Health Researchers, which will bring together Indigenous health research groups and their support networks to build the capacity and capability of Aboriginal and Torres Strait Islander health researchers. NHMRC also added that it specifically supports research that will provide better health outcomes for Aboriginal and Torres Strait Islander peoples and is committed to allocating at least 5% of the Medical Research Endowment Account annually to Aboriginal and Torres Strait Islander health research. Finally, the tripartite agreement between the Canadian Institutes of Health Research (CIHR), the Health Research Council of New Zealand (HRC), and the NHMRC is an important commitment to collaborate on mutual health research priorities for these three countries.

Similarly, HRC is refining key pieces of policy related to equity and the prioritization of funding for Māori health research. Over the past few years, in addition to ring-fenced funds targeting Māori health research, the HRC has been progressively implementing requirements for Māori health advancement to be embedded within all biomedical and applied research. Applicants are asked to explain specifically how their research will support Māori health advancement, and this is assessed as part of peer review. In terms of structure, HRC also discussed its dedicated Māori Health Research Committee, whose members are recruited from among qualified Māori health researchers. The Māori Health Research Committee is responsible for distributing funds and advising the Council on Indigenous-related strategy and policy creation. It was also noted that a positive outcome of HRC Indigenous funding initiatives over the past years has been an increase in New Zealand universities creating positions to support Indigenous researchers. Although HRC currently has this infrastructure in place, it is constantly looking to improve through iterative feedback and plans to explore the relevance of narrative CVs in its own context. Additionally, the Ministry of Business, Innovation and Employment offers an Equity, Diversity and Inclusion Capability Fund that specifically seeks to support research organizations to grow an equitable and inclusive workforce.

DORA’s funder discussion group is a community of practice that meets virtually every quarter to discuss policies and topics related to fair and responsible research assessment. If you are a public or private funder of research interested in joining the group, please reach out to DORA’s Program Director, Anna Hatch (info@sfdora.org). Organizations do not need to be DORA signatories to participate.

Haley Hazlett is DORA’s Policy Intern.
