DORA releases new engagement & outreach policy for organizational signatories
Fri, 20 Dec 2024 05:56:12 +0000
https://sfdora.org/2024/12/20/dora-releases-new-outreach-policy/

Today, DORA is announcing an update to its policy for organizational signatories.

What is the new policy?

Beginning on January 1, 2025, organizations and institutions that sign DORA will need to submit a public statement at the time of signing in order for their signature to be approved. Previously, organizations were asked to post a public statement when they signed DORA, but signatures were approved without one, on the expectation that public statements would be forthcoming.

Why is the policy changing?

In November 2022, DORA announced its original outreach policy. The policy had a built-in “sunset clause” requiring DORA leadership to review it in December 2023 in light of lessons learned during the previous year. A task force of the DORA Steering Committee was convened to review the policy. As part of that review, DORA staff examined the websites of organizational signatories and found that very few had posted the requested public statement. This suggested that the first version of the policy was not having the desired effects, such as ensuring that organizations’ communities are informed about the implementation of DORA principles.

The task force also noted that organizations similar to DORA had more stringent requirements for members, yet organizations were still willing to join.

DORA has now existed for more than ten years. While agreeing to principles remains a valuable and necessary part of consciousness raising and awareness, it is time for organizations to move towards implementing DORA principles. We believe public statements are a useful starting point for organizations to articulate where they are in their reform practices and how they aspire to improve them.

What about organizations that have already signed DORA?

We will continue to encourage these organizations to post a statement. We will be reaching out to as many existing signatories as possible to encourage them to develop a public statement, post it by January 1, 2026, and submit it to DORA staff. DORA staff will update the database as new public statements are received.

How do I know who has posted a statement?

The DORA signatory page will be updated to show the link to the public statement alongside the name of the organization.

What should be in a public statement?

Below are five steps to guide the writing of a statement:

  1. Identify your institution and describe the institution’s goals or priorities regarding research assessment. For example, “Our institution is a research extensive university that seeks to engage our undergraduates in research, and to ensure faculty who do so are rewarded for that work,” or “Our institution values research that impacts our regional community and is accessible in their preferred language, and we seek to ensure research outputs are not biased against non-English publications.”
  2. Describe DORA and link to the DORA website. For example, “The San Francisco Declaration on Research Assessment (DORA) is a global initiative dedicated to improving research assessment practices.” You may also include a DORA signatory badge.
  3. Indicate when the institution signed DORA and how DORA’s principles align with institutional goals. For example, “The institution signed DORA in July 2024. Reducing our reliance on journal Impact Factors will allow our researchers a wider array of publishing options in our regional language.”
  4. Describe any current plans regarding research assessment and DORA implementation, with links or documents if possible (e.g., consultations, policies, strategic plans, training). For example, “The Department will be creating a task force to review its assessment practices, and is tasked with delivering a strategic plan by the end of the calendar year.”
  5. Provide contact information to relevant individual(s) or office(s) responsible for research assessment and DORA implementation. For example, “The implementation of DORA in the college is being administered by the Dean’s office. Questions can be directed to the Associate Dean at <email address>.”

Some examples of previous statements are below. Because they were published before this guidance, not all examples contain every element listed above, but they fulfilled the original mandate of providing public recognition of support for DORA on the organization’s own website.

Examples of statements by universities:

Examples of statements by journals or publishers:

Examples of statements by a funding agency and other organizations:

Clarivate’s actions regarding eLife: DORA’s response
Mon, 25 Nov 2024 08:42:35 +0000
https://sfdora.org/2024/11/25/clarivates-actions-regarding-elife-doras-response/

Publishing requires constant innovation and renewal in order to remain relevant. eLife has disrupted the traditional model of scholarly publishing: since its inception, innovation and academic-led publishing have been at the core of eLife’s policies and processes. Under its current model, submissions must be posted as preprints prior to peer review; the reviews are then published alongside the article, together with an eLife Assessment, as a “Reviewed Preprint”. In this publishing model, there is no binary determination of acceptance or rejection after peer review. This approach addresses the fact that articles submitted, reviewed, and rejected at one journal tend to ultimately be published elsewhere (and consequently indexed), often unchanged. eLife’s model has brought valuable innovation to peer review and hands control back to the authors of the research.

The recent announcement by Clarivate that it has suspended indexing of eLife in the Web of Science Science Citation Index Expanded (SCIE), and by extension eLife’s eligibility for a Journal Impact Factor, highlights the challenges of disrupting and innovating within the scholarly publishing system. Clarivate has indicated that this is because it wants to index only a curated feed of papers from eLife, rather than all papers that undergo peer review regardless of outcome. Since eLife’s indexing was put on hold, some Chinese authors have stopped submitting and others have withdrawn their manuscripts, indicating how strongly author perceptions in China depend on the Journal Impact Factor. In some jurisdictions, including China, journal articles must be indexed in Web of Science to “count”. This move may also limit the discoverability of eLife’s articles.

This development reinforces how a commercial entity such as Clarivate can, through its ownership of scholarly databases and indices, hold the academic community to ransom. Clarivate’s announcement is disappointing: it both punishes innovation in peer review and disregards the important role of authors in deciding how and where their research should be published.

As funders and institutions increasingly move away from using single metrics to assess research and researchers, the Journal Impact Factor is becoming less and less relevant. We know, for example, that funder journals such as Open Research Europe, Wellcome Open Research and Gates Open Research, and indeed all the F1000 titles, have never had a Journal Impact Factor and do not need one to show the impact that they have within their communities.

eLife has long been a supporter of DORA and was an early signatory. Our view is that the innovative initiatives by eLife and others are crucial to ensuring that scholarly communication continues to evolve in a variety of ways to meet the changing needs of the research ecosystem in the 21st century. We are concerned by the action Clarivate is taking, not because eLife may no longer be eligible for a Journal Impact Factor, but because Clarivate can use its market dominance to shut down innovation.

We therefore support eLife, encourage it to continue innovating, and encourage other journals to consider doing the same.

 

DORA Initiatives Meeting: Academic evaluation in Uruguay and updates from group members
Mon, 11 Nov 2024 05:04:03 +0000
https://sfdora.org/2024/11/11/dora-initiatives-meeting-academic-evaluation-in-uruguay-and-updates-from-group-members/

Each quarter, DORA holds a Discussion Group meeting for National and International Initiatives working to address responsible research assessment reform. This community of practice is a space for initiatives to learn from each other, make connections with like-minded organizations, and collaborate on projects or topics of common interest. Meeting agendas are shaped by participants. If you lead an initiative, coalition, or organization working to improve research assessment and are interested in joining the group, please find more information here.

During the DORA National and International Initiatives Discussion Group meeting on August 13, 2024, members of the group discussed changes in research policy happening at each of their respective organizations. The group also heard a presentation from Fernanda Beigel, Chair of the UNESCO advisory committee for Open Science (2020-2021), Principal Researcher at the National Scientific and Technical Research Council (CONICET), Argentina, and Chair of Argentina’s National Committee for Open Science. Beigel, who was joined by Celia Quijano Herrera, Maria Soledad Gutierrez Parodi, and Erika Teliz Gonzalez of the Uruguay National Council for Innovation, Science and Technology (CONICYT), presented her 2024 report Un estudio de la evaluación académica en Uruguay en perspectiva reflexiva (A study of academic evaluation in Uruguay from a reflective perspective). The report and supporting materials are available in Spanish, and the executive summary is available in English here.

A study of academic evaluation in Uruguay from a reflective perspective was commissioned through a public call for proposals by CONICYT, out of an interest in evaluating the evaluation of researchers in Uruguay and supporting new institutional assessment practices. This work provides: 1) an important foundational understanding of academic incentives (or disincentives) in Uruguay; and 2) a set of evidence-based recommendations for how to implement more responsible research assessment practices in Uruguay.

Beigel began her presentation by providing context on Uruguay: it is a relatively small country of over 3 million citizens, with 1.84 researchers per 1,000 inhabitants. One public university (the University of the Republic, Uruguay) accounts for 75% of the national research output. In Uruguay, there is extensive overlap between national-level and institution-level research assessment. Beigel’s work provides a unique perspective on and insight into the landscape of assessment and reform in the context of a small country with extensive national oversight of research assessment. This is important because the degree of interconnection between institutional and national assessment policy determines the key challenges, logistical considerations, and approaches that must be taken into account when advocating for or reforming assessment practices.

Beigel outlined the major systems of research assessment in Uruguay and their relationship to one another: the University of the Republic Full Dedication Regime (a university-level assessment system known as RDT) and the National System of Researchers (a national-level assessment system known as SNI), the latter of which is integrated into the Uruguayan Curriculum Vitae system (CVUy). Within these assessment systems, reviewers retain a high degree of autonomy to make decisions regarding faculty promotion. The report analyzes the relationships between the evaluation systems, professional trajectories, and methods of knowledge circulation (e.g., publications, books), combining those data with interviews and focus groups with “members of evaluation committees, officials, academic-scientific referents, and researchers.”

The report includes twenty recommendations for each of these systems across a range of topics. The recommendations for the production indicators and circulation of knowledge include:

  • 9. Broaden the notion of scientific production to include diverse profiles, valuing traditional publications as well as technological production, technical contributions, artistic productions, and social reports with public policy recommendations.
  • 10. Promote publication in quality scientific journals, particularly diamond open access journals edited in the country and in Latin America, stimulating quality communication circuits and the expansion of audiences.
  • 11. Value the tasks of academic editing (journal management, participation in editorial teams) within the permanence and promotion processes of the country’s academic evaluation systems.

Beigel, Quijano Herrera, Gutierrez Parodi, and Teliz Gonzalez also discussed the challenges inherent to implementing reform when assessment standards are set at a national level. While a long-term goal is changing national policy to incorporate the report recommendations, they highlighted that an immediate and critical starting point would be the autonomous reviewer panels that make assessment decisions. Sharing resources on the misuse of quantitative indicators and best practices in research assessment with these reviewer panels will be an important step towards building buy-in and support for reform.

We also heard updates from other discussion group member organizations, which are briefly summarized below:

Haley Hazlett was DORA’s Program Manager.

A publication’s “what” should count more than its “where”: why we should waive journal titles.
Mon, 11 Nov 2024 04:48:25 +0000
https://sfdora.org/2024/11/10/a-publications-what-should-count-more-than-its-where-why-we-should-waive-journal-titles/

Adrian Barnett, Queensland University of Technology

The winter months can get cold in Belfast, the largest city in Northern Ireland where the Titanic was designed, built and launched in the early 1900s.

Not seriously cold of course, never sub-zero depths of cold that were the undoing of the Titanic on its maiden voyage, but a ‘nippy’ cold… a cold that comes with biting winds and rain that means only the bravest of souls venture outside in the winter months without the insurance policy of a good coat.

It’s the type of cold that made Belfast a big outlier for winter deaths in our analysis of multiple cities across the world, with a death rate much higher than in cities with a similar climate, such as Gothenburg.

So it was with some delight that my mum, a proud Belfast girl, recently told me she was getting a new boiler installed courtesy of the city’s council – a small win that means this winter should be warmer than the last one.

“Wow, how did that happen?” I asked, as she shared the good news in one of our regular calls.

“I don’t know,” she said, “but it’s nice to be getting something for a change”.

Thinking about how my parents were getting a free boiler, I wondered if our analysis of winter deaths had played a part.

I remember hearing that our results caused some concern in Northern Ireland and led to government reports that examined the issue. Without knowing for certain, it seems that these investigations led to policies to improve housing in Northern Ireland, and a new boiler for my folks.

These are the happy moments that highlight how your research can make a positive difference. We did the work and published it; it was read and then acted on, in a long chain of events that took many years.

How crucial was the journal? 

The publication in the Journal of Epidemiology and Community Health was a vital link in the chain, as having a peer reviewed and easily accessible version of our work was necessary for its uptake.

But would the paper have had the same impact if it was published in another journal? I think it would.

A “better” journal might have got more media attention, and our findings might have been picked up faster by policy makers. Conversely a “lesser” journal may have slowed its uptake, although it would still be findable by anyone doing a thorough review.

Where versus What

The journal is important, but it is a means to an end rather than the end itself. Alas for many researchers the journal is now the end game, with an enormous focus on “where” to publish rather than “what”, with the “where” meaning the highest journal impact factor.

This mania for impact factors is driven by fellowship and hiring committees who often rely on the journal impact factor as a heuristic of research quality. These committees use shortcuts because researchers are now publishing so much that there’s no time to read every candidate’s work. Hence the “where” stands in for the “what”.

But policy makers care about the “what”. For example, in our recent study of journal impact factors a researcher commented that policy makers “Don’t care about journal impact factor, they only want you to give them a half-page summary.” For real impact, researchers should publish where their target audience is most likely to read it.

Goodbye to some of that

I am now so disillusioned with the research world’s focus on journal prestige that I recently removed all the journal names from my CV. I want any interest in my work to be based on the paper’s title, not on any prestige conferred by the “top” journals.

I’ve been called a hypocrite for publishing this change in Nature, which is a key target for “where” papers. But where better to reach those who are focused on the “where”?

Removing journal titles from publication lists is a simple step that could be used in all forms of research assessment.

Funders and universities could ask for CVs without journal titles in their fellowship and job applications. A determined committee member could still find the impact factors, but it should eventually sink in that it is what people have achieved that matters.

Google Scholar could remove journal titles from users’ profiles. All papers on Scholar are hyperlinked, so readers don’t need to know the journal if they want to read it. Academia is clinging to an antiquated and fiddly system of writing out the full reference with the journal title, volume, issue, etc.

That warm feeling

I enjoyed explaining to my parents how my research potentially helped them get a new boiler, especially as it’s usually hard to explain what I do.

I didn’t get into research for personal benefit. Like most researchers, I started out idealistic about how data and evidence can improve the world. Unfortunately, many researchers have become sidetracked by self-serving competitions. They need to remember why they started their careers and forget the journal rankings they were happily ignorant of when those careers began.

Adrian Barnett is a statistician at Queensland University of Technology.

Narrative CVs: How do they change evaluation practices in peer review for grant funding?
Wed, 06 Nov 2024 04:05:30 +0000
https://sfdora.org/2024/11/05/narrative-cvs-how-do-they-change-evaluation-practices-in-peer-review-for-grant-funding/

Judit Varga & Wolfgang Kaltenbrunner, Research on Research Institute (RoRI); Centre for Science & Technology Studies, Leiden University.

Contact: w.kaltenbrunner@cwts.leidenuniv.nl

This blog post reports some preliminary findings from a project designed to investigate the evaluative use of narrative CVs. When funding organizations convene review panels to assess grant applications, reviewers need to agree on what constitutes “quality” and how to compare different applicants and their submissions. Historically, reviewers have tended to facilitate this process by relying on quantitative metrics, such as the number of publications or the prestige of the journals and institutions associated with an applicant. These indicators, theorized as “judgment devices” by researchers like Musselin (2009) and Hammarfelt & Rushforth (2017), reduce the complexity of comparing candidates and their suitability to carry out projects by breaking the decision down into a simpler, quantitative comparison.

However, there is growing concern that relying too heavily on these traditional markers might be doing more harm than good. By focusing on numbers and proxies for academic prestige, reviewers may be losing sight of an applicant’s achievements and the quality of their work in a broader sense. Narrative CVs are designed to encourage reviewers to consider the achievements and competence of a candidate in suitable detail and in the context of their proposed projects. At the same time, very little is known about the practical effects and real-world use of narrative CVs by reviewers in funding panels.

To remedy this, researchers at the Research on Research Institute (RoRI) have co-designed a research project in collaboration with the Dutch Research Council (NWO), the Swiss National Science Foundation (SNSF) and the Volkswagen Foundation. The project is currently ongoing and draws mainly on participant observation in consecutive review panel meetings as well as interviews with reviewers. The quotes presented in this blogpost document discussions within peer review panel meetings at NWO and SNSF.

Multiplying forms of excellence

Our findings so far suggest that the introduction of narrative CVs can trigger debates about the nature of scientific excellence in review panels. We encountered multiple moments where the use of the narrative CV format prompted reviewers to gradually broaden the range of achievements they valued in applicants, partly depending on the outlook of the respective project proposals. In one representative situation, a reviewer in the SNSF sciences panel expressed their surprise at the fact that the applicant had foregrounded the collaborative nature of their work in the CV, instead of focusing on publications. The reviewer initially scored the applicant lower as a result:

“One surprising thing about achievements [the narrative aspects of the SNSF CV]: there are significant ones, from the postdoc there are publications, but somehow [they are] not described as an achievement, I was wondering why (…) Maybe the candidate thought it was better to emphasize other achievements, like the collaborative nature of work, but anyway this is why I gave [a lower score].”

Later, following a discussion among panelists about how to interpret the proposal and the submitted narrative CV, another reviewer began to explicitly characterize the applicant’s profile as a ‘team player’. A subtle but important shift appeared to have taken place in the evaluative reasoning: Rather than assessing the applicant against a singular default ideal of a scientist whose standing can be inferred from the quantity of high-impact publications, a reviewer introduced a frame of reference where collaborative qualities and the ability to facilitate joint work in a laboratory context were legitimate criteria. The question then was, is this the right profile for the proposed grant and the research project?

“In conclusion, a strong candidate, I was a bit too harsh, especially on the project […] the profile is a bit ambiguous to me, but I’m happy to raise [the points] […] I think the candidate is a team player in the lab, you can interpret it positively or negatively but that’s the profile.”

This situation is a particularly clear example of a dynamic that we recurrently observed throughout all of our case studies, namely a gradual pluralization of the notion of excellence over the course of the panel meetings.

Resistance

The above example illustrates evaluative learning in the form of rethinking publication-centric assessment criteria in light of narrative CVs and related guidelines. Yet on a number of occasions, some reviewers explicitly doubled down on those more ‘traditional’ criteria. For example, NWO instructed reviewers not to name the journals in which an applicant has published, to prevent them from inferring the quality of a publication from the perceived prestige of the venue in which it appeared. In line with this, in the first review round of the NWO social sciences panel, reviewers did not mention any journals by name. Yet in the second round, one reviewer invoked journal prestige twice when assessing (two different) applicants.

“As for the applicant, I can’t judge the quality of publications, but the applicant published in Nature Genetics, maybe someone can tell me if it’s good but “everything with Nature sounds very good to me” [laughs a bit], I was very impressed with the candidate.”

When discussing another applicant, the same reviewer again made a reference to the applicant’s publications in prestigious journals as a proxy for their quality:

“Quality candidate. 5 publications in Nature Scientific Reports and other prestigious journals. […]”

This comment sparked some confusion, as reviewers failed to locate the publication mentioned. After a while, the NWO program officer who helped chair the panel meeting cautioned that the perceived prestige of the publication venue should not be taken into account as a factor in the evaluation in the first place. Yet rather than giving up, the reviewer noted this comment with a disapproving gesture and continued the effort to locate the publication in question.

We propose that in order to make sense of such situations and devise practical strategies for handling them in future panel meetings, it is important to disentangle the different motivations reviewers might have for doubling down on publication-centric evaluation criteria, even when they are explicitly cautioned not to use them. Sometimes, they might do so simply because they feel it makes sense in the context of a given application, for example projects aiming primarily for traditional academic impact. Yet on other occasions, resistance to the narrative format might be better understood as a response to what reviewers perceive as an unjustified intervention by funders and other reform-minded actors. After all, narrative CV formats can be seen not simply as a well-intentioned attempt to improve the fairness of evaluative decision-making in peer review, but also as a threat to the autonomy of reviewers and academic communities to define notions of quality.

Story-telling skills as a new bias?

An important concern for many observers appears to be the emphasis narrative CV formats place on writing skills and the ability or willingness to present oneself in the best possible light. These cultural competences and inclinations may be unequally distributed among different groups of applicants, for example to the disadvantage of applicants with working-class backgrounds, female applicants, or applicants from different cultural backgrounds. Yet discussions about the bias this may create typically focus solely on input from the applicant’s side, and they implicitly presuppose that a highly positive self-representation is always a good thing. We instead found that reviewers may react negatively when they feel that applicants exaggerate their achievements; during panel meetings, reviewers flagged several such cases.

For example, in the social sciences panel of SNSF, a reviewer felt that an applicant had grossly overstated their achievements:

“[The Scientific_Chair reading the evaluation of a Reviewer]: [The Reviewer] had problems with the tone of the CV as well as the proposal, [they contained] self-aggrandising statements about having invented a new field.”

In another situation, another reviewer explicitly admitted “to be turned off” by an applicant using similarly hyperbolic language in their narrative CV, noting that it was “not grounded in science.”

Conversely, a situation we observed in the natural sciences panel shows that reviewers do appreciate enthusiastic narrations, but the fundamental requirement is for the narratives to be credible:

“Reviewer: (…) also the description of academic career is credible and enthusiastic.”

In sum, whilst narrative CVs might require applicants to write more than traditional CVs, this does not mean that reviewers will appreciate academic self-aggrandizement or inflationary rhetoric. Instead, it appears that narrative elements place the emphasis on a new form of credibility in the relation between biographical self-representation and the achievements of a peer, which we suggest requires continued study.

Conclusions

This blog post documents in equal measure the successes and challenges of narrative CV formats, and also the demand for more research on their practical use. It is clear even on the basis of our preliminary observations that narrative CVs do on many occasions stimulate productive reflections on the meaning of excellence in specific contexts, thus multiplying its meanings and perhaps contributing to challenging the terminology of excellence. We also feel that attention to nuance is crucial for understanding resistance to narrative CVs. Some forms of resistance might well provide input for the further development of narrative CV formats. Where resistance is more related to a (perceived) struggle over reviewer autonomy, a different type of response will be required, one that addresses questions of power relations between scientists, institutions, and funders in a more explicit way. Our finding that reviewers tend to react negatively to self-aggrandizing language in narrative CVs in turn cautions us that evaluation reform is a moving target that can only be studied as it unfolds.

As should have become clear, narrative CVs are not an easy fix for peer review. Instead, they prompt reviewers, and the institutions that introduce them, to ask fundamental questions about the fairness and fit of quality criteria in peer review afresh. While not ‘efficient’ in a practical sense, we feel that this disruption to established routines of evaluative problem-solving is a crucial benefit in its own right.

The academic community can benefit from narrative CVs, particularly if the questions and complexities they raise are embraced as opportunities for discussion. For example, the findings presented in this blog post signal opportunities to further discuss notions of excellence, values in academic culture and governance, training about narrative CVs for applicants, and CV design in light of the potential biases the new format may introduce. However, this process requires careful management, drawing on curious and innovative ideas for academic futures. In the absence of this, given the time-constrained nature of review meetings and academic life, it can be all too easy to glide over the opportunities and challenges afforded by narrative CVs.

References

Hammarfelt, B. & Rushforth, A.D. (2017). Indicators as judgment devices: An empirical study of citizen bibliometrics in research evaluation. Research Evaluation 26(3): 169–180.

Musselin, C. (2009). The Market for Academics. New York: Routledge.

Updates from the Asia-Pacific Funder Discussion Group 18/09/24
Mon, 28 Oct 2024 21:36:13 +0000
https://sfdora.org/2024/10/28/updates-from-the-asia-pacific-funder-discussion-group-18-09-24/

Nineteen representatives from seven research funder organisations participated in the most recent quarterly Asia-Pacific Funder Discussion Group meeting hosted by DORA.

New DORA staff members Liz Allen and Janet Catterall were introduced to the group. The position of DORA Program Manager is currently vacant.

DORA updates to the group included the upcoming implementation guide and guidance document, anticipated for early next year, and the imminent release of three toolkits, which emerged from the workshops DORA held in May 2024 with the Elizabeth Blackwell Institute/MoreBrains project. These workshops sought to find ways to increase equality, diversity, inclusion, and transparency in funding applications. The toolkits will focus on simplifying funding call structures, changing application processes to reduce the likelihood of bias in outcomes (e.g., recognising a broader range of research outputs, narrative CV formats, etc.), and improving training for reviewers and evaluators. The team is also producing three case studies about the funding application process.

Participants then engaged in a roundtable discussion where members shared their work from the past year, including collaborations, asked questions of the group and suggested topics they would like the group to cover in the future. Common themes that emerged from these discussions included:

  • Trialling new selection processes, fellowships and panels to foster greater inclusivity of underrepresented communities, particularly Aboriginal and Torres Strait Islander, Māori and Pasifika applicants and assessors, and exploring ways to engage with these communities.
  • Experimenting with the inclusion of a narrative CV option in grant applications.
  • Exploring alternative metrics for research assessment and new KPIs.
  • Investigating different models that support strategic discretion in decision making to further equity and fairness.
  • Condensing and simplifying the application process.
  • Fostering a greater understanding of how artificial intelligence tools can be or are being utilised by applicants and assessors.

The utility of artificial intelligence was further discussed in terms of the assessment process itself: can these technologies be used for the initial screen? To find new peer reviewers? To summarise panel results? How can an agency build such tools into the review process? Funders reported that AI tools had already been trialled for assigning peer reviewers and for aligning applications with assessors. Confidentiality is a major consideration, so it is recommended that only the title and keywords, not the abstract, be used in prompts.

The last quarterly meeting for 2024 will feature a presentation from Global Research Council RRA Working Group members Joanne Looyen (MBIE) and Anh-Khoi Trinh (NSERC).

Call for member presentations for 2025

Encouraging innovation in open scholarship while fostering trust: A responsible research assessment perspective
Thu, 01 Aug 2024 15:39:51 +0000
https://sfdora.org/2024/08/01/encouraging-innovation-in-open-scholarship-while-fostering-trust-a-responsible-research-assessment-perspective/

The following post originally appeared on the Templeton World Charity Foundation blog. It is reposted here with their permission. 

Emerging policies to better recognize preprints and open scholarship

Research funding organizations play an important role in setting the tone for what is valued in research assessment through the projects they fund and the outputs they assign value to. Similarly, academic institutions signal what they value through how they assess researchers for hiring, promotion, and tenure. An increasing number of research funding organizations and academic institutions have codified open scholarship into their research assessment policies and practices. Examples include Wellcome, the Chan Zuckerberg Initiative, the Ministry of Business, Innovation & Employment of Aotearoa New Zealand, the University of Zurich and the Open University.

This shift is accompanied by policies that recognize preprints as evidence of research activity (e.g., NIH, the Japan Science and Technology Agency, Wellcome, EMBO, and some UKRI Councils). Some funders, such as EMBO and many of the cOAlition S funders, are now formally recognizing peer-reviewed preprints at the same level as journal articles. A preprint is a scholarly manuscript that the authors upload to a public server but that has not (yet) been accepted by a journal (it is usually the version submitted to a journal, if the authors do decide to take it further to journal publication). It can be accessed without charge and, depending on the preprint server, is screened and typically posted within a couple of days, making it available to be read and commented on. Because preprints offer a means outside of journals to share research results, they have the potential to support responsible assessment by decoupling journal prestige from assumptions about the quality of research findings. Preprints also enable sharing of a range of outputs and results that may not be attractive to a journal (for example, research that is technically sound but has a limited scope, or null/negative findings). Because they are usually free to post, preprints can also help reduce the author-facing costs often associated with traditional open access publication, although preprint servers are therefore typically reliant on ongoing grant funding to maintain their sustainability.

One of the most recent examples of a substantial policy change is the Bill and Melinda Gates Foundation’s March 2024 announcement of its upcoming Open Access Policy in 2025. The 2025 policy will introduce two changes for grantees: grantees will have to share preprints of their research, and the Gates Foundation will stop paying article processing charges (APCs). As with many shifts towards open access policies, these changes were motivated by several factors, including the Gates Foundation’s desire to provide journal-agnostic avenues for research assessment and to empower their grantee authors to share different versions of their work openly on preprint servers and without the costs of APCs. Reducing the costs associated with publishing research and increasing readers’ accessibility to research via preprint servers also supports more equitable access to research products.

The Gates Foundation used ten years of existing data to inform its decision to refine its Open Access Policy, has engaged actively with community dialogue, and has made it clear that this is a first step on a longer path to better evaluate research on its own merits and increase accessibility to research. Notably, the Gates Foundation took this step after taking into account the existing shortcomings of current open access models that rely on APCs that effectively limit global access to research outputs. Given that the Gates Foundation is “the wealthiest major research funder to specifically mandate the use of preprints,” this approach is groundbreaking in its emphasis on preprints and its shift away from spending on APCs. It has also placed a spotlight on these issues and catalyzed discourse around trust in preprints. Policy changes like this indicate a willingness among research funders to take steps toward change and move away from recognizably flawed processes. This is an important step, since flawed processes are often retained because fixing them or adopting new processes is perceived as too high effort and too high risk (also known as the status quo bias).

Overcoming the status quo and tackling new challenges

Overcoming the status quo bias is difficult, but not impossible. Indeed, a common concern around changing research assessment processes to include new ways of sharing knowledge is taking a leap into the unknown. Because these new policies are on the leading edge of change, there are gaps in our knowledge around their effects on research culture and assessment. For example, will assessors penalize researchers who include preprints in their CVs or will research culture shift what it values?

Another key question centers on how preprints will impact traditional manuscript peer review processes. Traditionally, journals select a panel of peer reviewers, ideally field experts, who voluntarily review manuscript submissions for free. These detailed reviews inform an editor’s decision on whether to publish a manuscript. Preprints, by contrast, are generally only lightly checked before being made public, after which anyone can read them, comment on them, and provide peer feedback. This light screening is a common concern, though it is important to note that issues with rigor and reproducibility exist within current peer-reviewed publication systems as well. Preprint peer feedback holds the potential for positive change, opening up the opportunity for authors to receive a wide range of community input and making it easier to spot issues early.

One step to foster trust in preprints will be to create a shared understanding of what preprint “review” is. What qualifies as review in the context of a preprint was recently defined via expert consensus as “A specific type of preprint feedback that has: Discussion of the rigor and validity of the research. Reviewer competing interests declared and/or checked. Reviewer identity disclosed and/or verified, for example, by an editor or service coordinator, or ORCID login.” Additionally, there is a growing number of preprint review services, for example VeriXiv (created through a partnership between the Gates Foundation and F1000), Peer Community In, and Review Commons, which have all created infrastructure and pipelines to verify preprints and facilitate structured, invited expert peer review of preprints after they are posted. They provide journal-independent assessment of the preprint, typically using various forms of open peer review, making the process more transparent, fostering accountability, and enabling reviewers to be rewarded for their contributions to the field. However, some have raised concerns about whether greater transparency increases the risk of retaliation, particularly for early career researchers, although recent evidence suggests that more research is needed to determine whether repercussions occur.

Questions like these are legitimate and highlight the value of the organizations that are actively seeking to answer them, like the Research on Research Institute, which studies the results of research policy reform using the same scholarly rigor that reform efforts are trying to foster in the academic ecosystem. Organizations like ASAPbio are working to address concerns around the agility of preprint servers to correct or retract preprints and to support rigorous and transparent preprint peer review processes.

In the meantime, fear of unintended consequences is not reason enough to avoid trying to improve research incentives and the culture associated with it. The changes that research funders are implementing to recognize and incentivize open scholarship practices are on the leading edge of reform efforts, pushing research culture forward in new ways that aim to address existing burdens caused by APCs and journal prestige. As with all policies that aim to shift assumptions around what can and should be valued in research, gaps in knowledge will need to be filled through iteration, open dialogue with groups that new policies will impact, and careful study of how new policies change research culture.

Responsible research assessment and open scholarship are interconnected

Responsible research assessment: An umbrella term for “approaches to assessment which incentivise, reflect and reward the plural characteristics of high-quality research, in support of diverse and inclusive research cultures.” –RoRI Working Paper No.3

As well as progress in the reform of research assessment, a further fundamental change in the research ecosystem over the past decade has been the emergence of open scholarship (also known as open science or open research)¹. The UNESCO 2021 Recommendation on Open Science outlined a consensus definition of open science that comprises open scientific knowledge (including open access to research publications), open dialogues with other knowledge systems, open engagement of societal actors, and open science infrastructures. It is an inclusive movement to “make multilingual scientific knowledge openly available, accessible and reusable for everyone, to increase scientific collaborations and sharing of information for the benefits of science and society, and to open the processes of scientific knowledge creation, evaluation and communication to societal actors beyond the traditional scientific community.” This definition captures the broad nature of open scholarship: it is both a movement to change how scholarly knowledge is shared, and to address global inequities in scholarly culture itself.

DORA (Declaration On Research Assessment) is a global non-profit initiative that actively works with the scholarly community to support responsible research assessment for hiring, promotion, tenure, and funding decisions. DORA is part of a global movement that aims to reform research assessment equitably including expanding the definition of what gets assessed, and changing the way the assessment takes place. Reducing emphasis on flawed proxy measures of quality such as the Impact Factor or h-index, broadening the type of work that is rewarded, and challenging assumptions about quality and excellence are critical facets of the movement towards responsible research assessment. However, these core concepts do not exist in a vacuum (see Venn diagram by Hatch, Barbour and Curry).


The concepts of (i) research assessment reform, (ii) open scholarship, and (iii) equality and inclusion cannot be treated separately. They interact strongly and in many complex ways – presented only in broad outline here – and are converging to create a research culture that is centred on people (practitioners and beneficiaries) and on values that embody the highest aspirations of a diverse world.



Responsible research assessment is intricately linked with open scholarship, and also with equity and inclusion initiatives. Biases and assumptions about research quality can determine who is assessed and how they are assessed, and decoupling research products from journal prestige is an important step to address these biases.

Greater transparency and accessibility enables the recognition of a broader range of scholarly outputs (including datasets, protocols, and software). In alignment with the aims of open scholarship, DORA seeks to address the “publish or perish” culture by recognizing and rewarding transparency, rigor, and reproducibility. Ultimately, enabling and rewarding rigor and transparency serve to foster trust both within academia and with the broader public.

The intersection between these movements is apparent in many policies and practices being adopted at academic institutions around the world. Reformscape, a database of research assessment reform at academic institutions, contains over twenty examples of how institutions are incorporating aspects of open scholarship into their hiring, promotion, and tenure practices. Many of DORA’s in-depth case studies of institutional research assessment reform include mention of the institution codifying open scholarship into their practices.

Fostering public trust through open scholarship requires a systems approach

A critical part of research culture is trust. There are many ways to build trust. Traditional peer-reviewed journals emphasize building trust by invoking expert opinion. Open scholarship emphasizes building trust through transparency: making all stages of the research process, from conception to data collection to analysis and review, visible to all.

The two approaches are not mutually exclusive. Peer-reviewed journals have created policies to promote openness, and several forms of open scholarship have sought ways to solicit expert opinions. However, peer review has been used as the main signal of trustworthiness for so long that it can be difficult to convince researchers that an article passing peer review is not necessarily a definitive signal that the article is trustworthy. Consequently, many have not been convinced of the value of sharing research ahead of peer review, and have been concerned that removing that vetting would open the gates to flawed research and increase the risk of misinformation.

In practice, a couple of studies (here and here) have suggested that the distinctions between peer reviewed journal articles and preprints can be minimal. Preprints have gained increased acceptance from researchers who are posting more preprints, and from reporters who are writing more stories based on preprints (although more work needs to be done to ensure that disclaimers about the lack of peer review are always added).

Understanding which scholarly communications can be viewed as trustworthy was, is, and always will be a complex task for experts and non-experts alike. Experts are expected to gain this ability through advanced training and experience. Non-experts might benefit from increased media literacy, a subject that is taught to less than half of US high school students.

Call to Action

Reforming research assessment requires new policies and practices that embrace diverse scholarly outputs, reduce the emphasis on journal prestige as an implicit indicator of research quality, and evaluate research based on its intrinsic value. As we expand the type of work that we recognize to include preprints and other “non-traditional” outputs, we can foster trust in these new outputs by 1) recognizing and rewarding transparency, rigor, and high-quality review, and 2) developing resources to foster and support responsible preprint review. For example, a growing number of bibliographic indexers are starting to index preprints (e.g., Europe PMC and PubMed Central), and there are several efforts to index and link reviews to preprints (e.g., Sciety and COAR Notify). There are also a number of efforts underway to develop a range of consistent trust signals and markers. Alongside these efforts lies the crucial task of consistently educating and communicating about innovations in publishing and open scholarship practices to cultivate public trust and literacy.

Change on this scale is not immediate, nor should it be. DORA has long advocated for the strategy of iteratively fine-tuning policies over time using data and input from their target communities. As more and more research funders and institutions test new ways of rewarding researchers for their open scholarship practices, it is important to seize the opportunity for careful review, refinement, and open dialogue about what works and what doesn’t.

Zen Faulkes is DORA’s Program Director.
Haley Hazlett is DORA’s Program Manager.

Acknowledgements: The co-authors would like to thank DORA co-Chairs, Ginny Barbour and Kelly Cobey, and DORA Vice-Chair, Rebecca Lawrence for their editorial input on the piece.


¹ The term “open scholarship” will be used throughout this piece for consistency. Different organizations also use the terms “open research” and “open science” to describe broad policies that encourage openness, though often all three terms generally have a holistic focus on fostering a culture of openness, transparency, and accessibility. DORA uses “open scholarship” to better encapsulate all scholarly disciplines.

DORA Initiatives Meeting: CLACSO-FOLEC on responsible assessment and open science
Tue, 18 Jun 2024 18:06:43 +0000
https://sfdora.org/2024/06/18/dora-initiatives-meeting-updates-from-clacso-folec/

Each quarter, DORA holds a Community of Practice (CoP) meeting for National and International Initiatives working to address responsible research assessment reform. This CoP is a space for initiatives to learn from each other, make connections with like-minded organizations, and collaborate on projects or topics of common interest. Meeting agendas are shaped by participants. If you lead an initiative, coalition, or organization working to improve research assessment and are interested in joining the group, please find more information here.

During the DORA National and International Initiatives Discussion Group meeting on May 14, 2024, members of the group discussed changes in research policy happening at each of their respective organizations. The group also heard a presentation from Laura Rovelli, coordinator of the Latin American Forum for Research Assessment (FOLEC) from the Latin American Council of Social Sciences (CLACSO). Rovelli discussed the importance of open infrastructures and evaluation systems in research, noting the challenges faced in Latin America in accessing information from the US and Europe due to the privatization of research information.

Rovelli stressed the importance of open research information having transparent evidence and inclusive data, values that are supported by the Barcelona Declaration on Open Research Information. This declaration was created in 2023 by a group of over 25 research information experts and promotes research openness and sustainability of infrastructures for open research information.

Based on a recent collective blog post published in Leiden Madtrics, Rovelli discussed why inclusion and diversity require multiple information sources, and made the argument for:

  • Increasing the coverage of relevant documents, especially from under-represented areas
  • Ensuring “richness of metadata,” as no single source can support all research information
  • Preserving a variety of information sources that reflect different histories, languages, etc.

Rovelli also touched on how responsible research assessment and open science are interconnected, and how strengthening one can help improve the other. She discussed FOLEC’s support for instituting a decentralized and federated research information source, noting that data sources should have both regional/local and international repositories.

Other ongoing projects at CLACSO include the AGoRRA Project with RoRI and UKRI, and a new publication with UNESCO, Debates actuales y reformas en curso en la evaluación responsable de la investigación en América Latina y el Caribe (English translation: “Current debates and ongoing reforms in the responsible evaluation of research in Latin America and the Caribbean”), which discusses responsible research assessment reforms in national evaluation systems in Latin American countries.

We also heard updates from other organizations, which are briefly summarized below:

  • INORMS Research Evaluation Group (INORMS REG) recently organized a forum event on responsible research assessment and has been holding community calls for the More Than Our Rank initiative.
  • University of Leeds highlighted that they have signed the Knowledge Equity Network, a declaration that promotes collaborative sharing of knowledge globally, aims to address pressing challenges, and supports a culture of openness, diversity, and inclusion within the higher education sector.
  • The organization Recognition & Rewards conducted its Recognition & Rewards culture barometer survey, which yielded data on “the state of affairs concerning a different way of recognising and rewarding academic work within institutions.” A report on the results will be written and published. They have also organized a recognition and rewards festival with workshops, to be held in November 2024.
  • Universities Norway is part of a CoARA working group that is conducting a survey on how academic careers are assessed. They also highlighted NOR-CAM: “A toolbox for recognition and rewards in academic careers.”
  • Projeto Metricas presented on DORA principles to the Ministry of Education (MEC) as part of its celebration of 20 years of the current evaluation system, noting that they are providing consultancy to the Ministry for the redesign of the system. They also presented on DORA values at Fiocruz, the largest public network of biomedical institutions in Brazil.
  • As for DORA, we have released a guidance document on research indicators, a report on supporting responsible assessment practices in pre-award funding processes, and a report on monitoring the effectiveness of narrative CVs.

Suggested Reading List

Casey Donahoe is DORA’s Policy Associate

DORA reaches 25,000 signatures https://sfdora.org/2024/06/03/dora-reaches-25000-signatures/ Mon, 03 Jun 2024 21:56:45 +0000 https://sfdora.org/?p=160879

DORA has hit a new milestone by reaching the 25,000-signature mark.

Each of those thousands of signatures represents an individual or organization that supports the principles of the original San Francisco Declaration on Research Assessment, drafted in 2013. Most signatures are from individuals, with QQ percent from various organizations, including universities, departments and libraries within universities, academic publishers, funding agencies, and more.

Signing the Declaration was the first way that DORA engaged with the research community. 

“DORA’s signers provided evidence the academic community wanted researcher assessment to change,” said Anna Hatch, DORA’s original Program Director. “The number of signers was one motivator to expand our vision for the declaration. As a result, DORA transformed into an active initiative that supports the development of responsible researcher assessment by creating opportunities for its community to learn from each other and work together.”

While DORA has since become a much larger initiative that provides tools and resources to support improved methods of research assessment, the list of signatories remains an important part of the organization.

Ginny Barbour and Kelly Cobey, DORA’s current Co-Chairs, said, “The number and diversity of signers from across the world – and that they continue to increase – demonstrates how much reform of research assessment resonates globally. The initial Declaration was a concrete call for change that is now being implemented through DORA’s many resources and activities. The 25,000 landmark of signers demonstrates the continuing important role that DORA has as a rallying point for both organizations and individuals.”

The declaration can be signed by filling out the online form on the DORA website.

DORA Newsletter May 2024 https://sfdora.org/2024/05/07/dora-newsletter-may-2024/ Tue, 07 May 2024 10:00:07 +0000 https://sfdora.org/?p=161192

Announcements


Reformscape in Full Swing

Since its release in January, Reformscape has been serving the research community by providing 230 documents that encourage openness and transparency. New documents and information continue to be added and are always publicly available. Upgrades have also made the platform more user-friendly. You can now search for responsible research assessment resources by type, making it easy to find:

  • Action plans
  • Policies to reform hiring, promotion or tenure
  • Outcomes of new policies

New guidance released

DORA’s opposition to the overuse of the Journal Impact Factor is well known, but the original declaration did not specifically address other indicators that are sometimes used as proxy measures of quality in research assessment. In a new guidance document, we examine the potential problems of not only the Journal Impact Factor but also the h-index, altmetrics, and various other citation measures. None of these indicators is without problems, but the guidance provides five principles to help reduce some of the concerns.

This guidance is available on the DORA website and Zenodo. For questions about this guidance, email info@sfdora.org.

New Narrative CV Report

DORA is pleased to announce a new report on the implementation and monitoring of narrative CVs for grant funding. This report was created in collaboration with the FORGEN CoP, Science Foundation Ireland, the Swiss National Science Foundation, UK Research and Innovation, and the Elizabeth Blackwell Institute for Health Research at the University of Bristol. The report summarizes takeaways and recommended actions from a joint workshop, held in February 2022, on identifying shared objectives for and monitoring the effectiveness of narrative CVs in grant evaluation. More than 180 people from over 30 countries and 50 funding organizations participated.

Read the report

New report on improving pre-award processes

The processes that take place before research is submitted for funding (pre-award processes) serve as important scaffolding to support equitable and transparent research assessment. This report summarizes the key recommendations from DORA’s Funder Discussion Group symposia and workshops to improve pre-award processes, which were held in collaboration with the Elizabeth Blackwell Institute for Health Research (EBI) at the University of Bristol and the MoreBrains Cooperative.

Read the report

Building on this work, we are also pleased to announce that DORA, EBI, and MoreBrains are continuing their collaboration and developing a new project to look at how three of the recommendations could be implemented. In May 2024, we will host two workshops that bring DORA’s Funder Discussion Groups together with research administrators and managers to generate tools and guidance addressing the practical implementation of these recommendations.

DORA seeks new steering committee member from Asia

DORA is looking for Steering Committee members based in Asia. To be considered for this position, please complete this self-nomination form by May 31, 2024.

Approaching 25,000 signatories

Nearly 25,000 individuals and organizations have recognized the need to promote responsible research assessment. Every day, the number of DORA signatories increases, and we anticipate reaching 25,000 by summer. Signing the San Francisco Declaration on Research Assessment signifies a dedication to the principle that scientists should be evaluated on their individual achievements and the quality of their work, rather than on journal-based metrics such as the Journal Impact Factor. We are excited to reach the 25,000 mark and share this milestone with the research community!
